Science.gov

Sample records for accurate confidence intervals

  1. Fast and Accurate Construction of Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
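
    ALBI's central idea, estimating the sampling distribution of the heritability estimator and reading a confidence interval off it, can be illustrated with a generic parametric bootstrap. This is a minimal sketch under assumed stand-ins (`estimate`, `simulate`); ALBI's actual LMM machinery is not reproduced here.

    ```python
    import numpy as np

    def parametric_bootstrap_ci(estimate, simulate, data, n_boot=1000,
                                level=0.95, rng=None):
        """Percentile CI read off the bootstrap distribution of an estimator.
        estimate(data) -> point estimate; simulate(theta, rng) -> a new
        dataset drawn under the fitted parameter. Both are stand-ins for
        ALBI's heritability estimator and LMM simulator."""
        rng = np.random.default_rng(rng)
        theta_hat = estimate(data)
        boot = np.array([estimate(simulate(theta_hat, rng))
                         for _ in range(n_boot)])
        alpha = 1.0 - level
        return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

    # Toy usage: CI for a normal mean, simulating under the fitted mean.
    data = np.random.default_rng(0).normal(0.3, 1.0, size=200)
    print(parametric_bootstrap_ci(
        np.mean, lambda m, rng: rng.normal(m, 1.0, size=200), data))
    ```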

  2. Accurate estimation of normal incidence absorption coefficients with confidence intervals using a scanning laser Doppler vibrometer

    NASA Astrophysics Data System (ADS)

    Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick

    2009-06-01

    When optical measurements of the sound field inside a glass tube, near the material under test, are used to estimate the reflection and absorption coefficients, confidence intervals can be determined along with the acoustical parameters themselves. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results obtained with this technique is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, leading to knowledge of the confidence intervals. The use of a multi-sine, constructed on the resonance frequencies of the test tube, proves to be a very good alternative to the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared with the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.

  3. Explorations in Statistics: Confidence Intervals

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…

  4. Effect Sizes, Confidence Intervals, and Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Thompson, Bruce

    2007-01-01

    The present article provides a primer on (a) effect sizes, (b) confidence intervals, and (c) confidence intervals for effect sizes. Additionally, various admonitions for reformed statistical practice are presented. For example, a very important implication of the realization that there are dozens of effect size statistics is that "authors must…

  5. Confidence Trick: The Interpretation of Confidence Intervals

    ERIC Educational Resources Information Center

    Foster, Colin

    2014-01-01

    The frequent misinterpretation of the nature of confidence intervals by students has been well documented. This article examines the problem as an aspect of the learning of mathematical definitions and considers the tension between parroting mathematically rigorous, but essentially uninternalized, statements on the one hand and expressing…

  6. Teaching Confidence Intervals Using Simulation

    ERIC Educational Resources Information Center

    Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari

    2008-01-01

    Confidence intervals are difficult to teach, in part because most students appear to believe they understand how to interpret them intuitively. They rarely do. To help them abandon their misconception and achieve understanding, we have developed a simulation tool that encourages experimentation with multiple confidence intervals derived from the…

  7. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses the theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent of the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  8. A primer on confidence intervals in psychopharmacology.

    PubMed

    Andrade, Chittaranjan

    2015-02-01

    Research papers and research summaries frequently present results in the form of data accompanied by 95% confidence intervals (CIs). Not all students and clinicians know how to interpret CIs. This article provides a nontechnical, nonmathematical discussion on how to understand and glean information from CIs; all explanations are accompanied by simple examples. A statistically accurate explanation about CIs is also provided. CIs are differentiated from standard deviations, standard errors, and confidence levels. The interpretation of narrow and wide CIs is discussed. Factors that influence the width of a CI are listed. Explanations are provided for how CIs can be used to assess statistical significance. The significance of overlapping and nonoverlapping CIs is considered. It is concluded that CIs are far more informative than, say, mere P values when drawing conclusions about a result.

  9. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

    PubMed Central

    Bebu, Ionut; Luta, George; Mathew, Thomas; Agan, Brian K.

    2016-01-01

    For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates. PMID:27322305

  10. Constructing Confidence Intervals for QTL Location

    PubMed Central

    Mangin, B.; Goffinet, B.; Rebai, A.

    1994-01-01

    We describe a method for constructing the confidence interval of the QTL location parameter. This method is developed in the local asymptotic framework, leading to a linear model at each position of the putative QTL. The idea is to construct a likelihood ratio test, using statistics whose asymptotic distribution does not depend on the nuisance parameters and in particular on the effect of the QTL. We show theoretical properties of the confidence interval built with this test, and compare it with the classical confidence interval using simulations. We show in particular, that our confidence interval has the correct probability of containing the true map location of the QTL, for almost all QTLs, whereas the classical confidence interval can be very biased for QTLs having small effect. PMID:7896108

  11. Confidence intervals for ATR performance metrics

    NASA Astrophysics Data System (ADS)

    Ross, Timothy D.

    2001-08-01

    This paper describes confidence interval (CI) estimators (CIEs) for the metrics used to assess sensor exploitation algorithm (or ATR) performance. For the discrete distributions, small sample sizes and extreme outcomes encountered within ATR testing, the commonly used CIEs have limited accuracy. This paper makes available CIEs that are accurate over all conditions of interest to the ATR community. The approach is to search for CIs using an integration of the Bayesian posterior (IBP) to measure alpha (the chance of the CI not containing the true value). The CIEs provided include proportion estimates based on Binomial distributions and rate estimates based on Poisson distributions. One- or two-sided CIs may be selected. For two-sided CIEs, either minimal length, balanced tail probabilities, or balanced width may be selected. The CIEs' accuracies are reported based on a Monte Carlo validated integration of the posterior probability distribution and compared to the Normal approximation and 'exact' (Clopper-Pearson) methods. While the IBP methods are accurate throughout, the conventional methods may realize alphas with substantial error (up to 50%). This translates to 10 to 15% error in the CI widths or to requiring 10 to 15% more samples for a given confidence level.
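
    For a binomial proportion, the flavor of posterior-based interval the IBP approach searches over can be illustrated with an equal-tailed interval from the Beta posterior. A sketch only (the uniform prior and the equal-tailed choice are assumptions, not the paper's minimal-length or balanced-width variants):

    ```python
    from scipy.stats import beta

    def posterior_interval(k, n, level=0.95):
        """Equal-tailed interval for a proportion from k hits in n trials.
        With a uniform prior the posterior is Beta(k + 1, n - k + 1)."""
        a = (1 - level) / 2
        return (beta.ppf(a, k + 1, n - k + 1),
                beta.ppf(1 - a, k + 1, n - k + 1))

    print(posterior_interval(0, 20))  # e.g., zero detections in 20 trials
    ```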

  12. Sampling Theory and Confidence Intervals for Effect Sizes: Using ESCI To Illustrate "Bouncing" Confidence Intervals.

    ERIC Educational Resources Information Center

    Du, Yunfei

    This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…

  13. Efficient Computation Of Confidence Intervals Of Parameters

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1992-01-01

    Study focuses on obtaining efficient algorithm for estimation of confidence intervals of ML estimates. Four algorithms selected to solve associated constrained optimization problem. Hybrid algorithms, following search and gradient approaches, prove best.

  14. Computing confidence intervals for standardized regression coefficients.

    PubMed

    Jones, Jeff A; Waller, Niels G

    2013-12-01

    With fixed predictors, the standard method (Cohen, Cohen, West, & Aiken, 2003, p. 86; Harris, 2001, p. 80; Hays, 1994, p. 709) for computing confidence intervals (CIs) for standardized regression coefficients fails to account for the sampling variability of the criterion standard deviation. With random predictors, this method also fails to account for the sampling variability of the predictor standard deviations. Nevertheless, under some conditions the standard method will produce CIs with accurate coverage rates. To delineate these conditions, we used a Monte Carlo simulation to compute empirical CI coverage rates in samples drawn from 36 populations with a wide range of data characteristics. We also computed the empirical CI coverage rates for 4 alternative methods that have been discussed in the literature: noncentrality interval estimation, the delta method, the percentile bootstrap, and the bias-corrected and accelerated bootstrap. Our results showed that for many data-parameter configurations--for example, sample size, predictor correlations, coefficient of determination (R²), orientation of β with respect to the eigenvectors of the predictor correlation matrix, RX--the standard method produced coverage rates that were close to their expected values. However, when population R² was large and when β approached the last eigenvector of RX, then the standard method coverage rates were frequently below the nominal rate (sometimes by a considerable amount). In these conditions, the delta method and the 2 bootstrap procedures were consistently accurate. Results using noncentrality interval estimation were inconsistent. In light of these findings, we recommend that researchers use the delta method to evaluate the sampling variability of standardized regression coefficients.
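
    Of the accurate alternatives named here, the percentile bootstrap is the easiest to sketch: resample cases, restandardize, refit, and take empirical quantiles. The sketch below is a generic illustration (names and toy data are assumptions, not the authors' simulation code); note the study ultimately recommends the delta method, and the bootstrap is shown only because it is compact.

    ```python
    import numpy as np

    def standardized_betas(X, y):
        """OLS beta weights after z-scoring predictors and criterion."""
        Xz = (X - X.mean(0)) / X.std(0, ddof=1)
        yz = (y - y.mean()) / y.std(ddof=1)
        return np.linalg.lstsq(Xz, yz, rcond=None)[0]

    def beta_percentile_ci(X, y, n_boot=2000, level=0.95, rng=None):
        """Percentile-bootstrap CIs for standardized coefficients: resample
        cases (random-X), restandardize, refit, take empirical quantiles."""
        rng = np.random.default_rng(rng)
        n = len(y)
        boots = np.empty((n_boot, X.shape[1]))
        for b in range(n_boot):
            idx = rng.integers(0, n, n)
            boots[b] = standardized_betas(X[idx], y[idx])
        a = (1 - level) / 2
        return np.quantile(boots, [a, 1 - a], axis=0)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 3))
    y = X @ np.array([0.5, 0.3, 0.0]) + rng.normal(size=150)
    print(beta_percentile_ci(X, y))   # rows: lower / upper bound per beta
    ```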

  15. Computation of confidence intervals for Poisson processes

    NASA Astrophysics Data System (ADS)

    Aguilar-Saavedra, J. A.

    2000-07-01

    We present an algorithm which allows a fast numerical computation of Feldman-Cousins confidence intervals for Poisson processes, even when the number of background events is relatively large. This algorithm incorporates an appropriate treatment of the singularities that arise as a consequence of the discreteness of the variable.

  16. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

    2012-01-01

    Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…

  17. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items.…

  18. Toward Using Confidence Intervals to Compare Correlations

    ERIC Educational Resources Information Center

    Zou, Guang Yong

    2007-01-01

    Confidence intervals are widely accepted as a preferred way to present study results. They encompass significance tests and provide an estimate of the magnitude of the effect. However, comparisons of correlations still rely heavily on significance testing. The persistence of this practice is caused primarily by the lack of simple yet accurate…

  19. Efficient computation of parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1987-01-01

    An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
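
    The likelihood-ratio interval discussed here inverts Wilks' theorem: it collects the parameter values whose log-likelihood lies within chi-square(1, level)/2 of the maximum. A generic one-parameter sketch (the Poisson stand-in model and bracket endpoints are assumptions, not the aircraft application):

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import chi2

    def likelihood_ratio_ci(loglik, theta_hat, lo_bracket, hi_bracket,
                            level=0.95):
        """Endpoints where the log-likelihood drops chi2(1)/2 below its max."""
        drop = chi2.ppf(level, df=1) / 2.0
        g = lambda t: loglik(theta_hat) - loglik(t) - drop
        return (brentq(g, lo_bracket, theta_hat),
                brentq(g, theta_hat, hi_bracket))

    # Toy usage: rate of a Poisson sample (a stand-in for the flight model).
    x = np.array([3, 5, 2, 4, 6, 1, 3, 4])
    loglik = lambda lam: x.sum() * np.log(lam) - x.size * lam
    print(likelihood_ratio_ci(loglik, x.mean(), 1e-6, 20.0))
    ```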

  20. An Empirical Method for Establishing Positional Confidence Intervals Tailored for Composite Interval Mapping of QTL

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improved genetic resolution and availability of sequenced genomes have made positional cloning of moderate-effect QTL (quantitative trait loci) realistic in several systems, emphasizing the need for precise and accurate derivation of positional confidence intervals (CIs). Support interval (SI) meth...

  1. Confidence Intervals Make a Difference: Effects of Showing Confidence Intervals on Inferential Reasoning

    ERIC Educational Resources Information Center

    Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L.

    2012-01-01

    The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…

  2. Calculating Confidence Intervals for Effect Sizes Using Noncentral Distributions.

    ERIC Educational Resources Information Center

    Norris, Deborah

    This paper provides a brief review of the concepts of confidence intervals, effect sizes, and central and noncentral distributions. The use of confidence intervals around effect sizes is discussed. A demonstration of the Exploratory Software for Confidence Intervals (G. Cumming and S. Finch, 2001; ESCI) is given to illustrate effect size confidence…

  3. Simultaneous confidence intervals for a steady-state leaky aquifer groundwater flow model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1996-01-01

    Using the optimization method of Vecchia & Cooley (1987), nonlinear Scheffé-type confidence intervals were calculated for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km² of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct for the head intervals. Results show that nonlinear effects can cause the nonlinear intervals to be offset from, and either larger or smaller than, the linear approximations. Prior information on some transmissivities helps reduce and stabilize the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  4. Contrasting diversity values: statistical inferences based on overlapping confidence intervals.

    PubMed

    MacGregor-Fors, Ian; Payton, Mark E

    2013-01-01

    Ecologists often contrast diversity (species richness and abundances) using tests for comparing means or indices. However, many popular software applications do not support performing standard inferential statistics for estimates of species richness and/or density. In this study we simulated the behavior of asymmetric log-normal confidence intervals and determined an interval level that mimics statistical tests with P(α) = 0.05 when confidence intervals from two distributions do not overlap. Our results show that 84% confidence intervals robustly mimic 0.05 statistical tests for asymmetric confidence intervals, as has been demonstrated for symmetric ones in the past. Finally, we provide detailed user-guides for calculating 84% confidence intervals in two of the most robust and highly-used freeware related to diversity measurements for wildlife (i.e., EstimateS, Distance).

  5. Contrasting Diversity Values: Statistical Inferences Based on Overlapping Confidence Intervals

    PubMed Central

    MacGregor-Fors, Ian; Payton, Mark E.

    2013-01-01

    Ecologists often contrast diversity (species richness and abundances) using tests for comparing means or indices. However, many popular software applications do not support performing standard inferential statistics for estimates of species richness and/or density. In this study we simulated the behavior of asymmetric log-normal confidence intervals and determined an interval level that mimics statistical tests with P(α) = 0.05 when confidence intervals from two distributions do not overlap. Our results show that 84% confidence intervals robustly mimic 0.05 statistical tests for asymmetric confidence intervals, as has been demonstrated for symmetric ones in the past. Finally, we provide detailed user-guides for calculating 84% confidence intervals in two of the most robust and highly-used freeware related to diversity measurements for wildlife (i.e., EstimateS, Distance). PMID:23437239
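
    The practical rule from this paper is mechanical: judge two estimates different at roughly the 0.05 level when their 84% confidence intervals fail to overlap. A minimal sketch (function and input names are assumptions):

    ```python
    def nonoverlap_significant(ci1, ci2):
        """Approximate alpha ~= 0.05 test: True when two 84% CIs are
        disjoint, i.e., the lower interval ends before the upper begins."""
        (lo1, hi1), (lo2, hi2) = sorted([ci1, ci2])  # order by lower bound
        return hi1 < lo2

    print(nonoverlap_significant((10.2, 14.8), (15.1, 19.9)))  # True
    ```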

  6. Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff

    2012-01-01

    Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…

  7. A Note on Confidence Interval Estimation and Margin of Error

    ERIC Educational Resources Information Center

    Gilliland, Dennis; Melfi, Vince

    2010-01-01

    Confidence interval estimation is a fundamental technique in statistical inference. Margin of error is used to delimit the error in estimation. Dispelling misinterpretations that teachers and students give to these terms is important. In this note, we give examples of the confusion that can arise in regard to confidence interval estimation and…

  8. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…

  9. Sample Size for the "Z" Test and Its Confidence Interval

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2012-01-01

    The statistical power of a significance test is closely related to the length of the confidence interval (i.e. estimate precision). In the case of a "Z" test, the length of the confidence interval can be expressed as a function of the statistical power. (Contains 1 figure and 1 table.)
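
    The width-power connection is concrete arithmetic: a two-sided Z interval has half-width z·σ/√n, so a target half-width fixes n, and with n fixed the power against a given mean shift follows. A small sketch of that arithmetic (σ and the shift are assumed known; this is not the article's notation):

    ```python
    from math import ceil, sqrt
    from scipy.stats import norm

    def n_for_halfwidth(sigma, w, level=0.95):
        """Smallest n giving a two-sided Z interval with half-width <= w."""
        z = norm.ppf(1 - (1 - level) / 2)
        return ceil((z * sigma / w) ** 2)

    def z_test_power(delta, sigma, n, alpha=0.05):
        """Power of the two-sided Z test against a true mean shift delta."""
        z = norm.ppf(1 - alpha / 2)
        se = sigma / sqrt(n)
        return norm.cdf(delta / se - z) + norm.cdf(-delta / se - z)

    n = n_for_halfwidth(sigma=1.0, w=0.2)   # -> 97
    print(n, z_test_power(0.3, 1.0, n))     # power ~0.84 for delta = 0.3
    ```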

  10. Exact Confidence Intervals in the Presence of Interference

    PubMed Central

    Rigdon, Joseph; Hudgens, Michael G.

    2015-01-01

    For two-stage randomized experiments assuming partial interference, exact confidence intervals are proposed for treatment effects on a binary outcome. Empirical studies demonstrate the new intervals have narrower width than previously proposed exact intervals based on the Hoeffding inequality. PMID:26190877

  11. Improved central confidence intervals for the ratio of Poisson means

    NASA Astrophysics Data System (ADS)

    Cousins, R. D.

    The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
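
    The "standard solution" referred to here has a compact form: conditional on the total count, x1 is binomial with p = rho/(1+rho), so inverting a Clopper-Pearson interval for p yields the central interval for the ratio rho. A sketch (names are assumptions) that reproduces the abstract's example:

    ```python
    from math import inf
    from scipy.stats import beta

    def standard_poisson_ratio_ci(x1, x2, level=0.90):
        """Standard central CI for rho = mu1/mu2 from Poisson counts x1, x2.
        Conditional on n = x1 + x2, x1 ~ Binomial(n, p) with p = rho/(1+rho),
        so inverting the Clopper-Pearson interval for p bounds rho."""
        a = (1 - level) / 2
        p_lo = beta.ppf(a, x1, x2 + 1) if x1 > 0 else 0.0
        lo = p_lo / (1.0 - p_lo)
        if x2 == 0:               # no events in the denominator: unbounded
            return lo, inf
        p_hi = beta.ppf(1 - a, x1 + 1, x2)
        return lo, p_hi / (1.0 - p_hi)

    print(standard_poisson_ratio_ci(2, 2))  # ~(0.108, 9.245), as quoted above
    ```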

  12. Estimation of confidence intervals for federal waterfowl harvest surveys

    USGS Publications Warehouse

    Geissler, P.H.

    1990-01-01

    I developed methods of estimating confidence intervals for the federal waterfowl harvest surveys conducted by the U.S. Fish and Wildlife Service (USFWS). I estimated flyway harvest confidence intervals for mallards (Anas platyrhynchos) (95% CI ±8% of the estimate), Canada geese (Branta canadensis) (±11%), black ducks (Anas rubripes) (±16%), canvasbacks (Aythya valisineria) (±32%), snow geese (Chen caerulescens) (±43%), and brant (Branta bernicla) (±46%). Differences between annual estimates of 10, 13, 22, 42, 43, and 58% could be detected for mallards, Canada geese, black ducks, canvasbacks, snow geese, and brant, respectively. Estimated confidence intervals for state harvests tended to be much larger than those for the flyway estimates.

  13. Inference by Eye: Pictures of Confidence Intervals and Thinking about Levels of Confidence

    ERIC Educational Resources Information Center

    Cumming, Geoff

    2007-01-01

    A picture of a 95% confidence interval (CI) implicitly contains pictures of CIs of all other levels of confidence, and information about the "p"-value for testing a null hypothesis. This article discusses pictures, taken from interactive software, that suggest several ways to think about the level of confidence of a CI, "p"-values, and what…

  14. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
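
    The first-and-second-moment idea can be sketched with a normal approximation that takes whole codewords, not bits, as the independent unit: estimate the mean and variance of the per-codeword bit-error counts and form a CLT interval for the BER. This is a hedged illustration of that idea, not the paper's exact construction:

    ```python
    import numpy as np
    from scipy.stats import norm

    def ber_ci(errors_per_codeword, bits_per_codeword, level=0.95):
        """CLT-based CI for BER from per-codeword bit-error counts.
        Dependence of bit errors *within* a codeword is handled by
        treating whole codewords as the i.i.d. sampling unit."""
        e = np.asarray(errors_per_codeword, dtype=float)
        n, k = e.size, bits_per_codeword
        ber_hat = e.mean() / k
        se = e.std(ddof=1) / (k * np.sqrt(n))
        z = norm.ppf(1 - (1 - level) / 2)
        return max(ber_hat - z * se, 0.0), ber_hat + z * se

    # Toy usage: 10,000 simulated codewords of 4096 bits, bursty errors.
    rng = np.random.default_rng(0)
    errs = rng.choice([0, 0, 0, 0, 0, 0, 0, 0, 0, 12], size=10_000)
    print(ber_ci(errs, 4096))
    ```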

  15. Inference by eye: reading the overlap of independent confidence intervals.

    PubMed

    Cumming, Geoff

    2009-01-30

    When 95 per cent confidence intervals (CIs) on independent means do not overlap, the two-tailed p-value is less than 0.05 and there is a statistically significant difference between the means. However, p for non-overlapping 95 per cent CIs is actually considerably smaller than 0.05: If the two CIs just touch, p is about 0.01, and the intervals can overlap by as much as about half the length of one CI arm before p becomes as large as 0.05. Keeping in mind this rule (that overlap of half the length of one arm corresponds approximately to statistical significance at p = 0.05) can be helpful for a quick appreciation of figures that display CIs, especially if precise p-values are not reported. The author investigated the robustness of this and similar rules, and found them sufficiently accurate when sample sizes are at least 10, and the two intervals do not differ in width by more than a factor of 2. The author reviewed previous discussions of CI overlap and extended the investigation to p-values other than 0.05 and 0.01. He also studied 95 per cent CIs on two proportions, and on two Pearson correlations, and found similar rules apply to overlap of these asymmetric CIs, for a very broad range of cases. Wider use of figures with 95 per cent CIs is desirable, and these rules may assist easy and appropriate understanding of such figures.
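
    Cumming's rules translate directly into a screening check for figures. The sketch below (names are assumptions) applies them as stated above: disjoint 95% CIs suggest p below roughly 0.01, and overlap of at most about half the average arm length suggests p below roughly 0.05, subject to the stated caveats (both n at least 10, widths within a factor of 2).

    ```python
    def overlap_rule(ci1, ci2):
        """Read an approximate p from the overlap of two independent 95% CIs.
        Returns 'p<~0.01' if the intervals are disjoint, 'p<~0.05' if they
        overlap by at most half the average arm length, else 'ns'."""
        (lo1, hi1), (lo2, hi2) = sorted([ci1, ci2])
        arm = ((hi1 - lo1) + (hi2 - lo2)) / 4.0   # average half-width
        overlap = hi1 - lo2                       # negative if disjoint
        if overlap < 0:
            return "p<~0.01"
        return "p<~0.05" if overlap <= arm / 2 else "ns"

    print(overlap_rule((1.0, 3.0), (3.4, 5.4)))  # disjoint -> 'p<~0.01'
    print(overlap_rule((1.0, 3.0), (2.6, 4.6)))  # half-arm  -> 'p<~0.05'
    ```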

  16. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

    PubMed

    Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure the effects of fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

  17. Confidence Intervals for Gamma-Family Measures of Ordinal Association

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2007-01-01

    This research focused on confidence intervals (CIs) for 10 measures of monotonic association between ordinal variables. Standard errors (SEs) were also reviewed because more than 1 formula was available per index. For 5 indices, an element of the formula used to compute an SE is given that is apparently new. CIs computed with different SEs were…

  18. Researchers Misunderstand Confidence Intervals and Standard Error Bars

    ERIC Educational Resources Information Center

    Belia, Sarah; Fidler, Fiona; Williams, Jennifer; Cumming, Geoff

    2005-01-01

    Little is known about researchers' understanding of confidence intervals (CIs) and standard error (SE) bars. Authors of journal articles in psychology, behavioral neuroscience, and medicine were invited to visit a Web site where they adjusted a figure until they judged 2 means, with error bars, to be just statistically significantly different (p…

  19. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W. -L.

    2009-01-01

    Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…

  20. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  1. Confidence Interval Coverage for Cohen's Effect Size Statistic

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2006-01-01

    Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and bias-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population effect size…

  2. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  3. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  4. Confidence intervals in Flow Forecasting by using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Tsekouras, George

    2014-05-01

    One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to the classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANN, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for flow forecasting of the next day. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. To apply the confidence interval methodology, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted regarding the values of the crucial parameters, such as the number of neurons, the kind of activation functions, the initial values and time parameters of the learning rate and momentum term, etc. Input variables are historical data of previous days, such as flows, nonlinearly weather related temperatures and nonlinearly weather related rainfalls based on correlation analysis between the under prediction flow and each implicit input
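
    The re-sampling scheme described here reduces to empirical quantiles of the sorted prediction errors, held symmetric in probability and shifted onto the new point prediction. A minimal sketch (array names are assumptions; the ANN itself is out of scope):

    ```python
    import numpy as np

    def resampling_interval(y_true, y_pred, y_new, level=0.95):
        """Confidence band for a new ANN prediction from past errors.
        Sort the training-set errors, take quantiles symmetric in
        probability, and shift them onto the new point prediction."""
        errors = np.sort(y_true - y_pred)          # ascending error sample
        a = (1 - level) / 2
        lo, hi = np.quantile(errors, [a, 1 - a])   # reject extreme tails
        return y_new + lo, y_new + hi

    rng = np.random.default_rng(1)
    resid = rng.normal(0, 5, 500)                  # stand-in past errors
    print(resampling_interval(resid, np.zeros(500), y_new=42.0))
    ```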

  5. Selecting accurate statements from the cognitive interview using confidence ratings.

    PubMed

    Roberts, Wayne T; Higham, Philip A

    2002-03-01

    Participants viewed a videotape of a simulated murder, and their recall (and confidence) was tested 1 week later with the cognitive interview. Results indicated that (a) the subset of statements assigned high confidence was more accurate than the full set of statements; (b) the accuracy benefit was limited to information that forensic experts considered relevant to an investigation, whereas peripheral information showed the opposite pattern; (c) the confidence-accuracy relationship was higher for relevant than for peripheral information; (d) the focused-retrieval phase was associated with a greater proportion of peripheral and a lesser proportion of relevant information than the other phases; and (e) only about 50% of the relevant information was elicited, and most of this was elicited in Phase 1.

  6. Flood frequency analysis: Confidence interval estimation by test inversion bootstrapping

    NASA Astrophysics Data System (ADS)

    Schendel, Thomas; Thongwichian, Rossukon

    2015-09-01

    A common approach to estimate extreme flood events is the annual block maxima approach, where for each year the peak streamflow is determined and a distribution (usually the generalized extreme value distribution (GEV)) is fitted to this series of maxima. Eventually this distribution is used to estimate the return level for a defined return period. However, due to the finite sample size, the estimated return levels are associated with a range of uncertainty, usually expressed via confidence intervals. Previous publications have shown that existing bootstrapping methods for estimating the confidence intervals of the GEV yield too narrow estimates of these uncertainty ranges. Therefore, we present in this article a novel approach based on the less known test inversion bootstrapping, which we adapted especially for complex quantities like the return level. The reliability of this approach is studied and its performance is compared to other bootstrapping methods as well as the Profile Likelihood technique. It is shown that the new approach significantly improves the coverage of confidence intervals compared to other bootstrapping methods and for small sample sizes should even be favoured over the Profile Likelihood.

  7. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Stedinger, J.R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient "weighting" procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed-form method has been available for quantifying the uncertainty of EMA-based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood-quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25- to 100-year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  8. The Naive Intuitive Statistician: A Naive Sampling Model of Intuitive Confidence Intervals

    ERIC Educational Resources Information Center

    Juslin, Peter; Winman, Anders; Hansson, Patrik

    2007-01-01

    The perspective of the naive intuitive statistician is outlined and applied to explain overconfidence when people produce intuitive confidence intervals and why this format leads to more overconfidence than other formally equivalent formats. The naive sampling model implies that people accurately describe the sample information they have but are…

  9. A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha

    ERIC Educational Resources Information Center

    Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.

    2010-01-01

    The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…

  10. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  11. On Some Confidence Intervals for Estimating the Mean of a Skewed Population

    ERIC Educational Resources Information Center

    Shi, W.; Kibria, B. M. Golam

    2007-01-01

    A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…

  12. Quantifying uncertainty in modelled estimates of annual maximum precipitation: confidence intervals

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Economou, Polychronis; Caroni, Chrys

    2016-04-01

    The possible nonstationarity of the GEV distribution fitted to annual maximum precipitation under climate change is a topic of active investigation. Of particular significance is how best to construct confidence intervals for items of interest arising from stationary/nonstationary GEV models. We are usually not only interested in parameter estimates but also in quantiles of the GEV distribution, and it might be expected that estimates of extreme upper quantiles are far from being normally distributed even for moderate sample sizes. Therefore, we consider constructing confidence intervals for all quantities of interest by bootstrap methods based on resampling techniques. To this end, we examined three bootstrapping approaches to constructing confidence intervals for parameters and quantiles: random-t resampling, fixed-t resampling and the parametric bootstrap. Each approach was used in combination with the normal approximation method, percentile method, basic bootstrap method and bias-corrected method for constructing confidence intervals. We found that all the confidence intervals for the stationary model parameters have similar coverage and mean length. Confidence intervals for the more extreme quantiles tend to become very wide for all bootstrap methods. For nonstationary GEV models with linear time dependence of location or log-linear time dependence of scale, confidence interval coverage probabilities are reasonably accurate for the parameters. For the extreme percentiles, the bias-corrected and accelerated method is best overall, and the fixed-t method also has good average coverage probabilities. Reference: Panagoulia D., Economou P. and Caroni C., Stationary and non-stationary GEV modeling of extreme precipitation over a mountainous area under climate change, Environmetrics, 25 (1), 29-43, 2014.

  13. Bootstrap standard error and confidence intervals for the correlations corrected for indirect range restriction.

    PubMed

    Li, Johnson Ching-Hong; Chan, Wai; Cui, Ying

    2011-11-01

    The standard Pearson correlation coefficient, r, is a biased estimator of the population correlation coefficient, ρ(XY), when predictor X and criterion Y are indirectly range-restricted by a third variable Z (or S). Two correction algorithms, Thorndike's (1949) Case III, and Schmidt, Oh, and Le's (2006) Case IV, have been proposed to correct for the bias. However, to our knowledge, the two algorithms did not provide a procedure to estimate the associated standard error and confidence intervals. This paper suggests using the bootstrap procedure as an alternative. Two Monte Carlo simulations were conducted to systematically evaluate the empirical performance of the proposed bootstrap procedure. The results indicated that the bootstrap standard error and confidence intervals were generally accurate across simulation conditions (e.g., selection ratio, sample size). The proposed bootstrap procedure can provide a useful alternative for the estimation of the standard error and confidence intervals for the correlation corrected for indirect range restriction.
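
    The machinery being evaluated is the ordinary nonparametric bootstrap wrapped around a corrected correlation. The sketch below shows only that machinery; `correct_r` is a stand-in argument for Thorndike's Case III (or Case IV) correction, which is not reproduced here.

    ```python
    import numpy as np

    def bootstrap_ci(data, correct_r, n_boot=2000, level=0.95, rng=None):
        """Percentile-bootstrap CI for a range-restriction-corrected r.
        data: (n, k) array of cases; correct_r maps a resampled data
        matrix to a corrected correlation (stand-in here)."""
        rng = np.random.default_rng(rng)
        n = data.shape[0]
        stats = np.array([correct_r(data[rng.integers(0, n, n)])
                          for _ in range(n_boot)])
        a = (1 - level) / 2
        return np.quantile(stats, [a, 1 - a])

    # Toy usage with an uncorrected correlation as the stand-in statistic.
    rng = np.random.default_rng(0)
    xy = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], size=100)
    print(bootstrap_ci(xy, lambda d: np.corrcoef(d[:, 0], d[:, 1])[0, 1]))
    ```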

  14. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  15. On Efficient Confidence Intervals for the Log-Normal Mean

    NASA Astrophysics Data System (ADS)

    Chami, Peter; Antoine, Robin; Sahai, Ashok

    Data obtained in biomedical research is often skewed. Examples include the incubation period of diseases like HIV/AIDS and the survival times of cancer patients. Such data, especially when they are positive and skewed, are often modeled by the log-normal distribution. If this model holds, then the log transformation produces a normal distribution. We consider the problem of constructing confidence intervals for the mean of the log-normal distribution. Several methods for doing this are known, including at least one estimator that performed better than Cox's method for small sample sizes. We also construct a modified version of Cox's method. Using simulation, we show that, when the sample size exceeds 30, it leads to confidence intervals that have good overall properties and are better than Cox's method. More precisely, the actual coverage probability of our method is closer to the nominal coverage probability than is the case with Cox's method. In addition, the new method is computationally much simpler than other well-known methods.
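
    Cox's method, the baseline being modified here, has a simple closed form on the log scale: with ybar and s² the sample mean and variance of the logged data, the interval for the log-normal mean is exp(ybar + s²/2 ± z·sqrt(s²/n + s⁴/(2(n−1)))). A sketch of that baseline (the paper's modified version is not reproduced):

    ```python
    import numpy as np
    from scipy.stats import norm

    def cox_lognormal_mean_ci(x, level=0.95):
        """Cox's CI for the mean of a log-normal sample x (all x > 0)."""
        y = np.log(np.asarray(x, dtype=float))
        n, ybar, s2 = y.size, y.mean(), y.var(ddof=1)
        z = norm.ppf(1 - (1 - level) / 2)
        half = z * np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
        center = ybar + s2 / 2               # log of the log-normal mean
        return np.exp(center - half), np.exp(center + half)

    x = np.random.default_rng(0).lognormal(mean=1.0, sigma=0.8, size=50)
    print(cox_lognormal_mean_ci(x))  # should cover exp(1 + 0.32) ~ 3.74
    ```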

  16. Comparing Simultaneous and Pointwise Confidence Intervals for Hydrological Processes.

    PubMed

    Francisco-Fernández, Mario; Quintela-del-Río, Alejandro

    2016-01-01

    Distribution function estimation of the random variable of river flow is an important problem in hydrology. This issue is directly related to quantile estimation, and consequently to return level prediction. The estimation process can be complemented with the construction of confidence intervals (CIs) to perform a probabilistic assessment of the different variables and/or estimated functions. In this work, several methods for constructing CIs using bootstrap techniques, and parametric and nonparametric procedures in the estimation process are studied and compared. In the case that the target is the joint estimation of a vector of values, some new corrections to obtain joint coverage probabilities closer to the corresponding nominal values are also presented. A comprehensive simulation study compares the different approaches, and the application of the different procedures to real data sets from four rivers in the United States and one in Spain complete the paper.

  17. Comparing Simultaneous and Pointwise Confidence Intervals for Hydrological Processes

    PubMed Central

    2016-01-01

    Distribution function estimation of the random variable of river flow is an important problem in hydrology. This issue is directly related to quantile estimation, and consequently to return level prediction. The estimation process can be complemented with the construction of confidence intervals (CIs) to perform a probabilistic assessment of the different variables and/or estimated functions. In this work, several methods for constructing CIs using bootstrap techniques, and parametric and nonparametric procedures in the estimation process are studied and compared. In the case that the target is the joint estimation of a vector of values, some new corrections to obtain joint coverage probabilities closer to the corresponding nominal values are also presented. A comprehensive simulation study compares the different approaches, and the application of the different procedures to real data sets from four rivers in the United States and one in Spain complete the paper. PMID:26828651

  18. Concept of a (1-α) performance confidence interval

    SciTech Connect

    Leong, H.H.; Johnson, G.R.; Bechtel, T.N.

    1980-01-01

    A multi-input, single-output system is assumed to be represented by some model. The distribution functions of the input and the output variables are considered to be at least obtainable through experimental data. Associated with the computer response of the model corresponding to given inputs, a conditional pseudoresponse set is generated. This response can be constructed by means of the model by using the simulated pseudorandom input variates from a neighborhood defined by a preassigned probability allowance. A pair of such pseudoresponse values can then be computed by a procedure corresponding to a (1-α) probability for the conditional pseudoresponse set. The range defined by such a pair is called a (1-α) performance confidence interval with respect to the model. The application of this concept can allow comparison of the merit of two models describing the same system, or it can detect a system change when the current response is out of the performance interval with respect to the previously identified model. 6 figures.

  19. A comparison of several methods for the confidence intervals of negative binomial proportions

    NASA Astrophysics Data System (ADS)

    Thong, Alfred Lim Sheng; Shan, Fam Pei

    2015-12-01

    This study focuses on comparing the performances of several approaches to constructing confidence intervals for negative binomial proportions (a single negative binomial proportion and the difference between two negative binomial proportions), and identifies the strengths and weaknesses of each approach. Performances are assessed by comparing coverage probabilities and average lengths of the confidence intervals. For a single negative binomial proportion, the Wald confidence interval (WCI-I), Agresti confidence interval (ACI-I), Wilson's Score confidence interval (WSCI-I) and Jeffrey confidence interval (JCI-I) are compared. WSCI-I is the better approach for a single negative binomial proportion in terms of average confidence interval length and average coverage probability. For the difference between two negative binomial proportions, the Wald confidence interval (WCI-II), Agresti confidence interval (ACI-II), Newcombe's Score confidence interval (NSCI-II), Jeffrey confidence interval (JCI-II) and Yule confidence interval (YCI-II) are compared. A better approach is discussed and recommended for each situation, since the approach with the best coverage probability differs from situation to situation.

  20. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  1. Behavior Detection using Confidence Intervals of Hidden Markov Models

    SciTech Connect

    Griffin, Christopher H

    2009-01-01

    Markov models are commonly used to analyze real-world problems. Their combination of discrete states and stochastic transitions is suited to applications with deterministic and stochastic components. Hidden Markov Models (HMMs) are a class of Markov model commonly used in pattern recognition. Currently, HMMs recognize patterns using a maximum likelihood approach. One major drawback with this approach is that data observations are mapped to HMMs without considering the number of data samples available. Another problem is that this approach is only useful for choosing between HMMs. It does not provide a criterion for determining whether or not a given HMM adequately matches the data stream. In this work, we recognize complex behaviors using HMMs and confidence intervals. The certainty of a data match increases with the number of data samples considered. Receiver Operating Characteristic curves are used to find the optimal threshold for either accepting or rejecting a HMM description. We present one example using a family of HMMs to show the utility of the proposed approach. A second example using models extracted from a database of consumer purchases provides additional evidence that this approach can perform better than existing techniques.

  2. Exact and Best Confidence Intervals for the Ability Parameter of the Rasch Model.

    ERIC Educational Resources Information Center

    Klauer, Karl Christoph

    1991-01-01

    Smallest exact confidence intervals for the ability parameter of the Rasch model are derived and compared to the traditional asymptotically valid intervals based on Fisher information. Tables of exact confidence intervals, termed Clopper-Pearson intervals, can be drawn up with a computer program developed by K. Klauer. (SLD)
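
    The Clopper-Pearson interval named above is conveniently computed from beta-distribution quantiles. The following minimal Python sketch shows the standard binomial version; the paper's tables concern the Rasch ability parameter, so this is only the underlying binomial construction.

    ```python
    from scipy.stats import beta

    def clopper_pearson_ci(k, n, alpha=0.05):
        """Exact (Clopper-Pearson) CI for a binomial proportion,
        obtained from quantiles of the beta distribution."""
        lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lower, upper

    print(clopper_pearson_ci(8, 50))
    ```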

  3. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual" (2001)…

  4. The use of latin hypercube sampling for the efficient estimation of confidence intervals

    SciTech Connect

    Grabaskas, D.; Denning, R.; Aldemir, T.; Nakayama, M. K.

    2012-07-01

    Latin hypercube sampling (LHS) has long been used as a way of assuring adequate sampling of the tails of distributions in a Monte Carlo analysis and provided the framework for the uncertainty analysis performed in the NUREG-1150 risk assessment. However, this technique has not often been used in the performance of regulatory analyses due to the inability to establish confidence levels on the quantiles of the output distribution. Recent work has demonstrated a method that makes this possible. This method is compared to the procedure of crude Monte Carlo using order statistics, which is currently used to establish confidence levels. The results of several statistical examples demonstrate that the LHS confidence interval method can provide a more accurate and precise solution, but issues remain when applying the technique generally. (authors)
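
    The crude Monte Carlo order-statistics procedure that the LHS method is compared against can be illustrated directly: the probability that the p-quantile lies between two order statistics is a binomial sum, so a pair of order statistics can be widened until the requested confidence is reached. A minimal sketch, not code from the paper:

    ```python
    import numpy as np
    from scipy.stats import binom

    def quantile_ci_order_stats(sample, p, conf=0.95):
        """Distribution-free CI for the p-quantile from order statistics:
        P(X_(j) <= q_p <= X_(k)) = sum_{i=j}^{k-1} C(n,i) p^i (1-p)^(n-i),
        evaluated via the binomial CDF; (j, k) are widened until the
        coverage reaches the requested confidence."""
        x = np.sort(np.asarray(sample, dtype=float))
        n = len(x)
        j = k = max(int(p * n), 1)      # 1-based indices, start near q_p
        while binom.cdf(k - 1, n, p) - binom.cdf(j - 1, n, p) < conf:
            if j > 1:
                j -= 1
            if k < n:
                k += 1
            if j == 1 and k == n:       # guard: cannot widen further
                break
        return x[j - 1], x[k - 1]

    rng = np.random.default_rng(1)
    print(quantile_ci_order_stats(rng.lognormal(size=500), p=0.95))
    ```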

  5. Using Confidence Intervals and Recurrence Intervals to Determine Precipitation Delivery Mechanisms Responsible for Mass Wasting Events.

    NASA Astrophysics Data System (ADS)

    Ulizio, T. P.; Bilbrey, C.; Stoyanoff, N.; Dixon, J. L.

    2015-12-01

    Mass wasting events are geologic hazards that impact human life and property across a variety of landscapes. These movements can be triggered by tectonic activity, anomalous precipitation events, or both, acting to decrease the factor of safety ratio on a hillslope to the point of failure. There exists an active hazard landscape in the West Boulder River drainage of Park Co., MT in which the mechanisms of slope failure are unknown. The region has not seen significant tectonic activity within the last decade, leaving anomalous precipitation events as the likely trigger for slope failures in the landscape. Precipitation can be delivered to a landscape via rainfall or snow; the aim of this study was to determine the precipitation delivery mechanism most likely responsible for movements in the West Boulder drainage following the Jungle Wildfire of 2006. Data were compiled from four SNOTEL sites in the surrounding area, spanning 33 years, focusing on, but not limited to, maximum snow water equivalent (SWE) values in a water year, median SWE values on the date on which maximum SWE was recorded in a water year, and the total precipitation accumulated in a water year. Means were computed and 99% confidence intervals were constructed around these means. Recurrence intervals and exceedance probabilities were computed for maximum SWE values and total precipitation accumulated in a water year to determine water years with anomalous precipitation. It was determined that the 2010-2011 water year received an anomalously high amount of SWE, and snow melt in the spring of this water year likely triggered recent mass wasting movements. This is further supported by Google Earth imagery showing movements between 2009 and 2011. The return interval for the maximum SWE value in 2010-11 at the Placer Basin SNOTEL site was 34 years, while return intervals at the Box Canyon and Monument Peak SNOTEL sites were 17.5 and 17 years respectively. Max SWE values lie outside the
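
    The recurrence intervals and exceedance probabilities described above are typically computed from ranked annual maxima. The sketch below assumes the common Weibull plotting position T = (n + 1)/m, which the abstract does not specify, together with a t-based confidence interval around the mean:

    ```python
    import numpy as np
    from scipy import stats

    def recurrence_intervals(annual_maxima):
        """Return period T = (n + 1)/m (Weibull plotting position),
        with m the descending rank of each annual maximum; the
        exceedance probability is 1/T."""
        x = np.asarray(annual_maxima, dtype=float)
        n = len(x)
        ranks = stats.rankdata(-x, method="ordinal")  # m = 1 for the largest
        return (n + 1) / ranks

    def mean_ci(x, conf=0.99):
        """Two-sided t-based CI around the mean of a record."""
        x = np.asarray(x, dtype=float)
        se = x.std(ddof=1) / np.sqrt(len(x))
        t = stats.t.ppf(0.5 + conf / 2, df=len(x) - 1)
        return x.mean() - t * se, x.mean() + t * se

    swe = [18.2, 22.5, 15.1, 30.4, 19.9, 24.0, 17.3]  # hypothetical max SWE
    print(recurrence_intervals(swe))
    print(mean_ci(swe))
    ```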

  6. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile intervals, and…

  7. Determination of confidence intervals in non-normal data: application of the bootstrap to cocaine concentration in femoral blood.

    PubMed

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2015-03-01

    Calculating the confidence interval is a common procedure in data analysis and is readily obtained from normally distributed populations with the familiar x̄ ± t(s/√n) formula. However, when working with non-normally distributed data, determining the confidence interval is not as obvious. For this type of data, there are fewer references in the literature, and they are much less accessible. We describe, in simple language, the percentile and bias-corrected and accelerated variations of the bootstrap method to calculate confidence intervals. This method can be applied to a wide variety of parameters (mean, median, slope of a calibration curve, etc.) and is appropriate for normal and non-normal data sets. As a worked example, the confidence interval around the median concentration of cocaine in femoral blood is calculated using bootstrap techniques. The median of the non-toxic concentrations was 46.7 ng/mL with a 95% confidence interval of 23.9-85.8 ng/mL in the non-normally distributed set of 45 postmortem cases. This method should be used to lead to more statistically sound and accurate confidence intervals for non-normally distributed populations, such as reference values of therapeutic and toxic drug concentration, as well as situations of truncated concentration values near the limit of quantification or cutoff of a method.
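
    A percentile bootstrap of the kind described is short enough to show in full. This is a generic Python sketch using simulated (hypothetical) skewed data, not the study's femoral blood measurements:

    ```python
    import numpy as np

    def bootstrap_percentile_ci(data, stat=np.median, n_boot=10_000,
                                conf=0.95, seed=0):
        """Percentile bootstrap CI: resample with replacement, recompute
        the statistic, and take the empirical alpha/2 and 1-alpha/2
        quantiles of the bootstrap distribution."""
        rng = np.random.default_rng(seed)
        data = np.asarray(data, dtype=float)
        boots = np.array([stat(rng.choice(data, size=data.size, replace=True))
                          for _ in range(n_boot)])
        alpha = 1 - conf
        return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

    # hypothetical skewed concentrations (ng/mL), n = 45
    sample = np.random.default_rng(1).lognormal(mean=3.8, sigma=0.8, size=45)
    print(bootstrap_percentile_ci(sample))
    ```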

  8. Receiver operating characteristic analysis for intelligent medical systems--a new approach for finding confidence intervals.

    PubMed

    Tilbury, J B; Van Eetvelt, P W; Garibaldi, J M; Curnow, J S; Ifeachor, E C

    2000-07-01

    Intelligent systems are increasingly being deployed in medicine and healthcare, but there is a need for a robust and objective methodology for evaluating such systems. Potentially, receiver operating characteristic (ROC) analysis could form a basis for the objective evaluation of intelligent medical systems. However, it has several weaknesses when applied to the types of data used to evaluate intelligent medical systems. First, small data sets are often used, which are unsatisfactory with existing methods. Second, many existing ROC methods use parametric assumptions which may not always be valid for the test cases selected. Third, system evaluations are often more concerned with particular, clinically meaningful, points on the curve, rather than with global indexes such as the more commonly used area under the curve. A novel, robust and accurate method is proposed, derived from first principles, which calculates the probability density function (pdf) for each point on a ROC curve for any given sample size. Confidence intervals are produced as contours on the pdf. The theoretical work has been validated by Monte Carlo simulations. It has also been applied to two real-world examples of ROC analysis, taken from the literature (classification of mammograms and differential diagnosis of pancreatic diseases), to investigate the confidence surfaces produced for real cases, and to illustrate how analysis of system performance can be enhanced. We illustrate the impact of sample size on system performance from analysis of ROC pdfs and 95% confidence boundaries. This work establishes an important new method for generating pdfs, and provides an accurate and robust method of producing confidence intervals for ROC curves for the small sample sizes typical of intelligent medical systems. It is conjectured that, potentially, the method could be extended to determine risks associated with the deployment of intelligent medical systems in clinical practice.
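
    The paper derives joint pdf contours for each ROC point; a much simpler (and more conservative) per-point construction, shown below for illustration only, combines independent exact binomial intervals for sensitivity and false positive rate into a rectangular confidence region:

    ```python
    from scipy.stats import beta

    def exact_ci(k, n, alpha=0.05):
        """Clopper-Pearson exact CI for a binomial proportion."""
        lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lo, hi

    def roc_point_box(tp, fn, tn, fp, alpha=0.05):
        """Rectangular confidence region around one ROC operating point,
        from independent exact CIs for sensitivity and 1 - specificity
        (a cruder construction than the paper's joint pdf contours)."""
        sens_ci = exact_ci(tp, tp + fn, alpha)
        fpr_ci = exact_ci(fp, fp + tn, alpha)
        return fpr_ci, sens_ci

    print(roc_point_box(tp=42, fn=8, tn=45, fp=5))  # hypothetical counts
    ```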

  9. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  10. Safety evaluation and confidence intervals when the number of observed events is small or zero.

    PubMed

    Jovanovic, B D; Zalenski, R J

    1997-09-01

    A common objective in many clinical studies is to determine the safety of a diagnostic test or therapeutic intervention. In these evaluations, serious adverse effects are either rare or not encountered. In this setting, the estimation of the confidence interval (CI) for the unknown proportion of adverse events has special importance. When no adverse events are encountered, commonly used approximate methods for calculating CIs cannot be applied, and such information is not commonly reported. Furthermore, when only a few adverse events are encountered, the approximate methods for calculation of CIs can be applied, but are neither appropriate nor accurate. In both situations, CIs should be computed with the use of the exact binomial distribution. We discuss the need for such estimation and provide correct methods and rules of thumb for quick computations of accurate approximations of the 95% and 99.9% CIs when the observed number of adverse events is zero. PMID:9287891
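
    When zero events are observed in n trials, the exact binomial upper bound has a closed form, and the familiar "rule of three" is its quick approximation; the paper's own rules of thumb are not quoted here. A minimal sketch:

    ```python
    import math

    def zero_event_upper_bound(n, conf=0.95):
        """Exact one-sided upper confidence bound for a proportion when
        0 events are observed in n trials: 1 - (1 - conf)**(1/n)."""
        return 1 - (1 - conf) ** (1 / n)

    n = 300
    print(zero_event_upper_bound(n))          # exact 95% bound
    print(3 / n)                              # 'rule of three' approximation
    print(zero_event_upper_bound(n, 0.999))   # exact 99.9% bound
    print(-math.log(0.001) / n)               # approx. -ln(alpha)/n = 6.9/n
    ```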

  11. Evaluating Independent Proportions for Statistical Difference, Equivalence, Indeterminacy, and Trivial Difference Using Inferential Confidence Intervals

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2009-01-01

    Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…

  12. What Confidence Intervals "Really" Do and Why They Are So Important for Middle Grades Educational Research

    ERIC Educational Resources Information Center

    Skidmore, Susan Troncoso

    2009-01-01

    Recommendations made by major educational and psychological organizations (American Educational Research Association, 2006; American Psychological Association, 2001) call for researchers to regularly report confidence intervals. The purpose of the present paper is to provide support for the use of confidence intervals. To contextualize this…

  13. Computation of Confidence Intervals for Growth Performance in Determination of Safe Harbor Eligibility

    ERIC Educational Resources Information Center

    Mulvenon, Sean W.; Stegman, Charles E.

    2006-01-01

    As part of No Child Left Behind (NCLB) legislation, many states are using confidence intervals to determine a range of scores for evaluating a school system. More specifically, the states are employing confidence intervals to help minimize measurement error in determining a school system's performance. The methodology and techniques employed in…

  14. A Comparison of Methods for Estimating Confidence Intervals for Omega-Squared Effect Size

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2012-01-01

    Effect size use has been increasing in the past decade in many research areas. Confidence intervals associated with effect sizes are encouraged to be reported. Prior work has investigated the performance of confidence interval estimation with Cohen's d. This study extends this line of work to the analysis of variance case with more than two…

  15. "Confidence Intervals for Gamma-family Measures of Ordinal Association": Correction

    ERIC Educational Resources Information Center

    Psychological Methods, 2008

    2008-01-01

    Reports an error in "Confidence intervals for gamma-family measures of ordinal association" by Carol M. Woods (Psychological Methods, 2007[Jun], Vol 12[2], 185-204). The note corrects simulation results presented in the article concerning the performance of confidence intervals (CIs) for Spearman's r-sub(s). An error in the author's C++ code…

  16. Using Screencast Videos to Enhance Undergraduate Students' Statistical Reasoning about Confidence Intervals

    ERIC Educational Resources Information Center

    Strazzeri, Kenneth Charles

    2013-01-01

    The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students'…

  17. Confidence Intervals for the Mean: To Bootstrap or Not to Bootstrap

    ERIC Educational Resources Information Center

    Calzada, Maria E.; Gardner, Holly

    2011-01-01

    The results of a simulation conducted by a research team involving undergraduate and high school students indicate that when data is symmetric the student's "t" confidence interval for a mean is superior to the studied non-parametric bootstrap confidence intervals. When data is skewed and for sample sizes n greater than or equal to 10, the results…

  18. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number

    PubMed Central

    Fragkos, Konstantinos C.; Tsagris, Michail; Frangos, Christos C.

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470
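
    Rosenthal's fail-safe number itself, the point estimate whose confidence intervals the paper develops, has a simple closed form, N_fs = (Σz)²/z_α² − k. A sketch with hypothetical study z values:

    ```python
    import numpy as np

    def fail_safe_n(z_values, z_alpha=1.645):
        """Rosenthal's fail-safe number: how many unpublished null studies
        would be needed to raise the combined one-tailed p above alpha
        (z_alpha = 1.645 corresponds to alpha = .05, one-tailed)."""
        z = np.asarray(z_values, dtype=float)
        return (z.sum() ** 2) / z_alpha ** 2 - len(z)

    # hypothetical z statistics from k = 6 studies
    print(fail_safe_n([2.1, 1.8, 2.5, 1.4, 2.9, 2.2]))
    ```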

  19. Multiplicative scale uncertainties in the unified approach for constructing confidence intervals

    SciTech Connect

    Smith, Elton

    2009-01-01

    We have investigated how uncertainties in the estimation of the detection efficiency affect the 90% confidence intervals in the unified approach for constructing confidence intervals. The study has been conducted for experiments where the number of detected events is large and can be described by a Gaussian probability density function. We also assume the detection efficiency has a Gaussian probability density and study the range of the relative uncertainties σ_ε between 0 and 30%. We find that the confidence intervals provide proper coverage and increase smoothly and continuously from the intervals that ignore scale uncertainties, with a quadratic dependence on σ_ε.

  20. Neutron multiplicity counting: Confidence intervals for reconstruction parameters

    DOE PAGES

    Verbeke, Jerome M.

    2016-03-09

    From nuclear materials accountability to homeland security, the need for improved nuclear material detection, assay, and authentication has grown over the past decades. Starting in the 1940s, neutron multiplicity counting techniques have enabled quantitative evaluation of masses and multiplications of fissile materials. In this paper, we propose a new method to compute uncertainties on these parameters using a model-based sequential Bayesian processor, resulting in credible regions in the fissile material mass and multiplication space. These uncertainties will enable us to evaluate quantitatively proposed improvements to the theoretical fission chain model. Additionally, because the processor can calculate uncertainties in real time, it is a useful tool in applications such as portal monitoring: monitoring can stop as soon as a preset confidence of non-threat is reached.

  1. Estimation and confidence intervals for empirical mixing distributions

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1995-01-01

    Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.

  2. Analytic Monte Carlo score distributions for future statistical confidence interval studies

    SciTech Connect

    Booth, T.E.

    1992-10-01

    The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate large score sampling from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. The analytic score distribution for geometry splitting/Russian roulette applied to a simple Monte Carlo problem and the analytic score distribution for the exponential transform applied to the same Monte Carlo problem are provided in this paper.

  3. Improved confidence intervals for the linkage disequilibrium method for estimating effective population size.

    PubMed

    Jones, A T; Ovenden, J R; Wang, Y-G

    2016-10-01

    The linkage disequilibrium method is currently the most widely used single sample estimator of genetic effective population size. The commonly used software packages come with two options, referred to as the parametric and jackknife methods, for computing the associated confidence intervals. However, little is known on the coverage performance of these methods, and the published data suggest there may be some room for improvement. Here, we propose two new methods for generating confidence intervals and compare them with the two in current use through a simulation study. The new confidence interval methods tend to be conservative but outperform the existing methods for generating confidence intervals under certain circumstances, such as those that may be encountered when making estimates using large numbers of single-nucleotide polymorphisms.

  4. Confidence Intervals for True Scores under an Answer-until-Correct Scoring Procedure.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1987-01-01

    Four procedures are discussed for obtaining a confidence interval when answer-until-correct scoring is used in multiple choice tests. Simulated data show that the choice of procedure depends upon sample size. (GDC)

  5. Approximate Confidence Interval for Difference of Fit in Structural Equation Models.

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2001-01-01

    Discusses a method, based on bootstrap methodology, for obtaining an approximate confidence interval for the difference in root mean square error of approximation of two structural equation models. Illustrates the method using a numerical example. (SLD)

  6. Bayesian methods of confidence interval construction for the population attributable risk from cross-sectional studies.

    PubMed

    Pirikahu, Sarah; Jones, Geoffrey; Hazelton, Martin L; Heuer, Cord

    2016-08-15

    Population attributable risk measures the public health impact of the removal of a risk factor. To apply this concept to epidemiological data, the calculation of a confidence interval to quantify the uncertainty in the estimate is desirable. However, perhaps because of the confusion surrounding the attributable risk measures, there is no standard confidence interval or variance formula given in the literature. In this paper, we implement a fully Bayesian approach to confidence interval construction of the population attributable risk for cross-sectional studies. We show that, in comparison with a number of standard Frequentist methods for constructing confidence intervals (i.e. delta, jackknife and bootstrap methods), the Bayesian approach is superior in terms of percent coverage in all except a few cases. This paper also explores the effect of the chosen prior on the coverage and provides alternatives for particular situations. PMID:26799685
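
    One of the frequentist comparators mentioned above, a bootstrap confidence interval for the population attributable risk from a cross-sectional 2x2 table, is easy to sketch. The counts below are hypothetical, and the paper's Bayesian procedure is not reproduced here:

    ```python
    import numpy as np

    def par(counts):
        """Population attributable risk from a 2x2 cross-sectional table:
        counts = [a, b, c, d] = [exposed-diseased, exposed-healthy,
                                 unexposed-diseased, unexposed-healthy];
        PAR = P(D) - P(D | unexposed)."""
        a, b, c, d = counts
        n = a + b + c + d
        return (a + c) / n - c / (c + d)

    def par_bootstrap_ci(counts, n_boot=10_000, conf=0.95, seed=0):
        """Multinomial bootstrap percentile CI for the PAR."""
        rng = np.random.default_rng(seed)
        counts = np.asarray(counts)
        n = counts.sum()
        boots = [par(rng.multinomial(n, counts / n)) for _ in range(n_boot)]
        alpha = 1 - conf
        return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

    print(par_bootstrap_ci([30, 70, 20, 180]))
    ```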

  7. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  8. Bias-corrected confidence intervals for the concentration parameter in a dilution assay.

    PubMed

    Wang, J; Basu, S

    1999-03-01

    Interval estimates of the concentration of target entities from a serial dilution assay are usually based on the maximum likelihood estimator. The distribution of the maximum likelihood estimator is skewed to the right and is positively biased. This bias results in interval estimates that either provide inadequate coverage relative to the nominal level or yield excessively long intervals. Confidence intervals based on both log transformation and bias reduction are proposed and are shown through simulations to provide appropriate coverage with shorter widths than the commonly used intervals in a variety of designs. An application to feline AIDS research, which motivated this work, is also presented.

  9. Confidence intervals for a random-effects meta-analysis based on Bartlett-type corrections.

    PubMed

    Noma, Hisashi

    2011-12-10

    In medical meta-analysis, the DerSimonian-Laird confidence interval for the average treatment effect has been widely adopted in practice. However, it is well known that its coverage probability (the probability that the interval actually includes the true value) can be substantially below the target level. One particular reason is that the validity of the confidence interval depends on the assumption that the number of synthesized studies is sufficiently large. In typical medical meta-analyses, the number of studies is fewer than 20. In this article, we developed three confidence intervals for improving coverage properties, based on (i) the Bartlett corrected likelihood ratio statistic, (ii) the efficient score statistic, and (iii) the Bartlett-type adjusted efficient score statistic. The Bartlett and Bartlett-type corrections improve the large sample approximations for the likelihood ratio and efficient score statistics. Through numerical evaluations by simulations, these confidence intervals demonstrated better coverage properties than the existing methods. In particular, with a moderate number of synthesized studies, the Bartlett and Bartlett-type corrected confidence intervals performed well. An application to a meta-analysis of the treatment for myocardial infarction with intravenous magnesium is presented.
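
    For context, the DerSimonian-Laird interval whose undercoverage motivates the paper can be computed as follows. This is the standard method-of-moments construction, not the Bartlett-type corrections proposed in the article:

    ```python
    import numpy as np

    def dersimonian_laird_ci(effects, variances, z=1.96):
        """DerSimonian-Laird random-effects meta-analysis: method-of-moments
        tau^2, inverse-variance weights, and the usual Wald-type CI."""
        y = np.asarray(effects, dtype=float)
        v = np.asarray(variances, dtype=float)
        w = 1 / v                                    # fixed-effect weights
        mu_fe = (w * y).sum() / w.sum()
        q = (w * (y - mu_fe) ** 2).sum()             # Cochran's Q
        k = len(y)
        tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))
        w_re = 1 / (v + tau2)                        # random-effects weights
        mu = (w_re * y).sum() / w_re.sum()
        se = np.sqrt(1 / w_re.sum())
        return mu - z * se, mu + z * se

    # hypothetical log odds ratios and within-study variances, k = 7 trials
    print(dersimonian_laird_ci([-0.3, -0.1, -0.5, 0.1, -0.4, -0.2, -0.6],
                               [0.04, 0.06, 0.09, 0.05, 0.08, 0.03, 0.10]))
    ```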

  10. Confidence Intervals For Maximized Alpha Coefficients: An Evaluation of Joe and Woodward's Procedures and an Alternative Method.

    ERIC Educational Resources Information Center

    Hakstian, A. Ralph; And Others

    1980-01-01

    The procedures yielding confidence intervals for maximized alpha coefficients of Joe and Woodward are reviewed. Confidence interval procedures of Whalen and Masson are next reviewed. Results are then presented of a Monte Carlo investigation of the procedures. (Author/CTM)

  11. Bootstrap confidence intervals for the mean correlation corrected for Case IV range restriction: a more adequate procedure for meta-analysis.

    PubMed

    Li, Johnson Ching-Hong; Cui, Ying; Chan, Wai

    2013-01-01

    In this study, we proposed to use the nonparametric bootstrap procedure to construct the confidence interval for the mean correlation corrected for Case IV range restriction in meta-analysis (i.e., r(c4); Hunter, Schmidt, & Le, 2006). A comprehensive Monte Carlo study was conducted to evaluate the accuracy of the parametric confidence interval and 3 nonparametric bootstrap confidence intervals for r(c4). Of the 4 intervals, our results showed that the bootstrap bias-corrected and accelerated percentile interval (BCaI) yielded the most accurate results across different data situations. In addition, the mean-corrected correlation r(c4) was found to be more accurate than the uncorrected estimate. Implications of the mean-corrected correlation r(c4) and BCaI in organizational studies are also discussed.
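
    A BCa interval of the kind found most accurate here can be obtained with SciPy's bootstrap routine (available in SciPy >= 1.7). The sketch below applies it to a plain Pearson correlation rather than the range-restriction-corrected estimator studied in the paper:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(size=80)
    y = 0.5 * x + rng.normal(scale=0.8, size=80)  # hypothetical paired data

    def corr(x, y):
        return stats.pearsonr(x, y)[0]

    # paired resampling of (x, y) rows; method="BCa" gives the
    # bias-corrected and accelerated percentile interval
    res = stats.bootstrap((x, y), corr, paired=True, vectorized=False,
                          n_resamples=2000, confidence_level=0.95,
                          method="BCa", random_state=rng)
    print(res.confidence_interval)
    ```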

  12. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there has been little study of the confidence intervals that indicate the prediction accuracy of the distribution's quantile estimates. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data; the estimated quantiles differ little between ML and PWM but distinctly for MOM.
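
    The Monte Carlo verification described, repeatedly drawing samples and counting how often the interval covers the true value, follows a generic pattern. The sketch below illustrates it with a simple t-interval for a mean under a skewed parent, not the GL quantile intervals of the paper:

    ```python
    import numpy as np
    from scipy import stats

    def coverage(ci_func, sampler, true_value, n=50, reps=5000, seed=0):
        """Monte Carlo estimate of a CI method's coverage probability:
        draw repeated samples, build the interval, and count how often
        it contains the known true value."""
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(reps):
            lo, hi = ci_func(sampler(rng, n))
            hits += lo <= true_value <= hi
        return hits / reps

    def t_interval(x, conf=0.95):
        m, se = x.mean(), x.std(ddof=1) / np.sqrt(len(x))
        t = stats.t.ppf(0.5 + conf / 2, df=len(x) - 1)
        return m - t * se, m + t * se

    # coverage for the mean of a lognormal parent; true mean is e^(1/2)
    print(coverage(t_interval, lambda rng, n: rng.lognormal(size=n),
                   true_value=np.exp(0.5)))
    ```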

  13. Confidence intervals for the selected population in randomized trials that adapt the population enrolled

    PubMed Central

    Rosenblum, Michael

    2014-01-01

    It is a challenge to design randomized trials when it is suspected that a treatment may benefit only certain subsets of the target population. In such situations, trial designs have been proposed that modify the population enrolled based on an interim analysis, in a preplanned manner. For example, if there is early evidence during the trial that the treatment only benefits a certain subset of the population, enrollment may then be restricted to this subset. At the end of such a trial, it is desirable to draw inferences about the selected population. We focus on constructing confidence intervals for the average treatment effect in the selected population. Confidence interval methods that fail to account for the adaptive nature of the design may fail to have the desired coverage probability. We provide a new procedure for constructing confidence intervals having at least 95% coverage probability, uniformly over a large class Q of possible data generating distributions. Our method involves computing the minimum factor c by which a standard confidence interval must be expanded in order to have, asymptotically, at least 95% coverage probability, uniformly over Q. Computing the expansion factor c is not trivial, since it is not a priori clear, for a given decision rule, which data generating distribution leads to the worst-case coverage probability. We give an algorithm that computes c, and prove an optimality property for the resulting confidence interval procedure. PMID:23553577

  14. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    PubMed

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥ 95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥ 98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥ 95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥ 95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management. PMID:24465792

  15. Calculation of nonlinear confidence and prediction intervals for ground-water flow models.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.

  16. An Investigation of Quantile Function Estimators Relative to Quantile Confidence Interval Coverage

    PubMed Central

    Wei, Lai; Wang, Dongliang; Hutson, Alan D.

    2016-01-01

    In this article, we investigate the limitations of traditional quantile function estimators and introduce a new class of quantile function estimators, namely, the semi-parametric tail-extrapolated quantile estimators, which has excellent performance for estimating the extreme tails with finite sample sizes. The smoothed bootstrap and direct density estimation via the characteristic function methods are developed for the estimation of confidence intervals. Through a comprehensive simulation study to compare the confidence interval estimations of various quantile estimators, we discuss the preferred quantile estimator in conjunction with the confidence interval estimation method to use under different circumstances. Data examples are given to illustrate the superiority of the semi-parametric tail-extrapolated quantile estimators. The new class of quantile estimators is obtained by slight modification of traditional quantile estimators, and therefore, should be specifically appealing to researchers in estimating the extreme tails. PMID:26924881

  17. Effective confidence interval estimation of fault-detection process of software reliability growth models

    NASA Astrophysics Data System (ADS)

    Fang, Chih-Chiang; Yeh, Chun-Wu

    2016-09-01

    The quantitative evaluation of a software reliability growth model is frequently accompanied by a confidence interval for its fault-detection process. This provides helpful information to software developers and testers when undertaking software development and software quality control. However, previous studies have not been transparent about how the variance of the fault-detection process is estimated, and this affects the derivation of the confidence interval for the mean value function, which the current study addresses. Software engineers in such a case cannot evaluate the potential hazard based on the stochasticity of the mean value function, and this might reduce the practicality of the estimation. Hence, stochastic differential equations are utilised for confidence interval estimation of the software fault-detection process. The proposed model is estimated and validated using real data-sets to show its flexibility.

  18. Estimation and interpretation of k_eff confidence intervals in MCNP

    SciTech Connect

    Urbatsch, T.J.; Forster, R.A.; Prael, R.E.; Beckman, R.J.

    1995-11-01

    MCNP's criticality methodology and some basic statistics are reviewed. Confidence intervals are discussed, as well as how to build them and their importance in the presentation of a Monte Carlo result. The combination of MCNP's three k_eff estimators is shown, theoretically and empirically, by statistical studies and examples, to be the best k_eff estimator. The method of combining estimators is based on a solid theoretical foundation, namely, the Gauss-Markov Theorem in regard to the least squares method. The confidence intervals of the combined estimator are also shown to have correct coverage rates for the examples considered.

  19. A statistical method for assessing peptide identification confidence in accurate mass and time tag proteomics.

    PubMed

    Stanley, Jeffrey R; Adkins, Joshua N; Slysz, Gordon W; Monroe, Matthew E; Purvine, Samuel O; Karpievitch, Yuliya V; Anderson, Gordon A; Smith, Richard D; Dabney, Alan R

    2011-08-15

    Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, because this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referenced as Statistical Tools for AMT Tag Confidence (STAC). STAC additionally provides a uniqueness probability (UP) to help distinguish between multiple matches to an AMT tag and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download, as both a command line and a Windows graphical application.

  1. Characterizing the Mathematics Anxiety Literature Using Confidence Intervals as a Literature Review Mechanism

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Yetkiner, Z. Ebrar; Thompson, Bruce

    2010-01-01

    The authors report the contextualization of effect sizes within mathematics anxiety research, and more specifically within research using the Mathematics Anxiety Rating Scale (MARS) and the MARS for Adolescents (MARS-A). The effect sizes from 45 studies were characterized by graphing confidence intervals (CIs) across studies involving (a) adults…

  2. Making Subjective Judgments in Quantitative Studies: The Importance of Using Effect Sizes and Confidence Intervals

    ERIC Educational Resources Information Center

    Callahan, Jamie L.; Reio, Thomas G., Jr.

    2006-01-01

    At least twenty-three journals in the social sciences purportedly require authors to report effect sizes and, to a much lesser extent, confidence intervals; yet these requirements are rarely clear in the information for contributors. This article reviews some of the literature criticizing the exclusive use of null hypothesis significance testing…

  3. Spacecraft utility and the development of confidence intervals for criticality of anomalies

    NASA Technical Reports Server (NTRS)

    Williams, R. E.

    1980-01-01

    The concept of spacecraft utility, a measure of its performance in orbit, is discussed and its formulation is described. Performance is defined in terms of the malfunctions that occur and the criticality to the mission of these malfunctions. Different approaches to establishing average or expected values of criticality are discussed and confidence intervals are developed for parameters used in the computation of utility.

  4. Sample Size for Confidence Interval of Covariate-Adjusted Mean Difference

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2010-01-01

    This article provides a way to determine adequate sample size for the confidence interval of covariate-adjusted mean difference in randomized experiments. The standard error of adjusted mean difference depends on covariate variance and balance, which are two unknown quantities at the stage of planning sample size. If covariate observations are…

  5. Sample Size Planning for the Standardized Mean Difference: Accuracy in Parameter Estimation via Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken; Rausch, Joseph R.

    2006-01-01

    Methods for planning sample size (SS) for the standardized mean difference so that a narrow confidence interval (CI) can be obtained via the accuracy in parameter estimation (AIPE) approach are developed. One method plans SS so that the expected width of the CI is sufficiently narrow. A modification adjusts the SS so that the obtained CI is no…

  6. Applying a Score Confidence Interval to Aiken's Item Content-Relevance Index

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Giacobbi, Peter R., Jr

    2004-01-01

    Item content-relevance is an important consideration for researchers when developing scales used to measure psychological constructs. Aiken (1980) proposed a statistic, "V," that can be used to summarize item content-relevance ratings obtained from a panel of expert judges. This article proposes the application of the Score confidence interval to…

  7. Multivariate Effect Size Estimation: Confidence Interval Construction via Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2010-01-01

    A latent variable modeling method is outlined for constructing a confidence interval (CI) of a popular multivariate effect size measure. The procedure uses the conventional multivariate analysis of variance (MANOVA) setup and is applicable with large samples. The approach provides a population range of plausible values for the proportion of…

  8. Assessing Conformance with Benford's Law: Goodness-Of-Fit Tests and Simultaneous Confidence Intervals.

    PubMed

    Lesperance, M; Reed, W J; Stephens, M A; Tsao, C; Wilton, B

    2016-01-01

    Benford's Law is a probability distribution for the first significant digits of numbers, for example, the first significant digits of the numbers 871 and 0.22 are 8 and 2 respectively. The law is particularly remarkable because many types of data are considered to be consistent with Benford's Law and scientists and investigators have applied it in diverse areas, for example, diagnostic tests for mathematical models in Biology, Genomics, Neuroscience, image analysis and fraud detection. In this article we present and compare statistically sound methods for assessing conformance of data with Benford's Law, including discrete versions of Cramér-von Mises (CvM) statistical tests and simultaneous confidence intervals. We demonstrate that the common use of many binomial confidence intervals leads to rejection of Benford too often for truly Benford data. Based on our investigation, we recommend that the CvM statistic U_d², Pearson's chi-square statistic and 100(1 - α)% Goodman's simultaneous confidence intervals be computed when assessing conformance with Benford's Law. Visual inspection of the data with simultaneous confidence intervals is useful for understanding departures from Benford and the influence of sample size.
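
    The recommended Pearson chi-square test against Benford's first-digit law, P(d) = log10(1 + 1/d), is straightforward to implement. The sketch below is a generic version, not the authors' code, and omits the CvM statistic and Goodman simultaneous intervals:

    ```python
    import numpy as np
    from scipy import stats

    def first_digits(values):
        """First significant digit of each (nonzero) value."""
        v = np.abs(np.asarray(values, dtype=float))
        v = v[v > 0]
        return (v / 10 ** np.floor(np.log10(v))).astype(int)

    def benford_chisq(values):
        """Pearson chi-square goodness-of-fit test of first digits
        against Benford's law, P(d) = log10(1 + 1/d) for d = 1..9."""
        digits = first_digits(values)
        observed = np.bincount(digits, minlength=10)[1:10]
        expected = len(digits) * np.log10(1 + 1 / np.arange(1, 10))
        return stats.chisquare(observed, expected)

    # lognormal data with large sigma is approximately Benford
    rng = np.random.default_rng(0)
    print(benford_chisq(rng.lognormal(mean=0, sigma=2, size=2000)))
    ```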

  9. SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies

    ERIC Educational Resources Information Center

    Yurdugul, Halil

    2009-01-01

    This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of its confidence intervals. SIMREL runs on two alternatives. In the first one, if SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…

  10. Confidence Intervals: Evaluating and Facilitating Their Use in Health Education Research

    ERIC Educational Resources Information Center

    Zhang, Jing; Hanik, Bruce W.; Chaney, Beth H.

    2008-01-01

    Health education researchers have called for research articles in health education to adhere to the recommendations of American Psychological Association and the American Medical Association regarding the reporting and use of effect sizes and confidence intervals (CIs). This article expands on the recommendations by (a) providing an overview of…

  11. Point Estimates and Confidence Intervals for Variable Importance in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Thomas, D. Roland; Zhu, PengCheng; Decady, Yves J.

    2007-01-01

    The topic of variable importance in linear regression is reviewed, and a measure first justified theoretically by Pratt (1987) is examined in detail. Asymptotic variance estimates are used to construct individual and simultaneous confidence intervals for these importance measures. A simulation study of their coverage properties is reported, and an…

  12. Confidence intervals for confirmatory adaptive two-stage designs with treatment selection.

    PubMed

    Bebu, Ionut; Dragalin, Vladimir; Luta, George

    2013-05-01

    The construction of adequate confidence intervals for adaptive two-stage designs remains an area of ongoing research. We propose a conditional likelihood-based approach to construct a Wald confidence interval and two confidence intervals based on inverting the likelihood ratio test, one of them using first-order inference methods and the second one using higher order inference methods. The coverage probabilities of these confidence intervals, and also the average bias and mean square error of the corresponding point estimates, compare favorably with other available techniques. A small simulation study is used to evaluate the performance of the new methods. We investigate other extensions of practical interest for normal endpoints and illustrate them using real data, including the selection of more than one treatment for the second stage, selection rules based on both efficacy and safety endpoints, and the inclusion of a control/placebo arm. The new method also allows adjustment for covariates, and has been extended to deal with binomial data and other distributions from the exponential family. Although conceptually simple, the new methods have a much wider scope than the methods currently available.

  13. Conceptual and Practical Implications for Rehabilitation Research: Effect Size Estimates, Confidence Intervals, and Power

    ERIC Educational Resources Information Center

    Ferrin, James M.; Bishop, Malachy; Tansey, Timothy N.; Frain, Michael; Swett, Elizabeth A.; Lane, Frank J.

    2007-01-01

    For a number of conceptually and practically important reasons, reporting of effect size estimates, confidence intervals, and power in parameter estimation is increasingly being recognized as the preferred approach in social science research. Unfortunately, this practice has not yet been widely adopted in the rehabilitation or general counseling…

  14. Comparison of Approaches to Constructing Confidence Intervals for Mediating Effects Using Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W. L.

    2007-01-01

    Mediators are variables that explain the association between an independent variable and a dependent variable. Structural equation modeling (SEM) is widely used to test models with mediating effects. This article illustrates how to construct confidence intervals (CIs) of the mediating effects for a variety of models in SEM. Specifically, mediating…

  15. Confidence Intervals for an Effect Size Measure in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2007-01-01

    The increase in the squared multiple correlation coefficient (ΔR²) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. The coverage probability that an asymptotic and percentile bootstrap confidence interval includes Δρ² was investigated. As expected,…

  16. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    ERIC Educational Resources Information Center

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter θ is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  17. A Note on Confidence Intervals for Two-Group Latent Mean Effect Size Measures

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Fan, Weihua; Hancock, Gregory R.

    2009-01-01

    This note suggests delta method implementations for deriving confidence intervals for a latent mean effect size measure for the case of 2 independent populations. A hypothetical kindergarten reading example using these implementations is provided, as is supporting LISREL syntax. (Contains 1 table.)

  18. Note on a Confidence Interval for the Squared Semipartial Correlation Coefficient

    ERIC Educational Resources Information Center

    Algina, James; Keselman, Harvey J.; Penfield, Randall J.

    2008-01-01

    A squared semipartial correlation coefficient (ΔR²) is the increase in the squared multiple correlation coefficient that occurs when a predictor is added to a multiple regression model. Prior research has shown that coverage probability for a confidence interval constructed by using a modified percentile bootstrap method with…
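
    The quantity ΔR² and a plain percentile bootstrap interval for it (the paper studies a modified percentile bootstrap) can be sketched as follows, with simulated data standing in for a real regression:

    ```python
    import numpy as np

    def r_squared(X, y):
        """R^2 from an OLS fit with intercept."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

    def delta_r2_ci(X, y, j, n_boot=2000, conf=0.95, seed=0):
        """Percentile bootstrap CI for the squared semipartial correlation:
        the gain in R^2 when predictor column j is added to the model."""
        rng = np.random.default_rng(seed)
        n = len(y)
        X_red = np.delete(X, j, axis=1)
        stat = lambda idx: (r_squared(X[idx], y[idx])
                            - r_squared(X_red[idx], y[idx]))
        boots = [stat(rng.integers(0, n, n)) for _ in range(n_boot)]
        alpha = 1 - conf
        return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 3))
    y = X @ np.array([0.5, 0.3, 0.0]) + rng.normal(size=120)
    print(delta_r2_ci(X, y, j=1))
    ```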

  19. Approximate Confidence Intervals for Estimates of Redundancy between Sets of Variables.

    ERIC Educational Resources Information Center

    Lambert, Zarrel V.; And Others

    1989-01-01

    Bootstrap methodology is presented that yields approximations of the sampling variation of redundancy estimates while assuming little a priori knowledge about the distributions of these statistics. Results of numerical demonstrations suggest that bootstrap confidence intervals may offer substantial assistance in interpreting the results of…

  1. Finite sample pointwise confidence intervals for a survival distribution with right-censored data.

    PubMed

    Fay, Michael P; Brittain, Erica H

    2016-07-20

    We review and develop pointwise confidence intervals for a survival distribution with right-censored data for small samples, assuming only independence of censoring and survival. When there is no censoring, at each fixed time point, the problem reduces to making inferences about a binomial parameter. In this case, the recently developed beta product confidence procedure (BPCP) gives the standard exact central binomial confidence intervals of Clopper and Pearson. Additionally, the BPCP has been shown to be exact (gives guaranteed coverage at the nominal level) for progressive type II censoring and has been shown by simulation to be exact for general independent right censoring. In this paper, we modify the BPCP to create a 'mid-p' version, which reduces to the mid-p confidence interval for a binomial parameter when there is no censoring. We perform extensive simulations on both the standard and mid-p BPCP using a method of moments implementation that enforces monotonicity over time. All simulated scenarios suggest that the standard BPCP is exact. The mid-p BPCP, like other mid-p confidence intervals, has simulated coverage closer to the nominal level but may not be exact for all survival times, especially in very low censoring scenarios. In contrast, the two asymptotically-based approximations have lower than nominal coverage in many scenarios. This poor coverage is due to the extreme inflation of the lower error rates, although the upper limits are very conservative. Both the standard and the mid-p BPCP methods are available in our bpcp R package. PMID:26891706

  2. Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.

    2008-01-01

    …an economical manner. Accurately determining the service life of an engine component, with its associated variability, has become increasingly difficult. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties, to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique, in combination with woven composite micromechanics, structural analysis and Fast Probability Integration (FPI) techniques, has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Since the measured data for the ceramic matrix composite properties are very limited, obtaining a probabilistic distribution with the corresponding parameters is difficult. In the case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually 90 and 95% confidence intervals are computed for material properties. Failure properties are then computed with the confidence bounds. Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength - stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.

  3. MorePower 6.0 for ANOVA with relational confidence intervals and Bayesian analysis.

    PubMed

    Campbell, Jamie I D; Thompson, Valerie A

    2012-12-01

    MorePower 6.0 is a flexible freeware statistical calculator that computes sample size, effect size, and power statistics for factorial ANOVA designs. It also calculates relational confidence intervals for ANOVA effects based on formulas from Jarmasz and Hollands (Canadian Journal of Experimental Psychology 63:124-138, 2009), as well as Bayesian posterior probabilities for the null and alternative hypotheses based on formulas in Masson (Behavior Research Methods 43:679-690, 2011). The program is unique in affording direct comparison of these three approaches to the interpretation of ANOVA tests. Its high numerical precision and ability to work with complex ANOVA designs could facilitate researchers' attention to issues of statistical power, Bayesian analysis, and the use of confidence intervals for data interpretation. MorePower 6.0 is available at https://wiki.usask.ca/pages/viewpageattachments.action?pageId=420413544 .
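
    For readers who want a rough point of comparison outside the freeware calculator, base R covers the simple one-way case; the sketch below uses stats::power.anova.test with hypothetical variance inputs (MorePower's factorial designs, relational CIs, and Bayesian posteriors go beyond this).

```r
# Base-R analogue of a simple power/sample-size calculation (one-way ANOVA).
# between.var and within.var are hypothetical illustrative values.
power.anova.test(groups = 4, between.var = 1, within.var = 9, power = 0.80)
# returns the per-group n required for 80% power at sig.level = 0.05
```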

  4. The naïve intuitive statistician: a naïve sampling model of intuitive confidence intervals.

    PubMed

    Juslin, Peter; Winman, Anders; Hansson, Patrik

    2007-07-01

    The perspective of the naïve intuitive statistician is outlined and applied to explain overconfidence when people produce intuitive confidence intervals and why this format leads to more overconfidence than other formally equivalent formats. The naïve sampling model implies that people accurately describe the sample information they have but are naïve in the sense that they uncritically take sample properties as estimates of population properties. A review demonstrates that the naïve sampling model accounts for the robust and important findings in previous research as well as provides novel predictions that are confirmed, including a way to minimize the overconfidence with interval production. The authors discuss the naïve sampling model as a representative of models inspired by the naïve intuitive statistician. PMID:17638502

  5. Confidence Interval Methods for Coefficient Alpha on the Basis of Discrete, Ordinal Response Items: Which One, If Any, Is the Best?

    ERIC Educational Resources Information Center

    Romano, Jeanine L.; Kromrey, Jeffrey D.; Owens, Corina M.; Scott, Heather M.

    2011-01-01

    In this study, the authors aimed to examine 8 of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions wherein the underlying item…

  6. On Statistical Methods for Common Mean and Reference Confidence Intervals in Interlaboratory Comparisons for Temperature

    NASA Astrophysics Data System (ADS)

    Witkovský, Viktor; Wimmer, Gejza; Ďuriš, Stanislav

    2015-08-01

    We consider the problem of constructing exact and/or approximate coverage intervals for the common mean of several independent distributions. In a metrological context, this problem is closely related to the evaluation of interlaboratory comparison experiments and, in particular, to the determination of the reference value (estimate) of a measurand and its uncertainty, or alternatively, to the determination of the coverage interval for a measurand at a given level of confidence, based on such comparison data. We present a brief overview of some specific statistical models, methods, and algorithms useful for determining the common mean and its uncertainty, or alternatively, the proper interval estimator. We illustrate their applicability by a simple simulation study and also by an example of interlaboratory comparisons for temperature. In particular, we consider methods based on (i) the heteroscedastic common mean fixed effect model, assuming negligible laboratory biases; (ii) the heteroscedastic common mean random effects model with a common (unknown) distribution of the laboratory biases; and (iii) the heteroscedastic common mean random effects model with possibly different (known) distributions of the laboratory biases. Finally, we consider a method, recently suggested by Singh et al., for determining the interval estimator for a common mean based on combining information from independent sources through confidence distributions.
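
    The natural starting point for model (i) is the inverse-variance weighted (Graybill-Deal-type) common mean; the R sketch below, with made-up laboratory values, shows the estimate and its naive standard uncertainty, which the exact and approximate intervals discussed in the paper refine.

```r
# Weighted common mean from laboratory means xi and standard uncertainties ui
# (sketch; assumes negligible laboratory biases, as in model (i)).
common_mean <- function(xi, ui) {
  wi <- 1 / ui^2
  est <- sum(wi * xi) / sum(wi)
  c(estimate = est, u = sqrt(1 / sum(wi)))  # naive uncertainty of the estimate
}
common_mean(xi = c(273.16, 273.15, 273.17), ui = c(0.010, 0.020, 0.015))
```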

  7. Estimation of confidence intervals of global horizontal irradiance obtained from a weather prediction model

    NASA Astrophysics Data System (ADS)

    Ohtake, Hideaki; Gari da Silva Fonseca, Joao, Jr.; Takashima, Takumi; Oozeki, Takashi; Yamada, Yoshinori

    2014-05-01

    Many photovoltaic (PV) systems have been installed in Japan since the introduction of the feed-in tariff. For the energy management of electric power systems that include many PV systems, forecasts of PV power production are a useful technology. Recently, numerical weather predictions have been applied to forecasting PV power production, but the forecasted values invariably contain errors specific to each modeling system, so the forecast data must be used with their errors taken into account. In this study, we attempted to estimate confidence intervals for hourly forecasts of global horizontal irradiance (GHI) obtained from a mesoscale model (MSM) developed by the Japan Meteorological Agency. In a recent study, we found that the GHI forecasts of the MSM have two systematic errors: first, the forecast values of the GHI depend on the clearness index, defined as the GHI divided by the extraterrestrial solar irradiance; second, the forecast errors vary seasonally, with the GHI forecasts overestimated in winter and underestimated in summer. Information on the errors of the hourly GHI forecasts, that is, confidence intervals for the forecasts, is of great significance to an electric company planning the energy management of a system that includes many PV systems. Confidence intervals of the GHI forecasts are required both for a pinpoint area and for a relatively large area controlling the power system. For the relatively large area, a spatial-smoothing method is applied to both the observed and forecasted GHI values. The spatial smoothing reduced the confidence intervals of the hourly GHI forecasts in an extreme event of the GHI forecast (a case of large forecast error) over the relatively large area served by the Tokyo electric company (by approximately 68% relative to a pinpoint forecast). For more credible estimation of the confidence

  8. Approximate Confidence Intervals for Standardized Effect Sizes in the Two-Independent and Two-Dependent Samples Design

    ERIC Educational Resources Information Center

    Viechtbauer, Wolfgang

    2007-01-01

    Standardized effect sizes and confidence intervals thereof are extremely useful devices for comparing results across different studies using scales with incommensurable units. However, exact confidence intervals for standardized effect sizes can usually be obtained only via iterative estimation procedures. The present article summarizes several…
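
    The iterative procedure the abstract alludes to can be sketched in a few lines of R for the two-independent-samples case: the exact CI for a standardized mean difference is obtained by inverting the noncentral t distribution (hypothetical code and t value, ours).

```r
# Exact CI for Cohen's d by pivoting the noncentral t CDF (sketch).
d_ci <- function(t_obs, n1, n2, conf = 0.95) {
  a  <- 1 - conf
  df <- n1 + n2 - 2
  k  <- sqrt(n1 * n2 / (n1 + n2))  # t = d * k; noncentrality = delta * k
  lo <- uniroot(function(ncp) pt(t_obs, df, ncp) - (1 - a / 2),
                c(t_obs - 10, t_obs + 10), extendInt = "yes")$root
  hi <- uniroot(function(ncp) pt(t_obs, df, ncp) - a / 2,
                c(t_obs - 10, t_obs + 10), extendInt = "yes")$root
  c(lower = lo / k, upper = hi / k)
}
d_ci(t_obs = 2.4, n1 = 30, n2 = 30)  # hypothetical t statistic
```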

  9. ScoreRel CI: An Excel Program for Computing Confidence Intervals for Commonly Used Score Reliability Coefficients

    ERIC Educational Resources Information Center

    Barnette, J. Jackson

    2005-01-01

    An Excel program developed to assist researchers in the determination and presentation of confidence intervals around commonly used score reliability coefficients is described. The software includes programs to determine confidence intervals for Cronbach's alpha, Pearson r-based coefficients such as those used in test-retest and alternate forms…

  10. Accuracy in Parameter Estimation for Targeted Effects in Structural Equation Modeling: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Lai, Keke; Kelley, Ken

    2011-01-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about…

  11. Students' Conceptual Metaphors Influence Their Statistical Reasoning about Confidence Intervals. WCER Working Paper No. 2008-5

    ERIC Educational Resources Information Center

    Grant, Timothy S.; Nathan, Mitchell J.

    2008-01-01

    Confidence intervals are beginning to play an increasing role in the reporting of research findings within the social and behavioral sciences and, consequently, are becoming more prevalent in beginning classes in statistics and research methods. Confidence intervals are an attractive means of conveying experimental results, as they contain a…

  12. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken; Lai, Keke

    2011-01-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively…

  13. A Comparison of Various Stress Rupture Life Models for Orbiter Composite Pressure Vessels and Confidence Intervals

    NASA Technical Reports Server (NTRS)

    Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald

    2006-01-01

    In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter error are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.

  14. A Comparison of Various Stress Rupture Life Models for Orbiter Composite Pressure Vessels and Confidence Intervals

    NASA Technical Reports Server (NTRS)

    Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald

    2007-01-01

    In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter uncertainties are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.

  15. A Statistical Method for Assessing Peptide Identification Confidence in Accurate Mass and Time Tag Proteomics

    SciTech Connect

    Stanley, Jeffrey R.; Adkins, Joshua N.; Slysz, Gordon W.; Monroe, Matthew E.; Purvine, Samuel O.; Karpievitch, Yuliya V.; Anderson, Gordon A.; Smith, Richard D.; Dabney, Alan R.

    2011-07-15

    High-throughput proteomics is rapidly evolving to require high mass measurement accuracy for a variety of different applications. Increased mass measurement accuracy in bottom-up proteomics specifically allows for an improved ability to distinguish and characterize detected MS features, which may in turn be identified by, e.g., matching to entries in a database for both precursor and fragmentation mass identification methods. Many tools exist with which to score the identification of peptides from LC-MS/MS measurements or to assess matches to an accurate mass and time (AMT) tag database, but these two calculations remain distinctly unrelated. Here we present a statistical method, Statistical Tools for AMT tag Confidence (STAC), which extends our previous work incorporating prior probabilities of correct sequence identification from LC-MS/MS, as well as the quality with which LC-MS features match AMT tags, to evaluate peptide identification confidence. Compared to existing tools, we are able to obtain significantly more high-confidence peptide identifications at a given false discovery rate and additionally assign confidence estimates to individual peptide identifications. Freely available software implementations of STAC are available in both command line and as a Windows graphical application.

  16. Amplitude estimation of a sine function based on confidence intervals and Bayes' theorem

    NASA Astrophysics Data System (ADS)

    Eversmann, D.; Pretz, J.; Rosenthal, M.

    2016-05-01

    This paper discusses amplitude estimation using data originating from a sine-like function as the probability density function. If a simple least squares fit is used, a significant bias is observed when the amplitude is small compared to its error. It is shown that a proper treatment using the Feldman-Cousins algorithm of likelihood ratios allows one to construct improved confidence intervals. Using Bayes' theorem a probability density function is derived for the amplitude. It is used in an application to show that it leads to better estimates compared to a simple least squares fit.
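
    The bias can be reproduced in a few lines: fitting sine and cosine components by least squares and taking the amplitude as the root-sum-of-squares of the coefficients overestimates a small true amplitude (our illustrative Monte Carlo, not the paper's setup).

```r
# Monte Carlo sketch of the small-amplitude bias of a least squares fit.
set.seed(1)
x <- seq(0, 2 * pi, length.out = 50)
est <- replicate(2000, {
  y <- 0.05 * sin(x) + rnorm(length(x), sd = 1)  # true amplitude 0.05
  fit <- lm(y ~ sin(x) + cos(x))
  sqrt(sum(coef(fit)[2:3]^2))                    # fitted amplitude, always >= 0
})
mean(est)  # far above 0.05: the bias that motivates Feldman-Cousins intervals
```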

  17. Assessment of individual agreements with repeated measurements based on generalized confidence intervals.

    PubMed

    Quiroz, Jorge; Burdick, Richard K

    2009-01-01

    Individual agreement between two measurement systems is determined using the total deviation index (TDI) or the coverage probability (CP) criteria, as proposed by Lin (2000) and Lin et al. (2002). We use a variance component model as proposed by Choudhary (2007). Using the bootstrap approach of Choudhary (2007) and generalized confidence intervals, we construct bounds on TDI and CP. A simulation study was conducted to assess whether the bounds maintain the stated type I error probability of the test. We also present a computational example to demonstrate the statistical methods described in the paper.

  18. Neural network based load and price forecasting and confidence interval estimation in deregulated power markets

    NASA Astrophysics Data System (ADS)

    Zhang, Li

    With the deregulation of the electric power market in New England, an independent system operator (ISO) has been separated from the New England Power Pool (NEPOOL). The ISO provides a regional spot market, with bids on various electricity-related products and services submitted by utilities and independent power producers. A utility can bid on the spot market and buy or sell electricity via bilateral transactions. Good estimation of market clearing prices (MCP) will help utilities and independent power producers determine bidding and transaction strategies with low risks, and this is crucial for utilities to compete in the deregulated environment. MCP prediction, however, is difficult since bidding strategies used by participants are complicated and MCP is a non-stationary process. The main objective of this research is to provide efficient short-term load and MCP forecasting and corresponding confidence interval estimation methodologies. In this research, the complexity of load and MCP with other factors is investigated, and neural networks are used to model the complex relationship between input and output. With an improved learning algorithm and on-line update features for load forecasting, a neural network based load forecaster was developed, and it has been in daily industry use since summer 1998 with good performance. MCP is volatile because of the complexity of market behaviors. In practice, neural network based MCP predictors usually have a cascaded structure, as several key input factors need to be estimated first. In this research, the uncertainties involved in a cascaded neural network structure for MCP prediction are analyzed, and the prediction distribution under the Bayesian framework is developed. A fast algorithm to evaluate the confidence intervals by using the memoryless Quasi-Newton method is also developed. The traditional back-propagation algorithm for neural network learning needs to be improved since MCP is a non-stationary process. The extended Kalman

  19. Fast time-series prediction using high-dimensional data: Evaluating confidence interval credibility

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito

    2014-05-01

    I propose an index for evaluating the credibility of confidence intervals for future observables predicted from high-dimensional time-series data. The index evaluates the distance from the current state to the data manifold. I demonstrate the index with artificial datasets generated from the Lorenz'96 II model [Lorenz, in Proceedings of the Seminar on Predictability, Vol. 1 (ECMWF, Reading, UK, 1996), p. 1], the Lorenz'96 I model [Hansen and Smith, J. Atmos. Sci. 57, 2859 (2000), 10.1175/1520-0469(2000)057<2859:TROOCI>2.0.CO;2], and the coupled map lattice, and a real dataset for the solar irradiation around Japan.

  20. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

  1. Empirical likelihood-based confidence intervals for length-biased data

    PubMed Central

    Ning, J.; Qin, J.; Asgharian, M.; Shen, Y.

    2013-01-01

    Logistic or other constraints often preclude the possibility of conducting incident cohort studies. A feasible alternative in such cases is to conduct a cross-sectional prevalent cohort study for which we recruit prevalent cases, i.e. subjects who have already experienced the initiating event, say the onset of a disease. When the interest lies in estimating the lifespan between the initiating event and a terminating event, say death for instance, such subjects may be followed prospectively until the terminating event or loss to follow-up, whichever happens first. It is well known that prevalent cases have, on average, longer lifespans. As such they do not constitute a representative random sample from the target population; they comprise a biased sample. If the initiating events are generated from a stationary Poisson process, the so-called stationarity assumption, this bias is called length bias. The current literature on length-biased sampling lacks a simple method for estimating the margin of errors of commonly used summary statistics. We fill this gap using the empirical likelihood-based confidence intervals by adapting this method to right-censored length-biased survival data. Both large and small sample behaviors of these confidence intervals are studied. We illustrate our method using a set of data on survival with dementia, collected as part of the Canadian Study of Health and Aging. PMID:23027662

  2. Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations.

    PubMed

    Greenland, Sander; Senn, Stephen J; Rothman, Kenneth J; Carlin, John B; Poole, Charles; Goodman, Steven N; Altman, Douglas G

    2016-04-01

    Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so-and yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting. PMID:27209009

  3. A comparison of confidence interval methods for the intraclass correlation coefficient in cluster randomized trials.

    PubMed

    Ukoumunne, Obioha C

    2002-12-30

    This study compared different methods for assigning confidence intervals to the analysis of variance estimator of the intraclass correlation coefficient (rho). The context of the comparison was the use of rho to estimate the variance inflation factor when planning cluster randomized trials. The methods were compared using Monte Carlo simulations of unbalanced clustered data and data from a cluster randomized trial of an intervention to improve the management of asthma in a general practice setting. The coverage and precision of the intervals were compared for data with different numbers of clusters, mean numbers of subjects per cluster and underlying values of rho. The performance of the methods was also compared for data with Normal and non-Normally distributed cluster specific effects. Results of the simulations showed that methods based upon the variance ratio statistic provided greater coverage levels than those based upon large sample approximations to the standard error of rho. Searle's method provided close to nominal coverage for data with Normally distributed random effects. Adjusted versions of Searle's method to allow for lack of balance in the data generally did not improve upon it either in terms of coverage or precision. Analyses of the trial data, however, showed that limits provided by Thomas and Hultquist's method may differ from those of the other variance ratio statistic methods when the arithmetic mean differs markedly from the harmonic mean cluster size. The simulation results demonstrated that marked non-Normality in the cluster level random effects compromised the performance of all methods. Confidence intervals for the methods were generally wide relative to the underlying size of rho suggesting that there may be great uncertainty associated with sample size calculations for cluster trials where large clusters are randomized. Data from cluster based studies with sample sizes much larger than those typical of cluster randomized trials are
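
    For the balanced one-way case, the variance-ratio construction favored by the simulations reduces to a short formula; the R sketch below (our code, with hypothetical mean squares, assuming a balanced design) implements a Searle-type interval for rho from the ANOVA F ratio.

```r
# Variance-ratio CI for the intraclass correlation, balanced design:
# k clusters of size n, msb/msw from one-way ANOVA (sketch).
icc_ci <- function(msb, msw, k, n, conf = 0.95) {
  a  <- 1 - conf
  f  <- msb / msw
  fl <- f / qf(1 - a / 2, k - 1, k * (n - 1))  # shrink F by its upper quantile
  fu <- f / qf(a / 2, k - 1, k * (n - 1))      # inflate F by its lower quantile
  c(lower = (fl - 1) / (fl + n - 1), upper = (fu - 1) / (fu + n - 1))
}
icc_ci(msb = 2.5, msw = 1.0, k = 30, n = 10)   # hypothetical mean squares
```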

  4. Optimal Averaging of Seasonal Sea Surface Temperatures and Associated Confidence Intervals (1860-1989).

    NASA Astrophysics Data System (ADS)

    Smith, Thomas M.; Reynolds, Richard W.; Ropelewski, Chester F.

    1994-06-01

    Optimal averaging (OA) is used to compute the area-average seasonal sea surface temperature (SST) for a variety of areas from 1860 to 1989. The OA gives statistically improved averages and the objective assignment of confidence intervals to these averages. The ability to assign confidence intervals is the main advantage of this method. Confidence intervals reflect how densely and uniformly an area is sampled during the averaging season. For the global average, the early part of the record (1860-1890) and the times of the two world wars have the largest uncertainties. Analysis of OA-based uncertainty estimates shows that before 1930 sampling in the Southern Hemisphere was as good as it was in the Northern Hemisphere. From about 1930 to 1950, uncertainties decreased in both hemispheres, but the magnitude of the Northern Hemisphere uncertainties reduced more and remained smaller. After the early 1950s, uncertainties were relatively constant in both hemispheres, indicating that sampling was relatively consistent over the period. During the two world wars, increased uncertainties reflected the sampling decreases over all the oceans, with the biggest decreases south of 40°S. The OA global SST anomalies are virtually identical to estimates of global SST anomalies computed using simpler methods, when the same data corrections are applied. When data are plentiful over an area there is no clear advantage of the OA over simpler methods. The major advantage of the OA over the simpler methods is the accompanying error estimates. The OA analysis suggests that SST anomalies were not significantly different from 0 from 1860 to 1900. This result is heavily influenced by the choice of the data corrections applied before the 1950s. Global anomalies are also near zero from 1940 until the mid-1970s. The OA analysis suggests that negative anomalies dominated the period from the early 1900s through the 1930s although the uncertainties are quite large during and immediately following World War

  5. Confidence intervals for the symmetry point: an optimal cutpoint in continuous diagnostic tests.

    PubMed

    López-Ratón, Mónica; Cadarso-Suárez, Carmen; Molanes-López, Elisa M; Letón, Emilio

    2016-01-01

    Continuous diagnostic tests are often used for discriminating between healthy and diseased populations. For this reason, it is useful to select an appropriate discrimination threshold. There are several optimality criteria: the North-West corner, the Youden index, the concordance probability and the symmetry point, among others. In this paper, we focus on the symmetry point that maximizes simultaneously the two types of correct classifications. We construct confidence intervals for this optimal cutpoint and its associated specificity and sensitivity indexes using two approaches: one based on the generalized pivotal quantity and the other on empirical likelihood. We perform a simulation study to check the practical behaviour of both methods and illustrate their use by means of three real biomedical datasets on melanoma, prostate cancer and coronary artery disease. PMID:26756550
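
    As a sketch of the estimand only (not of the paper's generalized-pivotal-quantity or empirical-likelihood intervals), the symmetry point solves sensitivity = specificity; with empirical CDFs this is a one-dimensional root-finding problem.

```r
# Empirical symmetry point: cutpoint c with 1 - F_diseased(c) = F_healthy(c).
set.seed(4)
healthy  <- rnorm(200, 0, 1)
diseased <- rnorm(200, 1.5, 1)   # higher marker values when diseased
Fd <- ecdf(diseased); Fh <- ecdf(healthy)
f  <- function(c) (1 - Fd(c)) - Fh(c)      # sensitivity minus specificity
cut <- uniroot(f, range(c(healthy, diseased)))$root
c(cutpoint = cut, sens = 1 - Fd(cut), spec = Fh(cut))
```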

  6. Statistical variability and confidence intervals for planar dose QA pass rates

    SciTech Connect

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher; Kumaraswamy, Lalith; Podgorsak, Matthew B.

    2011-11-15

    Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization

  7. Confidence intervals for two sample means: Calculation, interpretation, and a few simple rules

    PubMed Central

    Pfister, Roland; Janczyk, Markus

    2013-01-01

    Valued by statisticians, enforced by editors, and confused by many authors, standard errors (SEs) and confidence intervals (CIs) remain a controversial issue in the psychological literature. This is especially true for the proper use of CIs for within-subjects designs, even though several recent publications elaborated on possible solutions for this case. The present paper presents a short and straightforward introduction to the basic principles of CI construction, in an attempt to encourage students and researchers in cognitive psychology to use CIs in their reports and presentations. Focusing on a simple but prevalent case of statistical inference, the comparison of two sample means, we describe possible CIs for between- and within-subjects designs. In addition, we give hands-on examples of how to compute these CIs and discuss their relation to classical t-tests. PMID:23826038
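
    A minimal R illustration of the two cases treated in the paper: for independent groups the CI comes from the two-sample t procedure, while for within-subjects designs it is computed on the pairwise differences (toy data, ours).

```r
# CIs for a difference of two means, between- vs. within-subjects (sketch).
set.seed(2)
g1 <- rnorm(20, mean = 10)
g2 <- rnorm(20, mean = 11)
t.test(g1, g2)$conf.int                  # between-subjects (independent samples)
t.test(g1, g2, paired = TRUE)$conf.int   # within-subjects (paired differences)
```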

  8. BootES: an R package for bootstrap confidence intervals on effect sizes.

    PubMed

    Kirby, Kris N; Gerlanc, Daniel

    2013-12-01

    Bootstrap Effect Sizes (bootES; Gerlanc & Kirby, 2012) is a free, open-source software package for R (R Development Core Team, 2012), which is a language and environment for statistical computing. BootES computes both unstandardized and standardized effect sizes (such as Cohen's d, Hedges's g, and Pearson's r) and makes easily available for the first time the computation of their bootstrap confidence intervals (CIs). In this article, we illustrate how to use bootES to find effect sizes for contrasts in between-subjects, within-subjects, and mixed factorial designs and to find bootstrap CIs for correlations and differences between correlations. An appendix gives a brief introduction to R that will allow readers to use bootES without having prior knowledge of R.
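
    The same kind of computation can be written directly against the base boot package, which bootES builds on conceptually; the sketch below (our code, not bootES's own interface) bootstraps a BCa interval for Cohen's d.

```r
# Bootstrap BCa CI for Cohen's d with the 'boot' package (sketch).
library(boot)
set.seed(3)
dat <- data.frame(y = c(rnorm(25, 0), rnorm(25, 0.5)),
                  g = rep(c("a", "b"), each = 25))
cohens_d <- function(d, idx) {
  d <- d[idx, ]
  m <- tapply(d$y, d$g, mean)   # group means
  v <- tapply(d$y, d$g, var)    # group variances
  n <- table(d$g)               # group sizes
  (m[2] - m[1]) / sqrt(((n[1] - 1) * v[1] + (n[2] - 1) * v[2]) / (sum(n) - 2))
}
boot.ci(boot(dat, cohens_d, R = 2000), type = "bca")
```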

  9. Comparing the toxicity of two drugs in the framework of spontaneous reporting: a confidence interval approach.

    PubMed

    Tubert-Bitter, P; Begaud, B; Moride, Y; Chaslerie, A; Haramburu, F

    1996-01-01

    Spontaneous reporting remains the most frequently used technique in post-marketing surveillance. Decision-making usually depends on comparisons between the numbers of adverse drug reactions (ADRs) reported for two drugs on the basis of an equivalent number of prescriptions. The validity of such comparisons is expected to be jeopardized by probable underreporting of ADR cases. This problem is accentuated when it cannot be assumed that the magnitude of underreporting is the same for both drugs. Differences in reporting ratios can overemphasize, cancel, or reverse the conclusions of a statistical comparison based on the number of reports. We propose a single method for (1) calculating confidence intervals for relative risks estimated in the context of spontaneous reporting and (2) deriving the range of reporting ratios for which the conclusion of the statistical comparison remains statistically valid. PMID:8598505

  10. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    PubMed

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority of a new treatment relative to a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used, and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed, where the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.

  11. Maximum likelihood algorithm using an efficient scheme for computing sensitivities and parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.; Klein, V.

    1984-01-01

    Improved techniques for estimating airplane stability and control derivatives and their standard errors are presented. A maximum likelihood estimation algorithm is developed which relies on an optimization scheme referred to as a modified Newton-Raphson scheme with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared to integrating the analytically determined sensitivity equations or using a finite difference scheme. An aircraft estimation problem is solved using real flight data to compare MNRES with the commonly used modified Newton-Raphson technique; MNRES is found to be faster and more generally applicable. Parameter standard errors are determined using a random search technique. The confidence intervals obtained are compared with Cramer-Rao lower bounds at the same confidence level. It is observed that the nonlinearity of the cost function is an important factor in the relationship between Cramer-Rao bounds and the error bounds determined by the search technique.

  12. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals.

    PubMed

    Kelley, Ken; Lai, Keke

    2011-02-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively small range at the specified level of confidence. The accuracy in parameter estimation approach to sample size planning is developed for the RMSEA so that the confidence interval for the population RMSEA will have a width whose expectation is sufficiently narrow. Analytic developments are shown to work well with a Monte Carlo simulation study. Freely available computer software is developed so that the methods discussed can be implemented. The methods are demonstrated for a repeated measures design where the way in which social relationships and initial depression influence coping strategies and later depression are examined.

  13. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

    USGS Publications Warehouse

    Hill, M.C.

    1989-01-01

    Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author

  14. Confidence intervals after multiple imputation: combining profile likelihood information from logistic regressions.

    PubMed

    Heinze, Georg; Ploner, Meinhard; Beyea, Jan

    2013-12-20

    In the logistic regression analysis of a small-sized, case-control study on Alzheimer's disease, some of the risk factors exhibited missing values, motivating the use of multiple imputation. Usually, Rubin's rules (RR) for combining point estimates and variances would then be used to estimate (symmetric) confidence intervals (CIs), on the assumption that the regression coefficients were distributed normally. Yet, rarely is this assumption tested, with or without transformation. In analyses of small, sparse, or nearly separated data sets, such symmetric CIs may not be reliable. Thus, RR alternatives have been considered, for example, Bayesian sampling methods, but not yet those that combine profile likelihoods, particularly penalized profile likelihoods, which can remove first order biases and guarantee convergence of parameter estimation. To fill the gap, we consider the combination of penalized likelihood profiles (CLIP) by expressing them as posterior cumulative distribution functions (CDFs) obtained via a chi-squared approximation to the penalized likelihood ratio statistic. CDFs from multiple imputations can then easily be averaged into a combined CDF, CDF_c, allowing confidence limits for a parameter β at level 1 − α to be identified as those β* and β** that satisfy CDF_c(β*) = α/2 and CDF_c(β**) = 1 − α/2. We demonstrate that the CLIP method outperforms RR in analyzing both simulated data and data from our motivating example. CLIP can also be useful as a confirmatory tool, should it show that the simpler RR are adequate for extended analysis. We also compare the performance of CLIP to Bayesian sampling methods using Markov chain Monte Carlo. CLIP is available in the R package logistf. PMID:23873477
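
    The combination step can be sketched generically: average the per-imputation CDFs and invert the average at α/2 and 1 − α/2. The R code below is our simplified stand-in, using normal CDFs in place of penalized profile-likelihood CDFs; the CLIP implementation in logistf handles the real case.

```r
# CLIP-style combination: average CDFs across imputations, then invert (sketch).
clip_limits <- function(cdfs, alpha = 0.05, lo = -10, hi = 10) {
  cdf_c <- function(beta) mean(vapply(cdfs, function(f) f(beta), numeric(1)))
  c(lower = uniroot(function(b) cdf_c(b) - alpha / 2, c(lo, hi))$root,
    upper = uniroot(function(b) cdf_c(b) - (1 - alpha / 2), c(lo, hi))$root)
}
# three toy "posterior CDFs" standing in for per-imputation profile CDFs
cdfs <- list(function(b) pnorm(b, 1.0, 0.5),
             function(b) pnorm(b, 1.2, 0.6),
             function(b) pnorm(b, 0.9, 0.5))
clip_limits(cdfs)
```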

  15. Applications of asymptotic confidence intervals with continuity corrections for asymmetric comparisons in noninferiority trials.

    PubMed

    Soulakova, Julia N; Bright, Brianna C

    2013-01-01

    A large-sample problem of illustrating noninferiority of an experimental treatment relative to a referent treatment for binary outcomes is considered. The methods of illustrating noninferiority involve constructing the lower two-sided confidence bound for the difference between binomial proportions corresponding to the experimental and referent treatments and comparing it with the negative value of the noninferiority margin. The three considered methods, Anbar, Falk-Koch, and Reduced Falk-Koch, handle the comparison in an asymmetric way, that is, only the referent proportion out of the two, experimental and referent, is directly involved in the expression for the variance of the difference between two sample proportions. Five continuity corrections (including zero) are considered with respect to each approach. The key properties of the corresponding methods are evaluated via simulations. First, the uncorrected two-sided confidence intervals can, potentially, have smaller coverage probability than the nominal level even for moderately large sample sizes, for example, 150 per group. Next, the 15 testing methods are discussed in terms of their Type I error rate and power. In the settings with a relatively small referent proportion (about 0.4 or smaller), the Anbar approach with Yates' continuity correction is recommended for balanced designs and the Falk-Koch method with Yates' correction is recommended for unbalanced designs. For relatively moderate (about 0.6) and large (about 0.8 or greater) referent proportions, the uncorrected Reduced Falk-Koch method is recommended, although in this case, all methods tend to be over-conservative. These results are expected to be used in the design stage of a noninferiority study when asymmetric comparisons are envisioned.

  16. You Seem Certain but You Were Wrong Before: Developmental Change in Preschoolers’ Relative Trust in Accurate versus Confident Speakers

    PubMed Central

    Brosseau-Liard, Patricia; Cassels, Tracy; Birch, Susan

    2014-01-01

    The present study tested how preschoolers weigh two important cues to a person’s credibility, namely prior accuracy and confidence, when deciding what to learn and believe. Four- and 5-year-olds (N = 96) preferred to believe information provided by a confident rather than hesitant individual; however, when confidence conflicted with accuracy, preschoolers increasingly favored information from the previously accurate but hesitant individual as they aged. These findings reveal an important developmental progression in how children use others’ confidence and prior accuracy to shape what they learn and provide a window into children’s developing social cognition, scepticism, and critical thinking. PMID:25254553

  17. A comparison of methods for the construction of confidence interval for relative risk in stratified matched-pair designs.

    PubMed

    Tang, Nian-Sheng; Li, Hui-Qiong; Tang, Man-Lai

    2010-01-15

    A stratified matched-pair study is often designed to adjust for a confounding effect or the effect of different trials/centers/groups in modern medical studies. The relative risk is one of the most frequently used indices for comparing the efficiency of two treatments in clinical trials. In this paper, we propose seven confidence interval estimators for the common relative risk and three simultaneous confidence interval estimators for the relative risks in stratified matched-pair designs. The performance of the proposed methods is evaluated with respect to their type I error rates, powers, coverage probabilities, and expected widths. Our empirical results show that the percentile bootstrap confidence interval and bootstrap-resampling-based Bonferroni simultaneous confidence interval behave satisfactorily for small to large sample sizes in the sense that (i) their empirical coverage probabilities can be well controlled around the pre-specified nominal confidence level with reasonably shorter confidence widths; and (ii) the empirical type I error rates of their associated test statistics are generally closer to the pre-specified nominal level with larger powers. They are hence recommended. Two real examples from clinical laboratory studies are used to illustrate the proposed methodologies.

  18. Confidence Intervals Permit, but Do Not Guarantee, Better Inference than Statistical Significance Testing

    PubMed Central

    Coulson, Melissa; Healey, Michelle; Fidler, Fiona; Cumming, Geoff

    2010-01-01

    A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioral neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform also requires that researchers interpret CIs without recourse to NHST. PMID:21607077

  19. Confidence interval procedures for system reliability and applications to competing risks models.

    PubMed

    Hong, Yili; Meeker, William Q

    2014-04-01

    System reliability depends on the reliability of the system's components and the structure of the system. For example, in a competing risks model, the system fails when the weakest component fails. The reliability function and the quantile function of a complicated system are two important metrics for characterizing the system's reliability. When there are data available at the component level, the system reliability can be estimated by using the component level information. Confidence intervals (CIs) are needed to quantify the statistical uncertainty in the estimation. Obtaining system reliability CI procedures with good properties is not straightforward, especially when the system structure is complicated. In this paper, we develop a general procedure for constructing a CI for the system failure-time quantile function by using the implicit delta method. We also develop general procedures for constructing a CI for the cumulative distribution function (cdf) of the system. We show that the recommended procedures are asymptotically valid and have good statistical properties. We conduct simulations to study the finite-sample coverage properties of the proposed procedures and compare them with existing procedures. We apply the proposed procedures to three applications; two applications in competing risks models and an application with a k-out-of-s system. The paper concludes with some discussion and an outline of areas for future research.

  20. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    ERIC Educational Resources Information Center

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
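
    For orientation, the one-sample version of the adjusted Wald idea (Agresti-Coull: add two successes and two failures, then use the ordinary Wald formula) is sketched below in R; the paired-proportions adjustment proposed in the article itself differs in its details.

```r
# One-sample adjusted Wald (Agresti-Coull) interval (sketch).
adj_wald <- function(x, n, conf = 0.95) {
  z  <- qnorm(1 - (1 - conf) / 2)
  p  <- (x + 2) / (n + 4)               # add 2 successes and 2 failures
  se <- sqrt(p * (1 - p) / (n + 4))
  c(lower = max(0, p - z * se), upper = min(1, p + z * se))
}
adj_wald(7, 25)  # hypothetical data
```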

  1. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

    ERIC Educational Resources Information Center

    Odgaard, Eric C.; Fowler, Robert L.

    2010-01-01

    Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

  2. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

  3. Sample Size Planning for the Squared Multiple Correlation Coefficient: Accuracy in Parameter Estimation via Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken

    2008-01-01

    Methods of sample size planning are developed from the accuracy in parameter approach in the multiple regression context in order to obtain a sufficiently narrow confidence interval for the population squared multiple correlation coefficient when regressors are random. Approximate and exact methods are developed that provide necessary sample size…

  4. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

    The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The appropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
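
    The construction under test is the standard chi-square interval for a spectral estimate with nu equivalent degrees of freedom; a minimal R sketch (ours, with hypothetical inputs) follows.

```r
# Chi-square CI for a power spectral density estimate with nu degrees of
# freedom (e.g., nu = 2 * number of averaged periodogram segments). Sketch.
psd_ci <- function(p_hat, nu, conf = 0.95) {
  a <- 1 - conf
  c(lower = nu * p_hat / qchisq(1 - a / 2, nu),
    upper = nu * p_hat / qchisq(a / 2, nu))
}
psd_ci(p_hat = 1.0, nu = 32)  # hypothetical estimate from 16 averaged segments
```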

  5. Confidence Intervals for the Probability of Superiority Effect Size Measure and the Area under a Receiver Operating Characteristic Curve

    ERIC Educational Resources Information Center

    Ruscio, John; Mullen, Tara

    2012-01-01

    It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…
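
    The estimator A itself is simple to compute: the proportion of between-group pairs in which the first group's score is higher, counting ties as one half (toy R sketch, ours).

```r
# Probability-of-superiority / ROC-area estimator A (sketch).
a_stat <- function(x, y) {
  (sum(outer(x, y, ">")) + 0.5 * sum(outer(x, y, "=="))) /
    (length(x) * length(y))
}
a_stat(c(3, 5, 7, 9), c(2, 4, 6, 8))  # toy data: A = 0.625
```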

  6. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  7. Proposal and validation of a method to construct confidence intervals for clinical outcomes around FROC curves for mammography CAD systems

    NASA Astrophysics Data System (ADS)

    Bornefalk, Hans

    2005-04-01

    This paper introduces a method for constructing confidence intervals for possible clinical outcomes around the FROC curve of a mammography CAD system. Given the architecture of a CAD classifying machine, there is one and only one system threshold that will yield a desired sensitivity on a certain population. The limited training sample size leads to a sampling error and an uncertainty in determining the optimal system threshold. This leads to an uncertainty in the operating point in the direction along the FROC curve which can be captured by a Bayesian approach where the distribution of possible thresholds is estimated. This uncertainty contributes to a large and spread-out confidence interval which is important to consider when one is intending to make comparisons between CAD algorithms trained on different data sets. The method is validated using a Monte Carlo method designed to capture the effect of correctly determining the system threshold.

  8. Approximate confidence intervals for moment-based estimators of the between-study variance in random effects meta-analysis.

    PubMed

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-12-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment effects follow a normal distribution. Recently proposed moment-based confidence intervals for the between-study variance are exact under the random effects model but are quite elaborate. Here, we present a much simpler method for calculating approximate confidence intervals of this type. This method uses variance-stabilising transformations as its basis and can be used for a very wide variety of moment-based estimators in both the random effects meta-analysis and meta-regression models.
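
    The most common member of the moment-based family the paper targets is the DerSimonian-Laird estimator; the short R sketch below (our code) computes it from study effects yi and within-study variances vi.

```r
# DerSimonian-Laird moment estimator of the between-study variance (sketch).
dl_tau2 <- function(yi, vi) {
  wi <- 1 / vi
  mu <- sum(wi * yi) / sum(wi)     # fixed-effect pooled mean
  q  <- sum(wi * (yi - mu)^2)      # Cochran's Q statistic
  max(0, (q - (length(yi) - 1)) / (sum(wi) - sum(wi^2) / sum(wi)))
}
dl_tau2(yi = c(0.2, 0.5, -0.1, 0.4), vi = c(0.04, 0.09, 0.05, 0.06))
```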

  9. A numerical approach to 14C wiggle-match dating of organic deposits: best fits and confidence intervals

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas

    2003-06-01

    14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).

  10. Confidence intervals for similarity values determined for clonedSSU rRNA genes from environmental samples

    SciTech Connect

    Fields, M.W.; Schryver, J.C.; Brandt, C.C.; Yan, T.; Zhou, J.Z.; Palumbo, A.V.

    2007-04-02

    The goal of this research was to investigate the influence of the error rate of sequence determination on the differentiation of cloned SSU rRNA gene sequences for assessment of community structure. SSU rRNA cloned sequences from groundwater samples that represent different bacterial divisions were sequenced multiple times with the same sequencing primer. From comparison of sequence alignments with unedited data, confidence intervals were obtained both from a double binomial model of sequence comparison and by non-parametric methods. The results indicated that similarity values below 0.9946 are likely derived from dissimilar sequences at a confidence level of 0.95, and not from sequencing errors. The results confirmed that screening by direct sequence determination could be reliably used to differentiate at the species level. However, given sequencing errors comparable to those seen in this study, sequences with similarities above 0.9946 should be treated as the same sequence if 95 percent confidence is desired.

  11. Curriculum-based measurement of oral reading: A preliminary investigation of confidence interval overlap to detect reliable growth.

    PubMed

    Van Norman, Ethan R

    2016-09-01

    Curriculum-based measurement of oral reading (CBM-R) progress monitoring data is used to measure student response to instruction. Federal legislation permits educators to use CBM-R progress monitoring data as a basis for determining the presence of specific learning disabilities. However, decision-making frameworks originally developed for CBM-R progress monitoring data were not intended for such high-stakes assessments. Numerous documented issues with trend line estimation undermine the validity of using slope estimates to infer progress. One proposed recommendation is to use confidence interval overlap as a means of judging reliable growth. This project explored the degree to which confidence interval overlap was related to true growth magnitude using simulation methodology. True and observed CBM-R scores were generated across 7 durations of data collection (range 6-18 weeks), 3 levels of dataset quality or residual variance (5, 10, and 15 words read correct per minute), and 2 types of data collection schedules. Descriptive and inferential analyses were conducted to explore interactions between overlap status, progress monitoring scenarios, and true growth magnitude. A small but statistically significant interaction was observed between overlap status, duration, and dataset quality, b = -0.004, t(20992) = -7.96, p < .001. In general, confidence interval overlap does not appear to meaningfully account for variance in true growth across many progress monitoring conditions. Implications for research and practice are discussed. Limitations and directions for future research are addressed.

  12. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    PubMed

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve.
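
    The same procedure translates directly out of Excel. A hedged Python sketch of the idea, in which the logistic growth function and starting values are placeholders rather than the models fitted in the paper: fit once, build 'virtual' data sets from the fitted curve plus resampled residuals, refit each, and take percentile intervals of the refitted parameters:

        import numpy as np
        from scipy.optimize import curve_fit

        def growth(t, a, mu, lam):
            # Hypothetical logistic growth model, a stand-in for the
            # microbial-growth functions fitted in the paper.
            return a / (1.0 + np.exp(4 * mu / a * (lam - t) + 2))

        def mc_parameter_cis(t, y, p0, n_sim=200, alpha=0.05, seed=0):
            rng = np.random.default_rng(seed)
            popt, _ = curve_fit(growth, t, y, p0=p0)
            resid = y - growth(t, *popt)
            sims = []
            for _ in range(n_sim):
                # 'Virtual' data: fitted curve plus resampled residuals.
                y_virt = growth(t, *popt) + rng.choice(resid, len(resid))
                sims.append(curve_fit(growth, t, y_virt, p0=popt)[0])
            return np.quantile(sims, [alpha / 2, 1 - alpha / 2], axis=0)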

  13. Confidence Intervals for Squared Semipartial Correlation Coefficients: The Effect of Nonnormality

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2010-01-01

    The increase in the squared multiple correlation coefficient (ΔR²) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. Algina, Keselman, and Penfield found that intervals based on asymptotic principles were typically very inaccurate, even though the sample size…

  14. A program for confidence interval calculations for a Poisson process with background including systematic uncertainties: POLE 1.0

    NASA Astrophysics Data System (ADS)

    Conrad, Jan

    2004-04-01

    A Fortran 77 routine has been developed to calculate confidence intervals with and without systematic uncertainties using a frequentist confidence interval construction with a Bayesian treatment of the systematic uncertainties. The routine can account for systematic uncertainties in the background prediction and signal/background efficiencies. The uncertainties may be separately parametrized by a Gauss, log-normal or flat probability density function (PDF), though since a Monte Carlo approach is chosen to perform the necessary integrals, a generalization to other parameterizations is particularly simple. Full correlation between signal and background efficiency is optional. The ordering schemes for frequentist construction currently supported are the likelihood ratio ordering (also known as Feldman-Cousins) and Neyman ordering. Optionally, both schemes can be used with conditioning, meaning the probability density function is conditioned on the fact that the actual outcome of the background process cannot have been larger than the number of observed events.
    Program summary
    Title of program: POLE version 1.0
    Catalogue identifier: ADTA
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTA
    Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Licensing provisions: None
    Computer for which the program is designed: DELL PC 1 GB 2.0 GHz Pentium IV
    Operating system under which the program has been tested: RH Linux 7.2 Kernel 2.4.7-10
    Programming language used: Fortran 77
    Memory required to execute with typical data: ~1.6 Mbytes
    No. of bytes in distributed program, including test data, etc.: 373745
    No. of lines in distributed program, including test data, etc.: 2700
    Distribution format: tar gzip file
    Keywords: Confidence interval calculation, systematic uncertainties
    Nature of the physical problem: The problem is to calculate a frequentist confidence interval on the parameter of a Poisson process with known background in presence of
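
    The likelihood-ratio (Feldman-Cousins) ordering that POLE implements can be sketched compactly for the case without systematic uncertainties; the grid bounds below are illustrative choices, not the program's defaults:

        import numpy as np
        from scipy import stats

        def fc_interval(n_obs, b, cl=0.90, mu_max=20.0, n_grid=400, n_max=100):
            # Feldman-Cousins construction for a Poisson signal mu over a
            # known background b: rank counts n by the likelihood ratio
            # R = P(n|mu+b) / P(n|mu_best+b), mu_best = max(0, n-b), and
            # accept counts (highest R first) until coverage reaches cl.
            ns = np.arange(n_max)
            mu_best = np.maximum(0.0, ns - b)
            accepted = []
            for mu in np.linspace(0.0, mu_max, n_grid):
                p = stats.poisson.pmf(ns, mu + b)
                r = p / stats.poisson.pmf(ns, mu_best + b)
                order = np.argsort(r)[::-1]
                cum = np.cumsum(p[order])
                band = order[: np.searchsorted(cum, cl) + 1]
                if n_obs in band:
                    accepted.append(mu)
            # Widen mu_max if the upper limit hits the grid edge.
            return min(accepted), max(accepted)

        # e.g. fc_interval(4, b=3.0) for 4 observed events over an expected
        # background of 3.0 events.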

  15. Replication, "p_rep," and Confidence Intervals: Comment Prompted by Iverson, Wagenmakers, and Lee (2010); Lecoutre, Lecoutre, and Poitevineau (2010); and Maraun and Gabriel (2010)

    ERIC Educational Resources Information Center

    Cumming, Geoff

    2010-01-01

    This comment offers three descriptions of "p_rep" that start with a frequentist account of confidence intervals, draw on R. A. Fisher's fiducial argument, and do not make Bayesian assumptions. Links are described among "p_rep," "p" values, and the probability a confidence interval will capture the mean of a replication…

  16. Statistical damage detection method for frame structures using a confidence interval

    NASA Astrophysics Data System (ADS)

    Li, Weiming; Zhu, Hongping; Luo, Hanbin; Xia, Yong

    2010-03-01

    A novel damage detection method is applied to a 3-story frame structure to obtain a statistical quantification control criterion for the existence, location, and identification of damage. The mean, standard deviation, and exponentially weighted moving average (EWMA) are applied to detect damage information according to statistical process control (SPC) theory. It is concluded that detection with the mean and EWMA is insignificant because the structural response is neither independent nor normally distributed. On the other hand, the damage information is detected well with the standard deviation, because the influence of the data distribution is not pronounced for this parameter. A suitable moderate confidence level is explored for more reliable damage location and quantification, and the impact of noise is investigated to illustrate the robustness of the method.
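
    For concreteness, a textbook EWMA control chart of the kind used here looks as follows; the baseline mean and standard deviation would normally come from the undamaged reference state, and estimating them from the monitored data itself, as in this sketch, is a simplification:

        import numpy as np

        def ewma_chart(x, lam=0.2, L=3.0):
            # Textbook EWMA chart: z_i = lam*x_i + (1-lam)*z_{i-1} with
            # limits mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)**(2i))).
            x = np.asarray(x, dtype=float)
            mu0, sigma = x.mean(), x.std(ddof=1)  # use a healthy baseline in practice
            z = np.empty_like(x)
            z[0] = lam * x[0] + (1 - lam) * mu0
            for i in range(1, len(x)):
                z[i] = lam * x[i] + (1 - lam) * z[i - 1]
            steps = np.arange(1, len(x) + 1)
            half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * steps)))
            return z, mu0 - half, mu0 + half      # flag points outside the limits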

  17. Experimental optimization of the number of blocks by means of algorithms parameterized by confidence interval in popcorn breeding.

    PubMed

    Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A

    2013-06-27

    The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian genotypes of popcorn, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa), and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design utilized was a randomized complete block design with 7 repetitions. The Bootstrap method was employed to obtain samples of all of the possible combinations within the 7 blocks. Subsequently, the confidence intervals of the parameters of interest were calculated for all simulated data sets. The optimal number of repetitions for each trait was taken as the number at which all of the estimates of the parameters in question fell within the confidence interval. The estimates of the number of repetitions varied according to the parameter estimated, the variable evaluated, and the environment, ranging from 2 to 7. Only the expansion capacity trait in the Colégio Agrícola environment (for residual variance and coefficient of variation) and the number of ears per plot in the Itaocara environment (for coefficient of variation) needed 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we conclude that 6 repetitions are optimal for obtaining high experimental precision.

  18. Confidence intervals for the difference between independent binomial proportions: comparison using a graphical approach and moving averages.

    PubMed

    Laud, Peter J; Dane, Aaron

    2014-01-01

    This paper uses graphical methods to illustrate and compare the coverage properties of a number of methods for calculating confidence intervals for the difference between two independent binomial proportions. We investigate both small-sample and large-sample properties of both two-sided and one-sided coverage, with an emphasis on asymptotic methods. In terms of aligning the smoothed coverage probability surface with the nominal confidence level, we find that the score-based methods on the whole have the best two-sided coverage, although they have slight deficiencies for confidence levels of 90% or lower. For an easily taught, hand-calculated method, the Brown-Li 'Jeffreys' method appears to perform reasonably well, and in most situations, it has better one-sided coverage than the widely recommended alternatives. In general, we find that the one-sided properties of many of the available methods are surprisingly poor. In fact, almost none of the existing asymptotic methods achieve equal coverage on both sides of the interval, even with large sample sizes, and consequently if used as a non-inferiority test, the type I error rate (which is equal to the one-sided non-coverage probability) can be inflated. The only exception is the Gart-Nam 'skewness-corrected' method, which we express using modified notation in order to include a bias correction for improved small-sample performance, and an optional continuity correction for those seeking more conservative coverage. Using a weighted average of two complementary methods, we also define a new hybrid method that almost matches the performance of the Gart-Nam interval.
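
    As a concrete example of a score-based interval from the general family discussed, the sketch below implements Newcombe's hybrid Wilson-score method for p1 - p2; this is a standard comparator, not the Gart-Nam skewness-corrected or weighted-hybrid methods the paper develops:

        import numpy as np
        from scipy import stats

        def wilson(x, n, z):
            # Wilson score interval for a single proportion.
            p = x / n
            centre = (p + z ** 2 / (2 * n)) / (1 + z ** 2 / n)
            half = z * np.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / (1 + z ** 2 / n)
            return centre - half, centre + half

        def newcombe_diff(x1, n1, x2, n2, level=0.95):
            # Combine the two Wilson intervals into a CI for p1 - p2.
            z = stats.norm.ppf(1 - (1 - level) / 2)
            p1, p2 = x1 / n1, x2 / n2
            l1, u1 = wilson(x1, n1, z)
            l2, u2 = wilson(x2, n2, z)
            d = p1 - p2
            return (d - np.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2),
                    d + np.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2))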

  19. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

    PubMed

    Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

    2011-02-20

    A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC), and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were 2.0% and 2.7%, respectively. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4%, compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value should fall within the confidence interval calculated by the bootstrap method. An easy-to-use graphical interface was developed to easily determine whether the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation.

  1. Estimating incremental cost-effectiveness ratios and their confidence intervals with different terminating events for survival time and costs.

    PubMed

    Chen, Shuai; Zhao, Hongwei

    2013-07-01

    Cost-effectiveness analysis (CEA) is an important component of the economic evaluation of new treatment options. In many clinical and observational studies of costs, censored data pose challenges to the CEA. We consider a special situation where the terminating events for the survival time and costs are different. Traditional methods for statistical inference offer no means for dealing with censored data in these circumstances. To address this gap, we propose a new method for deriving the confidence interval for the incremental cost-effectiveness ratio. The simulation studies and real data example show that our method performs very well for some practical settings, revealing a great potential for application to actual settings in which terminating events for the survival time and costs differ.
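
    In the simpler uncensored setting, a percentile-bootstrap interval for the incremental cost-effectiveness ratio can be sketched as below; this ignores the censoring and differing terminating events that motivate the paper's method, and the inputs are assumed to be NumPy arrays of per-patient costs and effectiveness:

        import numpy as np

        def icer_bootstrap_ci(cost_t, eff_t, cost_c, eff_c,
                              n_boot=5000, alpha=0.05, seed=0):
            # Percentile bootstrap of the ICER; unstable when the effect
            # difference is near zero (acceptability curves are preferred
            # in that case).
            rng = np.random.default_rng(seed)
            ratios = []
            for _ in range(n_boot):
                it = rng.integers(0, len(cost_t), len(cost_t))
                ic = rng.integers(0, len(cost_c), len(cost_c))
                d_cost = cost_t[it].mean() - cost_c[ic].mean()
                d_eff = eff_t[it].mean() - eff_c[ic].mean()
                ratios.append(d_cost / d_eff)
            return np.quantile(ratios, [alpha / 2, 1 - alpha / 2])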

  2. Solar PV power generation forecasting using hybrid intelligent algorithms and uncertainty quantification based on bootstrap confidence intervals

    NASA Astrophysics Data System (ADS)

    AlHakeem, Donna Ibrahim

    This thesis focuses on short-term photovoltaic forecasting (STPVF) of the power generation of a solar PV system using probabilistic and deterministic forecasts. Uncertainty estimation, in the form of a probabilistic forecast, is emphasized in this thesis to quantify the uncertainties of the deterministic forecasts. Two hybrid intelligent models are proposed in two separate chapters to perform the STPVF. In Chapter 4, the framework of the proposed deterministic hybrid intelligent model is presented: a combination of the wavelet transform (WT), a data filtering technique, with a soft computing model (SCM), namely a generalized regression neural network (GRNN). The combined WT+GRNN model is used to forecast 1-hour-ahead power generation for two random days in each season; the forecasts are evaluated with accuracy measures and compared with another SCM. In Chapter 5, the proposed model combines the WT, an SCM based on a radial basis function neural network (RBFNN), and population-based stochastic particle swarm optimization (PSO). The deterministic WT+RBFNN+PSO forecast is then complemented by a probabilistic forecast that uses bootstrap confidence intervals to quantify the uncertainty of its output. Chapter 5 extends the tests of Chapter 4, forecasting the power generation of two random days in each season for 1-hour-ahead, 3-hour-ahead, and 6-hour-ahead horizons, as well as different day types in each season: a sunny day (SD), a cloudy day (CD), and a rainy day (RD). These forecasts are further analyzed using accuracy measures, variance, and uncertainty estimation. The literature that is provided supports that the proposed

  3. The Interpretation of Scholars' Interpretations of Confidence Intervals: Criticism, Replication, and Extension of Hoekstra et al. (2014).

    PubMed

    García-Pérez, Miguel A; Alcalá-Quintana, Rocío

    2016-01-01

    Hoekstra et al. (Psychonomic Bulletin & Review, 2014, 21:1157-1164) surveyed the interpretation of confidence intervals (CIs) by first-year students, master students, and researchers with six items expressing misinterpretations of CIs. They asked respondents to answer all items, computed the number of items endorsed, and concluded that misinterpretation of CIs is robust across groups. Their design may have produced this outcome artifactually for reasons that we describe. This paper discusses first the two interpretations of CIs and, hence, why misinterpretation cannot be inferred from endorsement of some of the items. Next, a re-analysis of Hoekstra et al.'s data reveals some puzzling differences between first-year and master students that demand further investigation. For that purpose, we designed a replication study with an extended questionnaire including two additional items that express correct interpretations of CIs (to compare endorsement of correct vs. nominally incorrect interpretations) and we asked master students to indicate which items they would have omitted had they had the option (to distinguish deliberate from uninformed endorsement caused by the forced-response format). Results showed that incognizant first-year students endorsed correct and nominally incorrect items identically, revealing that the two item types are not differentially attractive superficially; in contrast, master students were distinctively more prone to endorsing correct items when their uninformed responses were removed, although they admitted to nescience more often than might have been expected. Implications for teaching practices are discussed. PMID:27458424

  4. Limitation of individual internal exposure by consideration of the confidence interval in routine personal dosimetry at the Chernobyl Sarcophagus.

    PubMed

    Bondarenko, O O; Melnychuk, D V; Medvedev, S Yu

    2003-01-01

    In view of the probabilistic nature and very wide uncertainty of internal exposure assessment, a deterministic ('precise') assessment does not guarantee that established reference levels, or even the dose limits, are not exceeded for a particular individual. Such potential risks can be minimised by setting up a sufficiently wide confidence interval for an expected dose distribution instead of its average ('best estimate') value, and by setting the limit at the 99% fractile level. The ratio of the 99% level to the mean ('best estimate') is referred to as the safety coefficient. It is shown for the typical radiological conditions inside the Chernobyl Sarcophagus that the safety coefficient corresponding to the 99% fractile of the expected internal dose distribution varies within the range from 5 to 10. The maintenance of minimum uncertainty and sufficient sensitivity of the indirect dosimetry method requires measurement of individual daily urinary excretion of 239Pu at a level of at least 4 x 10(-5) Bq. For the purpose of reducing the uncertainty of individual internal dose assessment and making dosimetric methods workable, it is suggested that the results of workplace monitoring are combined with the results of periodic urinary and faecal bioassay measurements.

  5. Temperature dependence of the rate and activation parameters for tert-butyl chloride solvolysis: Monte Carlo simulation of confidence intervals

    NASA Astrophysics Data System (ADS)

    Sung, Dae Dong; Kim, Jong-Youl; Lee, Ikchoon; Chung, Sung Sik; Park, Kwon Ha

    2004-07-01

    The solvolysis rate constants (k_obs) of tert-butyl chloride are measured in a 20% (v/v) 2-PrOH-H2O mixture at 15 temperatures ranging from 0 to 39 °C. Examination of the temperature dependence of the rate constants by weighted least-squares fitting to equations of two to four terms has led to the three-term form, ln k_obs = a_1 + a_2 T^-1 + a_3 ln T, as the best expression. The activation parameters, ΔH‡ and ΔS‡, calculated using the three constants a_1, a_2 and a_3, revealed steady decreases of ≈1 kJ mol^-1 per degree and ≈3.5 J K^-1 mol^-1 per degree, respectively, as the temperature rises. The sign change of ΔS‡ at ≈20.0 °C and the large negative heat capacity of activation, ΔC_p‡ = -1020 J K^-1 mol^-1, are interpreted to indicate an S_N1 mechanism and a net change from water structure breaking to electrostrictive solvation due to the partially ionic transition state. Confidence intervals estimated by the Monte Carlo method are far more precise than those from the conventional method.

  6. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies.

    PubMed

    Parks, Nathan A; Gannon, Matthew A; Long, Stephanie M; Young, Madeleine E

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNR_LB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNR_LB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets.
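
    A minimal sketch of the resampling idea follows; the component and baseline windows, the SNR definition, and the exclusion criterion are illustrative assumptions, not the authors' exact settings:

        import numpy as np

        def snr_lower_bound(trials, sig_win, base_win, n_boot=1000,
                            alpha=0.05, seed=0):
            # trials: (n_trials, n_samples) array of single-trial epochs.
            # sig_win / base_win: index slices for the component of interest
            # and the pre-stimulus baseline (hypothetical definitions).
            rng = np.random.default_rng(seed)
            snrs = []
            for _ in range(n_boot):
                idx = rng.integers(0, len(trials), len(trials))
                erp = trials[idx].mean(axis=0)       # resampled average waveform
                signal = np.abs(erp[sig_win]).max()  # peak amplitude in window
                noise = erp[base_win].std(ddof=1)    # residual baseline noise
                snrs.append(signal / noise)
            # SNR_LB: exclude the subject if this falls below the criterion.
            return np.quantile(snrs, alpha / 2)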

  7. Five-band microwave radiometer system for noninvasive brain temperature measurement in newborn babies: Phantom experiment and confidence interval

    NASA Astrophysics Data System (ADS)

    Sugiura, T.; Hirata, H.; Hand, J. W.; van Leeuwen, J. M. J.; Mizushina, S.

    2011-10-01

    Clinical trials of hypothermic brain treatment for newborn babies are currently hindered by the difficulty in measuring deep brain temperatures. One possible method for noninvasive and continuous temperature monitoring that is completely passive and inherently safe is passive microwave radiometry (MWR). We have developed a five-band microwave radiometer system with a single dual-polarized, rectangular waveguide antenna operating within the 1-4 GHz range, and a method for retrieving the temperature profile from five radiometric brightness temperatures. This paper addresses (1) the temperature calibration of the five microwave receivers, (2) a measurement experiment using a phantom model that mimics the temperature profile in a newborn baby, and (3) the feasibility of noninvasive monitoring of deep brain temperatures. Temperature resolutions were 0.103, 0.129, 0.138, 0.105 and 0.111 K for the 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively. The precision of temperature estimation (2σ confidence interval) was about 0.7°C at a 5-cm depth from the phantom surface. Accuracy, the difference between the temperature estimated by this system and that measured by a thermocouple at a depth of 5 cm, was about 2°C. The current result is not yet satisfactory for clinical application, which requires better than 1°C for both precision and accuracy at a depth of 5 cm. Since a couple of possible causes for this inaccuracy have been identified, we believe that the system can take a step closer to the clinical application of MWR for hypothermic rescue treatment.

  8. Effect of initial seed and number of samples on simple-random and Latin-Hypercube Monte Carlo probabilities (confidence interval considerations)

    SciTech Connect

    ROMERO,VICENTE J.

    2000-05-04

    In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required in comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine whether they behave as true 95% CI over spectra of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for population sizes of 500 and 10,000 samples where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that estimated 95% CI can be rejected as not being true 95% CI with less than a 40% chance of incorrect rejection. With regard to Latin Hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be conservatively over an order of magnitude more efficient than simple random sampling (SRS) for similarly sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
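
    The classical approximate CI under study is the normal-approximation binomial interval. A sketch of a sampling loop that terminates on its half-width follows; the burn-in guard and tolerance are illustrative choices, and, as the abstract reports, this interval is least trustworthy away from well-behaved regimes:

        import numpy as np

        def mc_probability(indicator, max_n=100_000, z=1.96, tol=0.01, seed=0):
            # 'indicator' is a user-supplied function of the RNG returning
            # True when one random draw of the model is a failure.
            rng = np.random.default_rng(seed)
            hits = 0
            for n in range(1, max_n + 1):
                hits += bool(indicator(rng))
                p = hits / n
                half = z * np.sqrt(p * (1 - p) / n)   # classical approximate CI
                if n > 100 and half < tol:            # burn-in guards early noise
                    break
            return p, (p - half, p + half), n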

  9. Effect of Minimum Cell Sizes and Confidence Interval Sizes for Special Education Subgroups on School-Level AYP Determinations. Synthesis Report 61

    ERIC Educational Resources Information Center

    Simpson, Mary Ann; Gong, Brian; Marion, Scott

    2006-01-01

    This study addresses three questions: First, considering the full group of students and the special education subgroup, what is the likely effect of minimum cell size and confidence interval size on school-level Adequate Yearly Progress (AYP) determinations? Second, what effects do the changing minimum cell sizes have on inclusion of special…

  10. Confidence Intervals, Power Calculation, and Sample Size Estimation for the Squared Multiple Correlation Coefficient under the Fixed and Random Regression Models: A Computer Program and Useful Standard Tables.

    ERIC Educational Resources Information Center

    Mendoza, Jorge L.; Stafford, Karen L.

    2001-01-01

    Introduces a computer package written for Mathematica, the purpose of which is to perform a number of difficult iterative functions with respect to the squared multiple correlation coefficient under the fixed and random models. These functions include computation of the confidence interval upper and lower bounds, power calculation, calculation of…

  11. Confidence interval estimation for an empirical model quantifying the effect of soil moisture and plant development on soybean (Glycine max (L.) Merr.) leaf conductance

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this work, we address uncertainty analysis for a model, presented in a companion paper, quantifying the effect of soil moisture and plant development on soybean (Glycine max (L.) Merr.) leaf conductance. To achieve this we present several methods for confidence interval estimation. Estimation ...

  12. Investigating the effect of modeling single-vehicle and multi-vehicle crashes separately on confidence intervals of Poisson-gamma models.

    PubMed

    Geedipally, Srinivas Reddy; Lord, Dominique

    2010-07-01

    Crash prediction models still constitute one of the primary tools for estimating traffic safety. These statistical models play a vital role in various types of safety studies. With a few exceptions, they have often been employed to estimate the number of crashes per unit of time for an entire highway segment or intersection, without distinguishing the influence different sub-groups have on crash risk. The two most important sub-groups that have been identified in the literature are single- and multi-vehicle crashes. Recently, some researchers have noted that developing two distinct models for these two categories of crashes provides better predicting performance than developing models combining both crash categories together. Thus, there is a need to determine whether a significant difference exists for the computation of confidence intervals when a single model is applied rather than two distinct models for single- and multi-vehicle crashes. The construction of confidence intervals has many important applications in highway safety. This paper investigates the effect of modeling single- and multi-vehicle (head-on and rear-end only) crashes separately versus modeling them together on the prediction of confidence intervals of Poisson-gamma models. Confidence intervals were calculated for total (all severities) crash models and fatal and severe injury crash models. The data used for the comparison analysis were collected on Texas multilane undivided highways for the years 1997-2001. This study shows that modeling single- and multi-vehicle crashes separately predicts larger confidence intervals than modeling them together as a single model. This difference is much larger for fatal and injury crash models than for models for all severity levels. Furthermore, it is found that the single- and multi-vehicle crashes are not independent. Thus, a joint (bivariate) model which accounts for correlation between single- and multi-vehicle crashes is developed and it predicts wider

  13. Accurate segmentation of leukocyte in blood cell images using Atanassov's intuitionistic fuzzy and interval Type II fuzzy set theory.

    PubMed

    Chaira, Tamalika

    2014-06-01

    In this paper, automatic leukocyte segmentation in pathological blood cell images is proposed using intuitionistic fuzzy and interval Type II fuzzy set theory. This is done to count different types of leukocytes for disease detection. The segmentation should also be accurate so that the shape of the leukocytes is preserved. Therefore, intuitionistic fuzzy sets and interval Type II fuzzy sets, which consider either a greater number of uncertainties or a different type of uncertainty than ordinary fuzzy set theory, are used in this work. As the images are considered fuzzy due to imprecise gray levels, advanced fuzzy set theories may be expected to give better results. A modified Cauchy distribution is used to find the membership function. In the intuitionistic fuzzy method, non-membership values are obtained using Yager's intuitionistic fuzzy generator, and the optimal threshold is obtained by minimizing intuitionistic fuzzy divergence. In the interval Type II fuzzy set, a new membership function is generated that takes into account the two levels of the Type II fuzzy set using the probabilistic T-conorm, and the optimal threshold is selected by minimizing a proposed Type II fuzzy divergence. Though fuzzy techniques have been applied earlier, those methods failed to threshold multiple leukocytes in images. Experimental results show that both the interval Type II fuzzy and intuitionistic fuzzy methods perform better than the existing non-fuzzy/fuzzy methods, with the interval Type II fuzzy thresholding method performing slightly better than the intuitionistic fuzzy method. Segmented leukocytes in the proposed interval Type II fuzzy method are observed to be distinct and clear.

  14. Confidence intervals for time averages in the presence of long-range correlations, a case study on Earth surface temperature anomalies

    NASA Astrophysics Data System (ADS)

    Massah, M.; Kantz, H.

    2016-09-01

    Time averages, a standard tool in the analysis of environmental data, suffer severely from long-range correlations. The sample size needed to obtain a desired small confidence interval can be dramatically larger than for uncorrelated data. We present quantitative results for short- and long-range correlated Gaussian stochastic processes. Using these, we calculate confidence intervals for time averages of surface temperature measurements. Temperature time series are well known to be long-range correlated with Hurst exponents larger than 1/2. Multidecadal time averages are routinely used in the study of climate change. Our analysis shows that uncertainties of such averages are as large as for a single year of uncorrelated data.
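
    The practical consequence can be shown in a few lines: if the Hurst exponent H exceeds 1/2, the variance of the time average decays like n^(2H-2) rather than 1/n, so the CI half-width shrinks much more slowly. The unit prefactor below is an illustrative simplification, not the process-specific constant derived in the paper:

        import numpy as np

        def mean_ci_halfwidths(x, hurst, z=1.96):
            # Under long-range correlation, var(mean) ~ sigma^2 * n**(2H-2)
            # instead of sigma^2 / n (prefactor set to 1 for illustration).
            x = np.asarray(x, dtype=float)
            n, s = len(x), x.std(ddof=1)
            half_iid = z * s / np.sqrt(n)
            half_lrc = z * s * n ** (hurst - 1.0)
            return x.mean(), half_iid, half_lrc

        # For H = 0.9 and n = 3650 (a decade of daily values), the ratio
        # half_lrc / half_iid = n**(H - 0.5) is roughly 27.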

  15. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    PubMed Central

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  16. The Confidence-Accuracy Relationship for Eyewitness Identification Decisions: Effects of Exposure Duration, Retention Interval, and Divided Attention

    ERIC Educational Resources Information Center

    Palmer, Matthew A.; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

    2013-01-01

    Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the…

  17. Measurement of the severity of disability in community-dwelling adults and older adults: interval-level measures for accurate comparisons in large survey data sets

    PubMed Central

    Buz, José; Cortés-Rodríguez, María

    2016-01-01

    Objectives: To (1) create a single metric of disability using Rasch modelling to be used for comparing disability severity levels across groups and countries, (2) test whether the interval-level measures were invariant across countries, sociodemographic and health variables and (3) examine the gains in precision using interval-level measures relative to ordinal scores when discriminating between groups known to differ in disability. Design: Cross-sectional, population-based study. Setting/participants: Data were drawn from the Survey of Health, Ageing and Retirement in Europe (SHARE), including comparable data across 16 countries and involving 58 489 community-dwelling adults aged 50+. Main outcome measures: A single metric of disability composed of self-care and instrumental activities of daily living (IADLs) and functional limitations. We examined construct validity through fit to the Rasch model and the known-groups method. Reliability was examined using person separation reliability. Results: The single metric fulfilled the requirements of a strong hierarchical scale; was able to separate persons with different levels of disability; demonstrated invariance of the item hierarchy across countries; and was unbiased by age, gender and different health conditions. However, we found a blurred hierarchy of ADL and IADL tasks. Rasch-based measures yielded gains in relative precision (11–116%) in discriminating between groups with different medical conditions. Conclusions: Equal-interval measures, with person-invariance and item-invariance properties, provide epidemiologists and researchers with the opportunity to gain better insight into the hierarchical structure of functional disability, and yield more reliable and accurate estimates of disability across groups and countries. Interval-level measures of disability allow parametric statistical analysis to confidently examine the relationship between disability and continuous measures so frequent in health sciences

  1. Evaluating the Impact of Guessing and Its Interactions with Other Test Characteristics on Confidence Interval Procedures for Coefficient Alpha

    ERIC Educational Resources Information Center

    Paek, Insu

    2016-01-01

    The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test…

  2. Using a Nonparametric Bootstrap to Obtain a Confidence Interval for Pearson's "r" with Cluster Randomized Data: A Case Study

    ERIC Educational Resources Information Center

    Wagstaff, David A.; Elek, Elvira; Kulis, Stephen; Marsiglia, Flavio

    2009-01-01

    A nonparametric bootstrap was used to obtain an interval estimate of Pearson's "r," and test the null hypothesis that there was no association between 5th grade students' positive substance use expectancies and their intentions to not use substances. The students were participating in a substance use prevention program in which the unit of…

  3. On the Proper Estimation of the Confidence Interval for the Design Formula of Blast-Induced Vibrations with Site Records

    NASA Astrophysics Data System (ADS)

    Yan, W. M.; Yuen, Ka-Veng

    2015-01-01

    Blast-induced ground vibration has received much engineering and public attention. The vibration is often represented by the peak particle velocity (PPV) and the empirical approach is employed to describe the relationship between the PPV and the scaled distance. Different statistical methods are often used to obtain the confidence level of the prediction. With a known scaled distance, the amount of explosives in a planned blast can then be determined by a blast engineer when the PPV limit and the confidence level of the vibration magnitude are specified. This paper shows that these current approaches do not incorporate the posterior uncertainty of the fitting coefficients. In order to resolve this problem, a Bayesian method is proposed to derive the site-specific fitting coefficients based on a small amount of data collected at an early stage of a blasting project. More importantly, uncertainty of both the fitting coefficients and the design formula can be quantified. Data collected from a site formation project in Hong Kong is used to illustrate the performance of the proposed method. It is shown that the proposed method resolves the underestimation problem in one of the conventional approaches. The proposed approach can be easily conducted using spreadsheet calculation without the need for any additional tools, so it will be particularly welcome by practicing engineers.
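
    A frequentist analogue of the point being made is easy to state: once coefficient uncertainty is propagated, the interval for a new blast widens. The sketch below gives the standard prediction interval for ln(PPV) under the usual attenuation law ln(PPV) = a + b ln(SD); the Bayesian treatment in the paper goes further but has the same structure:

        import numpy as np
        from scipy import stats

        def ppv_prediction_interval(sd, ppv, sd_new, level=0.95):
            # Fit ln(PPV) = a + b*ln(SD) and return a prediction interval
            # for a new scaled distance, propagating coefficient uncertainty.
            x, y = np.log(sd), np.log(ppv)
            n = len(x)
            X = np.column_stack([np.ones(n), x])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            s2 = np.sum((y - X @ beta) ** 2) / (n - 2)
            x0 = np.array([1.0, np.log(sd_new)])
            var_pred = s2 * (1.0 + x0 @ np.linalg.inv(X.T @ X) @ x0)
            t = stats.t.ppf(1 - (1 - level) / 2, n - 2)
            centre = x0 @ beta
            half = t * np.sqrt(var_pred)
            return np.exp(centre - half), np.exp(centre + half)  # PPV units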

  4. Standard Errors and Confidence Intervals from Bootstrapping for Ramsay-Curve Item Response Theory Model Item Parameters

    ERIC Educational Resources Information Center

    Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M.

    2011-01-01

    Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait using splines, and no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…

  5. A methodology for airplane parameter estimation and confidence interval determination in nonlinear estimation problems. Ph.D. Thesis - George Washington Univ., Apr. 1985

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1986-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.

  6. Detection of anomalous diffusion using confidence intervals of the scaling exponent with application to preterm neonatal heart rate variability

    NASA Astrophysics Data System (ADS)

    Bickel, David R.; Verklan, M. Terese; Moon, Jon

    1998-11-01

    The scaling exponent of the root mean square (rms) displacement quantifies the roughness of fractal or multifractal time series; it is equivalent to other second-order measures of scaling, such as the power-law exponents of the spectral density and autocorrelation function. For self-similar time series, the rms scaling exponent equals the Hurst parameter, which is related to the fractal dimension. A scaling exponent of 0.5 implies that the process is normal diffusion, which is equivalent to an uncorrelated random walk; otherwise, the process can be modeled as anomalous diffusion. Higher exponents indicate that the increments of the signal have positive correlations, while exponents below 0.5 imply that they have negative correlations. Scaling exponent estimates of successive segments of the increments of a signal are used to test the null hypothesis that the signal is normal diffusion, with the alternate hypothesis that the diffusion is anomalous. Dispersional analysis, a simple technique which does not require long signals, is used to estimate the scaling exponent from the slope of the linear regression of the logarithm of the standard deviation of binned data points on the logarithm of the number of points per bin. Computing the standard error of the scaling exponent using successive segments of the signal is superior to previous methods of obtaining the standard error, such as that based on the sum of squared errors used in the regression; the regression error is more of a measure of the deviation from power-law scaling than of the uncertainty of the scaling exponent estimate. Applying this test to preterm neonate heart rate data, it is found that time intervals between heart beats can be modeled as anomalous diffusion with negatively correlated increments. This corresponds to power spectra between 1/f^2 and 1/f, whereas healthy adults are usually reported to have 1/f spectra, suggesting that the immaturity of the neonatal nervous system affects the scaling
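
    Dispersional analysis itself is short enough to sketch: bin the series, regress the log standard deviation of bin means on the log bin size, and read off H from the slope; segmenting the record then gives the standard error used in the hypothesis test. The minimum bin count and segment count below are arbitrary illustration choices:

        import numpy as np

        def dispersional_hurst(x):
            # SD of bin means scales as m**(H-1); the log-log regression
            # slope therefore gives H - 1.
            x = np.asarray(x, dtype=float)
            ms, sds = [], []
            m = 1
            while len(x) // m >= 8:                 # keep enough bins for an SD
                nb = len(x) // m
                means = x[: nb * m].reshape(nb, m).mean(axis=1)
                ms.append(m)
                sds.append(means.std(ddof=1))
                m *= 2
            slope = np.polyfit(np.log(ms), np.log(sds), 1)[0]
            return 1.0 + slope

        def hurst_with_se(x, n_seg=8):
            # Estimate on successive segments; the spread of the segment
            # estimates yields the standard error for a test of H = 0.5.
            hs = np.array([dispersional_hurst(s) for s in np.array_split(x, n_seg)])
            return hs.mean(), hs.std(ddof=1) / np.sqrt(n_seg)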

  7. User guide to the UNC process and three utility programs for computation of nonlinear confidence and prediction intervals using MODFLOW-2000

    USGS Publications Warehouse

    Christensen, Steen; Cooley, Richard L.

    2006-01-01

    This report introduces and documents the Uncertainty (UNC) Process, a new Process in MODFLOW-2000 that calculates uncertainty measures for model parameters and for predictions produced by the model. Uncertainty measures can be computed by various methods, but when regression is applied to calibrate a model (for example when using the Parameter-Estimation Process of MODFLOW-2000) it is advantageous to also use regression-based methods to quantify uncertainty. For this reason the UNC Process computes (1) confidence intervals for parameters of the Parameter-Estimation Process and (2) confidence and prediction intervals for most types of functions that can be computed by a MODFLOW-2000 model calibrated by the Parameter-Estimation Process. The types of functions for which the Process works include hydraulic heads, hydraulic head differences, head-dependent flows computed by the head-dependent flow packages for drains (DRN6), rivers (RIV6), general-head boundaries (GHB6), streams (STR6), drain-return cells (DRT1), and constant-head boundaries (CHD), and for differences between flows computed by any of the mentioned flow packages. The UNC Process does not allow computation of intervals for the difference between flows computed by two different flow packages. The report also documents three programs, RESAN2-2k, BEALE2-2k, and CORFAC-2k, which are valuable for the evaluation of results from the Parameter-Estimation Process and for the preparation of input values for the UNC Process. RESAN2-2k and BEALE2-2k are significant updates of the residual analysis and modified Beale's measure programs first published by Cooley and Naff (1990) and later modified for use with MODFLOWP (Hill, 1994) and MODFLOW-2000 (Hill and others, 2000). CORFAC-2k is a new program that computes correction factors to be used by UNC.

  8. Application of non-parametric bootstrap methods to estimate confidence intervals for QTL location in a beef cattle QTL experimental population.

    PubMed

    Jongjoo, Kim; Davis, Scott K; Taylor, Jeremy F

    2002-06-01

    Empirical confidence intervals (CIs) for the estimated quantitative trait locus (QTL) location from selective and non-selective non-parametric bootstrap resampling methods were compared for a genome scan involving an Angus x Brahman reciprocal full-sib backcross population. Genetic maps, based on 357 microsatellite markers, were constructed for 29 chromosomes using CRI-MAP V2.4. Twelve growth, carcass composition and beef quality traits (n = 527-602) were analysed to detect QTLs utilizing (composite) interval mapping approaches. CIs were investigated for 28 likelihood ratio test statistic (LRT) profiles for the one-QTL-per-chromosome model. The CIs from the non-selective bootstrap method were largest (87.7 cM average, or 79.2% coverage of test chromosomes). The Selective II procedure produced the smallest CI size (42.3 cM average). However, CI sizes from the Selective II procedure were more variable than those produced by the two-LOD drop method. CI ranges from the Selective II procedure were also asymmetrical (relative to the most likely QTL position) due to the bias caused by the tendency for the estimated QTL position to be at a marker position in the bootstrap samples and due to monotonicity and asymmetry of the LRT curve in the original sample. PMID:12220133
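
    The non-selective bootstrap compared above is easy to sketch. The following Python illustration (toy data and a stand-in single-marker statistic, not the composite interval mapping used in the study) resamples individuals with replacement, rescans the chromosome, and takes percentile bounds on the re-estimated QTL position; note that the bootstrap estimates necessarily fall on marker positions here, echoing the bias the abstract describes.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy scan: 400 animals, markers every 5 cM, a simulated QTL at 50 cM.
    n, positions = 400, np.arange(0, 105, 5)
    geno = rng.integers(0, 2, size=(n, len(positions)))
    trait = 0.5 * geno[:, positions.searchsorted(50)] + rng.standard_normal(n)

    def qtl_position(y, g):
        """Stand-in for an interval-mapping scan: position of the strongest
        marker-trait association (squared correlation) along the chromosome."""
        stats = [np.corrcoef(y, g[:, j])[0, 1] ** 2 for j in range(g.shape[1])]
        return positions[int(np.argmax(stats))]

    # Non-selective bootstrap: resample individuals, rescan, take percentiles.
    boot = [qtl_position(trait[idx], geno[idx])
            for idx in (rng.integers(0, n, n) for _ in range(1000))]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"QTL at {qtl_position(trait, geno)} cM, 95% bootstrap CI [{lo}, {hi}] cM")
    ```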

  10. Dual electrospray ionization source for confident generation of accurate mass tags using liquid chromatography Fourier transform ion cyclotron resonance mass spectrometry.

    PubMed

    Nepomuceno, Angelito I; Muddiman, David C; Bergen, H Robert; Craighead, James R; Burke, Michael J; Caskey, Patrick E; Allan, Jonathan A

    2003-07-15

    resulting in mass accuracies of 1.08 ppm +/- 0.11 ppm (mean +/- confidence interval of the mean at 95% confidence; N = 160). In addition, the analysis of a tryptic digest of apomyoglobin by nanoLC-dual ESI-FT-ICR afforded an average MMA of -1.09 versus -74.5 ppm for externally calibrated data. Furthermore, we demonstrate that the amplitude of a peptide being electrosprayed at 25 nM can be linearly increased, ultimately allowing for dynamic analyte/IMC abundance modulation. Finally, we demonstrate that this source can reliably be used for multiplexing measurements from two (eventually more) flow streams.
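
    The "mean +/- confidence interval of the mean" figure quoted above is a standard t-based interval on per-measurement mass errors. A minimal sketch, with made-up ppm errors standing in for the real data:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical per-measurement mass errors in ppm (N = 160, as in the abstract).
    ppm = np.random.default_rng(2).normal(1.08, 0.7, size=160)

    mean = ppm.mean()
    half_width = (stats.t.ppf(0.975, df=ppm.size - 1)
                  * ppm.std(ddof=1) / np.sqrt(ppm.size))
    print(f"mass accuracy: {mean:.2f} ppm +/- {half_width:.2f} ppm (95% CI of the mean)")
    ```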

  11. Confidence bounds on structural reliability

    NASA Technical Reports Server (NTRS)

    Mehta, S. R.; Cruse, T. A.; Mahadevan, S.

    1993-01-01

    Different approaches are described for quantifying the physical, statistical, and model uncertainties in the distribution parameters that enter structural reliability calculations. Confidence intervals on the distribution parameters of the input random variables are estimated using four algorithms to evaluate the uncertainty of the response. Design intervals are evaluated using either Monte Carlo simulation or an iterative approach. A first-order approach can be used to compute a first approximation of the design interval, but its accuracy is not satisfactory. The regression approach, which combines the iterative approach with Monte Carlo simulation, is capable of providing good results if the performance function can be accurately represented using regression analysis. It is concluded that the design interval-based approach seems to be quite general and takes into account distribution and model uncertainties.

  12. Confidence bounds on structural reliability

    NASA Astrophysics Data System (ADS)

    Mehta, S. R.; Cruse, T. A.; Mahadevan, S.

    1993-04-01

    Different approaches are described for quantifying the physical, statistical, and model uncertainties in the distribution parameters that enter structural reliability calculations. Confidence intervals on the distribution parameters of the input random variables are estimated using four algorithms to evaluate the uncertainty of the response. Design intervals are evaluated using either Monte Carlo simulation or an iterative approach. A first-order approach can be used to compute a first approximation of the design interval, but its accuracy is not satisfactory. The regression approach, which combines the iterative approach with Monte Carlo simulation, is capable of providing good results if the performance function can be accurately represented using regression analysis. It is concluded that the design interval-based approach seems to be quite general and takes into account distribution and model uncertainties.

  13. Accurate determination of the fine-structure intervals in the 3P ground states of C-13 and C-12 by far-infrared laser magnetic resonance

    NASA Technical Reports Server (NTRS)

    Cooksy, A. L.; Saykally, R. J.; Brown, J. M.; Evenson, K. M.

    1986-01-01

    Accurate values are presented for the fine-structure intervals in the 3P ground state of neutral atomic C-12 and C-13 as obtained from laser magnetic resonance spectroscopy. The rigorous analysis of C-13 hyperfine structure, the measurement of resonant fields for C-12 transitions at several additional far-infrared laser frequencies, and the increased precision of the C-12 measurements, permit significant improvement in the evaluation of these energies relative to earlier work. These results will expedite the direct and precise measurement of these transitions in interstellar sources and should assist in the determination of the interstellar C-12/C-13 abundance ratio.

  14. Generating confidence intervals on biological networks

    PubMed Central

    Thorne, Thomas; Stumpf, Michael PH

    2007-01-01

    Background: In the analysis of networks we frequently require the statistical significance of some network statistic, such as measures of similarity for the properties of interacting nodes. The structure of the network may introduce dependencies among the nodes, and it will in general be necessary to account for these dependencies in the statistical analysis. To this end we require some form of null model of the network: generally, rewired replicates of the network are generated which preserve only the degree (number of interactions) of each node. We show that this can fail to capture important features of network structure, and may result in unrealistic significance levels, when potentially confounding additional information is available. Methods: We present a new network resampling null model which takes into account the degree sequence as well as available biological annotations. Using gene ontology information as an illustration, we show how this information can be accounted for in the resampling approach, and the impact such information has on the assessment of statistical significance of correlations and motif abundances in the Saccharomyces cerevisiae protein interaction network. An algorithm, GOcardShuffle, is introduced to allow for the efficient construction of an improved null model for network data. Results: We use the protein interaction network of S. cerevisiae; correlations between the evolutionary rates and expression levels of interacting proteins and their statistical significance were assessed for null models which condition on different aspects of the available data. The novel GOcardShuffle approach results in a null model for annotated network data which appears to better describe the properties of real biological networks. Conclusion: An improved approach for the statistical analysis of biological network data, which conditions on the available biological information, leads to qualitatively different results compared to approaches which ignore such annotations. In particular, we demonstrate that the effects of the biological organization of the network can be sufficient to explain the observed similarity of interacting proteins. PMID:18053130
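
    The baseline null model that GOcardShuffle refines, rewired replicates preserving each node's degree, can be sketched with networkx's double edge swap. The annotation-conditioning step of GOcardShuffle itself is not reproduced here; the graph and statistic below are toy stand-ins.

    ```python
    import networkx as nx
    import numpy as np

    def null_distribution(G, statistic, n_null=200, seed=0):
        """Degree-preserving rewired replicates of G (each double edge swap
        preserves every node's degree), evaluated under a network statistic."""
        rng = np.random.default_rng(seed)
        values = []
        for _ in range(n_null):
            H = G.copy()
            nx.double_edge_swap(H, nswap=4 * H.number_of_edges(),
                                max_tries=100 * H.number_of_edges(),
                                seed=int(rng.integers(1 << 31)))
            values.append(statistic(H))
        return np.array(values)

    # Toy stand-in for similarity of interacting nodes: assortativity of a
    # binary annotation on a random graph.
    G = nx.erdos_renyi_graph(200, 0.03, seed=1)
    for v in G:
        G.nodes[v]["anno"] = v % 2
    stat = lambda g: nx.attribute_assortativity_coefficient(g, "anno")

    null, obs = null_distribution(G, stat), stat(G)
    p = (1 + np.sum(np.abs(null) >= abs(obs))) / (len(null) + 1)
    print(f"observed = {obs:.3f}, empirical two-sided p = {p:.3f}")
    ```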

  15. Confidence building

    NASA Astrophysics Data System (ADS)

    Roederer, Juan G.

    Many conferences are being held on confidence building in many countries. Usually they are organized and attended by political scientists and science policy specialists. A remarkable exception, in which the main brainstorming was done by “grass roots” geophysicists, nuclear physicists, engineers and ecologists, was a meeting in July at St. John's College in Santa Fe, N. Mex. The aim of the conference Technology-Based Confidence Building: Energy and Environment was to survey programs of international cooperation in pertinent areas of mutual concern to all nations and to identify new initiatives that could contribute to enhanced international stability, with emphasis on cooperation between the U.S. and U.S.S.R.

  16. Confidant Relations in Italy

    PubMed Central

    Isaacs, Jenny; Soglian, Francesca; Hoffman, Edward

    2015-01-01

    Confidants are often described as the individuals to whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant, and if so, to describe various features of the relationship. The vast majority of participants (91%) reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture. PMID:27247641

  17. Confidant Relations in Italy.

    PubMed

    Isaacs, Jenny; Soglian, Francesca; Hoffman, Edward

    2015-02-01

    Confidants are often described as the individuals to whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant, and if so, to describe various features of the relationship. The vast majority of participants (91%) reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture. PMID:27247641

  18. A neural-fuzzy model with confidence measure for controlled stressed-lap surface shape presentation

    NASA Astrophysics Data System (ADS)

    Chen, Minyou; Wan, Yongjian; Wu, Fan; Xie, Kaigui; Wang, Mingyu; Fan, Bin

    2009-05-01

    In the computer-controlled polishing of large aspheric mirrors, it is crucially important to build an accurate stressed-lap surface model for shape control, and it is desirable to provide a practical measure of prediction confidence for assessing the reliability of the resulting models. To build a reliable prediction model representing the surface shape of the stressed-lap polishing process for large-aperture, highly aspheric optical surfaces, this paper proposes a predictive model with its own confidence interval estimate based on a fuzzy neural network. The calculation of the confidence interval accounts for the training data distribution and the accuracy of the trained model with the given input-output data. Simulation results show that the proposed confidence interval estimation reflects the data distribution and extrapolation correctly, and works well on the high-dimensional sparse data set of detected stressed-lap surface shape changes. The original data from the micro-displacement sensor matrix were used to train the neural network model. The experimental results show that the proposed model can represent the surface shape of the stressed lap accurately and facilitate the computer-controlled optical polishing process.

  19. Application of Sequential Interval Estimation to Adaptive Mastery Testing

    ERIC Educational Resources Information Center

    Chang, Yuan-chin Ivan

    2005-01-01

    In this paper, we apply sequential one-sided confidence interval estimation procedures with beta-protection to adaptive mastery testing. The procedures of fixed-width and fixed proportional accuracy confidence interval estimation can be viewed as extensions of one-sided confidence interval procedures. It can be shown that the adaptive mastery…

  20. Understanding Academic Confidence

    ERIC Educational Resources Information Center

    Sander, Paul; Sanders, Lalage

    2006-01-01

    This paper draws on the psychological theories of self-efficacy and the self-concept to understand students' self-confidence in academic study in higher education as measured by the Academic Behavioural Confidence scale (ABC). In doing this, expectancy-value theory and self-efficacy theory are considered and contrasted with self-concept and…

  1. Confidence Intervals for Standardized Linear Contrasts of Means

    ERIC Educational Resources Information Center

    Bonnett, Douglas G.

    2008-01-01

    Most psychology journals now require authors to report a sample value of effect size along with hypothesis testing results. The sample effect size value can be misleading because it contains sampling error. Authors often incorrectly interpret the sample effect size as if it were the population effect size. A simple solution to this problem is to…

  2. Estimation of Confidence Intervals for Multiplication and Efficiency

    SciTech Connect

    Verbeke, J

    2009-07-17

    Helium-3 tubes are used to detect thermal neutrons by charge collection using the ³He(n,p) reaction. By analyzing the time sequence of neutrons detected by these tubes, one can determine important features about the constitution of a measured object: some materials such as Cf-252 emit several neutrons simultaneously, while others such as uranium and plutonium isotopes multiply the number of neutrons to form bursts. This translates into unmistakable signatures. To determine the type of materials measured, one compares the measured count distribution with the one generated by a theoretical fission chain model. When the neutron background is negligible, the theoretical count distributions can be completely characterized by a pair of parameters, the multiplication M and the detection efficiency ε. While the optimal pair of M and ε can be determined by existing codes such as BigFit, the uncertainty on these parameters has not yet been fully studied. The purpose of this work is to precisely compute the uncertainties on the parameters M and ε, given the uncertainties in the count distribution. By considering different lengths of time-tagged data, we will determine how the uncertainties on M and ε vary with the different count distributions.

  3. Technological Pedagogical Content Knowledge (TPACK) Literature Using Confidence Intervals

    ERIC Educational Resources Information Center

    Young, Jamaal R.; Young, Jemimah L.; Shaker, Ziad

    2012-01-01

    The validity and reliability of Technological Pedagogical Content Knowledge (TPACK) as a framework to measure the extent to which teachers can teach with technology hinges on the ability to aggregate results across empirical studies. The results of data collected using the survey of pre-service teacher knowledge of teaching with technology (TKTT)…

  4. Interval Training.

    ERIC Educational Resources Information Center

    President's Council on Physical Fitness and Sports, Washington, DC.

    Regardless of the type of physical activity used, interval training is simply repeated periods of physical stress interspersed with recovery periods during which activity of a reduced intensity is performed. During the recovery periods, the individual usually keeps moving and does not completely recover before the next exercise interval (e.g.,…

  5. Addressing the vaccine confidence gap.

    PubMed

    Larson, Heidi J; Cooper, Louis Z; Eskola, Juhani; Katz, Samuel L; Ratzan, Scott

    2011-08-01

    Vaccines--often lauded as one of the greatest public health interventions--are losing public confidence. Some vaccine experts have referred to this decline in confidence as a crisis. We discuss some of the characteristics of the changing global environment that are contributing to increased public questioning of vaccines, and outline some of the specific determinants of public trust. Public decision making related to vaccine acceptance is neither driven by scientific nor economic evidence alone, but is also driven by a mix of psychological, sociocultural, and political factors, all of which need to be understood and taken into account by policy and other decision makers. Public trust in vaccines is highly variable and building trust depends on understanding perceptions of vaccines and vaccine risks, historical experiences, religious or political affiliations, and socioeconomic status. Although provision of accurate, scientifically based evidence on the risk-benefit ratios of vaccines is crucial, it is not enough to redress the gap between current levels of public confidence in vaccines and levels of trust needed to ensure adequate and sustained vaccine coverage. We call for more research not just on individual determinants of public trust, but on what mix of factors are most likely to sustain public trust. The vaccine community demands rigorous evidence on vaccine efficacy and safety and technical and operational feasibility when introducing a new vaccine, but has been negligent in demanding equally rigorous research to understand the psychological, social, and political factors that affect public trust in vaccines. PMID:21664679

  7. Responsibility and confidence

    PubMed Central

    Austin, Zubin

    2013-01-01

    Background: Despite the changing role of the pharmacist in patient-centred practice, pharmacists anecdotally report little confidence in their clinical decision-making skills and do not feel responsible for their patients. Observational findings have suggested these trends within the profession, but there is a paucity of evidence to explain why. We conducted an exploratory study with the objective of identifying reasons for the lack of responsibility and/or confidence in various pharmacy practice settings. Methods: Pharmacist interviews were conducted via written response, face-to-face or telephone. Seven questions were asked on the topics of responsibility and confidence as they apply to pharmacy practice and how pharmacists think these themes differ in medicine. Interview transcripts were analyzed and divided by common theme. Quotations to support these themes are presented. Results: Twenty-nine pharmacists were asked to participate, and 18 responded (62% response rate). From these interviews, 6 themes were identified as barriers to confidence and responsibility: hierarchy of the medical system, role definitions, evolution of responsibility, ownership of decisions for confidence building, quality and consequences of mentorship and personality traits upon admission. Discussion: We identified 6 potential barriers to the development of pharmacists’ self-confidence and responsibility. These findings have practical applicability for educational research, future curriculum changes, experiential learning structure and pharmacy practice. Due to the bias and limitations of this form of exploratory research and the small sample size, the evidence should be interpreted cautiously. Conclusion: Pharmacists feel neither responsible nor confident in their clinical decisions due to social, educational, experiential and personal reasons. Can Pharm J 2013;146:155-161. PMID:23795200

  8. Confidence Calculation with AMV+

    SciTech Connect

    Fossum, A.F.

    1999-02-19

    The iterative advanced mean value algorithm (AMV+), introduced nearly ten years ago, is now widely used as a cost-effective probabilistic structural analysis tool when the use of sampling methods is cost prohibitive (Wu et al., 1990). The need to establish confidence bounds on calculated probabilities arises because of the presence of uncertainties in measured means and variances of input random variables. In this paper an algorithm is proposed that makes use of the AMV+ procedure and analytically derived probability sensitivities to determine confidence bounds on calculated probabilities.

  9. Adding Confidence to Knowledge

    ERIC Educational Resources Information Center

    Goodson, Ludwika Aniela; Slater, Don; Zubovic, Yvonne

    2015-01-01

    A "knowledge survey" and a formative evaluation process led to major changes in an instructor's course and teaching methods over a 5-year period. Design of the survey incorporated several innovations, including: a) using "confidence survey" rather than "knowledge survey" as the title; b) completing an…

  10. Predicting Systemic Confidence

    ERIC Educational Resources Information Center

    Falke, Stephanie Inez

    2009-01-01

    Using a mixed method approach, this study explored which educational factors predicted systemic confidence in master's level marital and family therapy (MFT) students, and whether or not the impact of these factors was influenced by student beliefs and their perception of their supervisor's beliefs about the value of systemic practice. One hundred…

  11. SystemConfidence

    SciTech Connect

    Josh Lothian, Jeff Kuehn

    2012-09-25

    SystemConfidence is a benchmark developed at ORNL that measures statistical variation, which the user can plot. The portions of the code that manage the collection of the histograms and compute statistics on the histograms were designed with the intent that these functions could be reused in other codes.

  12. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased; this can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the statistical uncertainty that comes with the data (e.g., small samples). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
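
    The kind of coverage-and-bias audit the abstract calls for is a generic Monte Carlo exercise. The sketch below uses a much simpler setting than ordinal CFA (a Fisher-z interval for a correlation) and counts how often a nominal 95% interval covers the truth and, when it misses, on which side it misses.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    rho, n, trials = 0.5, 20, 5000
    cover = miss_low = miss_high = 0
    for _ in range(trials):
        x, e = rng.standard_normal((2, n))
        y = rho * x + np.sqrt(1 - rho**2) * e         # true correlation is rho
        r = np.corrcoef(x, y)[0, 1]
        z, hw = np.arctanh(r), 1.96 / np.sqrt(n - 3)  # Fisher-z interval
        lo, hi = np.tanh(z - hw), np.tanh(z + hw)
        if lo <= rho <= hi:
            cover += 1
        elif rho < lo:        # interval lies entirely above the true value
            miss_high += 1
        else:                 # interval lies entirely below the true value
            miss_low += 1
    print(f"coverage {cover / trials:.3f}; misses above {miss_high}, below {miss_low}")
    ```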

  13. Accurate absolute reference frequencies from 1511 to 1545 nm of the ν₁+ν₃ band of ¹²C₂H₂ determined with laser frequency comb interval measurements

    SciTech Connect

    Madej, Alan A.; Alcock, A. John; Czajkowski, Andrzej; Bernard, John E.; Chepurov, Sergei

    2006-10-15

    Absolute frequency measurements, with uncertainties as low as 2 kHz (1×10⁻¹¹), are presented for the ν₁+ν₃ band of ¹²C₂H₂ at 1.5 μm (194-198 THz). The measurements were made using cavity-enhanced, diode-laser-based saturation spectroscopy. With one laser system stabilized to the P(16) line of ¹³C₂H₂ and a system stabilized to the line in ¹²C₂H₂ whose frequency was to be determined, a Cr:YAG laser-based frequency comb was employed to measure the frequency intervals. The systematic uncertainty is notably reduced relative to that of previous studies, and the region of measured lines has been extended. Improved molecular constants are obtained.

  14. Reclaim your creative confidence.

    PubMed

    Kelley, Tom; Kelley, David

    2012-12-01

    Most people are born creative. But over time, a lot of us learn to stifle those impulses. We become warier of judgment, more cautious, more analytical. The world seems to divide into "creatives" and "noncreatives," and too many people resign themselves to the latter category. And yet we know that creativity is essential to success in any discipline or industry. The good news, according to authors Tom Kelley and David Kelley of IDEO, is that we all can rediscover our creative confidence. The trick is to overcome the four big fears that hold most of us back: fear of the messy unknown, fear of judgment, fear of the first step, and fear of losing control. The authors use an approach based on the work of psychologist Albert Bandura in helping patients get over their snake phobias: You break challenges down into small steps and then build confidence by succeeding on one after another. Creativity is something you practice, say the authors, not just a talent you are born with. PMID:23227579

  16. Anonymity Builds Artistic Confidence

    ERIC Educational Resources Information Center

    Lane, Susan L.

    2012-01-01

    The fear of embarrassment in middle- and high-school students often inhibits their attempts at drawing realistically. Many find it difficult to reproduce what they see accurately, and as a result, complain, act out or refuse to do the task in order to save face. In this article, the author describes a lesson that does three things: (1) it attempts…

  17. Improved investor confidence

    SciTech Connect

    Anderson, J.

    1995-10-01

    Results of a financial ranking survey of power projects show reasonably strong activity when compared to previous surveys. Perhaps the most notable trend is the continued increase in the number of international deals being reported. Nearly 62 percent of the transactions reported were for non-US projects. This increase will likely expand with time as developers and lenders gain confidence in certain regions. For the remainder of 1995 and into 1996 it is likely that financial activity will continue at a steady pace. A number of projects in various markets are poised to reach financial close relatively soon. Developers, investment bankers, and governments are all gaining experience and becoming more comfortable with the process.

  18. Confidence in Numerical Simulations

    SciTech Connect

    Hemez, Francois M.

    2015-02-23

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to “forecast,” that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists “think.” This thought process parallels the scientific method, whereby a hypothesis is formulated, often accompanied by simplifying assumptions, and then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. “Confidence” derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  19. Confidence and Cognitive Test Performance

    ERIC Educational Resources Information Center

    Stankov, Lazar; Lee, Jihyun

    2008-01-01

    This article examines the nature of confidence in relation to abilities, personality, and metacognition. Confidence scores were collected during the administration of Reading and Listening sections of the Test of English as a Foreign Language Internet-Based Test (TOEFL iBT) to 824 native speakers of English. Those confidence scores were correlated…

  20. Monitoring tigers with confidence.

    PubMed

    Linkie, Matthew; Guillera-Arroita, Gurutzeta; Smith, Joseph; Rayan, D Mark

    2010-12-01

    With only 5% of the world's wild tigers (Panthera tigris Linnaeus, 1758) remaining since the last century, conservationists urgently need to know whether or not the management strategies currently being employed are effectively protecting these tigers. This knowledge is contingent on the ability to reliably monitor tiger populations, or subsets, over space and time. In this paper, we focus on the 2 seminal methodologies (camera trap and occupancy surveys) that have enabled the monitoring of tiger populations with greater confidence. Specifically, we: (i) describe their statistical theory and application in the field; (ii) discuss issues associated with their survey designs and state variable modeling; and, (iii) discuss their future directions. These methods have had an unprecedented influence on increasing statistical rigor within tiger surveys and, also, surveys of other carnivore species. Nevertheless, only 2 published camera trap studies have gone beyond single baseline assessments and actually monitored population trends. For low density tiger populations (e.g. <1 adult tiger/100 km²) obtaining sufficient precision for state variable estimates from camera trapping remains a challenge because of insufficient detection probabilities and/or sample sizes. Occupancy surveys have overcome this problem by redefining the sampling unit (e.g. grid cells and not individual tigers). Current research is focusing on developing spatially explicit capture-mark-recapture models and estimating abundance indices from landscape-scale occupancy surveys, as well as the use of genetic information for identifying and monitoring tigers. The widespread application of these monitoring methods in the field now enables complementary studies on the impact of the different threats to tiger populations and their response to varying management intervention. PMID:21392352

  2. Students' intentions towards studying science at upper-secondary school: the differential effects of under-confidence and over-confidence

    NASA Astrophysics Data System (ADS)

    Sheldrake, Richard

    2016-05-01

    Understanding students' intentions to study science at upper-secondary school, at university, and to follow science careers continues as a central concern for international science education. Prior research has highlighted that students' science confidence has been associated with their intentions to study science further, although under-confidence and over-confidence (lower or higher confidence than expected, given someone's attainment) have not been considered in detail. Accordingly, this study explored whether under-confident, accurately evaluating, and over-confident students expressed different attitudes towards their science education, and explored how under-confidence and over-confidence might influence students' science intentions. The questionnaire responses of 1523 students from 12 secondary schools in England were considered through analysis of variance and predictive modelling. Under-confident students expressed consistently lower science attitudes than accurately evaluating and over-confident students, despite reporting the same science grades as accurately evaluating students. Students' intentions to study science were predicted by different factors in different ways, depending on whether the students were under-confident, accurate, or over-confident. For accurately evaluating and over-confident students, science intentions were predicted by their self-efficacy beliefs (their confidence in their expected future science attainment). For under-confident students, science intentions were predicted by their self-concept beliefs (their confidence in currently 'doing well' or 'being good' at science). Many other differences were also apparent. Fundamentally, under-confidence may be detrimental not simply through associating with lower attitudes, but through students considering their choices in different ways. Under-confidence may accordingly require attention to help ensure that students' future choices are not unnecessarily constrained.

  3. An interval model updating strategy using interval response surface models

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin

    2015-08-01

    Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. In practice, however, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In that case an interval model updating procedure shows its superiority by simplifying the problem, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be largely avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.

  4. A comparison of approximate interval estimators for the Bernoulli parameter

    NASA Technical Reports Server (NTRS)

    Leemis, Lawrence; Trivedi, Kishor S.

    1993-01-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
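
    Both approximations compared in the paper take only a few lines each; the sketch below does not reproduce the paper's charts or recommended cut-offs. The normal approximation gives the familiar Wald interval, while the Poisson approximation uses the chi-square bounds for a Poisson mean, scaled by the sample size, and is suited to small p.

    ```python
    import numpy as np
    from scipy import stats

    def normal_interval(k, n, conf=0.95):
        """Wald interval from the normal approximation to the binomial."""
        z = stats.norm.ppf(0.5 + conf / 2)
        p = k / n
        hw = z * np.sqrt(p * (1 - p) / n)
        return max(p - hw, 0.0), min(p + hw, 1.0)

    def poisson_interval(k, n, conf=0.95):
        """Poisson approximation: exact chi-square bounds for a Poisson mean
        with k observed events, scaled by n."""
        a = 1 - conf
        lo = 0.0 if k == 0 else stats.chi2.ppf(a / 2, 2 * k) / (2 * n)
        hi = stats.chi2.ppf(1 - a / 2, 2 * (k + 1)) / (2 * n)
        return lo, hi

    for k, n in [(3, 50), (40, 50)]:   # small p vs. large p
        print(k, n, normal_interval(k, n), poisson_interval(k, n))
    ```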

  5. Why Aren't They Called Probability Intervals?

    ERIC Educational Resources Information Center

    Devlin, Thomas F.

    2008-01-01

    This article offers suggestions for teaching confidence intervals, a fundamental statistical tool often misinterpreted by beginning students. A historical perspective presenting the interpretation given by their inventor is supported with examples and the use of technology. A method for determining confidence intervals for the seldom-discussed…

  6. On how the brain decodes vocal cues about speaker confidence.

    PubMed

    Jiang, Xiaoming; Pell, Marc D

    2015-05-01

    In speech communication, listeners must accurately decode vocal cues that refer to the speaker's mental state, such as their confidence or 'feeling of knowing'. However, the time course and neural mechanisms associated with online inferences about speaker confidence are unclear. Here, we used event-related potentials (ERPs) to examine the temporal neural dynamics underlying a listener's ability to infer speaker confidence from vocal cues during speech processing. We recorded listeners' real-time brain responses while they evaluated statements wherein the speaker's tone of voice conveyed one of three levels of confidence (confident, close-to-confident, unconfident) or were spoken in a neutral manner. Neural responses time-locked to event onset show that the perceived level of speaker confidence could be differentiated at distinct time points during speech processing: unconfident expressions elicited a weaker P2 than all other expressions of confidence (or neutral-intending utterances), whereas close-to-confident expressions elicited a reduced negative response in the 330-500 msec and 550-740 msec time window. Neutral-intending expressions, which were also perceived as relatively confident, elicited a more delayed, larger sustained positivity than all other expressions in the 980-1270 msec window for this task. These findings provide the first piece of evidence of how quickly the brain responds to vocal cues signifying the extent of a speaker's confidence during online speech comprehension; first, a rough dissociation between unconfident and confident voices occurs as early as 200 msec after speech onset. At a later stage, further differentiation of the exact level of speaker confidence (i.e., close-to-confident, very confident) is evaluated via an inferential system to determine the speaker's meaning under current task settings. These findings extend three-stage models of how vocal emotion cues are processed in speech comprehension (e.g., Schirmer & Kotz, 2006) by

  7. Measuring Vaccine Confidence: Introducing a Global Vaccine Confidence Index

    PubMed Central

    Larson, Heidi J; Schulz, William S; Tucker, Joseph D; Smith, David M D

    2015-01-01

    Background. Public confidence in vaccination is vital to the success of immunisation programmes worldwide. Understanding the dynamics of vaccine confidence is therefore of great importance for global public health. Few published studies permit global comparisons of vaccination sentiments and behaviours against a common metric. This article presents the findings of a multi-country survey of confidence in vaccines and immunisation programmes in Georgia, India, Nigeria, Pakistan, and the United Kingdom (UK) – these being the first results of a larger project to map vaccine confidence globally. Methods. Data were collected from a sample of the general population and from those with children under 5 years old against a core set of confidence questions. All surveys were conducted in the relevant local language in Georgia, India, Nigeria, Pakistan, and the UK. We examine confidence in immunisation programmes as compared to confidence in other government health services, the relationships between confidence in the system and levels of vaccine hesitancy, reasons for vaccine hesitancy, ultimate vaccination decisions, and their variation based on country contexts and demographic factors. Results. The numbers of respondents by country were: Georgia (n=1000); India (n=1259); Pakistan (n=2609); UK (n=2055); Nigerian households (n=12554); and Nigerian health providers (n=1272). The UK respondents with children under five years of age were more likely to hesitate to vaccinate, compared to other countries. Confidence in immunisation programmes was more closely associated with confidence in the broader health system in the UK (Spearman’s ρ=0.5990), compared to Nigeria (ρ=0.5477), Pakistan (ρ=0.4491), and India (ρ=0.4240), all of which ranked confidence in immunisation programmes higher than confidence in the broader health system. Georgia had the highest rate of vaccine refusals (6%) among those who reported initial hesitation. In all other countries surveyed most

  8. Assessment of cross-frequency coupling with confidence using generalized linear models

    PubMed Central

    Kramer, M. A.; Eden, U. T.

    2013-01-01

    Background: Brain voltage activity displays distinct neuronal rhythms spanning a wide frequency range. How rhythms of different frequency interact – and the function of these interactions – remains an active area of research. Many methods have been proposed to assess the interactions between different frequency rhythms, in particular measures that characterize the relationship between the phase of a low-frequency rhythm and the amplitude envelope of a high-frequency rhythm. However, an optimal analysis method to assess this cross-frequency coupling (CFC) does not yet exist. New Method: Here we describe a new procedure to assess CFC that utilizes the generalized linear modeling (GLM) framework. Results: We illustrate the utility of this procedure in three synthetic examples. The proposed GLM-CFC procedure allows a rapid and principled assessment of CFC with confidence bounds, scales with the intensity of the CFC, and accurately detects biphasic coupling. Comparison with Existing Methods: Compared to existing methods, the proposed GLM-CFC procedure is easily interpretable, possesses confidence intervals that are easy and efficient to compute, and accurately detects biphasic coupling. Conclusions: The GLM-CFC statistic provides a method for accurate and statistically rigorous assessment of CFC. PMID:24012829
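
    A simplified variant of the GLM-CFC idea can be sketched in a few lines: filter out the low- and high-frequency rhythms, then regress the high-frequency amplitude envelope on functions of the low-frequency phase and read confidence bounds off the GLM fit. The published method uses a spline phase basis; the sine/cosine basis, band edges, and synthetic signal below are simplifying assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert
    import statsmodels.api as sm

    def glm_cfc(x, fs, low=(4, 8), high=(80, 120)):
        """Regress the high-band amplitude envelope on sin/cos of the low-band
        phase; fit.conf_int() gives confidence bounds on the coupling terms."""
        def bandpass(sig, band):
            b, a = butter(3, np.array(band) / (fs / 2), btype="band")
            return filtfilt(b, a, sig)
        phase = np.angle(hilbert(bandpass(x, low)))
        amp = np.abs(hilbert(bandpass(x, high)))
        X = sm.add_constant(np.column_stack([np.sin(phase), np.cos(phase)]))
        return sm.GLM(amp, X, family=sm.families.Gaussian()).fit()

    # Synthetic test: 100 Hz amplitude modulated by the phase of a 6 Hz rhythm.
    fs = 1000
    t = np.arange(0, 20, 1 / fs)
    rng = np.random.default_rng(5)
    slow = np.sin(2 * np.pi * 6 * t)
    x = (slow + (1 + 0.5 * slow) * np.sin(2 * np.pi * 100 * t)
         + 0.5 * rng.standard_normal(t.size))
    print(glm_cfc(x, fs).conf_int())
    ```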

  9. Mixed Confidence Estimation for Iterative CT Reconstruction.

    PubMed

    Perlmutter, David S; Kim, Soo Mee; Kinahan, Paul E; Alessio, Adam M

    2016-09-01

    Dynamic (4D) CT imaging is used in a variety of applications, but the two major drawbacks of the technique are its increased radiation dose and longer reconstruction time. Here we present a statistical analysis of our previously proposed Mixed Confidence Estimation (MCE) method that addresses both these issues. This method, where framed iterative reconstruction is performed only on the dynamic regions of each frame while static regions are fixed across frames to a composite image, was proposed to reduce computation time. In this work, we generalize the previous method to describe any application where a portion of the image is known with higher confidence (static, composite, lower-frequency content, etc.) and a portion of the image is known with lower confidence (dynamic, targeted, etc.). We show that by splitting the image space into higher and lower confidence components, MCE can lower the estimator variance in both regions compared to conventional reconstruction. We present a theoretical argument for this reduction in estimator variance and verify this argument with proof-of-principle simulations. We also propose a fast approximation of the variance of images reconstructed with MCE and confirm that this approximation is accurate compared to analytic calculations and multi-realization estimates of image variance. This MCE method requires less computation time and provides reduced image variance for imaging scenarios where portions of the image are known with more certainty than others, allowing for potentially reduced radiation dose and/or improved dynamic imaging. PMID:27008663

  10. Confidant Relations of the Aged.

    ERIC Educational Resources Information Center

    Tigges, Leann M.; And Others

    The confidant relationship is a qualitatively distinct dimension of the emotional support system of the aged, yet the composition of the confidant network has been largely neglected in research on aging. Persons (N=940) 60 years of age and older were interviewed about their socio-environmental setting. From the enumeration of their relatives,…

  11. Subjective probability intervals: how to reduce overconfidence by interval evaluation.

    PubMed

    Winman, Anders; Hansson, Patrik; Juslin, Peter

    2004-11-01

    Format dependence implies that assessment of the same subjective probability distribution produces different conclusions about over- or underconfidence depending on the assessment format. In 2 experiments, the authors demonstrate that the overconfidence bias that occurs when participants produce intervals for an uncertain quantity is almost abolished when they evaluate the probability that the same intervals include the quantity. The authors successfully apply a method for adaptive adjustment of probability intervals as a debiasing tool and discuss a tentative explanation in terms of a naive sampling model. According to this view, people report their experiences accurately, but they are naive in that they treat both sample proportion and sample dispersion as unbiased estimators, yielding small bias in probability evaluation but strong bias in interval production. PMID:15521796

  12. TIME-INTERVAL MEASURING DEVICE

    DOEpatents

    Gross, J.E.

    1958-04-15

    An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach for time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible error in counting caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses by a fraction of the pulse period so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
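
    The patent's averaging trick is easy to verify numerically: a single gated count is quantized to one oscillator period, but counting the same interval with several sub-period delays of the pulse train and averaging the totals recovers a fraction-of-a-period estimate. All numbers below (a 1 MHz clock, an 8-step delay) are hypothetical.

    ```python
    import numpy as np

    def gated_count(interval, period, offset):
        """Oscillator pulses falling inside the gate, for a given phase offset
        of the pulse train (all quantities in seconds)."""
        return np.arange(offset, interval, period).size

    period, interval = 1e-6, 123.4567e-6   # hypothetical clock and unknown interval
    offset, n_delays = 0.37e-6, 8          # arbitrary initial phase, 8 delay steps

    counts = [gated_count(interval, period, (offset + d * period / n_delays) % period)
              for d in range(n_delays)]
    estimate = np.mean(counts) * period    # average the totals, multiply by period
    print(f"single count: {counts[0] * period * 1e6:.3f} us, "
          f"averaged: {estimate * 1e6:.3f} us (true 123.457 us)")
    ```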

  13. Confidence rating for eutrophication assessments.

    PubMed

    Brockmann, Uwe H; Topcu, Dilek H

    2014-05-15

    Confidence of monitoring data is dependent on their variability and representativeness of sampling in space and time. Whereas variability can be assessed as statistical confidence limits, representative sampling is related to equidistant sampling, considering gradients or changing rates at sampling gaps. By the proposed method both aspects are combined, resulting in balanced results for examples of total nitrogen concentrations in the German Bight/North Sea. For assessing sampling representativeness surface areas, vertical profiles and time periods are divided into regular sections for which individually the representativeness is calculated. The sums correspond to the overall representativeness of sampling in the defined area/time period. Effects of not sampled sections are estimated along parallel rows by reducing their confidence, considering their distances to next sampled sections and the interrupted gradients/changing rates. Confidence rating of time sections is based on maximum differences of sampling rates at regular time steps and related means of concentrations.

  14. Testing 40 Predictions from the Transtheoretical Model Again, with Confidence

    ERIC Educational Resources Information Center

    Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.

    2013-01-01

    Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…

  15. Predictive intervals for age-specific fertility.

    PubMed

    Keilman, N; Pham, D Q

    2000-03-01

    A multivariate ARIMA model is combined with a Gamma curve to predict confidence intervals for age-specific birth rates by 1-year age groups. The method is applied to observed age-specific births in Norway between 1900 and 1995, and predictive intervals are computed for each year up to 2050. The predicted two-thirds confidence intervals for Total Fertility (TF) around 2010 agree well with TF errors in old population forecasts made by Statistics Norway. The method gives useful predictions for age-specific fertility up to the years 2020-30. For later years, the intervals become too wide. Methods that do not take into account estimation errors in the ARIMA model coefficients underestimate the uncertainty of future TF values. The findings suggest that the margins between high and low fertility variants in official population forecasts for many Western countries are too narrow. PMID:12158991
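
    The univariate core of such a forecast, an ARIMA fit plus predictive bands at a chosen coverage level, is readily sketched with statsmodels; the paper's multivariate ARIMA-plus-Gamma-curve machinery for full age schedules is not reproduced, and the series below is synthetic.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic total-fertility series standing in for the 1900-1995 data.
    rng = np.random.default_rng(6)
    years = pd.date_range("1900", periods=96, freq="YS")
    tf = pd.Series(2.0 + np.cumsum(rng.normal(0, 0.05, years.size)), index=years)

    fit = ARIMA(tf, order=(1, 1, 0)).fit()
    forecast = fit.get_forecast(steps=30)
    bands = forecast.conf_int(alpha=1 / 3)   # two-thirds predictive band
    print(bands.head())
    ```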

  16. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making

    PubMed Central

    Bahrami, Bahador; Latham, Peter E.

    2015-01-01

    Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people’s confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality. PMID:26517475

  18. Assessing uncertainty in reference intervals via tolerance intervals: application to a mixed model describing HIV infection.

    PubMed

    Katki, Hormuzd A; Engels, Eric A; Rosenberg, Philip S

    2005-10-30

    We define the reference interval as the range between the 2.5th and 97.5th percentiles of a random variable. We use reference intervals to compare characteristics of a marker of disease progression between affected populations. We use a tolerance interval to assess uncertainty in the reference interval. Unlike the tolerance interval, the estimated reference interval does not contain the true reference interval with specified confidence (or credibility). The tolerance interval is easy to understand, communicate and visualize. We derive estimates of the reference interval and its tolerance interval for markers defined by features of a linear mixed model. Examples considered are reference intervals for time trends in HIV viral load, and CD4 per cent, in HIV-infected haemophiliac children and homosexual men. We estimate the intervals with likelihood methods and also develop a Bayesian model in which the parameters are estimated via Markov-chain Monte Carlo. The Bayesian formulation naturally overcomes some important limitations of the likelihood model. PMID:16189804
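
    As a simple nonparametric analogue of the distinction drawn here (a sketch, not the authors' mixed-model machinery; the marker values are simulated), one can estimate a reference interval from sample percentiles and attach a bootstrap band to each endpoint to convey the uncertainty that a tolerance interval is meant to capture.

    ```python
    # A minimal sketch: nonparametric reference interval (2.5th-97.5th
    # percentiles) with bootstrap uncertainty bands on each endpoint,
    # in the spirit of a tolerance interval. Hypothetical data.
    import numpy as np

    rng = np.random.default_rng(1)
    marker = rng.lognormal(mean=1.0, sigma=0.4, size=400)  # hypothetical marker

    ref = np.percentile(marker, [2.5, 97.5])

    B = 2000
    boot = np.array([
        np.percentile(rng.choice(marker, size=marker.size, replace=True),
                      [2.5, 97.5])
        for _ in range(B)
    ])
    lo_band = np.percentile(boot[:, 0], [5, 95])   # 90% band, lower endpoint
    hi_band = np.percentile(boot[:, 1], [5, 95])   # 90% band, upper endpoint

    print(f"reference interval: [{ref[0]:.2f}, {ref[1]:.2f}]")
    print(f"lower endpoint 90% band: [{lo_band[0]:.2f}, {lo_band[1]:.2f}]")
    print(f"upper endpoint 90% band: [{hi_band[0]:.2f}, {hi_band[1]:.2f}]")
    ```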

  19. Five-Year Risk of Interval-Invasive Second Breast Cancer

    PubMed Central

    Buist, Diana S. M.; Houssami, Nehmat; Dowling, Emily C.; Halpern, Elkan F.; Gazelle, G. Scott; Lehman, Constance D.; Henderson, Louise M.; Hubbard, Rebecca A.

    2015-01-01

    Background: Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. Methods: We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. Results: The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI = 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. Conclusions: PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. PMID:25904721

  20. Confidence-Based Feature Acquisition

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; desJardins, Marie; MacGlashan, James

    2010-01-01

    Confidence-based Feature Acquisition (CFA) is a novel, supervised learning method for acquiring missing feature values when there is missing data at both training (learning) and test (deployment) time. To train a machine learning classifier, data is encoded with a series of input features describing each item. In some applications, the training data may have missing values for some of the features, which can be acquired at a given cost. A relevant JPL example is that of the Mars rover exploration in which the features are obtained from a variety of different instruments, with different power consumption and integration time costs. The challenge is to decide which features will lead to increased classification performance and are therefore worth acquiring (paying the cost). To solve this problem, CFA, which is made up of two algorithms (CFA-train and CFA-predict), has been designed to greedily minimize total acquisition cost (during training and testing) while aiming for a specific accuracy level (specified as a confidence threshold). With this method, it is assumed that there is a nonempty subset of features that are free; that is, every instance in the data set includes these features initially for zero cost. It is also assumed that the feature acquisition (FA) cost associated with each feature is known in advance, and that the FA cost for a given feature is the same for all instances. Finally, CFA requires that the base-level classifiers produce not only a classification, but also a confidence (or posterior probability).
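
    A minimal sketch of the acquisition loop this describes (not NASA's CFA-train/CFA-predict implementation; `classify` is a hypothetical base classifier returning a label and a posterior-style confidence, and the cheapest-first ordering stands in for CFA's learned acquisition policy):

    ```python
    # A sketch of confidence-threshold feature acquisition: start from the
    # free features and buy missing features until the classifier's
    # confidence reaches the threshold. All names here are illustrative.

    def cfa_predict(instance, free_features, costly_features, costs,
                    classify, threshold=0.9):
        acquired = dict(free_features)       # features available at zero cost
        spent = 0.0
        remaining = sorted(costly_features, key=lambda f: costs[f])
        label, conf = classify(acquired)
        while conf < threshold and remaining:
            feat = remaining.pop(0)          # cheapest missing feature first
            acquired[feat] = instance[feat]  # pay to observe this feature
            spent += costs[feat]
            label, conf = classify(acquired)
        return label, conf, spent
    ```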

  1. The Sense of Confidence during Probabilistic Learning: A Normative Account

    PubMed Central

    Meyniel, Florent; Schlunegger, Daniel; Dehaene, Stanislas

    2015-01-01

    Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable “feeling of knowing” or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics; it seems instead a core…

  2. Normal probability plots with confidence.

    PubMed

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods.
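
    One simple way to obtain such simultaneous intervals is by Monte Carlo calibration (a sketch under that assumption, not necessarily the paper's construction): shrink the pointwise level of order-statistic intervals until all n standardized points fall inside simultaneously with probability 1-α under normality.

    ```python
    # A simulation sketch: calibrate pointwise intervals for the standardized
    # order statistics of a normal sample so that, under normality, all n
    # plotted points fall inside simultaneously with probability 1 - alpha.
    import numpy as np

    def simultaneous_bands(n, alpha=0.05, reps=5000, seed=0):
        rng = np.random.default_rng(seed)
        sims = np.sort(rng.standard_normal((reps, n)), axis=1)
        sims = (sims - sims.mean(1, keepdims=True)) / sims.std(1, keepdims=True)
        lo = hi = None
        # Shrink the pointwise level until simultaneous coverage is reached.
        for point_alpha in np.linspace(alpha, alpha / (20 * n), 200):
            lo = np.quantile(sims, point_alpha / 2, axis=0)
            hi = np.quantile(sims, 1 - point_alpha / 2, axis=0)
            coverage = np.mean(np.all((sims >= lo) & (sims <= hi), axis=1))
            if coverage >= 1 - alpha:
                break
        return lo, hi

    lo, hi = simultaneous_bands(n=30)
    print(np.round(lo[:3], 2), np.round(hi[:3], 2))  # bands for smallest points
    ```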

  3. Updating representations of temporal intervals.

    PubMed

    Danckert, James; Anderson, Britt

    2015-12-01

    Effectively engaging with the world depends on accurate representations of the regularities that make up that world-what we call mental models. The success of any mental model depends on the ability to adapt to changes-to 'update' the model. In prior work, we have shown that damage to the right hemisphere of the brain impairs the ability to update mental models across a range of tasks. Given the disparate nature of the tasks we have employed in this prior work (i.e. statistical learning, language acquisition, position priming, perceptual ambiguity, strategic game play), we propose that a cognitive module important for updating mental representations should be generic, in the sense that it is invoked across multiple cognitive and perceptual domains. To date, the majority of our tasks have been visual in nature. Given the ubiquity and import of temporal information in sensory experience, we examined the ability to build and update mental models of time. We had healthy individuals complete a temporal prediction task in which intervals were initially drawn from one temporal range before an unannounced switch to a different range of intervals. Separate groups had the second range of intervals switch to one that contained either longer or shorter intervals than the first range. Both groups showed significant positive correlations between perceptual and prediction accuracy. While each group updated mental models of temporal intervals, those exposed to shorter intervals did so more efficiently. Our results support the notion of generic capacity to update regularities in the environment-in this instance based on temporal information. The task developed here is well suited to investigations in neurological patients and in neuroimaging settings. PMID:26303026

  5. Memory conformity affects inaccurate memories more than accurate memories.

    PubMed

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  6. Overconfidence in Interval Estimates: What Does Expertise Buy You?

    ERIC Educational Resources Information Center

    McKenzie, Craig R. M.; Liersch, Michael J.; Yaniv, Ilan

    2008-01-01

    People's 90% subjective confidence intervals typically contain the true value about 50% of the time, indicating extreme overconfidence. Previous results have been mixed regarding whether experts are as overconfident as novices. Experiment 1 examined interval estimates from information technology (IT) professionals and UC San Diego (UCSD) students…

  7. The effect of certain rater roles on confidence in physician's assistant ratings.

    PubMed

    Dowaliby, F J

    1977-11-01

    Previous research on the psychology of confidence suggests that the more confident a rater is in his judgment, the more accurate his rating is. The purpose of the present study was to investigate possible differences among raters in their confidence in competency ratings which they had provided. Results indicated significant differences due to the rater's interpersonal role with the ratee and the particular aspect of competence rated. Competence ratings are also shown to have greater simple structure when adjusted for rater confidence. Rater confidence is discussed as an index for rater selection and as a moderator variable for competence ratings.

  8. Confidence-based somatic mutation evaluation and prioritization.

    PubMed

    Löwer, Martin; Renard, Bernhard Y; de Graaf, Jos; Wagner, Meike; Paret, Claudia; Kneip, Christoph; Türeci, Ozlem; Diken, Mustafa; Britten, Cedrik; Kreiter, Sebastian; Koslowski, Michael; Castle, John C; Sahin, Ugur

    2012-01-01

    Next generation sequencing (NGS) has enabled high throughput discovery of somatic mutations. Detection depends on experimental design, lab platforms, parameters and analysis algorithms. However, NGS-based somatic mutation detection is prone to erroneous calls, with reported validation rates near 54% and congruence between algorithms less than 50%. Here, we developed an algorithm to assign a single statistic, a false discovery rate (FDR), to each somatic mutation identified by NGS. This FDR confidence value accurately discriminates true mutations from erroneous calls. Using sequencing data generated from triplicate exome profiling of C57BL/6 mice and B16-F10 melanoma cells, we used the existing algorithms GATK, SAMtools and SomaticSNiPer to identify somatic mutations. For each identified mutation, our algorithm assigned an FDR. We selected 139 mutations for validation, including 50 somatic mutations assigned a low FDR (high confidence) and 44 mutations assigned a high FDR (low confidence). All of the high confidence somatic mutations validated (50 of 50), none of the 44 low confidence somatic mutations validated, and 15 of 45 mutations with an intermediate FDR validated. Furthermore, the assignment of a single FDR to individual mutations enables statistical comparisons of lab and computation methodologies, including ROC curves and AUC metrics. Using the HiSeq 2000, single end 50 nt reads from replicates generate the highest confidence somatic mutation call set.

  9. Confidence in ASCI scientific simulations

    SciTech Connect

    Ang, J.A.; Trucano, T.G.; Luginbuhl, D.R.

    1998-06-01

    The US Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program calls for the development of high end computing and advanced application simulations as one component of a program to eliminate reliance upon nuclear testing in the US nuclear weapons program. This paper presents results from the ASCI program's examination of needs for focused validation and verification (V and V). These V and V activities will ensure that 100 TeraOP-scale ASCI simulation code development projects apply the appropriate means to achieve high confidence in the use of simulations for stockpile assessment and certification. The authors begin with an examination of the roles for model development and validation in the traditional scientific method. The traditional view is that the scientific method has two foundations, experimental and theoretical. While the traditional scientific method does not acknowledge the role for computing and simulation, this examination establishes a foundation for the extension of the traditional processes to include verification and scientific software development that results in the notional framework known as Sargent's Framework. This framework elucidates the relationships between the processes of scientific model development, computational model verification and simulation validation. This paper presents a discussion of the methodologies and practices that the ASCI program will use to establish confidence in large-scale scientific simulations. While the effort for a focused program in V and V is just getting started, the ASCI program has been underway for a couple of years. The authors discuss some V and V activities and preliminary results from the ALEGRA simulation code that is under development for ASCI. The breadth of physical phenomena and the advanced computational algorithms that are employed by ALEGRA make it a subject for V and V that should typify what is required for many ASCI simulations.

  10. A confidence paradigm for classification systems

    NASA Astrophysics Data System (ADS)

    Leap, Nathan J.; Bauer, Kenneth W., Jr.

    2008-04-01

    There is no universally accepted methodology to determine how much confidence one should have in a classifier output. This research proposes a framework to determine the level of confidence in an indication from a classifier system where the output is a measurement value. There are two types of confidence developed in this paper. The first is confidence in a classification system or classifier and is denoted classifier confidence. The second is the confidence in the output of a classification system or classifier. In this paradigm, we posit that the confidence in the output of a classifier should be, on average, equal to the confidence in the classifier as a whole (i.e., classifier confidence). The amount of confidence in a given classifier is estimated using multiattribute preference theory and forms the foundation for a quadratic confidence function that is applied to posterior probability estimates. Classifier confidence is currently determined based upon individual measurable value functions for classification accuracy, average entropy, and sample size, and the form of the overall measurable value function is multilinear based upon the assumption of weak difference independence. Using classifier confidence, a quadratic function is trained to be the confidence function which inputs a posterior probability and outputs the confidence in a given indication. In this paradigm, confidence is not equal to the posterior probability estimate but is related to it. This confidence measure is a direct link between traditional decision analysis techniques and traditional pattern recognition techniques. This methodology is applied to two real world data sets, and results show the sort of behavior that would be expected from a rational confidence measure.

  11. Meta-Analytic Interval Estimation for Standardized and Unstandardized Mean Differences

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    The fixed-effects (FE) meta-analytic confidence intervals for unstandardized and standardized mean differences are based on an unrealistic assumption of effect-size homogeneity and perform poorly when this assumption is violated. The random-effects (RE) meta-analytic confidence intervals are based on an unrealistic assumption that the selected…

  12. A Mathematical Framework for Statistical Decision Confidence.

    PubMed

    Hangya, Balázs; Sanders, Joshua I; Kepecs, Adam

    2016-09-01

    Decision confidence is a forecast about the probability that a decision will be correct. From a statistical perspective, decision confidence can be defined as the Bayesian posterior probability that the chosen option is correct based on the evidence contributing to it. Here, we used this formal definition as a starting point to develop a normative statistical framework for decision confidence. Our goal was to make general predictions that do not depend on the structure of the noise or a specific algorithm for estimating confidence. We analytically proved several interrelations between statistical decision confidence and observable decision measures, such as evidence discriminability, choice, and accuracy. These interrelationships specify necessary signatures of decision confidence in terms of externally quantifiable variables that can be empirically tested. Our results lay the foundations for a mathematically rigorous treatment of decision confidence that can lead to a common framework for understanding confidence across different research domains, from human and animal behavior to neural representations. PMID:27391683
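
    In symbols, the starting definition is that confidence is the posterior probability that the chosen option is correct; a standard illustrative special case (equiprobable options with Gaussian evidence, not the paper's general algorithm-free derivation) makes the dependence on the evidence explicit:

    ```latex
    \[
      c \;=\; P(\mathrm{correct} \mid e, d)
    \]
    % For two equiprobable options with evidence e ~ N(+-mu, sigma^2) and the
    % choice d = sign(e), the posterior reduces to
    \[
      c \;=\; \frac{1}{1 + \exp\!\left(-2\mu \lvert e \rvert / \sigma^{2}\right)},
    \]
    % so confidence grows with the evidence magnitude |e| and with the
    % discriminability mu/sigma, consistent with the kinds of observable
    % signatures the framework derives.
    ```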

  13. Action-Specific Disruption of Perceptual Confidence

    PubMed Central

    Maniscalco, Brian; Ko, Yoshiaki; Amendi, Namema; Ro, Tony; Lau, Hakwan

    2015-01-01

    Theoretical models of perception assume that confidence is related to the quality or strength of sensory processing. Counter to this intuitive view, we showed in the present research that the motor system also contributes to judgments of perceptual confidence. In two experiments, we used transcranial magnetic stimulation (TMS) to manipulate response-specific representations in the premotor cortex, selectively disrupting postresponse confidence in visual discrimination judgments. Specifically, stimulation of the motor representation associated with the unchosen response reduced confidence in correct responses, thereby reducing metacognitive capacity without changing visual discrimination performance. Effects of TMS on confidence were observed when stimulation was applied both before and after the response occurred, which suggests that confidence depends on late-stage metacognitive processes. These results place constraints on models of perceptual confidence and metacognition by revealing that action-specific information in the premotor cortex contributes to perceptual confidence. PMID:25425059

  14. Judgment Confidence and Judgment Accuracy of Teachers in Judging Self-Concepts of Students

    ERIC Educational Resources Information Center

    Praetorius, Anna-Katharina; Berner, Valerie-Danielle; Zeinz, Horst; Scheunpflug, Annette; Dresel, Markus

    2013-01-01

    Accurate teacher judgments of student characteristics are considered to be important prerequisites for adaptive instruction. A theoretically important condition for putting these judgments into operation is judgment confidence. Using a German sample of 96 teachers and 1,388 students, the authors examined how confident teachers are in their…

  15. Reasons to have confidence in condoms.

    PubMed

    Mcneill, E T

    1998-01-01

    When used regularly and correctly, latex condoms are highly reliable and effective in preventing pregnancy and sexually transmitted disease. However, condoms are not being used as much as they should be, mainly because of negative perceptions among both users and health care providers. The following reasons are presented and discussed as to why people should have more confidence in latex condoms: when used correctly, condoms are highly reliable and effective in preventing pregnancy and sexually transmitted disease; latex condoms provide an impermeable mechanical barrier to bacteria, viruses, and sperm; most users do not break condoms, and a proportion of breakage is preventable; modern condoms are manufactured with considerable precision; use of the proper lubricant improves condom use; condoms in intact foil packages last at least 5 years; and quality control and post-production quality assurance help to ensure the manufacture of a reliable product. While it remains to be determined how accurately the test standards predict results during human use, a combination of tests can provide data on the quality of condoms in the field.

  16. Adult age differences in the realism of confidence judgments: overconfidence, format dependence, and cognitive predictors.

    PubMed

    Hansson, Patrik; Rönnlund, Michael; Juslin, Peter; Nilsson, Lars-Göran

    2008-09-01

    Realistic confidence judgments are essential to everyday functioning, but few studies have addressed the issue of age differences in overconfidence. Therefore, the authors examined this issue with probability judgment and intuitive confidence intervals in a sample of 122 healthy adults (ages: 35-40, 55-60, 70-75 years). In line with predictions based on the naïve sampling model (P. Juslin, A. Winman, & P. Hansson, 2007), substantial format dependence was observed, with extreme overconfidence when confidence was expressed as an intuitive confidence interval but not when confidence was expressed as a probability judgment. Moreover, an age-related increase in overconfidence was selectively observed when confidence was expressed as intuitive confidence intervals. Structural equation modeling indicated that the age-related increases in overconfidence were mediated by a general cognitive ability factor that may reflect executive processes. Finally, the results indicated that part of the negative influence of increased age on general ability may be compensated for by an age-related increase in domain-relevant knowledge. PMID:18808243

  17. The Asteroid Identification Problem. II. Target Plane Confidence Boundaries

    NASA Astrophysics Data System (ADS)

    Milani, Andrea; Valsecchi, Giovanni B.

    1999-08-01

    …important. We apply this technique to discuss the recent case of asteroid 1997 XF11, which, on the basis of the observations available up to March 11, 1998, appeared to be on an orbit with a near miss of the Earth in 2028. Although the least squares solution had a close approach at 1/8 of the lunar distance, the linear confidence regions corresponding to acceptable size of the residuals are very elongated ellipses which do not include collision; this computation was reported by Chodas and Yeomans. In this paper, we compute the semilinear confidence boundaries and find that they agree with the results of the Monte Carlo method, but differ in a significant way from the linear ellipses, although the differences occur only far from the Earth. The use of the 1990 prediscovery observations has confirmed the impossibility of an impact in 2028 and reduces the semilinear confidence regions to subsets of the regions computed with less data, as expected. The confidence regions computed using the linear approximation, on the other hand, do not reduce to subsets of the regions computed with less data. We also discuss a simulated example (Bowell and Muinonen 1992, Bull. Am. Astron. Soc. 24, 965) of an Earth-impacting asteroid. In this hypothetical case the semilinear confidence boundary has a completely different shape from the linear ellipse, and indeed for orbits determined with only a few weeks of observational data the semilinear confidence boundary correctly includes possible collisions, while the linear one does not. Free software is available now, allowing everyone to compute target plane confidence boundaries as in this paper; in case a new asteroid with worrisome close approaches is discovered, our method allows one to quickly perform an accurate risk assessment.

  18. Young, Black, and Anxious: Describing the Black Student Mathematics Anxiety Research Using Confidence Intervals

    ERIC Educational Resources Information Center

    Young, Jamaal Rashad; Young, Jemimah Lea

    2016-01-01

    In this article, the authors provide a single group summary using the Mathematics Anxiety Rating Scale (MARS) to characterize and delineate the measurement of mathematics anxiety (MA) reported among Black students. Two research questions are explored: (a) What are the characteristics of studies administering the MARS and its derivatives to…

  19. An alternative approach to confidence interval estimation for the win ratio statistic.

    PubMed

    Luo, Xiaodong; Tian, Hong; Mohanty, Surya; Tsai, Wei Yann

    2015-03-01

    Pocock et al. (2012, European Heart Journal 33, 176-182) proposed a win ratio approach to analyzing composite endpoints comprised of outcomes with different clinical priorities. In this article, we establish a statistical framework for this approach. We derive the null hypothesis and propose a closed-form variance estimator for the win ratio statistic in the all-pairwise matching situation. Our simulation study shows that the proposed variance estimator performs well regardless of the magnitude of treatment effect size and the type of the joint distribution of the outcomes.
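
    For orientation, here is a sketch of the statistic itself on simulated, uncensored data (hypothetical outcome pairs; a generic bootstrap interval stands in for the paper's closed-form variance estimator, which is not reproduced here):

    ```python
    # A minimal win-ratio sketch: compare every treated-control pair on the
    # primary outcome, breaking ties with the secondary outcome, then report
    # wins/losses with a bootstrap CI. Censoring is ignored for simplicity.
    import numpy as np

    def win_ratio(treated, control):
        wins = losses = 0
        for t in treated:                    # all pairwise comparisons
            for c in control:
                if t[0] != c[0]:             # primary: time to death
                    wins += t[0] > c[0]; losses += t[0] < c[0]
                elif t[1] != c[1]:           # tie -> secondary outcome
                    wins += t[1] > c[1]; losses += t[1] < c[1]
        return wins / losses

    rng = np.random.default_rng(2)
    treated = rng.exponential(1.3, size=(40, 2))   # hypothetical outcome pairs
    control = rng.exponential(1.0, size=(40, 2))

    wr = win_ratio(treated, control)
    boot = [win_ratio(treated[rng.integers(0, 40, 40)],
                      control[rng.integers(0, 40, 40)])
            for _ in range(500)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"win ratio {wr:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
    ```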

  20. Applying Tests of Equivalence for Multiple Group Comparisons: Demonstration of the Confidence Interval Approach

    ERIC Educational Resources Information Center

    Rusticus, Shayna A.; Lovato, Chris Y.

    2011-01-01

    Assessing the comparability of different groups is an issue facing many researchers and evaluators in a variety of settings. Commonly, null hypothesis significance testing (NHST) is incorrectly used to demonstrate comparability when a non-significant result is found. This is problematic because a failure to find a difference between groups is not…

  1. An Inferential Confidence Interval Method of Establishing Statistical Equivalence that Corrects Tryon's (2001) Reduction Factor

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2008-01-01

    Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis and failing to reject H₀ is not evidence…

  2. A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.

    2008-01-01

    Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…

  3. Improving Content Validation Studies Using an Asymmetric Confidence Interval for the Mean of Expert Ratings

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Miller, Jeffrey M.

    2004-01-01

    As automated scoring of complex constructed-response examinations reaches operational status, the process of evaluating the quality of resultant scores, particularly in contrast to scores of expert human graders, becomes as complex as the data itself. Using a vignette from the Architectural Registration Examination (ARE), this article explores the…

  4. Computing confidence intervals on solution costs for stochastic grid generation expansion problems.

    SciTech Connect

    Woodruff, David L..; Watson, Jean-Paul

    2010-12-01

    A range of core operations and planning problems for the national electrical grid are naturally formulated and solved as stochastic programming problems, which minimize expected costs subject to a range of uncertain outcomes relating to, for example, uncertain demands or generator output. A critical decision issue relating to such stochastic programs is: How many scenarios are required to ensure a specific error bound on the solution cost? Scenarios are the key mechanism used to sample from the uncertainty space, and the number of scenarios drives computational difficulty. We explore this question in the context of a long-term grid generation expansion problem, using a bounding procedure introduced by Mak, Morton, and Wood. We discuss experimental results using problem formulations independently minimizing expected cost and down-side risk. Our results indicate that we can use a surprisingly small number of scenarios to yield tight error bounds in the case of expected cost minimization, which has key practical implications. In contrast, error bounds in the case of risk minimization are significantly larger, suggesting more research is required in this area in order to achieve rigorous solutions for decision makers.
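
    The flavor of the batch-based bounding procedure can be shown on a toy newsvendor problem, where each scenario-sampled problem is solvable exactly as an empirical quantile (a sketch of the Mak-Morton-Wood idea with made-up costs and demand, not the grid expansion model itself):

    ```python
    # Toy sketch of scenario-based confidence bounds on stochastic-program
    # cost: batch means of SAA optima give a lower-bound CI; evaluating a
    # fixed candidate on fresh scenarios gives an upper-bound CI.
    import numpy as np

    rng = np.random.default_rng(3)
    c_under, c_over = 4.0, 1.0              # hypothetical unit costs
    crit = c_under / (c_under + c_over)     # newsvendor critical ratio

    def saa(demand):                        # exact SAA optimum for one batch
        x = np.quantile(demand, crit)
        cost = np.mean(c_under * np.maximum(demand - x, 0)
                       + c_over * np.maximum(x - demand, 0))
        return cost, x

    # Lower bound: mean of optimal values over independent scenario batches.
    batches = [saa(rng.gamma(4.0, 25.0, size=50))[0] for _ in range(30)]
    lb, lse = np.mean(batches), np.std(batches, ddof=1) / np.sqrt(30)

    # Upper bound: one candidate solution scored on a large fresh sample.
    _, x_hat = saa(rng.gamma(4.0, 25.0, size=50))
    fresh = rng.gamma(4.0, 25.0, size=20000)
    costs = (c_under * np.maximum(fresh - x_hat, 0)
             + c_over * np.maximum(x_hat - fresh, 0))
    ub, use = costs.mean(), costs.std(ddof=1) / np.sqrt(fresh.size)

    print(f"lower {lb:.1f} +/- {1.96*lse:.1f}, upper {ub:.1f} +/- {1.96*use:.1f}")
    ```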

  5. Confidence Intervals for Random Forests: The Jackknife and the Infinitesimal Jackknife

    PubMed Central

    Wager, Stefan; Hastie, Trevor; Efron, Bradley

    2014-01-01

    We study the variability of predictions made by bagged learners and random forests, and show how to estimate standard errors for these methods. Our work builds on variance estimates for bagging proposed by Efron (1992, 2013) that are based on the jackknife and the infinitesimal jackknife (IJ). In practice, bagged predictors are computed using a finite number B of bootstrap replicates, and working with a large B can be computationally expensive. Direct applications of jackknife and IJ estimators to bagging require B = Θ(n1.5) bootstrap replicates to converge, where n is the size of the training set. We propose improved versions that only require B = Θ(n) replicates. Moreover, we show that the IJ estimator requires 1.7 times fewer bootstrap replicates than the jackknife to achieve a given accuracy. Finally, we study the sampling distributions of the jackknife and IJ variance estimates themselves. We illustrate our findings with multiple experiments and simulation studies. PMID:25580094
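
    A minimal sketch of the uncorrected IJ formula for a bagged predictor, V_IJ = Σ_i Cov(N_i, t*)², with scikit-learn trees as base learners (hypothetical data; the finite-B bias corrections that are the paper's contribution are omitted):

    ```python
    # Infinitesimal-jackknife variance for a bagged prediction at one point:
    # the squared covariances between bootstrap inclusion counts N_bi and
    # the per-replicate predictions t_b, summed over training observations.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(4)
    X = rng.uniform(-2, 2, size=(200, 3))
    y = X[:, 0] ** 2 + rng.normal(0, 0.3, size=200)
    x0 = np.zeros((1, 3))                   # prediction point of interest

    n, B = X.shape[0], 500
    counts = np.zeros((B, n))               # N_bi: times obs i is in bag b
    preds = np.zeros(B)                     # t_b: base-learner predictions
    for b in range(B):
        idx = rng.integers(0, n, n)
        counts[b] = np.bincount(idx, minlength=n)
        tree = DecisionTreeRegressor(max_depth=4).fit(X[idx], y[idx])
        preds[b] = tree.predict(x0)[0]

    cov = ((counts - counts.mean(0))
           * (preds - preds.mean())[:, None]).mean(0)
    var_ij = np.sum(cov ** 2)               # upward-biased for finite B
    print(f"bagged prediction {preds.mean():.3f}, IJ variance {var_ij:.4f}")
    ```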

  6. Statistical Significance, Effect Size Reporting, and Confidence Intervals: Best Reporting Strategies

    ERIC Educational Resources Information Center

    Capraro, Robert M.

    2004-01-01

    With great interest the author read the May 2002 editorial in the "Journal for Research in Mathematics Education (JRME)" (King, 2002) regarding changes to the 5th edition of the "Publication Manual of the American Psychological Association" (APA, 2001). Of special note to him, and of great import to the field of mathematics education research, are…

  7. Bootstrap Standard Error and Confidence Intervals for the Difference between Two Squared Multiple Correlation Coefficients

    ERIC Educational Resources Information Center

    Chan, Wai

    2009-01-01

    A typical question in multiple regression analysis is to determine if a set of predictors gives the same degree of predictor power in two different populations. Olkin and Finn (1995) proposed two asymptotic-based methods for testing the equality of two population squared multiple correlations, ρ₁² and…

  8. A Comparison of Composite Reliability Estimators: Coefficient Omega Confidence Intervals in the Current Literature

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2016-01-01

    Coefficient omega and alpha are both measures of the composite reliability for a set of items. Unlike coefficient alpha, coefficient omega remains unbiased with congeneric items with uncorrelated errors. Despite this ability, coefficient omega is not as widely used and cited in the literature as coefficient alpha. Reasons for coefficient omega's…

  9. Bootstrap Standard Error and Confidence Intervals for the Correlation Corrected for Range Restriction: A Simulation Study

    ERIC Educational Resources Information Center

    Chan, Wai; Chan, Daniel W.-L.

    2004-01-01

    The standard Pearson correlation coefficient is a biased estimator of the true population correlation, ρ, when the predictor and the criterion are range restricted. To correct the bias, the correlation corrected for range restriction, r_c, has been recommended, and a standard formula based on asymptotic results for estimating its standard…

  10. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

    ERIC Educational Resources Information Center

    Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

    2011-01-01

    Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

  11. Considering Teaching History and Calculating Confidence Intervals in Student Evaluations of Teaching Quality

    ERIC Educational Resources Information Center

    Fraile, Rubén; Bosch-Morell, Francisco

    2015-01-01

    Lecturer promotion and tenure decisions are critical both for university management and for the affected lecturers. Therefore, they should be made cautiously and based on reliable information. Student evaluations of teaching quality are among the most used and analysed sources of such information. However, to date little attention has been paid in…

  12. A recipe for the construction of confidence limits

    SciTech Connect

    Iain A Bertram et al.

    2000-04-12

    In this note, the authors present the recipe recommended by the Search Limits Committee for the construction of confidence intervals for the use of the D0 collaboration. In another note, currently in preparation, they present the rationale for this recipe, a critique of the current literature on this topic, and several examples of the use of the method. This note is intended to fill the need of the collaboration to have a reference available until the more complete note is finished. Section 2 introduces the notation used in this note, and Section 3 contains the suggested recipe.

  13. Interval neural networks

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Traditional neural networks like multi-layered perceptrons (MLP) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f̂(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical backpropagation learning algorithm to interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding sets of solutions to the function approximation problem.

  14. How complete and accurate is meningococcal disease notification?

    PubMed

    Breen, E; Ghebrehewet, S; Regan, M; Thomson, A P J

    2004-12-01

    Effective public health control of meningococcal disease (meningococcal meningitis and septicaemia) is dependent on complete, accurate and speedy notification. Using capture-recapture techniques this study assesses the completeness, accuracy and timeliness of meningococcal notification in a health authority. The completeness of meningococcal disease notification was 94.8% (95% confidence interval 93.2% to 96.2%); 91.2% of cases in 2001 were notified within 24 hours of diagnosis, but 28.0% of notifications in 2001 were false positives. Clinical staff need to be aware of the public health implications of a notification of meningococcal disease, and of failure of, or delay in notification. Incomplete or delayed notification not only leads to inaccurate data collection but also means that important public health measures may not be taken. A clinical diagnosis of meningococcal disease should be carefully considered between the clinician and the consultant in communicable disease control (CCDC). Otherwise, prophylaxis may be given unnecessarily, disease incidence inflated, and the benefits of control measures underestimated. Consultants in communicable disease control (CCDCs), in conjunction with clinical staff, should de-notify meningococcal disease if the diagnosis changes.
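
    For readers unfamiliar with the technique, a two-source capture-recapture sketch (Chapman estimator with a normal-approximation CI; the counts below are hypothetical, not the study's data) shows how completeness of notification can be estimated:

    ```python
    # Two-source capture-recapture: estimate the true number of cases from
    # two overlapping case lists, then express one list's completeness.
    import numpy as np

    n1, n2, m = 120, 95, 88   # cases in source 1, source 2, and in both
    N_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1        # Chapman estimator
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))               # Seber variance
    lo = N_hat - 1.96 * np.sqrt(var)
    hi = N_hat + 1.96 * np.sqrt(var)

    completeness = n1 / N_hat                        # completeness of source 1
    print(f"estimated cases {N_hat:.0f} (95% CI {lo:.0f}-{hi:.0f}); "
          f"notification completeness {completeness:.1%}")
    ```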

  15. The effect of confidence and method of questioning on eyewitness testimony.

    PubMed

    Venter, A; Louw, D A

    2005-06-01

    Very often eyewitnesses are perceived as being accurate due to the confidence in the accuracy of their own testimony. The confidence displayed by an eyewitness may possibly be increased by the method of questioning used by legal professionals and police. The present study examines the confidence-accuracy relationship and the effect the method of questioning (open-ended versus closed-ended questions) may have on the confidence of eyewitnesses. The sample of 412 respondents consisted of scholars (11 to 14-year-olds), university students, the public and Police College students. A significant relationship between memory accuracy and confidence was found for more than 70% of the questions. Closed-ended questions provided a significantly higher rate of accuracy than open-ended questions. A significantly larger proportion of respondents to the closed-ended questions were more confident about their answers than those who responded to the open-ended questions. PMID:16082872

  16. Confidence in Parenting: Is Parent Education Working?

    ERIC Educational Resources Information Center

    Stanberry, J. Phillip; Stanberry, Anne M.

    This study examined parents' feelings of confidence in their parenting ability among 56 individuals enrolled in 5 parent education programs in Mississippi, hypothesizing that there would be significant correlations between personal authority in the family system and a parent's confidence in performing the various roles of parenting. Based on…

  17. Preservice Educators' Confidence in Addressing Sexuality Education

    ERIC Educational Resources Information Center

    Wyatt, Tammy Jordan

    2009-01-01

    This study examined 328 preservice educators' level of confidence in addressing four sexuality education domains and 21 sexuality education topics. Significant differences in confidence levels across the four domains were found for gender, academic major, sexuality education philosophy, and sexuality education knowledge. Preservice educators…

  18. Examining Response Confidence in Multiple Text Tasks

    ERIC Educational Resources Information Center

    List, Alexandra; Alexander, Patricia A.

    2015-01-01

    Students' confidence in their responses to a multiple text-processing task and their justifications for those confidence ratings were investigated. Specifically, 215 undergraduates responded to two academic questions, differing by type (i.e., discrete and open-ended) and by domain (i.e., developmental psychology and astrophysics), using a digital…

  19. Hypercorrection of High Confidence Errors in Children

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2012-01-01

    Three experiments investigated whether the hypercorrection effect--the finding that errors committed with high confidence are easier, rather than more difficult, to correct than are errors committed with low confidence--occurs in grade school children as it does in young adults. All three experiments showed that Grade 3-6 children hypercorrected…

  20. Self-Confidence and Metacognitive Processes

    ERIC Educational Resources Information Center

    Kleitman, Sabina; Stankov, Lazar

    2007-01-01

    This paper examines the nature of the Self-confidence factor. In particular, we study the relationship between this factor and cognitive, metacognitive, and personality measures. Participants (N=296) were administered a battery of seven cognitive tests that assess three constructs: accuracy, speed, and confidence. Participants were also given the…

  1. Confidence and Competence with Mathematical Procedures

    ERIC Educational Resources Information Center

    Foster, Colin

    2016-01-01

    Confidence assessment (CA), in which students state alongside each of their answers a confidence level expressing how certain they are, has been employed successfully within higher education. However, it has not been widely explored with school pupils. This study examined how school mathematics pupils (N = 345) in five different secondary schools…

  2. Confidence Wagering during Mathematics and Science Testing

    ERIC Educational Resources Information Center

    Jack, Brady Michael; Liu, Chia-Ju; Chiu, Hoan-Lin; Shymansky, James A.

    2009-01-01

    This proposal presents the results of a case study involving five 8th grade Taiwanese classes, two mathematics and three science classes. These classes used a new method of testing called confidence wagering. This paper advocates the position that confidence wagering can predict the accuracy of a student's test answer selection during…

  3. Similarity and confidence in artificial grammar learning.

    PubMed

    Tunney, Richard J

    2010-01-01

    Three experiments examined the relationship between similarity ratings and confidence ratings in artificial grammar learning. In Experiment 1 participants rated the similarity of test items to study exemplars. Regression analyses revealed these to be related to some of the objective measures of similarity that have previously been implicated in categorization decisions. In Experiment 2 participants made grammaticality decisions and rated either their confidence in the accuracy of their decisions or the similarity of the test items to the study items. Regression analyses showed that the grammaticality decisions were predicted by the similarity ratings obtained in Experiment 1. Points on the receiver operating characteristics (ROC) curves for the similarity and confidence ratings were closely matched. These data suggest that meta-cognitive judgments of confidence are predicated on structural knowledge of similarity. Experiment 3 confirmed this by showing that confidence ratings to median similarity probe items changed according to the similarity of preceding items.

  4. An informative confidence metric for ATR.

    SciTech Connect

    Bow, Wallace Johnston Jr.; Richards, John Alfred; Bray, Brian Kenworthy

    2003-03-01

    Automatic or assisted target recognition (ATR) is an important application of synthetic aperture radar (SAR). Most ATR researchers have focused on the core problem of declaration-that is, detection and identification of targets of interest within a SAR image. For ATR declarations to be of maximum value to an image analyst, however, it is essential that each declaration be accompanied by a reliability estimate or confidence metric. Unfortunately, the need for a clear and informative confidence metric for ATR has generally been overlooked or ignored. We propose a framework and methodology for evaluating the confidence in an ATR system's declarations and competing target hypotheses. Our proposed confidence metric is intuitive, informative, and applicable to a broad class of ATRs. We demonstrate that seemingly similar ATRs may differ fundamentally in the ability-or inability-to identify targets with high confidence.

  5. Linkage disequilibrium interval mapping of quantitative trait loci

    PubMed Central

    Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte

    2006-01-01

    Background: For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Results: Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Conclusion: Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates. PMID:16542433

  6. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
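
    As a baseline for what the paper improves on, the standard monotone cubic Hermite interpolant (Fritsch-Carlson, available as SciPy's PCHIP) illustrates the monotonicity constraint; the paper's median-limited algorithms recover the higher-order accuracy near extrema that this baseline sacrifices:

    ```python
    # Baseline monotone piecewise-cubic interpolation (PCHIP), not the
    # paper's higher-order median-limited algorithm: no overshoot, and
    # monotone data yield a monotone interpolant.
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    x = np.linspace(0, 1, 9)
    y = np.where(x < 0.5, 0.0, 1.0)          # monotone data with a sharp step
    f = PchipInterpolator(x, y)

    xs = np.linspace(0, 1, 11)
    print(np.round(f(xs), 3))                # values stay inside [0, 1]
    fine = f(np.linspace(0, 1, 400))
    print(bool(np.all(np.diff(fine) >= -1e-12)))  # interpolant is monotone
    ```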

  7. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  8. Updating misconceptions: effects of age and confidence.

    PubMed

    Cyr, Andrée-Ann; Anderson, Nicole D

    2013-06-01

    Young adults are more likely to correct an initial higher confidence error than a lower confidence error (Butterfield & Metcalfe, 2001). This hypercorrection effect has never been investigated among older adults, although features of the standard paradigm (free recall, metacognitive judgments) and prior evidence of age-related error resolution deficits (see Clare & Jones, 2008) suggest that they may not show this effect. In Study 1, we used free recall and a 7-point confidence scale; in Study 2, we used multiple-choice questions, and participants indicated how many alternatives they had narrowed their options down to prior to answering. In both studies, younger and older adults showed a hypercorrection effect, and this effect was equivalent between groups in Study 2 when free recall and explicit confidence ratings were not required. These results are consistent with our previous work (Cyr & Anderson, 2012) showing that older adults can successfully resolve learning errors when the learning context provides sufficient support.

  9. Confidence regions of planar cardiac vectors

    NASA Technical Reports Server (NTRS)

    Dubin, S.; Herr, A.; Hunt, P.

    1980-01-01

    A method for plotting the confidence regions of vectorial data obtained in electrocardiology is presented. The 90%, 95% and 99% confidence regions of cardiac vectors represented in a plane are obtained in the form of an ellipse centered at coordinates corresponding to the means of a sample selected at random from a bivariate normal distribution. An example of such a plot for the frontal plane QRS mean electrical axis for 80 horses is also presented.
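
    A sketch of the construction described (a chi-square-scaled covariance ellipse for a bivariate mean, which is a large-sample approximation; the QRS-axis numbers below are simulated, not the study's 80 horses):

    ```python
    # Confidence ellipse for the mean of bivariate vector data: scale the
    # eigen-decomposition of the mean's covariance by a chi-square quantile.
    import numpy as np

    rng = np.random.default_rng(5)
    vectors = rng.multivariate_normal([0.6, -0.3],
                                      [[0.04, 0.01], [0.01, 0.02]], 80)

    mean = vectors.mean(axis=0)
    cov_mean = np.cov(vectors.T) / len(vectors)   # covariance of the mean
    evals, evecs = np.linalg.eigh(cov_mean)

    chi2_95 = 5.991        # chi-square(2) quantiles: 4.605 (90%), 9.210 (99%)
    t = np.linspace(0, 2 * np.pi, 200)
    circle = np.stack([np.cos(t), np.sin(t)])
    ellipse = mean[:, None] + evecs @ (np.sqrt(chi2_95 * evals)[:, None] * circle)
    print(mean, ellipse[:, :2])                   # centre and boundary points
    ```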

  10. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.

  11. Confidence in leadership among the newly qualified.

    PubMed

    Bayliss-Pratt, Lisa; Morley, Mary; Bagley, Liz; Alderson, Steven

    2013-10-23

    The Francis report highlighted the importance of strong leadership from health professionals but it is unclear how prepared those who are newly qualified feel to take on a leadership role. We aimed to assess the confidence of newly qualified health professionals working in the West Midlands in the different competencies of the NHS Leadership Framework. Most respondents felt confident in their abilities to demonstrate personal qualities and work with others, but less so at managing or improving services or setting direction.

  12. Worse than enemies. The CEO's destructive confidant.

    PubMed

    Sulkowicz, Kerry J

    2004-02-01

    The CEO is often the most isolated and protected employee in the organization. Few leaders, even veteran CEOs, can do the job without talking to someone about their experiences, which is why most develop a close relationship with a trusted colleague, a confidant to whom they can tell their thoughts and fears. In his work with leaders, the author has found that many CEO-confidant relationships function very well. The confidants keep their leaders' best interests at heart. They derive their gratification vicariously, through the help they provide rather than through any personal gain, and they are usually quite aware that a person in their position can potentially abuse access to the CEO's innermost secrets. Unfortunately, almost as many confidants will end up hurting, undermining, or otherwise exploiting CEOs when the executives are at their most vulnerable. These confidants rarely make the headlines, but behind the scenes they do enormous damage to the CEO and to the organization as a whole. What's more, the leader is often the last one to know when or how the confidant relationship became toxic. The author has identified three types of destructive confidants. The reflector mirrors the CEO, constantly reassuring him that he is the "fairest CEO of them all." The insulator buffers the CEO from the organization, preventing critical information from getting in or out. And the usurper cunningly ingratiates himself with the CEO in a desperate bid for power. This article explores how the CEO-confidant relationship plays out with each type of adviser and suggests ways CEOs can avoid these destructive relationships.

  13. Interval arithmetic operations for uncertainty analysis with correlated interval variables

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Fu, Chun-Ming; Ni, Bing-Yu; Han, Xu

    2016-08-01

    A new interval arithmetic method is proposed to solve interval functions with correlated intervals, through which the overestimation problem in interval analysis can be significantly alleviated. The correlation between interval parameters is defined by a multidimensional parallelepiped model, which describes correlated and independent interval variables in a unified framework. The original correlated interval variables are transformed into a standard space without correlation, and the relationship between the original and standard interval variables is then obtained. Expressions for the four basic interval arithmetic operations, namely addition, subtraction, multiplication, and division, are given in the standard space. Finally, several numerical examples and a two-step bar are used to demonstrate the effectiveness of the proposed method.
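
    The overestimation problem that motivates the method is easy to reproduce with naive interval arithmetic, which treats every operand as independent. The sketch below shows the effect only; it does not implement the authors' parallelepiped model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Naive rule: operands are treated as independent, so any
        # correlation between them is ignored.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

x = Interval(1.0, 2.0)
# The two occurrences of x are perfectly correlated, and the true range
# of x - x is [0, 0]; naive arithmetic inflates it to [-1, 1].
print(x - x)   # Interval(lo=-1.0, hi=1.0)
```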

  14. Multichannel interval timer (MINT)

    SciTech Connect

    Kimball, K.B.

    1982-06-01

    A prototype Multichannel INterval Timer (MINT) has been built for measuring signal Time of Arrival (TOA) from sensors placed in blast environments. The MINT is intended to reduce the space, equipment costs, and data reduction efforts associated with traditional analog TOA recording methods, making it more practical to field the large arrays of TOA sensors required to characterize blast environments. This document describes the MINT design features, provides the information required for installing and operating the system, and presents proposed improvements for the next generation system.

  15. Correlations Redux: Asymptotic Confidence Limits for Partial and Squared Multiple Correlations.

    ERIC Educational Resources Information Center

    Graf, Richard G.; Alf, Edward F., Jr.

    1999-01-01

    I. Olkin and J. Finn (1995) developed expressions for confidence intervals for functions of simple, partial, and multiple correlations. Describes procedures and computer programs for solving these problems and extending the methods for any number of predictors or for partialing out any number of variables. (Author/SLD)

  16. Interval-valued random functions and the kriging of intervals

    SciTech Connect

    Diamond, P.

    1988-04-01

    Estimation procedures using data that include some values known to lie within certain intervals are usually regarded as problems of constrained optimization. A different approach is used here. Intervals are treated as elements of a positive cone, obeying the arithmetic of interval analysis, and positive interval-valued random functions are discussed. A kriging formalism for interval-valued data is developed. It provides estimates that are themselves intervals. In this context, the condition that kriging weights be positive is seen to arise in a natural way. A numerical example is given, and the extension to universal kriging is sketched.

  17. Notes on interval estimation of the gamma correlation under stratified random sampling.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2012-07-01

    We have developed four asymptotic interval estimators in closed form for the gamma correlation under stratified random sampling: the confidence interval based on the most commonly used weighted-least-squares (WLS) approach (CIWLS), the confidence interval calculated from the Mantel-Haenszel (MH) type estimator with the Fisher-type transformation (CIMHT), the confidence interval using the fundamental idea of Fieller's Theorem (CIFT), and the confidence interval derived from a monotonic function of the WLS estimator of Agresti's α with the logarithmic transformation (MWLSLR). We employ Monte Carlo simulation to evaluate the finite-sample performance of these four interval estimators and to note the possible loss of accuracy when both Wald's confidence interval and MWLSLR are applied to pooled data without accounting for stratification. To illustrate the practical use of these interval estimators, we use data published elsewhere from a general social survey of black Americans studying the association between income level and job satisfaction, with strata formed by gender. PMID:22622622
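
    None of the four closed-form estimators is reproduced here, but the gamma statistic itself and a simple percentile-bootstrap interval, shown only as a baseline for comparison, can be computed in a few lines (the ordinal data below are illustrative, not the survey data):

```python
import numpy as np

def gamma_corr(x, y):
    """Goodman-Kruskal gamma: (C - D) / (C + D), where C and D count
    concordant and discordant pairs; tied pairs are dropped."""
    x, y = np.asarray(x), np.asarray(y)
    c = d = 0
    for i in range(len(x) - 1):
        s = np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i])
        c += np.sum(s > 0)
        d += np.sum(s < 0)
    return (c - d) / (c + d)

def bootstrap_ci(x, y, level=0.95, n_boot=2000, seed=0):
    """Percentile-bootstrap interval for gamma (a generic baseline,
    not one of the paper's four closed-form estimators)."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    boots = [gamma_corr(x[idx], y[idx])
             for idx in rng.integers(0, n, size=(n_boot, n))]
    alpha = (1.0 - level) / 2.0
    return np.quantile(boots, [alpha, 1.0 - alpha])

# Illustrative ordinal data: income level vs job satisfaction (1-4).
income = [1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 1, 2, 3, 4, 2, 3]
satisf = [1, 2, 2, 3, 2, 3, 4, 3, 4, 4, 1, 1, 3, 4, 3, 2]
print(gamma_corr(income, satisf))
print(bootstrap_ci(income, satisf))
```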

  18. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  19. The Confidence-Accuracy Relationship in Eyewitness Identification: Effects of Lineup Instructions, Foil Similarity, and Target-Absent Base Rates

    ERIC Educational Resources Information Center

    Brewer, Neil; Wells, Gary L.

    2006-01-01

    Discriminating accurate from mistaken eyewitness identifications is a major issue facing criminal justice systems. This study examined whether eyewitness confidence assists such decisions under a variety of conditions using a confidence-accuracy (CA) calibration approach. Participants (N = 1,200) viewed a simulated crime and attempted 2 separate…

  20. Confidence in biopreparedness authorities among Finnish conscripts.

    PubMed

    Vartti, Anne-Marie; Aro, Arja R; Jormanainen, Vesa; Henriksson, Markus; Nikkari, Simo

    2010-08-01

    A large sample of Finnish military conscripts of the armored brigade were questioned on the extent to which they trusted the information given by biopreparedness authorities (such as the police, military, health care, and public health institutions) and how confident they were in the authorities' ability to protect the public during a potential infectious disease outbreak of either natural or deliberate origin. Participants answered a written questionnaire during their initial health inspection in July 2007. Of a total of 1,000 conscripts, 953 male conscripts returned the questionnaire. The mean sum scores for confidence in the information given by biopreparedness authorities and the media on natural and bioterrorism-related outbreaks (range = 0-30) were 20.14 (SD = 7.79) and 20.12 (SD = 7.69), respectively. Mean sum scores for the respondents' confidence in the ability of the biopreparedness authorities to protect the public during natural and bioterrorism-related outbreaks (range = 0-25) were 16.04 (SD = 5.78) and 16.17 (SD = 5.89). Most respondents indicated that during a natural outbreak they would have confidence in information provided by health care institutions such as central hospitals and primary health care centers, whereas in the case of bioterrorism they would have confidence in the defense forces and central hospitals. PMID:20731266

  1. Chiropractic Interns' Perceptions of Stress and Confidence

    PubMed Central

    Spegman, Adele Mattinat; Herrin, Sean

    2007-01-01

    Objective: Psychological stress has been shown to influence learning and performance among medical and graduate students. Few studies have examined psychological stress in chiropractic students and interns. This preliminary study explored interns' perceptions around stress and confidence at the midpoint of professional training. Methods: This pilot study used a mixed-methods approach, combining rating scales and modified qualitative methods, to explore interns' lived experience. Eighty-eight interns provided ratings of stress and confidence and narrative responses to broad questions. Results: Participants reported multiple sources of stress; stress and confidence ratings were inversely related. Interns described stress as forced priorities, inadequate time, and perceptions of weak performance. Two themes, “convey respect” and “guide real-world learning,” describe faculty actions that minimized stress and promoted confidence. Conclusion: Chiropractic interns experience varying degrees of stress, which is managed with diverse strategies. The development of confidence appears to be influenced by the consistency and manner in which feedback is provided. Although faculty cannot control the amount or sources of stress, awareness of interns' perceptions can strengthen our effectiveness as educators. PMID:18483584

  2. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa, and boundaries between these units therefore correspond to dramatic faunal and/or floral turnovers; they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we applied a multi-proxy approach to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine the effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion) with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons can be both apparently younger and older than the depositional age.

  3. Adaptive Confidence Bands for Nonparametric Regression Functions

    PubMed Central

    Cai, T. Tony; Low, Mark; Ma, Zongming

    2014-01-01

    A new formulation for the construction of adaptive confidence bands in non-parametric function estimation problems is proposed. Confidence bands are constructed which have size that adapts to the smoothness of the function while guaranteeing that both the relative excess mass of the function lying outside the band and the measure of the set of points where the function lies outside the band are small. It is shown that the bands adapt over a maximum range of Lipschitz classes. The adaptive confidence band can be easily implemented in standard statistical software with wavelet support. Numerical performance of the procedure is investigated using both simulated and real datasets. The numerical results agree well with the theoretical analysis. The procedure can be easily modified and used for other nonparametric function estimation models. PMID:26269661

  4. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

    PubMed

    Dobolyi, David G; Dodson, Chad S

    2013-12-01

    Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PMID:24188335

  5. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the about 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD) which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.

  6. Establishment of reference intervals for plasma protein electrophoresis in Indo-Pacific green sea turtles, Chelonia mydas.

    PubMed

    Flint, Mark; Matthews, Beren J; Limpus, Colin J; Mills, Paul C

    2015-01-01

    Biochemical and haematological parameters are increasingly used to diagnose disease in green sea turtles. Specific clinical pathology tools, such as plasma protein electrophoresis analysis, are now being used more frequently to improve our ability to diagnose disease in the live animal. Plasma protein reference intervals were calculated from 55 clinically healthy green sea turtles using pulsed field electrophoresis to determine pre-albumin, albumin, α-, β- and γ-globulin concentrations. The estimated reference intervals were then compared with data profiles from clinically unhealthy turtles admitted to a local wildlife hospital to assess the validity of the derived intervals and identify the clinically useful plasma protein fractions. Eighty-six per cent {19 of 22 [95% confidence interval (CI) 65-97]} of clinically unhealthy turtles had values outside the derived reference intervals, including the following: total protein [six of 22 turtles or 27% (95% CI 11-50%)], pre-albumin [two of five, 40% (95% CI 5-85%)], albumin [13 of 22, 59% (95% CI 36-79%)], total albumin [13 of 22, 59% (95% CI 36-79%)], α- [10 of 22, 45% (95% CI 24-68%)], β- [two of 10, 20% (95% CI 3-56%)], γ- [one of 10, 10% (95% CI 0.3-45%)] and β-γ-globulin [one of 12, 8% (95% CI 0.2-38%)] and total globulin [five of 22, 23% (8-45%)]. Plasma protein electrophoresis shows promise as an accurate adjunct tool to identify a disease state in marine turtles. This study presents the first reference interval for plasma protein electrophoresis in the Indo-Pacific green sea turtle. PMID:27293722
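
    The bracketed intervals quoted for these proportions are consistent with exact (Clopper-Pearson) binomial limits, which can be reproduced with SciPy (version 1.7 or later; the method attribution is an assumption on my part):

```python
from scipy.stats import binomtest

# 19 of 22 clinically unhealthy turtles fell outside the derived
# reference intervals; an exact binomial interval reproduces the
# quoted 65-97%.
ci = binomtest(k=19, n=22).proportion_ci(confidence_level=0.95,
                                         method='exact')
print(f"{19 / 22:.1%} (95% CI {ci.low:.0%} to {ci.high:.0%})")
```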

  7. Current Developments in Measuring Academic Behavioural Confidence

    ERIC Educational Resources Information Center

    Sander, Paul

    2009-01-01

    Using published findings and further analyses of existing data, the structure, validity, and utility of the Academic Behavioural Confidence (ABC) scale are critically considered. Validity is primarily assessed through the scale's relationship with other existing scales, as well as by looking for predicted differences. The utility of the ABC scale…

  8. Observed Consultation: Confidence and Accuracy of Assessors

    ERIC Educational Resources Information Center

    Tweed, Mike; Ingham, Christopher

    2010-01-01

    Judgments made by the assessors observing consultations are widely used in the assessment of medical students. The aim of this research was to study judgment accuracy and confidence and the relationship between these. Assessors watched recordings of consultations, scoring the students on: a checklist of items; attributes of consultation; a…

  9. The Confidence Factor in Liberal Education

    ERIC Educational Resources Information Center

    Gordon, Daniel

    2012-01-01

    With the US unemployment rate at 9 percent, it's rational for college students to lose confidence in the liberal arts and to opt for a vocational major. Or is it? There is a compelling economic case for the liberal arts. Against those who call for more professional training, liberal educators should concede nothing. However, they do have a…

  10. Sources of Confidence in School Community Councils

    ERIC Educational Resources Information Center

    Nygaard, Richard

    2010-01-01

    Three Utah middle level school community councils participated in a qualitative strengths-based process evaluation. Two of the school community councils were identified as exemplary, and the third was just beginning to function. One aspect of the evaluation was the source of school community council members' confidence. Each school had unique…

  11. Evaluating Measures of Optimism and Sport Confidence

    ERIC Educational Resources Information Center

    Fogarty, Gerard J.; Perera, Harsha N.; Furst, Andrea J.; Thomas, Patrick R.

    2016-01-01

    The psychometric properties of the Life Orientation Test-Revised (LOT-R), the Sport Confidence Inventory (SCI), and the Carolina SCI (CSCI) were examined in a study involving 260 athletes. The study aimed to test the dimensional structure, convergent and divergent validity, and invariance over competition level of scores generated by these…

  12. Accurate source location from P waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.

    2015-12-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P-coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example. The grid search method is combined with a 3D strain Green's tensor database to improve the search efficiency as well as the quality of the hypocenter solution. The strain Green's tensor is calculated by the 3D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are then obtained based on the least-squares misfit between the 'observed' and predicted P and P-coda waves. A 95% confidence interval of the solution is also provided as an a posteriori error estimate. We find that the scattered waves are mainly due to topography in comparison with random velocity heterogeneity characterized by the von Kármán-type power spectral density function. When only P wave data are used, the 'best' solution is offset from the real source location, mostly in the vertical direction. The incorporation of P coda significantly improves solution accuracy and reduces its uncertainty. The solution remains robust over a range of random noise in the data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors that we tested.

  13. Accurate source location from waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei

    2016-06-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimation. For shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust with wide ranges of random noises in data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
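
    A toy version of the grid search conveys the least-squares misfit criterion. The sketch below substitutes straight-ray travel times in a homogeneous medium for the 3-D strain Green's tensor waveform predictions, so it is a simplified stand-in rather than the authors' method; the geometry and velocity are made up.

```python
import numpy as np

def locate(stations, t_obs, v, step=0.5, xy_extent=10.0, z_max=10.0):
    """Brute-force grid search for a point source. Straight-ray travel
    times in a homogeneous medium stand in for the Green's tensor
    predictions; the unknown origin time is removed by demeaning the
    residuals before taking the least-squares misfit."""
    xy = np.arange(-xy_extent, xy_extent + step, step)
    zs = np.arange(0.0, z_max + step, step)
    best, best_misfit = None, np.inf
    for x in xy:
        for y in xy:
            for z in zs:
                d = np.linalg.norm(stations - np.array([x, y, z]), axis=1)
                r = t_obs - d / v
                r -= r.mean()                 # absorb unknown origin time
                misfit = np.sum(r ** 2)
                if misfit < best_misfit:
                    best, best_misfit = (x, y, z), misfit
    return best, best_misfit

# Synthetic test: five surface stations, true source at (3, -2, 4) km.
stations = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.],
                     [-10., 5., 0.], [5., -10., 0.]])
true_src = np.array([3.0, -2.0, 4.0])
t_obs = np.linalg.norm(stations - true_src, axis=1) / 3.0   # v = 3 km/s
print(locate(stations, t_obs, v=3.0))
```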

  14. The Effect of Adaptive Confidence Strategies in Computer-Assisted Instruction on Learning and Learner Confidence

    ERIC Educational Resources Information Center

    Warren, Richard Daniel

    2012-01-01

    The purpose of this research was to investigate the effects of including adaptive confidence strategies in instructionally sound computer-assisted instruction (CAI) on learning and learner confidence. Seventy-one general educational development (GED) learners recruited from various GED learning centers at community colleges in the southeast United…

  15. Random selection as a confidence building tool

    SciTech Connect

    Macarthur, Duncan W; Hauck, Danielle; Langner, Diana; Thron, Jonathan; Smith, Morag; Williams, Richard

    2010-01-01

    Any verification measurement performed on potentially classified nuclear material must satisfy two seemingly contradictory constraints. First and foremost, no classified information can be released. At the same time, the monitoring party must have confidence in the veracity of the measurement. The first concern can be addressed by performing the measurements within the host facility using instruments under the host's control. Because the data output in this measurement scenario is also under host control, it is difficult for the monitoring party to have confidence in that data. One technique for addressing this difficulty is random selection. The concept of random selection can be thought of as four steps: (1) The host presents several 'identical' copies of a component or system to the monitor. (2) One (or more) of these copies is randomly chosen by the monitors for use in the measurement system. (3) Similarly, one or more is randomly chosen to be validated further at a later date in a monitor-controlled facility. (4) Because the two components or systems are identical, validation of the 'validation copy' is equivalent to validation of the measurement system. This procedure sounds straightforward, but effective application may be quite difficult. Although random selection is often viewed as a panacea for confidence building, the amount of confidence generated depends on the monitor's continuity of knowledge for both validation and measurement systems. In this presentation, we will discuss the random selection technique, as well as where and how this technique might be applied to generate maximum confidence. In addition, we will discuss the role of modular measurement-system design in facilitating random selection and describe a simple modular measurement system incorporating six small ³He neutron detectors and a single high-purity germanium gamma detector.

  16. Is photometry an accurate and reliable method to assess boar semen concentration?

    PubMed

    Camus, A; Camugli, S; Lévêque, C; Schmitt, E; Staub, C

    2011-02-01

    Sperm concentration assessment is a key point for ensuring an appropriate sperm number per dose in species subjected to artificial insemination (AI). The aim of the present study was to evaluate the accuracy and reliability of two commercially available photometers, AccuCell™ and AccuRead™, pre-calibrated for boar semen, in comparison with UltiMate™ boar version 12.3D, NucleoCounter SP100, and the Thoma hemacytometer. For each type of instrument, concentration was measured on 34 boar semen samples in quadruplicate, and agreement between measurements and instruments was evaluated. Accuracy for both photometers was expressed as the mean percentage difference from the general mean: -0.6% for AccuCell™ and 0.5% for AccuRead™, and no significant differences were found between any instrument and the mean of measurements across all equipment. Repeatability was 1.8% for AccuCell™ and 3.2% for AccuRead™. Differences between instruments were low (confidence interval 3%) except when the hemacytometer was used as the reference. Even though the hemacytometer is considered worldwide to be the gold standard, it was the most variable instrument (confidence interval 7.1%). The conclusion is that routine photometric measures of raw semen concentration are reliable, accurate, and precise using AccuRead™ or AccuCell™. There are multiple steps in semen processing that can induce sperm loss and therefore increase differences between theoretical and real sperm numbers in doses. Potential biases that depend on the workflow, but not on the initial photometric measure of semen concentration, are discussed.
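
    The two summary statistics used here, the mean percentage difference to the general mean (accuracy) and the coefficient of variation of replicates (repeatability), are simple to compute; a sketch with invented numbers:

```python
import numpy as np

# Per-sample mean concentrations (x10^6 sperm/mL) from three instruments
# on the same five ejaculates; the numbers are invented for illustration.
readings = {"AccuCell": np.array([312., 285., 402., 350., 298.]),
            "AccuRead": np.array([318., 280., 398., 356., 303.]),
            "hemacytometer": np.array([300., 295., 420., 340., 310.])}

general_mean = np.mean(list(readings.values()), axis=0)

# Accuracy: mean percentage difference of each instrument to the
# general mean of all instruments.
for name, x in readings.items():
    pct = ((x - general_mean) / general_mean * 100.0).mean()
    print(f"{name}: {pct:+.2f}% vs general mean")

# Repeatability: coefficient of variation of replicate readings
# (the study used quadruplicates).
quad = np.array([309., 315., 312., 318.])
cv = quad.std(ddof=1) / quad.mean() * 100.0
print(f"CV of one quadruplicate: {cv:.1f}%")
```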

  1. Equidistant Intervals in Perspective Photographs and Paintings

    PubMed Central

    2016-01-01

    Human vision is extremely sensitive to equidistance of spatial intervals in the frontal plane. Thresholds for spatial equidistance have been extensively measured in bisecting tasks. Despite the vast number of studies, the informational basis for equidistance perception is unknown. There are three possible sources of information for spatial equidistance in pictures, namely, distances in the picture plane, in physical space, and visual space. For each source, equidistant intervals were computed for perspective photographs of walls and canals. Intervals appear equidistant if equidistance is defined in visual space. Equidistance was further investigated in paintings of perspective scenes. In appraisals of the perspective skill of painters, emphasis has been on accurate use of vanishing points. The current study investigated the skill of painters to depict equidistant intervals. Depicted rows of equidistant columns, tiles, tapestries, or trees were analyzed in 30 paintings and engravings. Computational analysis shows that from the Middle Ages until now, artists either represented equidistance in physical space or in a visual space of very limited depth. Among the painters and engravers who depict equidistance in a highly nonveridical visual space are renowned experts of linear perspective. PMID:27698983
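
    The picture-plane signature of physical-space equidistance follows directly from central projection: depths in arithmetic progression map to image positions in a harmonic sequence, so depicted intervals shrink with distance. A minimal pinhole-camera sketch (all parameters hypothetical):

```python
import numpy as np

f = 0.05    # focal distance of the pinhole camera (m)
h = 1.5     # camera height above the ground plane (m)
depths = np.arange(4.0, 24.0, 2.0)   # trees every 2 m in physical space

# Central projection: a ground-plane point at depth Z projects to image
# height y = -f * h / Z, so equal steps in depth give picture-plane
# intervals that shrink harmonically with distance.
y = -f * h / depths
print(np.round(np.diff(y) * 1000.0, 3))   # successive gaps in mm
```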

  2. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at the optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥140 points and ≥445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing
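
    The reported quantities, area under the ROC and sensitivity/specificity/accuracy at a cutoff, can be reproduced for any diagnostic score. The sketch below uses synthetic scores, not the NASA software or the study data:

```python
import numpy as np

def roc_stats(scores, labels, cutoff):
    """Area under the ROC via the rank (Mann-Whitney) identity, plus
    sensitivity, specificity, and accuracy at a fixed cutoff."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    auc = (np.mean(pos[:, None] > neg[None, :])
           + 0.5 * np.mean(pos[:, None] == neg[None, :]))
    sens = np.mean(pos >= cutoff)
    spec = np.mean(neg < cutoff)
    acc = (sens * len(pos) + spec * len(neg)) / len(scores)
    return auc, sens, spec, acc

# Synthetic severity scores for 66 patients and 66 matched controls.
rng = np.random.default_rng(7)
scores = np.concatenate([rng.normal(180., 40., 66),   # patients
                         rng.normal(100., 35., 66)])  # controls
labels = np.concatenate([np.ones(66, bool), np.zeros(66, bool)])
print(roc_stats(scores, labels, cutoff=140.0))
```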

  3. Computation of the intervals of uncertainties about the parameters found for identification

    NASA Technical Reports Server (NTRS)

    Mereau, P.; Raymond, J.

    1982-01-01

    A modeling method for calculating the intervals of uncertainty about parameters found by identification is described. The confidence region and the general approach to the calculation of these intervals are discussed. The general subprograms for determining the intervals are described, together with their organizational charts, the tests carried out, and the listings of the different subprograms.

  4. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    SciTech Connect

    Kreinovich, Vladik; Oberkampf, William Louis; Ginzburg, Lev; Ferson, Scott; Hajagos, Janos

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
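
    For the simplest of these statistics the interval bounds are immediate: the sample mean of interval data ranges exactly between the mean of the lower endpoints and the mean of the upper endpoints. The variance is harder, which is part of the computability question the report addresses; the sketch below enumerates endpoint combinations, feasible only for small samples:

```python
import numpy as np
from itertools import product

# Interval-valued measurements: each row is [lower, upper].
data = np.array([[1.8, 2.2],
                 [2.9, 3.4],
                 [2.0, 2.0],    # a point value is a degenerate interval
                 [3.1, 3.9]])

# The sample mean over all true values consistent with the intervals
# ranges exactly between the means of the endpoints.
print("mean lies in", (data[:, 0].mean(), data[:, 1].mean()))

# The variance is a convex function of the data vector, so its maximum
# over the box of intervals is attained at an endpoint combination;
# enumerating the 2^n vertices yields the exact upper bound. The
# minimum can lie strictly inside the box (it is zero whenever all
# intervals share a common point).
upper_var = max(np.var(c, ddof=1) for c in product(*data))
print("variance upper bound:", round(upper_var, 4))
```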

  5. Explicit and Implicit Confidence Judgments and Developmental Differences in Metamemory: An Eye-Tracking Approach

    ERIC Educational Resources Information Center

    Roderer, Thomas; Roebers, Claudia M.

    2010-01-01

    In the present study, primary school children's ability to give accurate confidence judgments (CJ) was addressed, with a special focus on uncertainty monitoring. In order to investigate the effects of memory retrieval processes on monitoring judgments, item difficulty in a vocabulary learning task (Japanese symbols) was manipulated. Moreover, as a…

  6. A variance based confidence criterion for ERA identified modal parameters. [Eigensystem Realization Algorithm

    NASA Technical Reports Server (NTRS)

    Longman, Richard W.; Juang, Jer-Nan

    1988-01-01

    The realization theory is developed in a systematic manner for the Eigensystem Realization Algorithm (ERA) used for system identification. First, perturbation results are obtained which describe the linearized changes in the identified parameters resulting from small changes in the data. Formulas are then derived that can be used to evaluate the variance of each of the identified parameters, assuming that the noise level is sufficiently low to allow the application of linearized results. These variances can be converted to give confidence intervals for each of the parameters at any chosen confidence level.
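
    The last step, converting a parameter variance into a confidence interval at a chosen confidence level, is the standard normal-theory construction, sketched below under the Gaussian, low-noise assumption stated in the abstract (the numbers are illustrative):

```python
from math import sqrt
from scipy.stats import norm

def ci_from_variance(estimate, variance, level=0.95):
    """Normal-theory interval, estimate +/- z * sigma; valid under the
    linearized, low-noise assumption described in the abstract."""
    half = norm.ppf(0.5 + level / 2.0) * sqrt(variance)
    return estimate - half, estimate + half

# E.g., an identified modal frequency of 2.37 Hz with variance 0.0049:
print(ci_from_variance(2.37, 0.0049, level=0.95))
```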

  7. Determining frequentist confidence limits using a directed parameter space search

    SciTech Connect

    Daniel, Scott F.; Connolly, Andrew J.; Schneider, Jeff

    2014-10-10

    We consider the problem of inferring constraints on a high-dimensional parameter space with a computationally expensive likelihood function. We propose a machine learning algorithm that maps out the Frequentist confidence limit on parameter space by intelligently targeting likelihood evaluations so as to quickly and accurately characterize the likelihood surface in both low- and high-likelihood regions. We compare our algorithm to Bayesian credible limits derived by the well-tested Markov Chain Monte Carlo (MCMC) algorithm using both multi-modal toy likelihood functions and the seven yr Wilkinson Microwave Anisotropy Probe cosmic microwave background likelihood function. We find that our algorithm correctly identifies the location, general size, and general shape of high-likelihood regions in parameter space while being more robust against multi-modality than MCMC.

  8. Measuring Intuition: Nonconscious Emotional Information Boosts Decision Accuracy and Confidence.

    PubMed

    Lufityanto, Galang; Donkin, Chris; Pearson, Joel

    2016-05-01

    The long-held popular notion of intuition has garnered much attention both academically and popularly. Although most people agree that there is such a phenomenon as intuition, involving emotionally charged, rapid, unconscious processes, little compelling evidence supports this notion. Here, we introduce a technique in which subliminal emotional information is presented to subjects while they make fully conscious sensory decisions. Our behavioral and physiological data, along with evidence-accumulator models, show that nonconscious emotional information can boost accuracy and confidence in a concurrent emotion-free decision task, while also speeding up response times. Moreover, these effects were contingent on the specific predictive arrangement of the nonconscious emotional valence and motion direction in the decisional stimulus. A model that simultaneously accumulates evidence from both physiological skin conductance and conscious decisional information provides an accurate description of the data. These findings support the notion that nonconscious emotions can bias concurrent nonemotional behavior-a process of intuition.

  9. Confidence as Bayesian Probability: From Neural Origins to Behavior.

    PubMed

    Meyniel, Florent; Sigman, Mariano; Mainen, Zachary F

    2015-10-01

    Research on confidence spreads across several sub-fields of psychology and neuroscience. Here, we explore how a definition of confidence as Bayesian probability can unify these viewpoints. This computational view entails that there are distinct forms in which confidence is represented and used in the brain, including distributional confidence, pertaining to neural representations of probability distributions, and summary confidence, pertaining to scalar summaries of those distributions. Summary confidence is, normatively, derived or "read out" from distributional confidence. Neural implementations of readout will trade off optimality versus flexibility of routing across brain systems, allowing confidence to serve diverse cognitive functions. PMID:26447574

  10. On the Confidence Limit of Hilbert Spectrum

    NASA Technical Reports Server (NTRS)

    Huang, Norden

    2003-01-01

    Confidence limits are a routine requirement for Fourier spectral analysis. But such confidence limits are established based on ergodic theory: for a stationary process, the temporal average equals the ensemble average, so one can divide the data into n sections and treat each section as an independent realization. Most natural processes in general, and climate data in particular, are not stationary; therefore, there is a need for Hilbert spectral analysis for such processes. Here ergodic theory is no longer applicable. We propose to use the various adjustable parameters in the sifting process of the Empirical Mode Decomposition (EMD) method to obtain an ensemble of Intrinsic Mode Function (IMF) sets. Based on such an ensemble, we introduce a statistical measure in the form of confidence limits for the Intrinsic Mode Functions and, consequently, the Hilbert spectra. The criterion for selecting the various adjustable parameters is based on the orthogonality test of the resulting IMF sets. Length-of-day data from 1962 to 2001 will be used to illustrate this new approach. Its implication for climate data analysis will also be discussed.

  11. Confidence-Based Learning in Investment Analysis

    NASA Astrophysics Data System (ADS)

    Serradell-Lopez, Enric; Lara-Navarra, Pablo; Castillo-Merino, David; González-González, Inés

    The aim of this study is to determine the effectiveness of using multiple-choice tests in subjects related to administration and business management. To this end we used a multiple-choice test with specific questions to verify the extent of knowledge gained and the confidence and trust in the answers. The tests were taken by a group of 200 students in the bachelor's degree programme in Business Administration and Management. The analyses were carried out in one subject within the scope of investment analysis and measured the level of knowledge gained and the degree of trust and security in the responses at two different times during the course. The measurements took into account different levels of difficulty in the questions asked and the time spent by students completing the test. The results confirm that students are generally able to gain more knowledge along the way and to increase their degree of trust and confidence in the answers. The difficulty levels of the questions, set a priori by those responsible for the subjects, are confirmed to be related to the levels of security and confidence in the answers. It is estimated that the improvement in the skills learned is viewed favourably by businesses and is especially important for the job placement of students.

  12. Determination of confidence limits for experiments with low numbers of counts. [Poisson-distributed photon counts from astrophysical sources

    NASA Technical Reports Server (NTRS)

    Kraft, Ralph P.; Burrows, David N.; Nousek, John A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
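
    The Bayesian construction can be sketched numerically: with a flat prior on the source intensity s and a known mean background b, the posterior given N observed counts is proportional to exp(-(s+b)) (s+b)^N, and limits follow by integrating it. The grid-based sketch below follows this logic but is not the paper's tabulated procedure:

```python
import numpy as np
from scipy.stats import poisson

def bayes_upper_limit(n_obs, b, level=0.90, s_max=50.0, n_grid=200001):
    """Upper credible limit on the source intensity s with a flat prior
    on s >= 0 and a known mean background b, computed on a grid."""
    s = np.linspace(0.0, s_max, n_grid)
    post = poisson.pmf(n_obs, s + b)   # likelihood; flat prior in s
    cdf = np.cumsum(post)
    cdf /= cdf[-1]                     # normalize over the grid
    return s[np.searchsorted(cdf, level)]

# Three counts observed against an expected mean background of 2.5:
print(bayes_upper_limit(3, 2.5, level=0.90))
```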

  13. Horizontal velocities in the central and eastern United States from GPS surveys during the 1987-1996 interval

    SciTech Connect

    Snay, R.A.; Strange, W.E.

    1997-12-01

    The National Geodetic Survey and the Nuclear Regulatory Commission jointly organized GPS surveys in 1987, 1990, 1993, and 1996 to search for crustal deformation in the central and eastern United States (east of longitude 108°W). We have analyzed the data of these four surveys in combination with VLBI data observed during the 1979-1995 interval and GPS data for 22 additional surveys observed during the 1990-1996 interval. These latter GPS surveys served to establish accurately positioned geodetic marks in various states. Accordingly, we have computed horizontal velocities for 64 GPS sites and 12 VLBI sites relative to a reference frame in which the interior of the North American plate is considered fixed on average. None of our derived velocities exceeds 6 mm/yr in magnitude. Moreover, the derived velocity at each GPS site is statistically zero at the 95% confidence level except for the site BOLTON in central Ohio and the site BEARTOWN in southeastern Pennsylvania. However, as statistical theory would allow approximately 5% of the 64 GPS sites to fail our zero-velocity hypothesis, we are uncertain whether these estimated velocities for BOLTON and BEARTOWN reflect actual motion relative to the North American plate. We also computed horizontal strain rates for the cells formed by a 1° by 1° grid spanning the central and eastern United States. Corresponding shearing rates are everywhere less than 60 nanoradians/yr in magnitude, and no shearing rate differs statistically from zero at the 95% confidence level except for a grid cell near BEARTOWN whose rate is 57 ± 26 nanoradians/yr. Corresponding areal dilatation rates are everywhere less than 40 nanostrain/yr in magnitude, and no dilatation rate differs statistically from zero at the 95% confidence level.

  14. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  15. The Confidence Information Ontology: a step towards a standard for asserting confidence in annotations.

    PubMed

    Bastian, Frederic B; Chibucos, Marcus C; Gaudet, Pascale; Giglio, Michelle; Holliday, Gemma L; Huang, Hong; Lewis, Suzanna E; Niknejad, Anne; Orchard, Sandra; Poux, Sylvain; Skunca, Nives; Robinson-Rechavi, Marc

    2015-01-01

    Biocuration has become a cornerstone for analyses in biology, and to meet demand the number of annotations has grown considerably in recent years. However, the reliability of these annotations varies; it has thus become necessary to be able to assess the confidence in annotations. Although several resources already provide confidence information about the annotations that they produce, a standard way of providing such information has yet to be defined. This lack of standardization undermines the propagation of knowledge across resources, as well as the credibility of results from high-throughput analyses. Seeded at a workshop during the Biocuration 2012 conference, a working group has been created to address this problem. We present here the elements that were identified as essential for assessing confidence in annotations, as well as a draft ontology--the Confidence Information Ontology--to illustrate how the problems identified could be addressed. We hope that this effort will provide a home for discussing this major issue among the biocuration community. Tracker URL: https://github.com/BgeeDB/confidence-information-ontology Ontology URL: https://raw.githubusercontent.com/BgeeDB/confidence-information-ontology/master/src/ontology/cio-simple.obo
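
    The ontology itself is distributed as an OBO file (see the Ontology URL above). A minimal, illustrative scan of its [Term] stanzas, not a full OBO parser:

```python
def read_obo_terms(path):
    """Yield (id, name) pairs from the [Term] stanzas of an OBO file.
    A deliberately minimal scanner: relationships, synonyms, and other
    stanza fields are ignored."""
    term_id = None
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if line == "[Term]":
                term_id = None
            elif line.startswith("id:"):
                term_id = line[3:].strip()
            elif line.startswith("name:") and term_id:
                yield term_id, line[5:].strip()

# e.g. after downloading cio-simple.obo from the URL above:
# for tid, tname in read_obo_terms("cio-simple.obo"):
#     print(tid, tname)
```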

  17. Order and chaos in fixed-interval schedules of reinforcement

    PubMed Central

    Hoyert, Mark S.

    1992-01-01

    Fixed-interval schedule performance is characterized by high levels of variability. Responding is absent at the onset of the interval and gradually increases in frequency until reinforcer delivery. Measures of behavior also vary drastically and unpredictably between successive intervals. Recent advances in the study of nonlinear dynamics have allowed researchers to study irregular and unpredictable behavior in a number of fields. This paper reviews several concepts and techniques from nonlinear dynamics and examines their utility in predicting the behavior of pigeons responding to a fixed-interval schedule of reinforcement. The analysis provided fairly accurate a priori accounts of response rates, accounting for 92.8% of the variance when predicting response rate 1 second in the future and 64% of the variance when predicting response rates for each second over the entire next interreinforcer interval. The nonlinear dynamics account suggests that even the “noisiest” behavior might be the product of purely deterministic mechanisms. PMID:16812657
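
    The general forecasting recipe (reconstruct the dynamics by delay embedding, then predict from the nearest historical state) can be sketched as follows; this is a toy version of the idea, not the paper's exact procedure:

```python
import numpy as np

def predict_next(series, dim=3, lag=1):
    """Nearest-neighbor forecast after delay-coordinate embedding: find
    the past state most similar to the present one and return the value
    that followed it."""
    x = np.asarray(series, dtype=float)
    n = len(x) - (dim - 1) * lag            # number of embedded states
    X = np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])
    current, history = X[-1], X[:-1]        # last state vs. all earlier ones
    j = int(np.argmin(np.linalg.norm(history - current, axis=1)))
    return x[j + (dim - 1) * lag + 1]       # successor of the matched state

# A noiseless period-4 pattern is predicted exactly: (1, 2, 3) -> 0.
print(predict_next([0, 1, 2, 3] * 10))     # 0.0
```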

  18. Contraceptive confidence and timing of first birth in Moldova: an event history analysis of retrospective data

    PubMed Central

    Lyons-Amos, Mark; Padmadas, Sabu S; Durrant, Gabriele B

    2014-01-01

    Objectives To test the contraceptive confidence hypothesis in a modern context. The hypothesis is that women using effective or modern contraceptive methods have increased contraceptive confidence and hence a shorter interval between marriage and first birth than users of ineffective or traditional methods. We extend the hypothesis to incorporate the role of abortion, arguing that it acts as a substitute for contraception in the study context. Setting Moldova, a country in South-East Europe. Moldova exhibits high use of traditional contraceptive methods and abortion compared with other European countries. Participants Data are from a secondary analysis of the 2005 Moldovan Demographic and Health Survey, a nationally representative sample survey. 5377 unmarried women were selected. Primary and secondary outcome measures The outcome measure was the interval between marriage and first birth. This was modelled using a piecewise-constant hazard regression, with abortion and contraceptive method types as primary variables along with relevant sociodemographic controls. Results Women with high contraceptive confidence (modern method users) have a higher cumulative hazard of first birth 36 months following marriage (0.88 (0.87 to 0.89)) compared with women with low contraceptive confidence (traditional method users, cumulative hazard: 0.85 (0.84 to 0.85)). This is consistent with the contraceptive confidence hypothesis. There is a higher cumulative hazard of first birth among women with low (0.80 (0.79 to 0.80)) and moderate abortion propensities (0.76 (0.75 to 0.77)) than women with no abortion propensity (0.73 (0.72 to 0.74)) 24 months after marriage. Conclusions Effective contraceptive use tends to increase contraceptive confidence and is associated with a shorter interval between marriage and first birth. Increased use of abortion also tends to increase contraceptive confidence and shorten birth duration, although this effect is non-linear—women with a very high

  19. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
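
    A minimal flavor of the technique in pure Python; real interval packages such as INTLAB also perform outward rounding, which this sketch ignores:

```python
class Interval:
    """Closed interval [lo, hi] with exact endpoint arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A length of 10.0 +/- 0.1 times a width of 5.0 +/- 0.2 bounds the area:
length, width = Interval(9.9, 10.1), Interval(4.8, 5.2)
print(length * width)  # roughly [47.52, 52.52]
```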

  20. Engineering Student Self-Assessment through Confidence-Based Scoring

    ERIC Educational Resources Information Center

    Yuen-Reed, Gigi; Reed, Kyle B.

    2015-01-01

    A vital aspect of an answer is the confidence that goes along with it. Misstating the level of confidence one has in the answer can have devastating outcomes. However, confidence assessment is rarely emphasized during typical engineering education. The confidence-based scoring method described in this study encourages students to both think about…

  1. European security, nuclear weapons and public confidence

    SciTech Connect

    Gutteridge, W.

    1982-01-01

    This book presents papers on nuclear arms control in Europe. Topics considered include political aspects, the balance of power, nuclear disarmament in Europe, the implications of new conventional technologies, the neutron bomb, theater nuclear weapons, arms control in Northern Europe, naval confidence-building measures in the Baltic, the strategic balance in the Arctic Ocean, Arctic resources, threats to European stability, developments in South Africa, economic cooperation in Europe, European collaboration in science and technology after Helsinki, European cooperation in the area of electric power, and economic cooperation as a factor for the development of European security and cooperation.

  2. Confidence and conflicts of duty in surgery.

    PubMed

    Coggon, John; Wheeler, Robert

    2010-03-01

    This paper offers an exploration of the right to confidentiality, considering the moral importance of private information. It is shown that the legitimate value that individuals derive from confidentiality stems from the public interest. It is reassuring, therefore, that public interest arguments must be made to justify breaches of confidentiality. The General Medical Council's guidance gives very high importance to duties to maintain confidences, but also rightly acknowledges that, at times, there are more important duties that must be met. Nevertheless, this potential conflict of obligations may place the surgeon in difficult clinical situations, and examples of these are described, together with suggestions for resolution. PMID:20353640

  3. VARIABLE TIME-INTERVAL GENERATOR

    DOEpatents

    Gross, J.E.

    1959-10-31

    This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.
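
    The measurement reduces to a ratio count; a small sketch with hypothetical numbers:

```python
def variable_period(count, divider, f_fixed_hz):
    """Period of the variable oscillator inferred by counting `count`
    fixed-oscillator periods during one cycle of the variable frequency
    divided by `divider`."""
    return count / (divider * f_fixed_hz)

# 10 MHz reference, divide-by-1000, 25,000 reference periods counted:
t = variable_period(25_000, 1000, 10e6)
print(t)  # 2.5e-06 s, i.e. a variable oscillator running at 400 kHz
```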

  4. Vaccination Confidence and Parental Refusal/Delay of Early Childhood Vaccines

    PubMed Central

    Gilkey, Melissa B.; McRee, Annie-Laurie; Magnus, Brooke E.; Reiter, Paul L.; Dempsey, Amanda F.; Brewer, Noel T.

    2016-01-01

    Objective To support efforts to address parental hesitancy towards early childhood vaccination, we sought to validate the Vaccination Confidence Scale using data from a large, population-based sample of U.S. parents. Methods We used weighted data from 9,354 parents who completed the 2011 National Immunization Survey. Parents reported on the immunization history of a 19- to 35-month-old child in their households. Healthcare providers then verified children’s vaccination status for vaccines including measles, mumps, and rubella (MMR), varicella, and seasonal flu. We used separate multivariable logistic regression models to assess associations between parents’ mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. Results A substantial minority of parents reported a history of vaccine refusal (15%) or delay (27%). Vaccination confidence was negatively associated with refusal of any vaccine (odds ratio [OR] = 0.58, 95% confidence interval [CI], 0.54–0.63) as well as refusal of MMR, varicella, and flu vaccines specifically. Negative associations between vaccination confidence and measures of vaccine delay were more moderate, including delay of any vaccine (OR = 0.81, 95% CI, 0.76–0.86). Vaccination confidence was positively associated with having received vaccines, including MMR (OR = 1.53, 95% CI, 1.40–1.68), varicella (OR = 1.54, 95% CI, 1.42–1.66), and flu vaccines (OR = 1.32, 95% CI, 1.23–1.42). Conclusions Vaccination confidence was consistently associated with early childhood vaccination behavior across multiple vaccine types. Our findings support expanding the application of the Vaccination Confidence Scale to measure vaccination beliefs among parents of young children. PMID:27391098

  5. Multiple interval mapping for quantitative trait loci.

    PubMed Central

    Kao, C H; Zeng, Z B; Teasdale, R D

    1999-01-01

    A new statistical method for mapping quantitative trait loci (QTL), called multiple interval mapping (MIM), is presented. It uses multiple marker intervals simultaneously to fit multiple putative QTL directly in the model for mapping QTL. The MIM model is based on Cockerham's model for interpreting genetic parameters and the method of maximum likelihood for estimating genetic parameters. With the MIM approach, the precision and power of QTL mapping could be improved. Also, epistasis between QTL, genotypic values of individuals, and heritabilities of quantitative traits can be readily estimated and analyzed. Using the MIM model, a stepwise selection procedure with likelihood ratio test statistic as a criterion is proposed to identify QTL. This MIM method was applied to a mapping data set of radiata pine on three traits: brown cone number, tree diameter, and branch quality scores. Based on the MIM result, seven, six, and five QTL were detected for the three traits, respectively. The detected QTL individually contributed from approximately 1 to 27% of the total genetic variation. Significant epistasis between four pairs of QTL in two traits was detected, and the four pairs of QTL contributed approximately 10.38 and 14.14% of the total genetic variation. The asymptotic variances of QTL positions and effects were also provided to construct the confidence intervals. The estimated heritabilities were 0.5606, 0.5226, and 0.3630 for the three traits, respectively. With the estimated QTL effects and positions, the best strategy of marker-assisted selection for trait improvement for a specific purpose and requirement can be explored. The MIM FORTRAN program is available on the worldwide web (http://www.stat.sinica.edu.tw/chkao/). PMID:10388834

  6. Diagnosing Anomalous Network Performance with Confidence

    SciTech Connect

    Settlemyer, Bradley W; Hodson, Stephen W; Kuehn, Jeffery A; Poole, Stephen W

    2011-04-01

    Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly parallel applications running on parallel machines typically require consistently high levels of performance to adequately leverage the massive amounts of available computing power. Performance analysts have usually quantified network performance using traditional summary statistics that assume the observational data is sampled from a normal distribution. In our examinations of network performance, we have found this method of analysis often provides too little data to understand anomalous network performance. Our tool, Confidence, instead uses an empirically derived probability distribution to characterize network performance. In this paper we describe several instances where the Confidence toolkit allowed us to understand and diagnose network performance anomalies that we could not adequately explore with the simple summary statistics provided by traditional measurement tools. In particular, we examine a multi-modal performance scenario encountered with an Infiniband interconnection network and we explore the performance repeatability on the custom Cray SeaStar2 interconnection network after a set of software and driver updates.
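
    The core idea, characterizing latency by its empirical distribution rather than by normal-theory summary statistics, can be sketched as follows; the data are synthetic and this is not the Confidence toolkit's actual interface:

```python
import numpy as np

def characterize(latencies_us, probs=(5, 25, 50, 75, 95, 99)):
    """Summarize latency samples by empirical percentiles instead of
    mean +/- SD, which can badly mislead for multi-modal data."""
    samples = np.asarray(latencies_us, dtype=float)
    return {f"p{p}": float(np.percentile(samples, p)) for p in probs}

# Hypothetical bimodal latencies: a fast path plus a slow retransmit mode.
rng = np.random.default_rng(0)
lat = np.concatenate([rng.normal(10, 1, 900), rng.normal(50, 5, 100)])
print(characterize(lat))  # p95/p99 expose the slow mode the mean hides
```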

  7. Towards Measurement of Confidence in Safety Cases

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Pai, Ganesh J.; Habli, Ibrahim

    2011-01-01

    Arguments in safety cases are predominantly qualitative. This is partly attributed to the lack of sufficient design and operational data necessary to measure the achievement of high-dependability targets, particularly for safety-critical functions implemented in software. The subjective nature of many forms of evidence, such as expert judgment and process maturity, also contributes to the overwhelming dependence on qualitative arguments. However, where data for quantitative measurements is systematically collected, quantitative arguments provide far more benefits over qualitative arguments, in assessing confidence in the safety case. In this paper, we propose a basis for developing and evaluating integrated qualitative and quantitative safety arguments based on the Goal Structuring Notation (GSN) and Bayesian Networks (BN). The approach we propose identifies structures within GSN-based arguments where uncertainties can be quantified. BN are then used to provide a means to reason about confidence in a probabilistic way. We illustrate our approach using a fragment of a safety case for an unmanned aerial system and conclude with some preliminary observations.

  8. Exploring determinants of surrogate decision-maker confidence: an example from the ICU.

    PubMed

    Bolcic-Jankovic, Dragana; Clarridge, Brian R; LeBlanc, Jessica L; Mahmood, Rumel S; Roman, Anthony M; Freeman, Bradley D

    2014-10-01

    This article is an exploratory data analysis of the determinants of confidence in a surrogate decision maker who has been asked to permit an intensive care unit (ICU) patient's participation in genetic research. We pursue the difference between surrogates' and patients' confidence that the surrogate can accurately represent the patient's wishes. The article also explores whether greater confidence leads to greater agreement between patients and surrogates. Our data come from a survey conducted in three hospital ICUs. We interviewed 445 surrogates and 214 patients. The only thing that influences patients' confidence in their surrogate's decision is whether they had prior discussions with him or her; however, there are more influences operating on the surrogate's self-confidence. More confident surrogates are more likely to match their patients' wishes. Patients are more likely to agree to research participation than their surrogates would allow. The surrogates whose response did not match as closely were less trusting of the hospital staff, were less likely to allow patient participation if there were no direct benefits to the patient, had given less thought about the way genetic research is conducted, and were much less likely to have a person in their life who they would trust to make decisions for them if they were incapacitated. PMID:25747298

  9. Confidence and rejection in automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Colton, Larry Don

    Automatic speech recognition (ASR) is performed imperfectly by computers. For some designated part (e.g., word or phrase) of the ASR output, rejection is deciding (yes or no) whether it is correct, and confidence is the probability (0.0 to 1.0) of it being correct. This thesis presents new methods of rejecting errors and estimating confidence for telephone speech. These are also called word or utterance verification and can be used in wordspotting or voice-response systems. Open-set or out-of-vocabulary situations are a primary focus. Language models are not considered. In vocabulary-dependent rejection all words in the target vocabulary are known in advance and a strategy can be developed for confirming each word. A word-specific artificial neural network (ANN) is shown to discriminate well, and scores from such ANNs are shown on a closed-set recognition task to reorder the N-best hypothesis list (N=3) for improved recognition performance. Segment-based duration and perceptual linear prediction (PLP) features are shown to perform well for such ANNs. The majority of the thesis concerns vocabulary- and task-independent confidence and rejection based on phonetic word models. These can be computed for words even when no training examples of those words have been seen. New techniques are developed using phoneme ranks instead of probabilities in each frame. These are shown to perform as well as the best other methods examined despite the data reduction involved. Certain new weighted averaging schemes are studied but found to give no performance benefit. Hierarchical averaging is shown to improve performance significantly: frame scores combine to make segment (phoneme state) scores, which combine to make phoneme scores, which combine to make word scores. Use of intermediate syllable scores is shown to not affect performance. Normalizing frame scores by an average of the top probabilities in each frame is shown to improve performance significantly. Perplexity of the wrong
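
    The hierarchical averaging described above is simple to sketch; frame scores roll up to phoneme scores and then to a word score (the segment level is omitted here and the scores are hypothetical):

```python
def word_score(frame_scores_per_phoneme):
    """Hierarchical averaging: mean frame score per phoneme, then the
    mean of the phoneme scores as the word score."""
    phoneme_scores = [sum(frames) / len(frames)
                      for frames in frame_scores_per_phoneme]
    return sum(phoneme_scores) / len(phoneme_scores)

# Hypothetical rank-based frame scores for a three-phoneme word:
print(word_score([[0.9, 0.8, 0.85], [0.4, 0.5], [0.7, 0.75, 0.8, 0.7]]))
```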

  10. Anaesthetists' knowledge of the QT interval in a teaching hospital.

    PubMed

    Marshall, S D; Myles, P S

    2005-12-01

    Many drugs used in anaesthesia may prolong the QT interval of the electrocardiogram (ECG), and recent U.S. Food and Drug Administration guidelines mandate monitoring of the ECG before, during and after droperidol administration. We surveyed 41 trainee and consultant anaesthetists in our Department to determine current practice and knowledge of the QT interval to investigate if this is a practical proposition. A response rate of 98% (40/41) was obtained. The majority of respondents expressed moderate to high levels of confidence in interpreting the ECG, and this was related to years of training (rho 0.36, P=0.024). A total of 27 respondents (65%) were able to correctly identify the QT interval on a schematic representation of the ECG, trainees 70% vs consultants 60%, P=0.51. When asked to name drugs that altered the QT interval, droperidol was included by 11 of the 40 respondents (28%); trainees 10% vs consultants 45%, OR 7.4 (95% CI: 1.3-41), P=0.013. Torsades de Pointes was correctly identified as a possible consequence of a prolonged QT interval by 65% of trainees and 70% of consultants, P=0.83. The results suggest that QT interval measurement is not widely practised by anaesthetists, although its clinical significance is well known, and interpretation would be unreliable without further education.

  11. Modal confidence factor in vibration testing

    NASA Technical Reports Server (NTRS)

    Ibrahim, S. R.

    1978-01-01

    The modal confidence factor (MCF) is a number calculated for every identified mode for a structure under test. The MCF varies from 0.00 for a distorted, nonlinear, or noise mode to 100.0 for a pure structural mode. The theory of the MCF is based on the correlation that exists between the modal deflection at a certain station and the modal deflection at the same station delayed in time. The theory and application of the MCF are illustrated by two experiments. The first experiment deals with simulated responses from a two-degree-of-freedom system with 20%, 40%, and 100% noise added. The second experiment was run on a generalized payload model. The free decay response from the payload model contained 22% noise.

  12. Sample sizes for confidence limits for reliability.

    SciTech Connect

    Darby, John L.

    2010-02-01

    We recently performed an evaluation of the implications of a reduced stockpile of nuclear weapons for surveillance to support estimates of reliability. We found that one technique developed at Sandia National Laboratories (SNL) underestimates the required sample size for systems-level testing. For a large population the discrepancy is not important, but for a small population it is important. We found that another technique used by SNL provides the correct required sample size. For systems-level testing of nuclear weapons, samples are selected without replacement, and the hypergeometric probability distribution applies. Both of the SNL techniques focus on samples without defects from sampling without replacement. We generalized the second SNL technique to cases with defects in the sample. We created a computer program in Mathematica to automate the calculation of confidence for reliability. We also evaluated sampling with replacement where the binomial probability distribution applies.
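
    With zero observed defects and sampling without replacement, the confidence calculation is a hypergeometric tail check. A minimal sketch (not the Mathematica program mentioned above):

```python
from math import comb

def reliability_lower_bound(population, sample, confidence=0.90):
    """Reliability demonstrable at the given confidence when a sample
    drawn without replacement contains zero defects: find the smallest
    defect count D whose chance of yielding a clean sample drops to
    1 - confidence or below, and report (N - D)/N."""
    N, n = population, sample
    for D in range(N - n + 1):
        p_clean = comb(N - D, n) / comb(N, n)  # hypergeometric P(X = 0)
        if p_clean <= 1 - confidence:
            return (N - D) / N
    return n / N  # degenerate case: only the tested units are assured

# 50 units tested out of 500, zero defects, 90% confidence:
print(reliability_lower_bound(500, 50))  # 0.956
```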

  13. Simple Interval Timers for Microcomputers.

    ERIC Educational Resources Information Center

    McInerney, M.; Burgess, G.

    1985-01-01

    Discusses simple interval timers for microcomputers, including (1) the Jiffy clock; (2) CPU count timers; (3) screen count timers; (4) light pen timers; and (5) chip timers. Also examines some of the general characteristics of all types of timers. (JN)

  14. The 2009 Retirement Confidence Survey: economy drives confidence to record lows; many looking to work longer.

    PubMed

    Helman, Ruth; Copeland, Craig; VanDerhei, Jack

    2009-04-01

    RECORD LOW CONFIDENCE LEVELS: Workers who say they are very confident about having enough money for a comfortable retirement this year hit the lowest level in 2009 (13 percent) since the Retirement Confidence Survey started asking the question in 1993, continuing a two-year decline. Retirees also posted a new low in confidence about having a financially secure retirement, with only 20 percent now saying they are very confident (down from 41 percent in 2007). THE ECONOMY, INFLATION, COST OF LIVING ARE THE BIG CONCERNS: Not surprisingly, workers overall who have lost confidence over the past year about affording a comfortable retirement most often cite the recent economic uncertainty, inflation, and the cost of living as primary factors. In addition, certain negative experiences, such as job loss or a pay cut, loss of retirement savings, or an increase in debt, almost always contribute to loss of confidence among those who experience them. RETIREMENT EXPECTATIONS DELAYED: Workers apparently expect to work longer because of the economic downturn: 28 percent of workers in the 2009 RCS say the age at which they expect to retire has changed in the past year. Of those, the vast majority (89 percent) say that they have postponed retirement with the intention of increasing their financial security. Nevertheless, the median (mid-point) worker expects to retire at age 65, with 21 percent planning to push on into their 70s. The median retiree actually retired at age 62, and 47 percent of retirees say they retired sooner than planned. WORKING IN RETIREMENT: More workers are also planning to supplement their income in retirement by working for pay. The percentage of workers planning to work after they retire has increased to 72 percent in 2009 (up from 66 percent in 2007). This compares with 34 percent of retirees who report they actually worked for pay at some time during their retirement. GREATER WORRY ABOUT BASIC AND HEALTH EXPENSES: Workers who say they very confident in

  15. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  16. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  17. Statistical confidence estimation for Hi-C data reveals regulatory chromatin contacts

    PubMed Central

    Ay, Ferhat; Bailey, Timothy L.; Noble, William Stafford

    2014-01-01

    Our current understanding of how DNA is packed in the nucleus is most accurate at the fine scale of individual nucleosomes and at the large scale of chromosome territories. However, accurate modeling of DNA architecture at the intermediate scale of ∼50 kb–10 Mb is crucial for identifying functional interactions among regulatory elements and their target promoters. We describe a method, Fit-Hi-C, that assigns statistical confidence estimates to mid-range intra-chromosomal contacts by jointly modeling the random polymer looping effect and previously observed technical biases in Hi-C data sets. We demonstrate that our proposed approach computes accurate empirical null models of contact probability without any distribution assumption, corrects for binning artifacts, and provides improved statistical power relative to a previously described method. High-confidence contacts identified by Fit-Hi-C preferentially link expressed gene promoters to active enhancers identified by chromatin signatures in human embryonic stem cells (ESCs), capture 77% of RNA polymerase II-mediated enhancer-promoter interactions identified using ChIA-PET in mouse ESCs, and confirm previously validated, cell line-specific interactions in mouse cortex cells. We observe that insulators and heterochromatin regions are hubs for high-confidence contacts, while promoters and strong enhancers are involved in fewer contacts. We also observe that binding peaks of master pluripotency factors such as NANOG and POU5F1 are highly enriched in high-confidence contacts for human ESCs. Furthermore, we show that pairs of loci linked by high-confidence contacts exhibit similar replication timing in human and mouse ESCs and preferentially lie within the boundaries of topological domains for human and mouse cell lines. PMID:24501021

  18. High-Confidence Quantum Gate Tomography

    NASA Astrophysics Data System (ADS)

    Johnson, Blake; da Silva, Marcus; Ryan, Colm; Kimmel, Shelby; Donovan, Brian; Ohki, Thomas

    2014-03-01

    Debugging and verification of high-fidelity quantum gates requires the development of new tools and protocols to unwrap the performance of the gate from the rest of the sequence. Randomized benchmarking tomography[2] allows one to extract full information of the unital portion of the gate with high confidence. We report experimental confirmation of the technique's applicability to quantum gate tomography. We show that the method is robust to common experimental imperfections such as imperfect single-shot readout and state preparation. We also demonstrate the ability to characterize non-Clifford gates. To assist in the experimental implementation we introduce two techniques. "Atomic Cliffords" use phase ramping and frame tracking to allow single-pulse implementation of the full group of single-qubit Clifford gates. Domain specific pulse sequencers allow rapid implementation of the many thousands of sequences needed. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office contract no. W911NF-10-1-0324.

  19. The 2012 Retirement Confidence Survey: job insecurity, debt weigh on retirement confidence, savings.

    PubMed

    Helman, Ruth; Copeland, Craig; VanDerhei, Jack

    2012-03-01

    Americans' confidence in their ability to retire comfortably is stagnant at historically low levels. Just 14 percent are very confident they will have enough money to live comfortably in retirement (statistically equivalent to the low of 13 percent measured in 2011 and 2009). Employment insecurity looms large: Forty-two percent identify job uncertainty as the most pressing financial issue facing most Americans today. Worker confidence about having enough money to pay for medical expenses and long-term care expenses in retirement remains well below their confidence levels for paying basic expenses. Many workers report they have virtually no savings and investments. In total, 60 percent of workers report that the total value of their household's savings and investments, excluding the value of their primary home and any defined benefit plans, is less than $25,000. Twenty-five percent of workers in the 2012 Retirement Confidence Survey say the age at which they expect to retire has changed in the past year. In 1991, 11 percent of workers said they expected to retire after age 65, and by 2012 that has grown to 37 percent. Regardless of those retirement age expectations, and consistent with prior RCS findings, half of current retirees surveyed say they left the work force unexpectedly due to health problems, disability, or changes at their employer, such as downsizing or closure. Those already in retirement tend to express higher levels of confidence than current workers about several key financial aspects of retirement. Retirees report they are significantly more reliant on Social Security as a major source of their retirement income than current workers expect to be. Although 56 percent of workers expect to receive benefits from a defined benefit plan in retirement, only 33 percent report that they and/or their spouse currently have such a benefit with a current or previous employer. More than half of workers (56 percent) report they and/or their spouse have not tried

  20. Assessing Undergraduate Students' Conceptual Understanding and Confidence of Electromagnetics

    ERIC Educational Resources Information Center

    Leppavirta, Johanna

    2012-01-01

    The study examines how students' conceptual understanding changes from high confidence with incorrect conceptions to high confidence with correct conceptions when reasoning about electromagnetics. The Conceptual Survey of Electricity and Magnetism test is weighted with students' self-rated confidence on each item in order to infer how strongly…

  1. 49 CFR 1103.23 - Confidences of a client.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Confidences of a client. 1103.23 Section 1103.23... Responsibilities Toward A Client § 1103.23 Confidences of a client. (a) The practitioner's duty to preserve his client's confidence outlasts the practitioner's employment by the client, and this duty extends to...

  2. 21 CFR 26.37 - Confidence building activities.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Confidence building activities. 26.37 Section 26... COMMUNITY Specific Sector Provisions for Medical Devices § 26.37 Confidence building activities. (a) At the beginning of the transitional period, the Joint Sectoral Group will establish a joint confidence...

  3. 49 CFR 1103.23 - Confidences of a client.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 8 2011-10-01 2011-10-01 false Confidences of a client. 1103.23 Section 1103.23... Responsibilities Toward A Client § 1103.23 Confidences of a client. (a) The practitioner's duty to preserve his client's confidence outlasts the practitioner's employment by the client, and this duty extends to...

  4. 7 CFR 97.18 - Applications handled in confidence.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Applications handled in confidence. 97.18 Section 97.18 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING... confidence. (a) Pending applications shall be handled in confidence. Except as provided below, no...

  5. 7 CFR 97.18 - Applications handled in confidence.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Applications handled in confidence. 97.18 Section 97.18 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING... confidence. (a) Pending applications shall be handled in confidence. Except as provided below, no...

  6. Contrasting Academic Behavioural Confidence in Mexican and European Psychology Students

    ERIC Educational Resources Information Center

    Ochoa, Alma Rosa Aguila; Sander, Paul

    2012-01-01

    Introduction: Research with the Academic Behavioural Confidence scale using European students has shown that students have high levels of confidence in their academic abilities. It is generally accepted that people in more collectivist cultures have more realistic confidence levels in contrast to the overconfidence seen in individualistic European…

  7. Does Consumer Confidence Measure Up to the Hype?

    ERIC Educational Resources Information Center

    Griffitts, Dawn

    2003-01-01

    This economic education publication features an article, "Does Consumer Confidence Measure Up to the Hype?," which defines consumer confidence and describes how it is measured. The article also explores why people might pay so much attention to consumer confidence indexes. The document also contains a question and answer section about deflation as…

  8. Subjective Probability Intervals: How to Reduce Overconfidence by Interval Evaluation

    ERIC Educational Resources Information Center

    Winman, Anders; Hansson, Patrik; Juslin, Peter

    2004-01-01

    Format dependence implies that assessment of the same subjective probability distribution produces different conclusions about over- or underconfidence depending on the assessment format. In 2 experiments, the authors demonstrate that the overconfidence bias that occurs when participants produce intervals for an uncertain quantity is almost…

  9. High resolution time interval meter

    DOEpatents

    Martin, A.D.

    1986-05-09

    Method and apparatus are provided for measuring the time interval between two events to a higher resolution than is reliably available from conventional circuits and components. An internal clock pulse is provided at a frequency compatible with conventional component operating frequencies for reliable operation. Lumped constant delay circuits are provided for generating outputs at delay intervals corresponding to the desired high resolution. An initiation START pulse is input to generate first high resolution data. A termination STOP pulse is input to generate second high resolution data. Internal counters count at the low frequency internal clock pulse rate between the START and STOP pulses. The first and second high resolution data are logically combined to directly provide high resolution data to one counter and correct the count in the low resolution counter to obtain a high resolution time interval measurement.
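
    The final measurement combines the coarse count with the two fine-resolution (delay-line) readings; schematically, with hypothetical values:

```python
def time_interval(coarse_counts, f_clock_hz, start_fine_s, stop_fine_s):
    """Combine a coarse count of internal clock periods with the fine
    corrections captured at START and STOP. A sketch of the
    interpolation idea, not the patented circuit."""
    return coarse_counts / f_clock_hz + start_fine_s - stop_fine_s

# 100 MHz clock, 42 coarse counts, 3.2 ns and 1.7 ns fine corrections:
print(time_interval(42, 100e6, 3.2e-9, 1.7e-9))  # about 4.215e-07 s
```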

  10. Can we confidently diagnose pilomatricoma with fine needle aspiration cytology?

    PubMed

    Wong, Yin-Ping; Masir, Noraidah; Sharifah, Noor Akmal

    2015-01-01

    Pilomatricomas can be confidently diagnosed cytologically due to their characteristic cytomorphological features. However, these lesions are rarely encountered by cytopathologists and thus pose a diagnostic dilemma to even experienced individuals, especially when the lesions are focally sampled. We describe two cases of histologically confirmed pilomatricoma. The first case is of a 13-year-old boy with posterior cervical 'lymphadenopathy', and the second one is of a 12-year-old girl with a lower cheek swelling. Both aspirates comprised predominantly atypical basal-like cells, with prominent nucleoli. 'Ghost cells' were readily identified by cell block in case two, but cell block in case one yielded no diagnostic material. In case two, pilomatricoma was accurately diagnosed pre-operatively. A cytological suspicion of a neoplastic process was raised in case one. Despite being diagnostically challenging, pilomatricoma can be diagnosed with careful observation of two unique cytological features of the lesions: (1) pathognomonic 'ghost cells' and (2) irregular, saw-toothed, loosely cohesive basaloid cells, with prominent nucleoli. The role of thorough sampling of the lesion, with multiple passes of various sites, cannot be overemphasized. PMID:25892955

  13. Interpretable, probability-based confidence metric for continuous quantitative structure-activity relationship models.

    PubMed

    Keefer, Christopher E; Kauffman, Gregory W; Gupta, Rishi Raj

    2013-02-25

    A great deal of research has gone into the development of robust confidence in prediction and applicability domain (AD) measures for quantitative structure-activity relationship (QSAR) models in recent years. Much of the attention has historically focused on structural similarity, which can be defined in many forms and flavors. A concept that is frequently overlooked in the realm of the QSAR applicability domain is how the local activity landscape plays a role in how accurate a prediction is or is not. In this work, we describe an approach that pairs information about both the chemical similarity and activity landscape of a test compound's neighborhood into a single calculated confidence value. We also present an approach for converting this value into an interpretable confidence metric that has a simple and informative meaning across data sets. The approach will be introduced to the reader in the context of models built upon four diverse literature data sets. The steps we will outline include the definition of similarity used to determine nearest neighbors (NN), how we incorporate the NN activity landscape with a similarity-weighted root-mean-square distance (wRMSD) value, and how that value is then calibrated to generate an intuitive confidence metric for prospective application. Finally, we will illustrate the prospective performance of the approach on five proprietary models whose predictions and confidence metrics have been tracked for more than a year.
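
    One plausible form of the wRMSD value (the paper's exact weighting may differ) is a similarity-weighted RMS distance between the neighbors' activities and the model's prediction:

```python
import math

def weighted_rmsd(neighbor_activities, similarities, predicted):
    """Similarity-weighted RMS distance between a test compound's
    predicted activity and the activities of its nearest neighbors.
    The precise weighting scheme here is an assumption."""
    num = sum(s * (a - predicted) ** 2
              for a, s in zip(neighbor_activities, similarities))
    return math.sqrt(num / sum(similarities))

# Hypothetical pIC50 values of three neighbors with Tanimoto similarities:
print(weighted_rmsd([6.2, 6.8, 5.9], [0.9, 0.7, 0.5], 6.4))  # about 0.36
```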

  14. Oxygen uptake in maximal effort constant rate and interval running.

    PubMed

    Pratt, Daniel; O'Brien, Brendan J; Clark, Bradley

    2013-01-01

    This study investigated differences in average VO2 between maximal effort interval running and maximal effort constant rate running at lactate threshold matched for time. The average VO2 and distance covered of 10 recreational male runners (VO2max: 4158 ± 390 mL · min(-1)) were compared between a maximal effort constant-rate run at lactate threshold (CRLT), a maximal effort interval run (INT) consisting of 2 min at VO2max speed with 2 minutes at 50% of VO2 repeated 5 times, and a run at the average speed sustained during the interval run (CR submax). Data are presented as mean and 95% confidence intervals. The average VO2 for INT, 3451 (3269-3633) mL · min(-1), 83% VO2max, was not significantly different to CRLT, 3464 (3285-3643) mL · min(-1), 84% VO2max, but both were significantly higher than CR submax, 3464 (3285-3643) mL · min(-1), 76% VO2max. The distance covered was significantly greater in CRLT, 4431 (4202-3731) metres, compared to INT and CR submax, 4070 (3831-4309) metres. The novel finding was that a 20-minute maximal effort constant rate run uses similar amounts of oxygen as a 20-minute maximal effort interval run despite the greater distance covered in the maximal effort constant-rate run. PMID:24288501

  15. Relating confidence to information uncertainty in qualitative reasoning

    SciTech Connect

    Chavez, Gregory M; Zerkle, David K; Key, Brian P; Shevitz, Daniel W

    2010-12-02

    Qualitative reasoning makes use of qualitative assessments provided by subject matter experts to model factors such as security risk. Confidence in a result is important and useful when comparing competing security risk results. Quantifying the confidence in an evidential reasoning result must be consistent and based on the available information. A novel method is proposed to determine a qualitative measure of confidence in a qualitative reasoning result from the available information uncertainty in the result, using membership values in the fuzzy sets of confidence. In this study information uncertainty is quantified through measures of non-specificity and conflict. Fuzzy values for confidence are established from information uncertainty values that lie between the measured minimum and maximum information uncertainty values. Measured values of information uncertainty in each result are used to obtain the confidence. The determined confidence values are used to compare competing scenarios and understand the influences on the desired result.
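
    A small sketch of the mapping from measured information uncertainty to fuzzy confidence memberships; the triangular set shapes below are illustrative assumptions, not those of the paper:

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def confidence_memberships(u, u_min, u_max):
    """Normalize an uncertainty value to [0, 1] between the measured
    minimum and maximum, then score it against fuzzy confidence sets."""
    x = (u - u_min) / (u_max - u_min)
    return {"high":   triangular(x, -0.5, 0.0, 0.5),
            "medium": triangular(x,  0.0, 0.5, 1.0),
            "low":    triangular(x,  0.5, 1.0, 1.5)}

print(confidence_memberships(0.3, 0.0, 1.0))  # mostly medium, partly high
```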

  16. Confidence through consensus: a neural mechanism for uncertainty monitoring.

    PubMed

    Paz, Luciano; Insabato, Andrea; Zylberberg, Ariel; Deco, Gustavo; Sigman, Mariano

    2016-02-24

    Models that integrate sensory evidence to a threshold can explain task accuracy, response times and confidence, yet it is still unclear how confidence is encoded in the brain. Classic models assume that confidence is encoded in some form of balance between the evidence integrated in favor and against the selected option. However, recent experiments that measure the sensory evidence's influence on choice and confidence contradict these classic models. We propose that the decision is taken by many loosely coupled modules, each of which represents a stochastic sample of the sensory evidence integral. Confidence is then encoded in the dispersion between modules. We show that our proposal can account for the well-established relations between confidence, stimulus discriminability, and reaction times, as well as the influence of evidence fluctuations on choice and confidence.
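
    A toy simulation of the proposal, with all parameters hypothetical: each module integrates its own noisy sample of the evidence, the choice follows the mean integral, and confidence falls as the dispersion between modules grows:

```python
import numpy as np

def decide_with_confidence(evidence_mean, n_modules=20, n_steps=100,
                           noise=1.0, seed=0):
    """Each loosely coupled module accumulates an independent noisy
    stream; confidence is a decreasing function of the inter-module
    dispersion (the mapping chosen here is purely illustrative)."""
    rng = np.random.default_rng(seed)
    integrals = rng.normal(evidence_mean, noise,
                           (n_steps, n_modules)).sum(axis=0)
    choice = int(integrals.mean() > 0)          # 1 = the "positive" option
    confidence = 1.0 / (1.0 + integrals.std())
    return choice, confidence

print(decide_with_confidence(0.1))
```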

  17. Meta-analysis to refine map position and reduce confidence intervals for delayed canopy wilting QTLs in soybean

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Slow canopy wilting in soybean has been identified as a potentially beneficial trait for ameliorating drought effects on yield. Previous research identified QTLs for slow wilting from two different bi-parental populations and this information was combined with data from three other populations to id...

  18. Confidence Intervals and "F" Tests for Intraclass Correlation Coefficients Based on Three-Way Mixed Effects Models

    ERIC Educational Resources Information Center

    Zhou, Hong; Muellerleile, Paige; Ingram, Debra; Wong, Seok P.

    2011-01-01

    Intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement and psychometrics when a researcher is interested in the relationship among variables of a common class. The formulas for deriving ICCs, or generalizability coefficients, vary depending on which models are specified. This article gives the equations for…

  19. Factorial Based Response Surface Modeling with Confidence Intervals for Optimizing Thermal Optical Transmission Analysis of Atmospheric Black Carbon

    EPA Science Inventory

    We demonstrate how thermal-optical transmission analysis (TOT) for refractory light-absorbing carbon in atmospheric particulate matter was optimized with empirical response surface modeling. TOT employs pyrolysis to distinguish the mass of black carbon (BC) from organic carbon (...

  20. A Direct Method for Obtaining Approximate Standard Error and Confidence Interval of Maximal Reliability for Composites with Congeneric Measures

    ERIC Educational Resources Information Center

    Raykov, Tenko; Penev, Spiridon

    2006-01-01

    Unlike a substantial part of reliability literature in the past, this article is concerned with weighted combinations of a given set of congeneric measures with uncorrelated errors. The relationship between maximal coefficient alpha and maximal reliability for such composites is initially dealt with, and it is shown that the former is a lower…

  1. Approximate Confidence Intervals for Moment-Based Estimators of the Between-Study Variance in Random Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-01-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…
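
    The best-known member of this class is the DerSimonian-Laird estimator; a minimal sketch with hypothetical effect sizes:

```python
def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimator of the between-study variance,
    truncated at zero as usual."""
    w = [1.0 / v for v in variances]
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (k - 1)) / c)

# Hypothetical log odds ratios with their within-study variances:
print(dersimonian_laird_tau2([0.2, 0.5, -0.1, 0.4],
                             [0.04, 0.09, 0.05, 0.06]))
```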

  2. Robust Coefficients Alpha and Omega and Confidence Intervals with Outlying Observations and Missing Data: Methods and Software

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Yuan, Ke-Hai

    2016-01-01

    Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation…

  3. Curriculum-Based Measurement of Oral Reading: A Preliminary Investigation of Confidence Interval Overlap to Detect Reliable Growth

    ERIC Educational Resources Information Center

    Van Norman, Ethan R.

    2016-01-01

    Curriculum-based measurement of oral reading (CBM-R) progress monitoring data is used to measure student response to instruction. Federal legislation permits educators to use CBM-R progress monitoring data as a basis for determining the presence of specific learning disabilities. However, decision making frameworks originally developed for CBM-R…

  4. Guide for Calculating and Interpreting Effect Sizes and Confidence Intervals in Intellectual and Developmental Disability Research Studies

    ERIC Educational Resources Information Center

    Dunst, Carl J.; Hamby, Deborah W.

    2012-01-01

    This paper includes a nontechnical description of methods for calculating effect sizes in intellectual and developmental disability studies. Different hypothetical studies are used to illustrate how null hypothesis significance testing (NHST) and effect size findings can result in quite different outcomes and therefore conflicting results. Whereas…

  5. The impact of varying autonomic states on the dynamic beat-to-beat QT-RR and QT-TQ interval relationships.

    PubMed

    Fossa, A A

    2008-08-01

    The beat-to-beat dynamicity of the QT-RR interval relationship is difficult to assess with the use of traditional correction factors (QTc) and changes in QTc do not accurately reflect or quantify arrhythmogenic risk. Further, the interpretation of arrhythmogenic risk is influenced by autonomic state. To visualize the QT-RR interval dynamics under varying conditions of autonomic state from impaired repolarization, we have developed a system to sequentially plot the beat-to-beat confluence of ECG data or 'clouds' obtained from conscious dogs and humans. To represent the non-uniformity of the clouds, a bootstrap sampling method that computes the mathematical centre of the uncorrected beat-to-beat QT value (QTbtb) and defines the upper and lower 95% confidence bounds is used. The same method can also be used to examine heterogeneity, hysteresis (both acceleration and deceleration) and restitution (beat-to-beat QT-TQ interval relationship). Impaired repolarization with the combination of E-4031 and L-768,673 (inhibitor of IKs current) increased heterogeneity of restitution at rest by 55-91%; increased hysteresis during heart rate acceleration after isoproterenol challenge by approximately 40-60%; and dramatically diminished the minimum TQ boundary by 72% to only 28 ms. Impaired repolarization alters restitution during normal sinus rhythm and increases hysteresis/heterogeneity during heart rate acceleration following sympathetic stimulation. These findings are supported by similar clinical observations in LQT1 and LQT2 syndromes. Therefore, the assessment of the dynamic QT-RR and QT-TQ interval relationships through quantification of heterogeneity, hysteresis and restitution may allow a more accurate non-invasive evaluation of the conditions leading to cardiac arrhythmia.
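
    The bootstrap step, resampling the beat-to-beat QT values to obtain a centre and 95% bounds, can be sketched as follows; the paper's exact resampling scheme may differ:

```python
import numpy as np

def bootstrap_bounds(qt_ms, n_boot=2000, seed=0):
    """Bootstrap the centre of a beat-to-beat QT 'cloud' and its 95%
    confidence bounds by resampling with replacement."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(qt_ms, dtype=float)
    centres = [rng.choice(samples, samples.size, replace=True).mean()
               for _ in range(n_boot)]
    lo, hi = np.percentile(centres, [2.5, 97.5])
    return samples.mean(), lo, hi

qt = np.random.default_rng(1).normal(300, 8, 500)  # synthetic QT values, ms
print(bootstrap_bounds(qt))
```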

  6. A new automatic blood pressure kit auscultates for accurate reading with a smartphone

    PubMed Central

    Wu, Hongjun; Wang, Bingjian; Zhu, Xinpu; Chu, Guang; Zhang, Zhi

    2016-01-01

    Abstract The widely used oscillometric automated blood pressure (BP) monitor was continuously questioned on its accuracy. A novel BP kit named Accutension which adopted the Korotkoff auscultation method was then devised. Accutension worked with a miniature microphone, a pressure sensor, and a smartphone. The BP values were automatically displayed on the smartphone screen through the installed App. Data recorded in the phone could be played back and reconfirmed after measurement. They could also be uploaded and saved to the iCloud. The accuracy and consistency of this novel electronic auscultatory sphygmomanometer were preliminarily verified here. Thirty-two subjects were included and 82 qualified readings were obtained. The mean differences ± SD for systolic and diastolic BP readings between Accutension and mercury sphygmomanometer were 0.87 ± 2.86 and −0.94 ± 2.93 mm Hg. Agreements between Accutension and mercury sphygmomanometer were highly significant for systolic (ICC = 0.993, 95% confidence interval (CI): 0.989–0.995) and diastolic (ICC = 0.987, 95% CI: 0.979–0.991). In conclusion, Accutension worked accurately based on our pilot study data. The difference was acceptable. ICC and Bland–Altman plot charts showed good agreements with manual measurements. Systolic readings of Accutension were slightly higher than those of manual measurement, while diastolic readings were slightly lower. One possible reason was that Accutension captured the first and the last Korotkoff sound more sensitively than the human ear during manual measurement and avoided missing sounds, so it might be more accurate than a traditional mercury sphygmomanometer. By documenting and analyzing trends in BP values, Accutension helps in the management of hypertension and therefore contributes to mobile health services. PMID:27512876
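
    The agreement analysis mentioned above (bias with 95% limits of agreement, as in a Bland-Altman plot) is straightforward to compute; the paired readings here are hypothetical:

```python
import numpy as np

def bland_altman(device_mmHg, reference_mmHg):
    """Mean difference (bias) and 95% limits of agreement between two
    blood pressure measurement methods."""
    d = np.asarray(device_mmHg, float) - np.asarray(reference_mmHg, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired systolic readings (device vs. mercury):
dev = [122, 118, 135, 141, 128, 117]
ref = [120, 119, 133, 142, 126, 118]
print(bland_altman(dev, ref))  # bias 0.5 mmHg, limits about -2.7 and 3.7
```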

  7. Relaxed Phylogenetics and Dating with Confidence

    PubMed Central

    Ho, Simon Y. W; Phillips, Matthew J

    2006-01-01

    In phylogenetics, the unrooted model of phylogeny and the strict molecular clock model are two extremes of a continuum. Despite their dominance in phylogenetic inference, it is evident that both are biologically unrealistic and that the real evolutionary process lies between these two extremes. Fortunately, intermediate models employing relaxed molecular clocks have been described. These models open the gate to a new field of “relaxed phylogenetics.” Here we introduce a new approach to performing relaxed phylogenetic analysis. We describe how it can be used to estimate phylogenies and divergence times in the face of uncertainty in evolutionary rates and calibration times. Our approach also provides a means for measuring the clocklikeness of datasets and comparing this measure between different genes and phylogenies. We find no significant rate autocorrelation among branches in three large datasets, suggesting that autocorrelated models are not necessarily suitable for these data. In addition, we place these datasets on the continuum of clocklikeness between a strict molecular clock and the alternative unrooted extreme. Finally, we present analyses of 102 bacterial, 106 yeast, 61 plant, 99 metazoan, and 500 primate alignments. From these we conclude that our method is phylogenetically more accurate and precise than the traditional unrooted model while adding the ability to infer a timescale to evolution. PMID:16683862

  8. Does objective measurement of tracheal tube cuff pressures minimise adverse effects and maintain accurate cuff pressures? A systematic review and meta-analysis.

    PubMed

    Hockey, C A; van Zundert, A A J; Paratz, J D

    2016-09-01

    Correct inflation pressures of the tracheal cuff are recommended to ensure adequate ventilation and prevent aspiration and adverse events. However there are conflicting views on which measurement to employ. The aim of this review was to examine whether adjustment of cuff pressure guided by objective measurement, compared with subjective measurement or observation of the pressure value alone, was able to prevent patient-related adverse effects and maintain accurate cuff pressures. A search of PubMed, Web of Science, Embase, CINAHL and ScienceDirect was conducted using keywords 'cuff pressure' and 'measure*' and related synonyms. Included studies were randomised or pseudo-randomised controlled trials investigating mechanically ventilated patients both in the intensive care unit and during surgery. Outcomes included adverse effects and the comparison of pressure measurements. Pooled analyses were performed to calculate risk ratios, effect sizes and 95% confidence intervals. Meta-analysis found preliminary evidence that adjustment of cuff pressure guided by objective measurement as compared with subjective measurement or observation of the pressure value alone, has benefit in preventing adverse effects. These included cough at two hours (odds ratio [OR] 0.42, confidence interval [CI] 0.23 to 0.79, P=0.007), hoarseness at 24 hours (OR 0.49, CI 0.31 to 0.76, P <0.002), sore throat (OR 0.73, CI 0.54 to 0.97, P <0.03), lesions of the trachea and incidences of silent aspiration (P=0.001), as well as maintaining accurate cuff pressures (Hedges' g 1.61, CI 2.69 to 0.53, P=0.003). Subjective measurement to guide adjustment or observation of the pressure value alone may lead to patient-related adverse effects and inaccuracies. It is recommended that an objective form of measurement be used. PMID:27608338
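
    The pooled odds ratios quoted above come from inverse-variance meta-analysis on the log-odds scale. The sketch below shows the generic fixed-effect calculation; it reuses the abstract's three outcome ORs purely as illustrative inputs (in the review itself, each outcome is pooled from its own set of trials):

```python
import numpy as np
from scipy import stats

# Illustrative odds ratios and 95% CIs (borrowed from the abstract's outcomes).
or_est = np.array([0.42, 0.49, 0.73])
ci_lo = np.array([0.23, 0.31, 0.54])
ci_hi = np.array([0.79, 0.76, 0.97])

# Work on the log scale; back out each SE from its CI width.
log_or = np.log(or_est)
se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)

# Fixed-effect inverse-variance pooling.
w = 1 / se**2
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
p = 2 * stats.norm.sf(abs(pooled / pooled_se))

print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, p = {p:.4f}")
```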

  10. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799

  11. Anomalous Evidence, Confidence Change, and Theory Change.

    PubMed

    Hemmerich, Joshua A; Van Voorhis, Kellie; Wiley, Jennifer

    2016-08-01

    A novel experimental paradigm that measured theory change and confidence in participants' theories was used in three experiments to test the effects of anomalous evidence. Experiment 1 varied the amount of anomalous evidence to see if "dose size" made incremental changes in confidence toward theory change. Experiment 2 varied whether anomalous evidence was convergent (of multiple types) or replicating (similar finding repeated). Experiment 3 varied whether participants were provided with an alternative theory that explained the anomalous evidence. All experiments showed that participants' confidence changes were commensurate with the amount of anomalous evidence presented, and that larger decreases in confidence predicted theory changes. Convergent evidence and the presentation of an alternative theory led to larger confidence change. Convergent evidence also caused more theory changes. Even when people do not change theories, factors pertinent to the evidence and alternative theories decrease their confidence in their current theory and move them incrementally closer to theory change.

  12. REFLEX Project: Using Model-Data Fusion to Characterize Confidence in Analyses and Forecasts of Terrestrial C Dynamics

    NASA Astrophysics Data System (ADS)

    Fox, A. M.; Williams, M.; Richardson, A.; Cameron, D.; Gove, J. H.; Ricciuto, D. M.; Tomalleri, E.; Trudinger, C.; van Wijk, M.; Quaife, T.; Li, Z.

    2008-12-01

    The Regional Flux Estimation Experiment, REFLEX, is a model-data fusion inter-comparison project, aimed at comparing the strengths and weaknesses of various model-data fusion techniques for estimating carbon model parameters and predicting carbon fluxes and states. The key question addressed here is: what are the confidence intervals on (a) model parameters calibrated from eddy covariance (EC) and leaf area index (LAI) data and (b) on model analyses and predictions of net ecosystem C exchange (NEE) and carbon stocks? The experiment has an explicit focus on how different algorithms and protocols quantify the confidence intervals on parameter estimates and model forecasts, given the same model and data. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. Both observed daily NEE data from FluxNet sites and synthetic NEE data, generated by a model, were used to estimate the parameters and states of a simple C dynamics model. The results of the analyses supported the hypothesis that parameters linked to fast-response processes that mostly determine net ecosystem exchange of CO2 (NEE) were well constrained and well characterised. Parameters associated with turnover of wood and allocation to roots, only indirectly related to NEE, were poorly characterised. There was only weak agreement on estimations of uncertainty on NEE and its components, photosynthesis and ecosystem respiration, with some algorithms successfully locating the true values of these fluxes from synthetic experiments within relatively narrow 90% confidence intervals. This exercise has demonstrated that a range of techniques exist that can generate useful estimates of parameter probability density functions for C models from eddy covariance time series data. When these parameter PDFs are propagated to generate estimates of annual C fluxes there was a wide variation in size of the 90% confidence intervals. However, some algorithms were able to make

  13. Confidence to cook vegetables and the buying habits of Australian households.

    PubMed

    Winkler, Elisabeth; Turrell, Gavin

    2009-10-01

    Cooking skills are emphasized in nutrition promotion but their distribution among population subgroups and relationship to dietary behavior is researched by few population-based studies. This study examined the relationships between confidence to cook, sociodemographic characteristics, and household vegetable purchasing. This cross-sectional study of 426 randomly selected households in Brisbane, Australia, used a validated questionnaire to assess household vegetable purchasing habits and the confidence to cook of the person who most often prepares food for these households. The mutually adjusted odds ratios (ORs) of lacking confidence to cook were assessed across a range of demographic subgroups using multiple logistic regression models. Similarly, mutually adjusted mean vegetable purchasing scores were calculated using multiple linear regression for different population groups and for respondents with varying confidence levels. Lacking confidence to cook using a variety of techniques was more common among respondents with less education (OR 3.30; 95% confidence interval [CI] 1.01 to 10.75) and was less common among respondents who lived with minors (OR 0.22; 95% CI 0.09 to 0.53) and other adults (OR 0.43; 95% CI 0.24 to 0.78). Lack of confidence to prepare vegetables was associated with being male (OR 2.25; 95% CI 1.24 to 4.08), low education (OR 6.60; 95% CI 2.08 to 20.91), lower household income (OR 2.98; 95% CI 1.02 to 8.72) and living with other adults (OR 0.53; 95% CI 0.29 to 0.98). Households bought a greater variety of vegetables on a regular basis when the main chef was confident to prepare them (difference: 18.60; 95% CI 14.66 to 22.54), older (difference: 8.69; 95% CI 4.92 to 12.47), lived with at least one other adult (difference: 5.47; 95% CI 2.82 to 8.12) or at least one minor (difference: 2.86; 95% CI 0.17 to 5.55). Cooking skills may contribute to socioeconomic dietary differences, and may be a useful strategy for promoting fruit and vegetable
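
    The adjusted odds ratios above are the exponentiated coefficients of a multiple logistic regression. A minimal sketch with statsmodels, using synthetic data and a single hypothetical predictor (a low-education flag) rather than the study's full model:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: outcome 1 = lacks confidence to cook; not the study's data.
rng = np.random.default_rng(0)
low_edu = rng.integers(0, 2, 400)
lacks_conf = rng.binomial(1, np.where(low_edu == 1, 0.25, 0.10))

X = sm.add_constant(low_edu.astype(float))
fit = sm.Logit(lacks_conf, X).fit(disp=0)

or_est = np.exp(fit.params[1])       # odds ratio for the low-education flag
or_ci = np.exp(fit.conf_int()[1])    # 95% CI on the OR scale
print(f"OR = {or_est:.2f}, 95% CI {or_ci[0]:.2f} to {or_ci[1]:.2f}")
```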

  14. [Recent advancement in relationship between DNA degradation and postmortem interval].

    PubMed

    Hao, Lu-gui; Deng, Shi-Xiong; Zhao, Xin-Cai

    2007-04-01

    Determination of the postmortem interval (PMI) is one of the most valuable subjects in forensic practice. However, it is often very difficult to accurately determine the PMI in daily practice. Forensic DNA technology has recently been used to estimate the PMI, and it has certain advantages over traditional methods. This article reviews this technology with respect to its invention, development, advantages, disadvantages, and potential future applications, with emphasis on the correlation between DNA degradation and PMI. PMID:17619465

  15. Confidence measurement in the light of signal detection theory.

    PubMed

    Massoni, Sébastien; Gajdos, Thibault; Vergnaud, Jean-Christophe

    2014-01-01

    We compare three alternative methods for eliciting retrospective confidence in the context of a simple perceptual task: the Simple Confidence Rating (a direct report on a numerical scale), the Quadratic Scoring Rule (a post-wagering procedure), and the Matching Probability (MP; a generalization of the no-loss gambling method). We systematically compare the results obtained with these three rules to the theoretical confidence levels that can be inferred from performance in the perceptual task using Signal Detection Theory (SDT). We find that the MP provides better results in that respect. We conclude that MP is particularly well suited for studies of confidence that use SDT as a theoretical framework.
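
    The theoretical confidence levels inferred from task performance rest on standard signal detection quantities. A minimal sketch of the sensitivity index d' computed from hit and false-alarm rates (the rates below are hypothetical):

```python
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index from signal detection theory: d' = z(H) - z(F)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical rates from a yes/no perceptual task:
print(f"d' = {d_prime(0.80, 0.30):.2f}")   # ~0.84 - (-0.52) = 1.37
```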

  16. Cortical alpha activity predicts the confidence in an impending action

    PubMed Central

    Kubanek, Jan; Hill, N. Jeremy; Snyder, Lawrence H.; Schalk, Gerwin

    2015-01-01

    When we make a decision, we experience a degree of confidence that our choice may lead to a desirable outcome. Recent studies in animals have probed the subjective aspects of the choice confidence using confidence-reporting tasks. These studies showed that estimates of the choice confidence substantially modulate neural activity in multiple regions of the brain. Building on these findings, we investigated the neural representation of the confidence in a choice in humans who explicitly reported the confidence in their choice. Subjects performed a perceptual decision task in which they decided between choosing a button press or a saccade while we recorded EEG activity. Following each choice, subjects indicated whether they were sure or unsure about the choice. We found that alpha activity strongly encodes a subject's confidence level in a forthcoming button press choice. The neural effect of the subjects' confidence was independent of the reaction time and independent of the sensory input modeled as a decision variable. Furthermore, the effect is not due to a general cognitive state, such as reward expectation, because the effect was specifically observed during button press choices and not during saccade choices. The neural effect of the confidence in the ensuing button press choice was strong enough that we could predict, from independent single trial neural signals, whether a subject was going to be sure or unsure of an ensuing button press choice. In sum, alpha activity in human cortex provides a window into the commitment to make a hand movement. PMID:26283892

  17. Confidence measurement in the light of signal detection theory

    PubMed Central

    Massoni, Sébastien; Gajdos, Thibault; Vergnaud, Jean-Christophe

    2014-01-01

    We compare three alternative methods for eliciting retrospective confidence in the context of a simple perceptual task: the Simple Confidence Rating (a direct report on a numerical scale), the Quadratic Scoring Rule (a post-wagering procedure), and the Matching Probability (MP; a generalization of the no-loss gambling method). We systematically compare the results obtained with these three rules to the theoretical confidence levels that can be inferred from performance in the perceptual task using Signal Detection Theory (SDT). We find that the MP provides better results in that respect. We conclude that MP is particularly well suited for studies of confidence that use SDT as a theoretical framework. PMID:25566135

  18. Relating confidence to measured information uncertainty in qualitative reasoning

    SciTech Connect

    Chavez, Gregory M; Zerkle, David K; Key, Brian P; Shevitz, Daniel W

    2010-10-07

    Qualitative reasoning makes use of qualitative assessments provided by subject matter experts to model factors such as security risk. Confidence in a result is important and useful when comparing competing results. Quantifying the confidence in an evidential reasoning result must be consistent and based on the available information. A novel method is proposed to relate confidence to the available information uncertainty in the result using fuzzy sets. Information uncertainty can be quantified through measures of non-specificity and conflict. Fuzzy values for confidence are established from information uncertainty values that lie between the measured minimum and maximum information uncertainty values.

  19. The antecedents and belief-polarized effects of thought confidence.

    PubMed

    Chou, Hsuan-Yi; Lien, Nai-Hwa; Liang, Kuan-Yu

    2011-01-01

    This article investigates 2 possible antecedents of thought confidence and explores the effects of confidence induced before or during ad exposure. The results of the experiments indicate that both consumers' dispositional optimism and spokesperson attractiveness have significant effects on consumers' confidence in thoughts that are generated after viewing the advertisement. Higher levels of thought confidence will influence the quality of the thoughts that people generate, lead to either positively or negatively polarized message processing, and therefore induce better or worse advertising effectiveness, depending on the valence of thoughts. The authors posit the belief-polarization hypothesis to explain these findings. PMID:21902013

  20. Prolonged corrected QT interval is predictive of future stroke events even in subjects without ECG-diagnosed left ventricular hypertrophy.

    PubMed

    Ishikawa, Joji; Ishikawa, Shizukiyo; Kario, Kazuomi

    2015-03-01

    We attempted to evaluate whether subjects who exhibit prolonged corrected QT (QTc) interval (≥440 ms in men and ≥460 ms in women) on ECG, with and without ECG-diagnosed left ventricular hypertrophy (ECG-LVH; Cornell product, ≥244 mV×ms), are at increased risk of stroke. Among the 10 643 subjects, there were a total of 375 stroke events during the follow-up period (128.7±28.1 months; 114 142 person-years). The subjects with prolonged QTc interval (hazard ratio, 2.13; 95% confidence interval, 1.22-3.73) had an increased risk of stroke even after adjustment for ECG-LVH (hazard ratio, 1.71; 95% confidence interval, 1.22-2.40). When we stratified the subjects into those with neither a prolonged QTc interval nor ECG-LVH, those with a prolonged QTc interval but without ECG-LVH, and those with ECG-LVH, multivariate-adjusted Cox proportional hazards analysis demonstrated that the subjects with prolonged QTc intervals but not ECG-LVH (1.2% of all subjects; incidence, 10.7%; hazard ratio, 2.70, 95% confidence interval, 1.48-4.94) and those with ECG-LVH (incidence, 7.9%; hazard ratio, 1.83; 95% confidence interval, 1.31-2.57) had an increased risk of stroke events, compared with those with neither a prolonged QTc interval nor ECG-LVH. In conclusion, prolonged QTc interval was associated with stroke risk even among patients without ECG-LVH in the general population.
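
    The abstract does not state which heart-rate correction was used; Bazett's formula is the common choice and is assumed in this sketch, together with the prolongation thresholds quoted above:

```python
import math

def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Heart-rate corrected QT (Bazett): QTc = QT / sqrt(RR), RR in seconds."""
    return qt_ms / math.sqrt(rr_s)

def prolonged(qtc_ms: float, male: bool) -> bool:
    # Thresholds from the abstract: >=440 ms in men, >=460 ms in women.
    return qtc_ms >= (440 if male else 460)

qtc = qtc_bazett(qt_ms=410, rr_s=0.80)   # e.g. 75 bpm -> RR = 0.80 s
print(f"QTc = {qtc:.0f} ms, prolonged (male): {prolonged(qtc, male=True)}")
```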

  1. An Event Restriction Interval Theory of Tense

    ERIC Educational Resources Information Center

    Beamer, Brandon Robert

    2012-01-01

    This dissertation presents a novel theory of tense and tense-like constructions. It is named after a key theoretical component of the theory, the event restriction interval. In Event Restriction Interval (ERI) Theory, sentences are semantically evaluated relative to an index which contains two key intervals, the evaluation interval and the event…

  2. Using the Delta Method for Approximate Interval Estimation of Parameter Functions in SEM

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2004-01-01

    In applications of structural equation modeling, it is often desirable to obtain measures of uncertainty for special functions of model parameters. This article provides a didactic discussion of how a method widely used in applied statistics can be employed for approximate standard error and confidence interval evaluation of such functions. The…
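
    The delta method approximates the variance of a function of parameter estimates as Var(g(θ)) ≈ ∇g(θ)ᵀ Σ ∇g(θ), where Σ is the covariance matrix of the estimates. A minimal sketch for an indirect effect g = a·b, with hypothetical path estimates and covariances:

```python
import numpy as np

a, b = 0.40, 0.55                  # hypothetical SEM path estimates
Sigma = np.array([[0.010, 0.002],  # hypothetical covariance matrix of (a, b)
                  [0.002, 0.014]])

grad = np.array([b, a])            # dg/da = b, dg/db = a for g = a*b
se = np.sqrt(grad @ Sigma @ grad)  # delta-method standard error
est = a * b
ci = est + np.array([-1.96, 1.96]) * se
print(f"a*b = {est:.3f}, SE = {se:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}")
```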

  3. Meta-analytic interval estimation for standardized and unstandardized mean differences.

    PubMed

    Bonett, Douglas G

    2009-09-01

    The fixed-effects (FE) meta-analytic confidence intervals for unstandardized and standardized mean differences are based on an unrealistic assumption of effect-size homogeneity and perform poorly when this assumption is violated. The random-effects (RE) meta-analytic confidence intervals are based on an unrealistic assumption that the selected studies represent a random sample from a large superpopulation of studies. The RE approach cannot be justified in typical meta-analysis applications in which studies are nonrandomly selected. New FE meta-analytic confidence intervals for unstandardized and standardized mean differences are proposed that are easy to compute and perform properly under effect-size heterogeneity and nonrandomly selected studies. The proposed meta-analytic confidence intervals may be used to combine unstandardized or standardized mean differences from studies having either independent samples or dependent samples and may also be used to integrate results from previous studies into a new study. An alternative approach to assessing effect-size heterogeneity is presented.
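
    For orientation, one standard large-sample confidence interval for a standardized mean difference from two independent samples is sketched below; Bonett's proposed intervals refine this under effect-size heterogeneity, so this is background, not his method:

```python
import math

def smd_ci(d: float, n1: int, n2: int, z_crit: float = 1.96):
    """Large-sample CI for a standardized mean difference, independent samples."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z_crit * se, d + z_crit * se

lo, hi = smd_ci(d=0.50, n1=40, n2=45)   # hypothetical effect and sample sizes
print(f"d = 0.50, 95% CI {lo:.2f} to {hi:.2f}")
```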

  4. Interval Estimation of Standardized Mean Differences in Paired-Samples Designs

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2015-01-01

    Paired-samples designs are used frequently in educational and behavioral research. In applications where the response variable is quantitative, researchers are encouraged to supplement the results of a paired-samples t-test with a confidence interval (CI) for a mean difference or a standardized mean difference. Six CIs for standardized mean…

  5. Confidence versus Performance as an Indicator of the Presence of Alternative Conceptions and Inadequate Problem-Solving Skills in Mechanics

    NASA Astrophysics Data System (ADS)

    Potgieter, Marietjie; Malatje, Esther; Gaigher, Estelle; Venter, Elsie

    2010-07-01

    This study investigated the use of performance-confidence relationships to signal the presence of alternative conceptions and inadequate problem-solving skills in mechanics. A group of 33 students entering physics at a South African university participated in the project. The test instrument consisted of 20 items derived from existing standardised tests from literature, each of which was followed by a self-reported measure of confidence of students in the correctness of their answers. Data collected for this study included students' responses to multiple-choice questions and open-ended explanations for their chosen answers. Fixed response physics and confidence data were logarithmically transformed according to the Rasch model to linear measures of performance and confidence. The free response explanations were carefully analysed for accuracy of conceptual understanding. Comparison of these results with raw score data and transformed measures of performance and confidence allowed a re-evaluation of the model developed by Hasan, Bagayoko, and Kelley in 1999 for the detection of alternative conceptions in mechanics. Application of this model to raw score data leads to inaccurate conclusions. However, application of the Hasan hypothesis to transformed measures of performance and confidence resulted in the accurate identification of items plagued by alternative conceptions. This approach also holds promise for the differentiation between over-confidence due to alternative conceptions or due to inadequate problem-solving skills. It could become a valuable tool for instructional design in mechanics.

  6. Self-reported confidence in prescribing skills correlates poorly with assessed competence in fourth-year medical students.

    PubMed

    Brinkman, David J; Tichelaar, Jelle; van Agtmael, Michiel A; de Vries, Theo P G M; Richir, Milan C

    2015-07-01

    The objective of this study was to investigate the relationship between students' self-reported confidence and their objectively assessed competence in prescribing. We assessed the competence in several prescribing skills of 403 fourth-year medical students at the VU University Medical Center, the Netherlands, in a formative simulated examination on a 10-point scale (1 = very low; 10 = very high). Afterwards, the students were asked to rate their confidence in performing each of the prescribing skills on a 5-point Likert scale (1 = very unsure; 5 = very confident). Their assessments were then compared with their self-confidence ratings. Students' overall prescribing performance was adequate (7.0 ± 0.8), but they lacked confidence in 2 essential prescribing skills. Overall, there was a weak positive correlation (r = 0.2, P < .01, 95% CI 0.1-0.3) between reported confidence and actual competence. Therefore, this study suggests that self-reported confidence is not an accurate measure of prescribing competence, and that students lack insight into their own strengths and weaknesses in prescribing. Future studies should focus on developing validated and reliable instruments so that students can assess their prescribing skills.
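
    The reported interval (r = 0.2, 95% CI 0.1-0.3) is consistent with the standard Fisher z-transformation interval at n = 403, as this sketch verifies:

```python
import math

def pearson_ci(r: float, n: int, z_crit: float = 1.96):
    """95% CI for Pearson's r via the Fisher z-transformation."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = pearson_ci(r=0.20, n=403)   # n = 403 students, as in the study
print(f"r = 0.20, 95% CI {lo:.2f} to {hi:.2f}")   # ~0.10 to 0.29
```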

  7. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    NASA Astrophysics Data System (ADS)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate length measurements of fibers of different lengths are performed using the time-of-flight technique. A setup is proposed to accurately measure lengths from 1 to 40 km at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to relatively correct the fiber refractive index to allow accurate fiber length measurement.
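
    The core time-of-flight relation is L = c·Δt / n_g for a one-way transit, where n_g is the group refractive index. The sketch below assumes one-way transit and an illustrative n_g of 1.468 at 1,550 nm; the paper's exact configuration and index values may differ:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
N_GROUP = 1.468     # assumed group index at 1,550 nm (illustrative value)

def fiber_length_m(delay_s: float, n_group: float = N_GROUP) -> float:
    """One-way time-of-flight: L = c * delay / n_group."""
    return C * delay_s / n_group

# A ~20 km fiber delays light by roughly 98 microseconds:
print(f"{fiber_length_m(97.93e-6) / 1000:.2f} km")
```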

  8. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  9. Motor onset and diagnosis in Huntington disease using the diagnostic confidence level.

    PubMed

    Liu, Dawei; Long, Jeffrey D; Zhang, Ying; Raymond, Lynn A; Marder, Karen; Rosser, Anne; McCusker, Elizabeth A; Mills, James A; Paulsen, Jane S

    2015-12-01

    Huntington disease (HD) is a neurodegenerative disorder characterized by motor dysfunction, cognitive deterioration, and psychiatric symptoms, with progressive motor impairments being a prominent feature. The primary objectives of this study are to delineate the disease course of motor function in HD, to provide estimates of the onset of motor impairments and motor diagnosis, and to examine the effects of genetic and demographic variables on the progression of motor impairments. Data from an international multisite, longitudinal observational study of 905 prodromal HD participants with cytosine-adenine-guanine (CAG) repeats of at least 36 and with at least two visits during the follow-up period from 2001 to 2012 were examined for changes in the diagnostic confidence level from the Unified Huntington's Disease Rating Scale. HD progression from unimpaired to impaired motor function, as well as the progression from motor impairment to diagnosis, was associated with the linear effect of age and CAG repeat length. Specifically, for every 1-year increase in age, the risk of transition in diagnostic confidence level increased by 11% (95% CI 7-15%), and for each one-repeat increase in CAG length, the risk of transition in diagnostic confidence level increased by 47% (95% CI 27-69%). Findings show that CAG repeat length and age increased the likelihood of the first onset of motor impairment as well as the age at diagnosis. Results suggest that more accurate estimates of HD onset age can be obtained by incorporating the current status of diagnostic confidence level into predictive models.

  10. Confidence sharing: an economic strategy for efficient information flows in animal groups.

    PubMed

    Korman, Amos; Greenwald, Efrat; Feinerman, Ofer

    2014-10-01

    Social animals may share information to obtain a more complete and accurate picture of their surroundings. However, physical constraints on communication limit the flow of information between interacting individuals in a way that can cause an accumulation of errors and deteriorated collective behaviors. Here, we theoretically study a general model of information sharing within animal groups. We take an algorithmic perspective to identify efficient communication schemes that are, nevertheless, economic in terms of communication, memory and individual internal computation. We present a simple and natural algorithm in which each agent compresses all information it has gathered into a single parameter that represents its confidence in its behavior. Confidence is communicated between agents by means of active signaling. We motivate this model by novel and existing empirical evidences for confidence sharing in animal groups. We rigorously show that this algorithm competes extremely well with the best possible algorithm that operates without any computational constraints. We also show that this algorithm is minimal, in the sense that further reduction in communication may significantly reduce performances. Our proofs rely on the Cramér-Rao bound and on our definition of a Fisher Channel Capacity. We use these concepts to quantify information flows within the group which are then used to obtain lower bounds on collective performance. The abstract nature of our model makes it rigorously solvable and its conclusions highly general. Indeed, our results suggest confidence sharing as a central notion in the context of animal communication. PMID:25275649

  11. Confidence Sharing: An Economic Strategy for Efficient Information Flows in Animal Groups

    PubMed Central

    Korman, Amos; Greenwald, Efrat; Feinerman, Ofer

    2014-01-01

    Social animals may share information to obtain a more complete and accurate picture of their surroundings. However, physical constraints on communication limit the flow of information between interacting individuals in a way that can cause an accumulation of errors and deteriorated collective behaviors. Here, we theoretically study a general model of information sharing within animal groups. We take an algorithmic perspective to identify efficient communication schemes that are, nevertheless, economic in terms of communication, memory and individual internal computation. We present a simple and natural algorithm in which each agent compresses all information it has gathered into a single parameter that represents its confidence in its behavior. Confidence is communicated between agents by means of active signaling. We motivate this model by novel and existing empirical evidences for confidence sharing in animal groups. We rigorously show that this algorithm competes extremely well with the best possible algorithm that operates without any computational constraints. We also show that this algorithm is minimal, in the sense that further reduction in communication may significantly reduce performances. Our proofs rely on the Cramér-Rao bound and on our definition of a Fisher Channel Capacity. We use these concepts to quantify information flows within the group which are then used to obtain lower bounds on collective performance. The abstract nature of our model makes it rigorously solvable and its conclusions highly general. Indeed, our results suggest confidence sharing as a central notion in the context of animal communication. PMID:25275649

  12. Chaotic dynamics from interspike intervals.

    PubMed

    Pavlov, A N; Sosnovtseva, O V; Mosekilde, E; Anishchenko, V S

    2001-03-01

    Considering two different mathematical models describing chaotic spiking phenomena, namely, an integrate-and-fire and a threshold-crossing model, we discuss the problem of extracting dynamics from interspike intervals (ISIs) and show that the possibilities of computing the largest Lyapunov exponent (LE) from point processes differ between the two models. We also consider the problem of estimating the second LE and the possibility to diagnose hyperchaotic behavior by processing spike trains. Since the second exponent is quite sensitive to the structure of the ISI series, we investigate the problem of its computation. PMID:11308739
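
    As a concrete illustration of the integrate-and-fire model named above, the sketch below accumulates an input signal until a threshold is crossed, emits a spike, resets, and returns the interspike intervals. The drive signal is an arbitrary illustrative waveform, not one of the paper's chaotic systems:

```python
import numpy as np

def integrate_and_fire_isis(signal, dt, threshold):
    """Integrate the input; emit a spike and reset when the threshold is
    crossed. Returns the interspike intervals (ISIs) of the point process."""
    acc, spike_times = 0.0, []
    for i, s in enumerate(signal):
        acc += s * dt
        if acc >= threshold:
            spike_times.append(i * dt)
            acc = 0.0
    return np.diff(spike_times)

# Illustrative irregular drive (not one of the paper's models):
t = np.arange(0, 200, 0.01)
drive = 1.0 + 0.5 * np.sin(t) * np.sin(0.7071 * t)
isis = integrate_and_fire_isis(drive, dt=0.01, threshold=5.0)
print(len(isis), isis[:5])
```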

  13. Chaotic dynamics from interspike intervals

    NASA Astrophysics Data System (ADS)

    Pavlov, Alexey N.; Sosnovtseva, Olga V.; Mosekilde, Erik; Anishchenko, Vadim S.

    2001-03-01

    Considering two different mathematical models describing chaotic spiking phenomena, namely, an integrate-and-fire and a threshold-crossing model, we discuss the problem of extracting dynamics from interspike intervals (ISIs) and show that the possibilities of computing the largest Lyapunov exponent (LE) from point processes differ between the two models. We also consider the problem of estimating the second LE and the possibility to diagnose hyperchaotic behavior by processing spike trains. Since the second exponent is quite sensitive to the structure of the ISI series, we investigate the problem of its computation.

  14. Information and Communication: Tools for Increasing Confidence in the Schools.

    ERIC Educational Resources Information Center

    Achilles, C. M.; Lintz, M. N.

    Beginning with a review of signs and signals of public attitudes toward American education over the last 15 years, this paper analyzes some concerns regarding public confidence in public schools. Following a brief introduction, issues involved in the definition and behavioral attributes of confidence are mentioned. A synopsis of three approaches…

  15. Confidence and memory: assessing positive and negative correlations.

    PubMed

    Roediger, Henry L; DeSoto, K Andrew

    2014-01-01

    The capacity to learn and remember surely evolved to help animals solve problems in their quest to reproduce and survive. In humans we assume that metacognitive processes also evolved, so that we know when to trust what we remember (i.e., when we have high confidence in our memories) and when not to (when we have low confidence). However, this latter feature has been questioned by researchers, with some finding a high correlation between confidence and accuracy in reports from memory and others finding little to no correlation. In two experiments we report a recognition memory paradigm that, using the same materials (categorised lists), permits the study of positive correlations, zero correlations, and negative correlations between confidence and accuracy within the same procedure. We had subjects study words from semantic categories with the five items most frequently produced in norms omitted from the list; later, subjects were given an old/new recognition test and made confidence ratings on their judgements. Although the correlation between confidence and accuracy for studied items was generally positive, the correlation for the five omitted items was negative in some methods of analysis. We pinpoint the similarity between lures and targets as creating inversions between confidence and accuracy in memory. We argue that, while confidence is generally a useful indicant of accuracy in reports from memory, in certain environmental circumstances even adaptive processes can foster illusions of memory. Thus, understanding memory illusions is similar to understanding perceptual illusions: processes that are usually adaptive can go awry under certain circumstances.

  16. The Metamemory Approach to Confidence: A Test Using Semantic Memory

    ERIC Educational Resources Information Center

    Brewer, William F.; Sampaio, Cristina

    2012-01-01

    The metamemory approach to memory confidence was extended and elaborated to deal with semantic memory tasks. The metamemory approach assumes that memory confidence is based on the products and processes of a completed memory task, as well as metamemory beliefs that individuals have about how their memory products and processes relate to memory…

  17. True and False Memories, Parietal Cortex, and Confidence Judgments

    ERIC Educational Resources Information Center

    Urgolites, Zhisen J.; Smith, Christine N.; Squire, Larry R.

    2015-01-01

    Recent studies have asked whether activity in the medial temporal lobe (MTL) and the neocortex can distinguish true memory from false memory. A frequent complication has been that the confidence associated with correct memory judgments (true memory) is typically higher than the confidence associated with incorrect memory judgments (false memory).…

  18. Confidence Sharing in the Vocational Counselling Interview: Emergence and Repercussions

    ERIC Educational Resources Information Center

    Olry-Louis, Isabelle; Bremond, Capucine; Pouliot, Manon

    2012-01-01

    Confidence sharing is an asymmetrical dialogic episode to which both parties consent, in which one reveals something personal to the other who participates in the emergence and unfolding of the confidence. We describe how this is achieved at a discursive level within vocational counselling interviews. Based on a corpus of 64 interviews, we analyse…

  19. Utilitarian Model of Measuring Confidence within Knowledge-Based Societies

    ERIC Educational Resources Information Center

    Jack, Brady Michael; Hung, Kuan-Ming; Liu, Chia Ju; Chiu, Houn Lin

    2009-01-01

    This paper introduces a utilitarian confidence testing statistic called Risk Inclination Model (RIM) which indexes all possible confidence wagering combinations within the confines of a defined symmetrically point-balanced test environment. This paper presents the theoretical underpinnings, a formal derivation, a hypothetical application, and…

  20. Confidence Scoring of Speaking Performance: How Does Fuzziness become Exact?

    ERIC Educational Resources Information Center

    Jin, Tan; Mak, Barley; Zhou, Pei

    2012-01-01

    The fuzziness of assessing second language speaking performance raises two difficulties in scoring speaking performance: "indistinction between adjacent levels" and "overlap between scales". To address these two problems, this article proposes a new approach, "confidence scoring", to deal with such fuzziness, leading to "confidence" scores between…

  1. Confidence vs. Authority: Visions of the Writer in Rhetorical Theory.

    ERIC Educational Resources Information Center

    Perdue, Virginia

    By building up the confidence of student writers, writing teachers hope to reduce the hostility and anxiety so often found in authoritarian introductory college composition classes. Process oriented writing theory implicitly defines confidence as a wholly personal quality resulting from students' discovery that they do have "something to say" to…

  2. A Rasch Analysis of the Teachers Music Confidence Scale

    ERIC Educational Resources Information Center

    Yim, Hoi Yin Bonnie; Abd-El-Fattah, Sabry; Lee, Lai Wan Maria

    2007-01-01

    This article presents a new measure of teachers' confidence to conduct musical activities with young children; Teachers Music Confidence Scale (TMCS). The TMCS was developed using a sample of 284 in-service and pre-service early childhood teachers in Hong Kong Special Administrative Region (HKSAR). The TMCS consisted of 10 musical activities.…

  3. Music Education Preservice Teachers' Confidence in Resolving Behavior Problems

    ERIC Educational Resources Information Center

    Hedden, Debra G.

    2015-01-01

    The purpose of this study was to investigate whether there would be a change in preservice teachers' (a) confidence concerning the resolution of behavior problems, (b) tactics for resolving them, (c) anticipation of problems, (d) fears about management issues, and (e) confidence in methodology and pedagogy over the time period of a one-semester…

  4. The Self-Consistency Model of Subjective Confidence

    ERIC Educational Resources Information Center

    Koriat, Asher

    2012-01-01

    How do people monitor the correctness of their answers? A self-consistency model is proposed for the process underlying confidence judgments and their accuracy. In answering a 2-alternative question, participants are assumed to retrieve a sample of representations of the question and base their confidence on the consistency with which the chosen…

  5. Prospective Teachers' Problem Solving Skills and Self-Confidence Levels

    ERIC Educational Resources Information Center

    Gursen Otacioglu, Sena

    2008-01-01

    The basic objective of the research is to determine whether the education that prospective teachers in different fields receive is related to their levels of problem solving skills and self-confidence. Within the mentioned framework, the prospective teachers' problem solving and self-confidence levels have been examined under several variables.…

  6. A (revised) confidence index for the forecasting of meteor showers

    NASA Astrophysics Data System (ADS)

    Vaubaillon, J.

    2016-01-01

    A confidence index for the forecasting of meteor showers is presented. The goal is to provide users with information about how each forecast is produced, so that an appropriate degree of confidence can be attached to it. This paper presents the meaning of the index coding system.

  7. RIASEC Interest and Confidence Cutoff Scores: Implications for Career Counseling

    ERIC Educational Resources Information Center

    Bonitz, Verena S.; Armstrong, Patrick Ian; Larson, Lisa M.

    2010-01-01

    One strategy commonly used to simplify the joint interpretation of interest and confidence inventories is the use of cutoff scores to classify individuals dichotomously as having high or low levels of confidence and interest, respectively. The present study examined the adequacy of cutoff scores currently recommended for the joint interpretation…

  8. Modeling Confidence and Response Time in Recognition Memory

    ERIC Educational Resources Information Center

    Ratcliff, Roger; Starns, Jeffrey J.

    2009-01-01

    A new model for confidence judgments in recognition memory is presented. In the model, the match between a single test item and memory produces a distribution of evidence, with better matches corresponding to distributions with higher means. On this match dimension, confidence criteria are placed, and the areas between the criteria under the…

  9. Confidence and Gender Differences on the Mental Rotations Test

    ERIC Educational Resources Information Center

    Cooke-Simpson, Amanda; Voyer, Daniel

    2007-01-01

    The present study examined the relation between self-reported confidence ratings, performance on the Mental Rotations Test (MRT), and guessing behavior on the MRT. Eighty undergraduate students (40 males, 40 females) completed the MRT while rating their confidence in the accuracy of their answers for each item. As expected, gender differences in…

  10. Understanding public confidence in government to prevent terrorist attacks.

    SciTech Connect

    Baldwin, T. E.; Ramaprasad, A.; Samsa, M. E.; Decision and Information Sciences; Univ. of Illinois at Chicago

    2008-04-02

    A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode its confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the principal metrics used to assess the consequences of terrorist attacks. Hence, a detailed understanding of the variations in public confidence among individuals, terrorist event types, and as a function of time is critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data was collected from three groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery explosion attack, and cyber intrusions on financial institutions, resulting in identity theft. Our findings are: (a) although the aggregate confidence level is low, there are optimists and pessimists; (b) the subjects are discriminating in interpreting the nature of a terrorist attack, the time horizon, and its impact; (c) confidence recovery after a terrorist event has an incubation period; and (d) the patterns of recovery of confidence of the optimists and the pessimists are different. These findings can affect the strategy and policies to manage public confidence after a terrorist event.

  11. Confidence set inference with a prior quadratic bound. [in geophysics]

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1989-01-01

    Neyman's (1937) theory of confidence sets is developed as a replacement for Bayesian inference (BI) and stochastic inversion (SI) when the prior information is a hard quadratic bound. It is recommended that BI and SI be replaced by confidence set inference (CSI) only in certain circumstances. The geomagnetic problem is used to illustrate the general theory of CSI.

  12. High resolution time interval counter

    DOEpatents

    Condreva, Kenneth J.

    1994-01-01

    A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured.
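
    The numbers in the abstract fit together as follows: an 8 MHz clock has a 125 ns period, and a stretch factor of about 64 gives an interpolation step of 125/64 ≈ 2 ns. The sketch below shows the implied arithmetic; the sign convention for combining the start and stop stretcher counts is an assumption for illustration, not taken from the patent:

```python
F_CLK = 8e6        # main clock frequency, from the patent abstract
T_CLK = 1 / F_CLK  # 125 ns period
STRETCH = 64       # pulse-stretcher gain stated in the abstract

def interval_ns(n_main: int, n_start: int, n_stop: int) -> float:
    """Coarse clock count plus the interpolated fractions measured by the
    start/stop stretchers (each count is in units of T_CLK / STRETCH)."""
    fine = (n_start - n_stop) * T_CLK / STRETCH
    return (n_main * T_CLK + fine) * 1e9

# e.g. 10 coarse ticks plus a 24-vs-9 stretcher-count difference:
print(f"{interval_ns(10, 24, 9):.1f} ns")   # 1250 + 15*1.95 ~ 1279.3 ns
```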

  13. High resolution time interval counter

    DOEpatents

    Condreva, K.J.

    1994-07-26

    A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured. 3 figs.

  14. Postexercise Hypotension After Continuous, Aerobic Interval, and Sprint Interval Exercise.

    PubMed

    Angadi, Siddhartha S; Bhammar, Dharini M; Gaesser, Glenn A

    2015-10-01

    We examined the effects of 3 exercise bouts, differing markedly in intensity, on postexercise hypotension (PEH). Eleven young adults (age: 24.6 ± 3.7 years) completed 4 randomly assigned experimental conditions: (a) control, (b) 30-minute steady-state exercise (SSE) at 75-80% maximum heart rate (HRmax), (c) aerobic interval exercise (AIE): four 4-minute bouts at 90-95% HRmax, separated by 3 minutes of active recovery, and (d) sprint interval exercise (SIE): six 30-second Wingate sprints, separated by 4 minutes of active recovery. Exercise was performed on a cycle ergometer. Blood pressure (BP) was measured before exercise and every 15-minute postexercise for 3 hours. Linear mixed models were used to compare BP between trials. During the 3-hour postexercise, systolic BP (SBP) was lower (p < 0.001) after AIE (118 ± 10 mm Hg), SSE (121 ± 10 mm Hg), and SIE (121 ± 11 mm Hg) compared with control (124 ± 8 mm Hg). Diastolic BP (DBP) was also lower (p < 0.001) after AIE (66 ± 7 mm Hg), SSE (69 ± 6 mm Hg), and SIE (68 ± 8 mm Hg) compared with control (71 ± 7 mm Hg). Only AIE resulted in sustained (>2 hours) PEH, with SBP (120 ± 9 mm Hg) and DBP (68 ± 7 mm Hg) during the third-hour postexercise being lower (p ≤ 0.05) than control (124 ± 8 and 70 ± 7 mm Hg). Although all exercise bouts produced similar reductions in BP at 1-hour postexercise, the duration of PEH was greatest after AIE.

  15. Confidence limits, error bars and method comparison in molecular modeling. Part 2: comparing methods.

    PubMed

    Nicholls, A

    2016-02-01

    The calculation of error bars for quantities of interest in computational chemistry comes in two forms: (1) determining the confidence of a prediction, for instance of the property of a molecule; (2) assessing uncertainty in measuring the difference between properties, for instance between performance metrics of two or more computational approaches. While the former paper in this series concentrated on the first of these, this second paper focuses on comparison, i.e., how to calculate differences between methods in an accurate and statistically valid manner. Described within are classical statistical approaches for comparing widely used metrics such as enrichment, area under the curve and Pearson's product-moment coefficient, as well as generic measures. These are considered over single and multiple sets of data and for two or more methods that evince either independent or correlated behavior. General issues concerning significance testing and confidence limits from a Bayesian perspective are discussed, along with size-of-effect aspects of evaluation.
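
    One generic way to attach a confidence interval to the difference between two methods' metrics, in the spirit of the comparisons discussed here, is a paired bootstrap; the sketch below does this for an AUC difference on synthetic scores (a common approach, not necessarily the paper's classical procedure):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical: two methods score the same 200 actives/decoys.
y = rng.integers(0, 2, 200)
s1 = y + rng.normal(0, 1.0, 200)   # method 1 scores
s2 = y + rng.normal(0, 1.3, 200)   # method 2 scores (noisier)

# A paired bootstrap over molecules preserves the correlation between methods.
deltas = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))
    if y[idx].min() == y[idx].max():   # need both classes in the resample
        continue
    deltas.append(roc_auc_score(y[idx], s1[idx]) - roc_auc_score(y[idx], s2[idx]))

lo, hi = np.percentile(deltas, [2.5, 97.5])
print(f"AUC difference 95% CI: {lo:.3f} to {hi:.3f}")
```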

  16. Is fear perception special? Evidence at the level of decision-making and subjective confidence

    PubMed Central

    Mobbs, Dean; Lau, Hakwan

    2016-01-01

    Fearful faces are believed to be prioritized in visual perception. However, it is unclear whether the processing of low-level facial features alone can facilitate such prioritization or whether higher-level mechanisms also contribute. We examined potential biases for fearful face perception at the levels of perceptual decision-making and perceptual confidence. We controlled for lower-level visual processing capacity by titrating luminance contrasts of backward masks, and the emotional intensity of fearful, angry and happy faces. Under these conditions, participants showed liberal biases in perceiving a fearful face, in both detection and discrimination tasks. This effect was stronger among individuals with reduced density in dorsolateral prefrontal cortex, a region linked to perceptual decision-making. Moreover, participants reported higher confidence when they accurately perceived a fearful face, suggesting that fearful faces may have privileged access to consciousness. Together, the results suggest that mechanisms in the prefrontal cortex contribute to making fearful face perception special. PMID:27405614

  17. Can nursing students' confidence levels increase with repeated simulation activities?

    PubMed

    Cummings, Cynthia L; Connelly, Linda K

    2016-01-01

    In 2014, nursing faculty conducted a study with undergraduate nursing students on their satisfaction, confidence, and educational practice levels as they related to simulation activities throughout the curriculum. The study was a voluntary survey of junior- and senior-year nursing students. It consisted of 30 items based on the Student Satisfaction and Self-Confidence in Learning and the Educational Practices Questionnaire (Jeffries, 2012). Mean scores were obtained for each of the 30 items from both groups and were compared using t tests for unpaired means. The results showed that 8 of the items were significant at the 95% confidence level, and when combined the items were significant at p < .001. The items identified were those related to self-confidence and active learning. Based on these findings, it can be assumed that repeated simulation experiences can lead to an increase in student confidence and active learning. PMID:26599594

  19. Linking learning and confidence in developing expert practice.

    PubMed

    Currie, Kay

    2008-01-01

    This paper presents findings from a recent PhD grounded theory study exploring the practice development role of graduate specialist practitioners. A key finding within this theory is the influence of learning and confidence on the practitioner journey. The concept of confidence emerged repeatedly throughout the analysis and can be characterized as a motivational driver, a consequence of learning and gaining respect, and a condition for graduate specialist practitioners' moving on to impact in practice development. Analysis of the concept of confidence as it influences practice is limited in existing literature. This article seeks to address this gap by illustrating the centrality of learning and confidence in the development of expert specialist practices. It is anticipated that these findings will resonate with the experiences of clinicians and faculty internationally and heightened awareness of consequences of developing confidence can be utilized to strengthen the impact of a wide range of nursing programs.

  20. Development of a core confidence-higher order construct.

    PubMed

    Stajkovic, Alexander D

    2006-11-01

    The author develops core confidence as a higher order construct and suggests that a core confidence-higher order construct--not addressed by extant work motivation theories--is helpful in better understanding employee motivation in today's rapidly changing organizations. Drawing from psychology (social, clinical, and developmental) and social anthropology, the author develops propositions regarding the relationships between core confidence and performance, attitudes, and subjective well-being. The core confidence-higher order construct is proposed to be manifested by hope, self-efficacy, optimism, and resilience. The author reasons that these four variables share a common confidence core (a higher order construct) and may be considered as its manifestations. Suggestions for future research and implications of the work are discussed. PMID:17100479

  1. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45, and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  2. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  4. Orders on Intervals Over Partially Ordered Sets: Extending Allen's Algebra and Interval Graph Results

    SciTech Connect

    Zapata, Francisco; Kreinovich, Vladik; Joslyn, Cliff A.; Hogan, Emilie A.

    2013-08-01

    To make a decision, we need to compare the values of quantities. In many practical situations, we know the values with interval uncertainty. In such situations, we need to compare intervals. Allen’s algebra describes all possible relations between intervals on the real line, and ordering relations between such intervals are well studied. In this paper, we extend this description to intervals in an arbitrary partially ordered set (poset). In particular, we explicitly describe ordering relations between intervals that generalize relations between points. As auxiliary results, we provide a logical interpretation of the relations between intervals, and extend results about interval graphs to intervals over posets.
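
    The poset generalization builds on Allen's thirteen relations between intervals on the real line. A short sketch of the classical real-line classification (standard definitions, not code from the paper):

      def allen_relation(a, b):
          """Classify the Allen relation between intervals a = (a1, a2) and
          b = (b1, b2) on the real line, assuming a1 < a2 and b1 < b2."""
          (a1, a2), (b1, b2) = a, b
          if a2 < b1:
              return "before"
          if a2 == b1:
              return "meets"
          if a1 < b1 < a2 < b2:
              return "overlaps"
          if a1 == b1 and a2 < b2:
              return "starts"
          if b1 < a1 and a2 < b2:
              return "during"
          if b1 < a1 and a2 == b2:
              return "finishes"
          if (a1, a2) == (b1, b2):
              return "equal"
          # The remaining six cases are inverses of the asymmetric relations.
          return allen_relation(b, a) + "-inverse"

      print(allen_relation((0, 2), (1, 3)))   # overlaps
      print(allen_relation((1, 3), (0, 2)))   # overlaps-inverse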

  5. The Confidence Factor: Some Results of the Phi Delta Kappa (PDK) Commission on Public Confidence in Education. A Research Report.

    ERIC Educational Resources Information Center

    Wayson, W. W.; And Others

    This study sought to determine characteristics of schools and districts that enjoy the public's strong confidence and to explore how these characteristics are created and retained. A screening procedure produced useable data from 181 "high-confidence" public schools, 30 private schools, and 45 school districts. As part of a preliminary pilot…

  6. Playing with confidence: the relationship between imagery use and self-confidence and self-efficacy in youth soccer players.

    PubMed

    Munroe-Chandler, Krista; Hall, Craig; Fishburne, Graham

    2008-12-01

    Confidence has been one of the most consistent factors distinguishing successful from unsuccessful athletes (Gould, Weiss, & Weinberg, 1981), and Bandura (1997) proposed that imagery is one way to enhance confidence. Therefore, the purpose of the present study was to examine the relationship between imagery use and confidence in soccer (football) players. The participants included 122 male and female soccer athletes ages 11-14 years participating at both house/recreation (n = 72) and travel/competitive (n = 50) levels. Athletes completed three questionnaires: one measuring the frequency of imagery use, one assessing generalised self-confidence, and one assessing self-efficacy in soccer. A series of regression analyses found that Motivational General-Mastery (MG-M) imagery was a significant predictor of self-confidence and self-efficacy in both recreational and competitive youth soccer players. More specifically, MG-M imagery accounted for between 40 and 57% of the variance in both self-confidence and self-efficacy, with two other functions (MG-A and MS) contributing marginally in the self-confidence regression for recreational athletes. These findings suggest that if a youth athlete, regardless of competitive level, wants to increase his/her self-confidence or self-efficacy through the use of imagery, the MG-M function should be emphasised. PMID:18949659

  7. What Are Confidence Judgments Made of? Students' Explanations for Their Confidence Ratings and What that Means for Calibration

    ERIC Educational Resources Information Center

    Dinsmore, Daniel L.; Parkinson, Meghan M.

    2013-01-01

    Although calibration has been widely studied, questions remain about how best to capture confidence ratings, how to calculate continuous variable calibration indices, and on what exactly students base their reported confidence ratings. Undergraduates in a research methods class completed a prior knowledge assessment, two sets of readings and…

  8. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  9. Intuitive Feelings of Warmth and Confidence in Insight and Noninsight Problem Solving of Magic Tricks

    PubMed Central

    Hedne, Mikael R.; Norman, Elisabeth; Metcalfe, Janet

    2016-01-01

    The focus of the current study is on intuitive feelings of insight during problem solving and the extent to which such feelings are predictive of successful problem solving. We report the results from an experiment (N = 51) that applied a procedure where the to-be-solved problems were 32 short (15 s) video recordings of magic tricks. The procedure included metacognitive ratings similar to the “warmth ratings” previously used by Metcalfe and colleagues, as well as confidence ratings. At regular intervals during problem solving, participants indicated the perceived closeness to the correct solution. Participants also indicated directly whether each problem was solved by insight or not. Problems that people claimed were solved by insight were characterized by higher accuracy and higher confidence than noninsight solutions. There was no difference between the two types of solution in warmth ratings, however. Confidence ratings were more strongly associated with solution accuracy for noninsight than insight trials. Moreover, for insight trials the participants were more likely to repeat their incorrect solutions on a subsequent recognition test. The results have implications for understanding people's metacognitive awareness of the cognitive processes involved in problem solving. They also have general implications for our understanding of how intuition and insight are related. PMID:27630598

  13. Notes on interval estimation of the generalized odds ratio under stratified random sampling.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2013-05-01

    Patient responses on an ordinal scale are common in randomized clinical trials (RCTs). Under the assumption that the generalized odds ratio (GOR) is homogeneous across strata, we consider four asymptotic interval estimators for the GOR under stratified random sampling. These include the interval estimator using the weighted-least-squares (WLS) approach with the logarithmic transformation (WLSL), the interval estimator using the Mantel-Haenszel (MH) type of estimator with the logarithmic transformation (MHL), the interval estimator using Fieller's theorem with the MH weights (FTMH) and the interval estimator using Fieller's theorem with the WLS weights (FTWLS). We employ Monte Carlo simulation to evaluate the performance of these interval estimators by calculating the coverage probability and the average length. To study the bias of these interval estimators, we also calculate and compare the noncoverage probabilities in the two tails of the resulting confidence intervals. We find that WLSL and MHL generally perform well, while FTMH and FTWLS can lose either precision or accuracy. We further find that MHL is likely the least biased. Finally, we use data from a study of smoking status and breathing test results among workers in certain industrial plants in Houston, Texas, during 1974 to 1975 to illustrate the use of these interval estimators.
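
    The evaluation loop described here (coverage probability plus average length under Monte Carlo simulation) is easy to sketch for any interval estimator. The snippet below applies it to a plain Wald interval for a log odds ratio from a single 2x2 table, which is far simpler than the stratified GOR estimators in the record; all parameter values are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)

      def wald_logor_ci(a, b, c, d, z=1.96):
          # Wald CI for the log odds ratio of a 2x2 table (0.5 correction).
          a, b, c, d = (x + 0.5 for x in (a, b, c, d))
          logor = np.log(a * d / (b * c))
          se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
          return logor - z * se, logor + z * se

      p1, p2, n = 0.4, 0.2, 100          # hypothetical true response rates
      true_logor = np.log((p1 / (1 - p1)) / (p2 / (1 - p2)))

      hits, lengths = 0, []
      for _ in range(10_000):
          a = rng.binomial(n, p1)
          c = rng.binomial(n, p2)
          lo, hi = wald_logor_ci(a, n - a, c, n - c)
          hits += lo <= true_logor <= hi
          lengths.append(hi - lo)

      print(f"coverage ~ {hits / 10_000:.3f}, mean length ~ {np.mean(lengths):.3f}")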

  14. Microsatellite Instability Status of Interval Colorectal Cancers in a Korean Population

    PubMed Central

    Lee, Kil Woo; Park, Soo-Kyung; Yang, Hyo-Joon; Jung, Yoon Suk; Choi, Kyu Yong; Kim, Kyung Eun; Jung, Kyung Uk; Kim, Hyung Ook; Kim, Hungdai; Chun, Ho-Kyung; Park, Dong Il

    2016-01-01

    Background/Aims: A subset of patients may develop colorectal cancer after a colonoscopy that is negative for malignancy. These missed or de novo lesions are referred to as interval cancers. The aim of this study was to determine whether interval colon cancers are more likely than sporadic cancers to result from the loss of function of mismatch repair genes and to demonstrate microsatellite instability (MSI). Methods: Interval cancer was defined as a cancer that was diagnosed within 5 years of a negative colonoscopy. Among the patients who underwent an operation for colorectal cancer from January 2013 to December 2014, archived cancer specimens were evaluated for MSI by sequencing microsatellite loci. Results: Of the 286 colon cancers diagnosed during the study period, 25 (8.7%) represented interval cancer. MSI was found in eight of the 25 patients (32%) with interval cancers compared with 22 of the 261 patients (8.4%) with sporadic cancers (p=0.002). In the multivariable logistic regression model, MSI was associated with interval cancer (OR, 3.91; 95% confidence interval, 1.38 to 11.05). Conclusions: Interval cancers were approximately four times more likely to show high MSI than sporadic cancers. Our findings indicate that certain interval cancers may occur because of distinct biological features. PMID:27114419

  15. Stochasticity and the limits to confidence when estimating R0 of Ebola and other emerging infectious diseases.

    PubMed

    Taylor, Bradford P; Dushoff, Jonathan; Weitz, Joshua S

    2016-11-01

    Dynamic models - often deterministic in nature - were used to estimate the basic reproductive number, R0, of the 2014-2015 Ebola virus disease (EVD) epidemic in West Africa. Estimates of R0 were then used to project the likelihood of large outbreak sizes, e.g., exceeding hundreds of thousands of cases. Yet fitting deterministic models can lead to over-confidence in the confidence intervals of the fitted R0 and, in turn, in the type and scope of necessary interventions. In this manuscript we propose a hybrid stochastic-deterministic method to estimate R0 and associated confidence intervals (CIs). The core idea is that stochastic realizations of an underlying deterministic model can be used to evaluate the compatibility of candidate values of R0 with observed epidemic curves. The compatibility is based on comparing the distribution of expected epidemic growth rates with the observed epidemic growth rate given "process noise", i.e., noise arising from stochastic transmission, recovery and death events. By applying our method to reported EVD case counts from Guinea, Liberia and Sierra Leone, we show that prior estimates of R0 based on deterministic fits appear more confident than analysis of stochastic trajectories suggests is possible. Moving forward, we recommend including process noise among other sources of noise when estimating the CIs of R0 for emerging epidemics. Our hybrid procedure represents an adaptable and easy-to-implement approach for such estimation. PMID:27524644
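
    The core procedure - simulate stochastic realizations under each candidate R0 and ask whether the observed growth rate is typical of them - can be sketched with a toy discrete-time stochastic SIR model. All rates and the "observed" value below are hypothetical, and this is a simplification of the paper's hybrid method:

      import numpy as np

      rng = np.random.default_rng(1)

      def growth_rate(cases):
          # Exponential growth rate from a log-linear fit to case counts.
          t = np.arange(len(cases))
          return np.polyfit(t, np.log(np.maximum(cases, 1)), 1)[0]

      def simulate_epidemic(r0, gamma=0.1, n=1_000_000, i0=10, days=60):
          # Minimal discrete-time stochastic SIR; process noise enters via
          # binomial draws for infection and recovery events.
          beta = r0 * gamma
          s, i = n - i0, i0
          new_cases = []
          for _ in range(days):
              inf = rng.binomial(s, 1 - np.exp(-beta * i / n))
              rec = rng.binomial(i, 1 - np.exp(-gamma))
              s -= inf
              i += inf - rec
              new_cases.append(inf)
          return np.array(new_cases)

      observed_rate = 0.02   # hypothetical observed growth rate (per day)

      # A candidate R0 is compatible if the observed growth rate lies in the
      # central 95% of growth rates across its stochastic realizations.
      for r0 in (1.2, 1.5, 2.0):
          rates = [growth_rate(simulate_epidemic(r0)) for _ in range(200)]
          lo, hi = np.percentile(rates, [2.5, 97.5])
          ok = lo <= observed_rate <= hi
          print(f"R0 = {r0}: growth in [{lo:.3f}, {hi:.3f}], compatible = {ok}")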

  16. The QT Interval and Risk of Incident Atrial Fibrillation

    PubMed Central

    Mandyam, Mala C.; Soliman, Elsayed Z.; Alonso, Alvaro; Dewland, Thomas A.; Heckbert, Susan R.; Vittinghoff, Eric; Cummings, Steven R.; Ellinor, Patrick T.; Chaitman, Bernard R.; Stocke, Karen; Applegate, William B.; Arking, Dan E.; Butler, Javed; Loehr, Laura R.; Magnani, Jared W.; Murphy, Rachel A.; Satterfield, Suzanne; Newman, Anne B.; Marcus, Gregory M.

    2013-01-01

    BACKGROUND: Abnormal atrial repolarization is important in the development of atrial fibrillation (AF), but no direct measurement is available in clinical medicine. OBJECTIVE: To determine whether the QT interval, a marker of ventricular repolarization, could be used to predict incident AF. METHODS: We examined a prolonged QT corrected by the Framingham formula (QTFram) as a predictor of incident AF in the Atherosclerosis Risk in Communities (ARIC) study. The Cardiovascular Health Study (CHS) and Health, Aging, and Body Composition (Health ABC) study were used for validation. Secondary predictors included QT duration as a continuous variable, a short QT interval, and QT intervals corrected by other formulae. RESULTS: Among 14,538 ARIC participants, a prolonged QTFram predicted a roughly two-fold increased risk of AF (hazard ratio [HR] 2.05, 95% confidence interval [CI] 1.42–2.96, p<0.001). No substantive attenuation was observed after adjustment for age, race, sex, study center, body mass index, hypertension, diabetes, coronary disease, and heart failure. The findings were validated in CHS and Health ABC and were similar across various QT correction methods. Also in ARIC, each 10-ms increase in QTFram was associated with an increased unadjusted (HR 1.14, 95% CI 1.10–1.17, p<0.001) and adjusted (HR 1.11, 95% CI 1.07–1.14, p<0.001) risk of AF. Findings regarding a short QT were inconsistent across cohorts. CONCLUSIONS: A prolonged QT interval is associated with an increased risk of incident AF. PMID:23872693

  17. Confidence-based stratification of CAD recommendations with application to breast cancer detection

    NASA Astrophysics Data System (ADS)

    Habas, Piotr A.; Zurada, Jacek M.; Elmaghraby, Adel S.; Tourassi, Georgia D.

    2006-03-01

    We present a risk stratification methodology for predictions made by computer-assisted detection (CAD) systems. For each positive CAD prediction, the proposed technique assigns an individualized confidence measure as a function of the actual CAD output, the case-specific uncertainty of the prediction estimated from the system's performance for similar cases and the value of the operating decision threshold. The study was performed using a mammographic database containing 1,337 regions of interest (ROIs) with known ground truth (681 with masses, 656 with normal parenchyma). Two types of decision models (1) a support vector machine (SVM) with a radial basis function kernel and (2) a back-propagation neural network (BPNN) were developed to detect masses based on 8 morphological features automatically extracted from each ROI. The study shows that as requirements on the minimum confidence value are being restricted, the positive predictive value (PPV) for qualifying cases steadily improves (from PPV = 0.73 to PPV = 0.97 for the SVM, from PPV = 0.67 to PPV = 0.95 for the BPNN). The proposed confidence metric was successfully applied for stratification of CAD recommendations into 3 categories of different expected reliability: HIGH (PPV = 0.90), LOW (PPV = 0.30) and MEDIUM (all remaining cases). Since radiologists often disregard accurate CAD cues, an individualized confidence measure should improve their ability to correctly process visual cues and thus reduce the interpretation error associated with the detection task. While keeping the clinically determined operating point satisfied, the proposed methodology draws the CAD users' attention to cases/regions of highest risk while helping them confidently eliminate cases with low risk.
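
    Operationally, this kind of stratification amounts to binning positive CAD outputs by a confidence score and validating the positive predictive value within each bin. A toy sketch with synthetic scores and labels; the cutoffs are invented to mimic the reported PPV bands, not taken from the paper:

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic validation set standing in for CAD outputs + ground truth.
      scores = rng.beta(2, 2, 2_000)
      labels = rng.random(2_000) < scores      # higher score -> likelier true

      def ppv(mask):
          return labels[mask].mean() if mask.any() else float("nan")

      # Stratify detections into HIGH / MEDIUM / LOW reliability bands.
      high = scores >= 0.85
      low = scores <= 0.35
      medium = ~high & ~low
      for name, mask in [("HIGH", high), ("MEDIUM", medium), ("LOW", low)]:
          print(f"{name}: n = {mask.sum()}, PPV = {ppv(mask):.2f}")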

  18. Confidence Region Estimation for Groundwater Parameter Identification Problems

    NASA Astrophysics Data System (ADS)

    Vugrin, K. W.; Swiler, L. P.; Roberts, R. M.

    2007-12-01

    This presentation focuses on different methods of generating confidence regions for nonlinear parameter identification problems. Three methods for confidence region estimation are considered: a linear approximation method, an F-test method, and a log-likelihood method. Each of these methods is applied to three case studies. One case study is a problem with synthetic data, and the other two case studies identify hydraulic parameters in groundwater flow problems based on experimental well-test results. The confidence regions for each case study are analyzed and compared. All three methods produce similar and reasonable confidence regions for the case study using synthetic data. The linear approximation method grossly overestimates the confidence region for the first groundwater parameter identification case study. The F-test and log-likelihood methods result in similar reasonable regions for this test case. For the second groundwater parameter identification case study, the linear approximation method produces a confidence region of reasonable size. In this test case, the F-test and log-likelihood methods generate disjoint confidence regions of reasonable size. The differing results, capabilities, and drawbacks of all three methods are discussed. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000. This research is funded by WIPP programs administered by the Office of Environmental Management (EM) of the U.S. Department of Energy.
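
    The log-likelihood method mentioned here keeps every parameter vector whose deviance from the maximum stays below a chi-square quantile. A generic sketch on a two-parameter exponential-decay model with Gaussian noise (a stand-in problem; the well-test models in the study are more involved, and all values are invented):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)

      # Hypothetical data from a two-parameter decay model y = a*exp(-k*t).
      t = np.linspace(0.5, 10, 25)
      true_a, true_k, sigma = 5.0, 0.4, 0.2
      y = true_a * np.exp(-true_k * t) + rng.normal(0, sigma, t.size)

      def loglik(a, k):
          resid = y - a * np.exp(-k * t)
          return stats.norm.logpdf(resid, scale=sigma).sum()

      # Keep grid points whose deviance 2*(max - loglik) is below the 95%
      # chi-square quantile with 2 degrees of freedom.
      a_grid = np.linspace(4.0, 6.0, 200)
      k_grid = np.linspace(0.2, 0.6, 200)
      ll = np.array([[loglik(a, k) for k in k_grid] for a in a_grid])
      inside = 2 * (ll.max() - ll) <= stats.chi2.ppf(0.95, df=2)
      print(f"grid points inside the 95% confidence region: {inside.sum()}")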

  19. Disconnections Between Teacher Expectations and Student Confidence in Bioethics

    NASA Astrophysics Data System (ADS)

    Hanegan, Nikki L.; Price, Laura; Peterson, Jeremy

    2008-09-01

    This study examines how student practice of scientific argumentation using socioscientific bioethics issues affects both teacher expectations of students’ general performance and student confidence in their own work. When teachers use bioethical issues in the classroom, students can gain not only biology content knowledge but also important decision-making skills. Learning bioethics through scientific argumentation gives students opportunities to express their ideas, formulate educated opinions and value others’ viewpoints. Research has shown that science teachers’ expectations of student success and knowledge directly influence student achievement and confidence levels. Our study analyzes pre-course and post-course surveys completed by students enrolled in a university-level bioethics course (n = 111) and by faculty in the College of Biology and Agriculture (n = 34), based on their perceptions of student confidence. Additionally, student data were collected from classroom observations and interviews. Data analysis showed a disconnect between faculty and student perceptions of confidence for both knowledge and the use of scientific argumentation. Students’ reports of their confidence levels regarding various bioethical issues were higher than faculty reports. A further disconnect appeared between students’ preferred learning styles and the faculty’s common teaching methods; students learned more by practicing scientific argumentation than by listening to traditional lectures. Students who completed a bioethics course that included practice in scientific argumentation significantly increased their confidence levels. This study suggests that professors’ expectations and teaching styles influence student confidence levels in both knowledge and scientific argumentation.

  20. A Poisson process approximation for generalized K-S confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
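
    For orientation, the unweighted special case of a one-sided empirical-CDF band takes only a few lines; the record's contribution is the generalized (tail-narrowing) version and its Poisson-approximated critical values, which the classical construction below does not capture:

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical sample; the construction works for any continuous data.
      x = np.sort(rng.exponential(scale=2.0, size=100))
      n = x.size
      ecdf = np.arange(1, n + 1) / n

      # Classical one-sided Smirnov asymptotics: P(sup(F - Fn) > d) is
      # approximately exp(-2*n*d^2), so a level-alpha one-sided band uses
      # d = sqrt(-ln(alpha) / (2n)).
      alpha = 0.05
      d = np.sqrt(-np.log(alpha) / (2 * n))

      # With confidence 1 - alpha, F(t) >= Fn(t) - d for all t simultaneously.
      lower_band = np.clip(ecdf - d, 0.0, 1.0)
      print(f"n = {n}, uniform band offset d = {d:.3f}")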

  1. Validation, Uncertainty, and Quantitative Reliability at Confidence (QRC)

    SciTech Connect

    Logan, R W; Nitta, C K

    2002-12-06

    This paper presents a summary of our methodology for Verification and Validation and Uncertainty Quantification. A graded-scale methodology is presented and related to other concepts in the literature. We describe the critical nature of quantified Verification and Validation with Uncertainty Quantification at specified confidence levels in evaluating system certification status. Only after Verification and Validation has contributed to Uncertainty Quantification at specified confidence can rational tradeoffs of various scenarios be made. Verification and Validation methods for various scenarios and issues are applied in assessments of Quantified Reliability at Confidence, and we summarize briefly how this can lead to a Value Engineering methodology for investment strategy.

  2. Scaling of light and dark time intervals.

    PubMed

    Marinova, J

    1978-01-01

    Scaling of light and dark time intervals of 0.1 to 1.1 s is performed by the method of magnitude estimation with respect to a given standard. The standards differ in duration and type (light and dark). The light intervals are subjectively estimated as longer than the dark ones. The relation between the mean interval estimates and their magnitude is linear for both light and dark intervals.

  3. Permutations and topological entropy for interval maps

    NASA Astrophysics Data System (ADS)

    Misiurewicz, Michal

    2003-05-01

    Recently Bandt, Keller and Pompe (2002, Entropy of interval maps via permutations, Nonlinearity 15, 1595-602) introduced a method of computing the entropy of piecewise monotone interval maps by counting permutations exhibited by initial pieces of orbits. We show that for topological entropy this method does not work for arbitrary continuous interval maps. We also show that for piecewise monotone interval maps topological entropy can be computed by counting permutations exhibited by periodic orbits.
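
    The counting procedure of Bandt, Keller and Pompe is simple to state: for many starting points, record the permutation (ordinal pattern) realized by the orbit segment (x, f(x), ..., f^{n-1}(x)), and estimate entropy as log(count)/n. A sketch for the full logistic map, whose topological entropy is log 2 ~ 0.693 (convergence in n is slow, so the estimates below are rough):

      import numpy as np

      rng = np.random.default_rng(4)

      def entropy_estimate(f, n, n_points=20_000):
          # Count distinct ordinal patterns of length n exhibited by orbit
          # segments (x, f(x), ..., f^{n-1}(x)); entropy ~ log(count) / n.
          patterns = set()
          for x in rng.random(n_points):
              orbit = [x]
              for _ in range(n - 1):
                  x = f(x)
                  orbit.append(x)
              patterns.add(tuple(np.argsort(orbit)))
          return np.log(len(patterns)) / n

      logistic = lambda x: 4 * x * (1 - x)   # full logistic map, entropy log 2
      for n in (6, 8, 10):
          print(f"n = {n}: estimate ~ {entropy_estimate(logistic, n):.3f}")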

  4. Confidence and the Stock Market: An Agent-Based Approach

    PubMed Central

    Bertella, Mario A.; Pires, Felipe R.; Feng, Ling; Stanley, Harry Eugene

    2014-01-01

    Using a behavioral finance approach we study the impact of behavioral bias. We construct an artificial market consisting of fundamentalists and chartists to model the decision-making process of various agents. The agents differ in their strategies for evaluating stock prices, and exhibit differing memory lengths and confidence levels. When we increase the heterogeneity of the strategies used by the agents, in particular the memory lengths, we observe excess volatility and kurtosis, in agreement with real market fluctuations—indicating that agents in real-world financial markets exhibit widely differing memory lengths. We incorporate the behavioral traits of adaptive confidence and observe a positive correlation between average confidence and return rate, indicating that market sentiment is an important driver in price fluctuations. The introduction of market confidence increases price volatility, reflecting the negative effect of irrationality in market behavior. PMID:24421888
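
    The mechanism in this kind of model is compact: fundamentalists trade toward a fundamental value, chartists extrapolate the recent trend, and a confidence weight governs how aggressively chartists act. A toy one-asset sketch (all parameter values invented; the paper's model is richer, with heterogeneous memory lengths):

      import numpy as np

      rng = np.random.default_rng(7)

      fundamental, price, confidence = 100.0, 100.0, 0.5
      prices, prev_trend = [price], 0.0

      for _ in range(500):
          trend = prices[-1] - prices[-2] if len(prices) > 1 else 0.0
          # Adaptive confidence: chartists gain confidence when the trend
          # keeps its direction, and lose it on reversals.
          if prev_trend:
              step = 0.02 if trend * prev_trend > 0 else -0.02
              confidence = float(np.clip(confidence + step, 0.1, 0.9))
          demand_f = 0.2 * (fundamental - price)   # mean reversion
          demand_c = confidence * trend            # trend extrapolation
          price += demand_f + demand_c + rng.normal(0, 0.5)
          prices.append(price)
          prev_trend = trend

      returns = np.diff(np.log(prices))
      kurt = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3
      print(f"volatility ~ {returns.std():.4f}, excess kurtosis ~ {kurt:.2f}")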

  5. Measurement of tag confidence in user generated contents retrieval

    NASA Astrophysics Data System (ADS)

    Lee, Sihyoung; Min, Hyun-Seok; Lee, Young Bok; Ro, Yong Man

    2009-01-01

    As online image sharing services become popular, the importance of correctly annotated tags is being emphasized for precise search and retrieval. Tags created by users along with user-generated content (UGC) are often ambiguous because some tags are highly subjective and visually unrelated to the image. They cause unwanted results for users when image search engines rely on tags. In this paper, we propose a method of measuring tag confidence so that one can differentiate confident tags from noisy tags. The proposed tag confidence is measured from the visual semantics of the image. To verify the usefulness of the proposed method, experiments were performed with a UGC database from social network sites. Experimental results showed that image retrieval performance with confident tags increased.

  6. 78 FR 56621 - Draft Waste Confidence Generic Environmental Impact Statement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ... 2, 2013 (77 FR 65137). Results of that scoping process are documented in the "Waste Confidence... Place, 8629 J.M. Keynes Drive, Charlotte, North Carolina 28262. November 6, 2013: Hyatt Regency...

  8. The self-assessment of confidence, by one vocational trainee

    PubMed Central

    Leonard, Colin

    1979-01-01

    A list of important topics in general practice was constructed and a trainee was asked to indicate his confidence about each topic on a one to five scale. Repeated use showed different confidence ratings by the same trainee, and an attempt was made to correlate factual knowledge by using a multiple choice questionnaire. Despite important limitations, which are described, this method may be useful in identifying suitable topics for teaching during the trainee year. PMID:541789

  9. Leaders, self-confidence, and hubris: what's the difference?

    PubMed

    Kerfoot, Karlene M

    2010-01-01

    Success can easily breed hubris. As leaders become more confident, their success can limit their learning because they develop repetitive patterns of filtering information based on past successes and discount information that does not agree with their patterns of success. It is important for leaders to stay grounded in reality and effective as their success grows. Humility, gratitude, and appreciation will avoid the overconfidence that leads to hubris. Building confidence in others is the mark of a great leader. Hubris is not.

  10. Interval approach to braneworld gravity

    NASA Astrophysics Data System (ADS)

    Carena, Marcela; Lykken, Joseph; Park, Minjoon

    2005-10-01

    Gravity in five-dimensional braneworld backgrounds may exhibit extra scalar degrees of freedom with problematic features, including kinetic ghosts and strong coupling behavior. Analysis of such effects is hampered by the standard heuristic approaches to braneworld gravity, which use the equations of motion as the starting point, supplemented by orbifold projections and junction conditions. Here we develop the interval approach to braneworld gravity, which begins with an action principle. This shows how to implement general covariance, despite allowing metric fluctuations that do not vanish on the boundaries. We reproduce simple Z2 orbifolds of gravity, even though in this approach we never perform a Z2 projection. We introduce a family of “straight gauges”, which are bulk coordinate systems in which both branes appear as straight slices in a single coordinate patch. Straight gauges are extremely useful for analyzing metric fluctuations in braneworld models. By explicit gauge-fixing, we show that a general AdS5/AdS4 setup with two branes has at most a radion, but no physical “brane-bending” modes.

  11. Decision-related cortical potentials during an auditory signal detection task with cued observation intervals

    NASA Technical Reports Server (NTRS)

    Squires, K. C.; Squires, N. K.; Hillyard, S. A.

    1975-01-01

    Cortical-evoked potentials were recorded from human subjects performing an auditory detection task with confidence rating responses. Unlike earlier studies that used similar procedures, the observation interval during which the auditory signal could occur was clearly marked by a visual cue light. By precisely defining the observation interval and, hence, synchronizing all perceptual decisions to the evoked potential averaging epoch, it was possible to demonstrate that high-confidence false alarms are accompanied by late-positive P3 components equivalent to those for equally confident hits. Moreover the hit and false alarm evoked potentials were found to covary similarly with variations in confidence rating and to have similar amplitude distributions over the scalp. In a second experiment, it was demonstrated that correct rejections can be associated with a P3 component larger than that for hits. Thus it was possible to show, within the signal detection paradigm, how the two major factors of decision confidence and expectancy are reflected in the P3 component of the cortical-evoked potential.

  12. Learning to Make Collective Decisions: The Impact of Confidence Escalation

    PubMed Central

    Mahmoodi, Ali; Bang, Dan; Ahmadabadi, Majid Nili; Bahrami, Bahador

    2013-01-01

    Little is known about how people learn to take into account others’ opinions in joint decisions. To address this question, we combined computational and empirical approaches. Human dyads made individual and joint visual perceptual decisions and rated their confidence in those decisions (data previously published). We trained a reinforcement (temporal difference) learning agent to receive the participants’ confidence levels and learn to arrive at a dyadic decision by finding the policy that either maximized the accuracy of the model decisions or maximally conformed to the empirical dyadic decisions. When confidences were shared visually without verbal interaction, the learning agent successfully captured social learning. When participants exchanged confidences visually and interacted verbally, no collective benefit was achieved and the model failed to predict the dyadic behaviour. Behaviourally, dyad members’ confidence increased progressively, and verbal interaction accelerated this escalation. The success of the model in drawing collective benefit from dyad members was inversely related to the rate of confidence escalation. The findings show that an automated learning agent can, in principle, combine individual opinions and achieve collective benefit, but the same agent cannot discount the escalation, suggesting that one cognitive component of collective decision making in humans may involve discounting of overconfidence arising from interactions. PMID:24324677

  13. Unilateral Prostate Cancer Cannot be Accurately Predicted in Low-Risk Patients

    SciTech Connect

    Isbarn, Hendrik; Karakiewicz, Pierre I.; Vogel, Susanne

    2010-07-01

    Purpose: Hemiablative therapy (HAT) is increasing in popularity for treatment of patients with low-risk prostate cancer (PCa). The validity of this therapeutic modality, which exclusively treats PCa within a single prostate lobe, rests on accurate staging. We tested the accuracy of unilaterally unremarkable biopsy findings in low-risk PCa patients who are potential candidates for HAT. Methods and Materials: The study population consisted of 243 men with clinical stage ≤T2a, a prostate-specific antigen (PSA) concentration of <10 ng/ml, a biopsy-proven Gleason sum of ≤6, and a maximum of 2 ipsilateral positive biopsy results out of 10 or more cores. All men underwent a radical prostatectomy, and pathology stage was used as the gold standard. Univariable and multivariable logistic regression models were tested for significant predictors of unilateral, organ-confined PCa. These predictors consisted of PSA, %fPSA (defined as the quotient of free [uncomplexed] PSA divided by the total PSA), clinical stage (T2a vs. T1c), gland volume, and number of positive biopsy cores (2 vs. 1). Results: Despite unilateral stage at biopsy, bilateral or even non-organ-confined PCa was reported in 64% of all patients. In multivariable analyses, no variable could clearly and independently predict the presence of unilateral PCa. This was reflected in an overall accuracy of 58% (95% confidence interval, 50.6-65.8%). Conclusions: Two-thirds of patients with unilateral low-risk PCa, confirmed by clinical stage and biopsy findings, have bilateral or non-organ-confined PCa at radical prostatectomy. This alarming finding questions the safety and validity of HAT.

  14. How Much Confidence Can We Have in EU-SILC? Complex Sample Designs and the Standard Error of the Europe 2020 Poverty Indicators

    ERIC Educational Resources Information Center

    Goedeme, Tim

    2013-01-01

    If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…

  15. Frequentist evaluation of intervals estimated for a binomial parameter and for the ratio of Poisson means

    NASA Astrophysics Data System (ADS)

    Cousins, Robert D.; Hymes, Kathryn E.; Tucker, Jordan

    2010-01-01

    Confidence intervals for a binomial parameter or for the ratio of Poisson means are commonly desired in high energy physics (HEP) applications such as measuring a detection efficiency or branching ratio. Due to the discreteness of the data, in both of these problems the frequentist coverage probability unfortunately depends on the unknown parameter. Trade-offs among desiderata have led to numerous sets of intervals in the statistics literature, while in HEP one typically encounters only the classic intervals of Clopper-Pearson (central intervals with no undercoverage but substantial over-coverage) or a few approximate methods which perform rather poorly. If strict coverage is relaxed, some sort of averaging is needed to compare intervals. In most of the statistics literature, this averaging is over different values of the unknown parameter, which is conceptually problematic from the frequentist point of view in which the unknown parameter is typically fixed. In contrast, we perform an (unconditional) average over observed data in the ratio-of-Poisson-means problem. If strict conditional coverage is desired, we recommend Clopper-Pearson intervals and intervals from inverting the likelihood ratio test (for central and non-central intervals, respectively). Lancaster's mid-P modification to either provides excellent unconditional average coverage in the ratio-of-Poisson-means problem.
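
    The Clopper-Pearson interval itself is two beta quantiles, and its parameter-dependent (conservative) coverage can be computed exactly by summing binomial probabilities - the dependence on the unknown parameter that the record describes. A sketch:

      import numpy as np
      from scipy import stats

      def clopper_pearson(k, n, alpha=0.05):
          # Exact central binomial interval via beta quantiles.
          lo = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
          hi = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
          return lo, hi

      n, alpha = 30, 0.05
      intervals = [clopper_pearson(k, n, alpha) for k in range(n + 1)]

      # Exact coverage at a true p: total probability of the outcomes whose
      # interval contains p. It oscillates with p but never drops below
      # 1 - alpha, which is the over-coverage noted above.
      for p in (0.05, 0.2, 0.5):
          cover = sum(stats.binom.pmf(k, n, p)
                      for k, (lo, hi) in enumerate(intervals) if lo <= p <= hi)
          print(f"p = {p}: exact coverage = {cover:.4f}")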

  16. W4 theory for computational thermochemistry : in pursuit of confident sub-kJ/mol predictions.

    SciTech Connect

    Karton, A.; Rabinovich, E.; Martin, J. M. L.; Ruscic, B.; Chemistry; Weizmann Institute of Science

    2006-01-01

    a number of key species are in excellent agreement (better than 0.1 kcal/mol on average, 95% confidence intervals narrower than 1 kJ/mol) with the latest experimental data obtained from Active Thermochemical Tables. Lower-cost variants are proposed: the sequence W1 → W2.2 → W3.2 → W4lite → W4 is proposed as a converging hierarchy of computational thermochemistry methods. A simple a priori estimate for the importance of post-CCSD(T) correlation contributions (and hence a pessimistic estimate for the error in a W2-type calculation) is proposed.

  17. Intervals in evolutionary algorithms for global optimization

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide (guaranteed) verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An efficient approach is proposed that combines the elimination strategy of interval global optimization methods with the robustness of evolutionary algorithms. In the proposed approach, search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the beginning of the evolutionary process, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process, the local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning, due to large interval widths and complicated function properties, the process of reducing interval widths over time and a selection approach similar to simulated annealing help in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.
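
    A toy rendering of the idea - interval individuals whose fitness is read off at the midpoint, with widths shrinking across generations - might look as follows. This is a loose sketch of the strategy described above, not the author's algorithm; the objective and all constants are invented:

      import numpy as np

      rng = np.random.default_rng(5)

      def objective(x):
          # Toy multimodal objective (Rastrigin); global minimum at x = 0.
          return float(np.sum(x * x - 10 * np.cos(2 * np.pi * x) + 10))

      dim, pop, lo_dom, hi_dom = 2, 30, -5.12, 5.12

      # Each individual is an interval vector; initial widths span the domain.
      lower = np.full((pop, dim), lo_dom)
      upper = np.full((pop, dim), hi_dom)

      for gen in range(60):
          mid = (lower + upper) / 2
          fitness = np.array([objective(m) for m in mid])
          keep = np.argsort(fitness)[: pop // 2]      # selection: better half
          lower, upper = lower[keep], upper[keep]
          # Offspring: mutated centers with interval widths shrunk over time.
          width = (upper - lower) * 0.9
          center = (lower + upper) / 2 + rng.normal(0, 0.1, (pop // 2, dim))
          lower = np.vstack([lower, center - width / 2])
          upper = np.vstack([upper, center + width / 2])

      mid = (lower + upper) / 2
      best = mid[np.argmin([objective(m) for m in mid])]
      print(f"best midpoint ~ {best}, value ~ {objective(best):.4f}")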

  18. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  19. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  20. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...