Sample records for 95-percent confidence intervals

1. Explorations in Statistics: Confidence Intervals

ERIC Educational Resources Information Center

Curran-Everett, Douglas

2009-01-01

Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…

2. Effect Sizes, Confidence Intervals, and Confidence Intervals for Effect Sizes

ERIC Educational Resources Information Center

Thompson, Bruce

2007-01-01

The present article provides a primer on (a) effect sizes, (b) confidence intervals, and (c) confidence intervals for effect sizes. Additionally, various admonitions for reformed statistical practice are presented. For example, a very important implication of the realization that there are dozens of effect size statistics is that "authors must…

3. Teaching Confidence Intervals Using Simulation

ERIC Educational Resources Information Center

Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari

2008-01-01

Confidence intervals are difficult to teach, in part because most students appear to believe they understand how to interpret them intuitively. They rarely do. To help them abandon their misconception and achieve understanding, we have developed a simulation tool that encourages experimentation with multiple confidence intervals derived from the…

4. A Review of Confidence Intervals.

ERIC Educational Resources Information Center

Mauk, Anne-Marie Kimbell

This paper summarizes information leading to the recommendation that statistical significance testing be replaced, or at least accompanied by, the reporting of effect sizes and confidence intervals. It discusses the use of confidence intervals, noting that the recent report of the American Psychological Association Task Force on Statistical…

5. Confidence Intervals in QTL Mapping by Bootstrapping

PubMed Central

Visscher, P. M.; Thompson, R.; Haley, C. S.

1996-01-01

The determination of empirical confidence intervals for the location of quantitative trait loci (QTLs) was investigated using simulation. Empirical confidence intervals were calculated using a bootstrap resampling method for a backcross population derived from inbred lines. Sample sizes were either 200 or 500 individuals, and the QTL explained 1, 5, or 10% of the phenotypic variance. The method worked well in that the proportion of empirical confidence intervals that contained the simulated QTL was close to expectation. In general, the confidence intervals were slightly conservatively biased. Correlations between the test statistic and the width of the confidence interval were strongly negative, so that the stronger the evidence for a QTL segregating, the smaller the empirical confidence interval for its location. The size of the average confidence interval depended heavily on the population size and the effect of the QTL. Marker spacing had only a small effect on the average empirical confidence interval. The LOD drop-off method to calculate empirical support intervals gave confidence intervals that generally were too small, in particular if confidence intervals were calculated only for samples above a certain significance threshold. The bootstrap method is easy to implement and is useful in the analysis of experimental data. PMID:8725246
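
The bootstrap machinery behind this record generalizes well beyond QTL mapping: resample the data with replacement, recompute the statistic on each resample, and read the interval endpoints off the empirical quantiles. A minimal sketch in Python — the toy data and the choice of statistic are hypothetical illustrations, not the authors' backcross pipeline:

```python
import numpy as np

def percentile_bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic, and take empirical quantiles of the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    boot = np.array([statistic(data[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])
    return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))

# Toy usage: 95% CI for the median of a skewed sample (hypothetical data)
sample = np.random.default_rng(1).exponential(scale=3.0, size=200)
print(percentile_bootstrap_ci(sample, np.median))
```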

6. 40 CFR Appendix A to Subpart Kk of... - Data Quality Objective and Lower Confidence Limit Approaches for Alternative Capture Efficiency...

Code of Federal Regulations, 2013 CFR

2013-07-01

... average measured CE value to the endpoints of the 95-percent (two-sided) confidence interval for the... measured CE value to the endpoints of the 95-percent (two-sided) confidence interval, expressed as...

7. 40 CFR Appendix A to Subpart Kk of... - Data Quality Objective and Lower Confidence Limit Approaches for Alternative Capture Efficiency...

Code of Federal Regulations, 2012 CFR

2012-07-01

... average measured CE value to the endpoints of the 95-percent (two-sided) confidence interval for the... measured CE value to the endpoints of the 95-percent (two-sided) confidence interval, expressed as...

8. 40 CFR Appendix A to Subpart Kk of... - Data Quality Objective and Lower Confidence Limit Approaches for Alternative Capture Efficiency...

Code of Federal Regulations, 2014 CFR

2014-07-01

... average measured CE value to the endpoints of the 95-percent (two-sided) confidence interval for the... measured CE value to the endpoints of the 95-percent (two-sided) confidence interval, expressed as...

9. Constructing Confidence Intervals for QTL Location

PubMed Central

Mangin, B.; Goffinet, B.; Rebai, A.

1994-01-01

We describe a method for constructing the confidence interval of the QTL location parameter. This method is developed in the local asymptotic framework, leading to a linear model at each position of the putative QTL. The idea is to construct a likelihood ratio test, using statistics whose asymptotic distribution does not depend on the nuisance parameters, in particular on the effect of the QTL. We show theoretical properties of the confidence interval built with this test and compare it with the classical confidence interval using simulations. We show, in particular, that our confidence interval has the correct probability of containing the true map location of the QTL for almost all QTLs, whereas the classical confidence interval can be very biased for QTLs having small effect. PMID:7896108

10. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

ERIC Educational Resources Information Center

2013-01-01

The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interests were nonnormal Likert-type and binary items.…

11. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

ERIC Educational Resources Information Center

Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

2012-01-01

Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…

12. Efficient computation of parameter confidence intervals

NASA Technical Reports Server (NTRS)

Murphy, Patrick C.

1987-01-01

An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
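
A hedged sketch of the likelihood-ratio construction this abstract refers to, in a deliberately simple setting (a Poisson rate rather than aircraft stability derivatives): the interval collects all parameter values whose log-likelihood lies within half the chi-square critical value of the maximum. The count data below are made up for illustration:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def poisson_loglik(lam, x):
    # Log-likelihood up to an additive constant: sum(x*log(lam) - lam)
    return np.sum(x * np.log(lam) - lam)

x = np.array([3, 7, 4, 6, 5, 2, 8])        # hypothetical count data
mle = x.mean()                              # Poisson MLE of the rate
cutoff = chi2.ppf(0.95, df=1) / 2           # likelihood-ratio threshold

def g(lam):
    # Zero where the log-likelihood has dropped by the threshold
    return poisson_loglik(lam, x) - (poisson_loglik(mle, x) - cutoff)

lower = brentq(g, 1e-9, mle)                # root below the MLE
upper = brentq(g, mle, 10 * mle)            # root above the MLE
print(lower, upper)                         # 95% likelihood-ratio CI
```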

13. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures.

PubMed

Bebu, Ionut; Luta, George; Mathew, Thomas; Agan, Brian K

2016-01-01

For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates. PMID:27322305

14. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

PubMed Central

Bebu, Ionut; Luta, George; Mathew, Thomas; Agan, Brian K.

2016-01-01

For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates. PMID:27322305

15. Confidence Intervals Make a Difference: Effects of Showing Confidence Intervals on Inferential Reasoning

ERIC Educational Resources Information Center

Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L.

2012-01-01

The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…

16. CONFIDENCE INTERVALS AND STANDARD ERROR INTERVALS: WHAT DO THEY MEAN IN TERMS OF STATISTICAL SIGNIFICANCE?

Technology Transfer Automated Retrieval System (TEKTRAN)

We investigate the use of confidence intervals and standard error intervals to draw conclusions regarding tests of hypotheses about normal population means. Mathematical expressions and algebraic manipulations are given, and computer simulations are performed to assess the usefulness of confidence ...

17. IET. Aerial view of project, 95 percent complete. Camera facing ...

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

IET. Aerial view of project, 95 percent complete. Camera facing east. Left to right: stack, duct, mobile test cell building (TAN-624), four-rail track, dolly. Retaining wall between mobile test building and shielded control building (TAN-620) just beyond. North of control building are tank building (TAN-627) and fuel-transfer pump building (TAN-625). Guard house at upper right along exclusion fence. Construction vehicles and temporary warehouse in view near guard house. Date: June 6, 1955. INEEL negative no. 55-1462 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

18. Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence

ERIC Educational Resources Information Center

Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff

2012-01-01

Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…

19. Sample Size for the "Z" Test and Its Confidence Interval

ERIC Educational Resources Information Center

Liu, Xiaofeng Steven

2012-01-01

The statistical power of a significance test is closely related to the length of the confidence interval (i.e. estimate precision). In the case of a "Z" test, the length of the confidence interval can be expressed as a function of the statistical power. (Contains 1 figure and 1 table.)
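
One standard way to write the connection the abstract describes, under the usual large-sample assumptions (known sigma, two-sided level alpha, power 1-beta against a true effect delta). This is a textbook identity offered as context, not necessarily the exact expression derived in the article:

```latex
n = \left(\frac{(z_{1-\alpha/2}+z_{1-\beta})\,\sigma}{\delta}\right)^{2},
\qquad
w = \frac{z_{1-\alpha/2}\,\sigma}{\sqrt{n}}
  = \frac{z_{1-\alpha/2}}{z_{1-\alpha/2}+z_{1-\beta}}\,\delta ,
```

so the confidence interval half-width w is a decreasing function of the power, as the abstract indicates.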

20. Bootstrapping Confidence Intervals for Robust Measures of Association.

ERIC Educational Resources Information Center

King, Jason E.

A Monte Carlo simulation study was conducted to determine the bootstrap correction formula yielding the most accurate confidence intervals for robust measures of association. Confidence intervals were generated via the percentile, adjusted, BC, and BC(a) bootstrap procedures and applied to the Winsorized, percentage bend, and Pearson correlation…

1. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

ERIC Educational Resources Information Center

Banjanovic, Erin S.; Osborne, Jason W.

2016-01-01

Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…

2. A Note on Confidence Interval Estimation and Margin of Error

ERIC Educational Resources Information Center

Gilliland, Dennis; Melfi, Vince

2010-01-01

Confidence interval estimation is a fundamental technique in statistical inference. Margin of error is used to delimit the error in estimation. Dispelling misinterpretations that teachers and students give to these terms is important. In this note, we give examples of the confusion that can arise in regard to confidence interval estimation and…

3. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies

PubMed Central

Gülhan, Orekıcı Temel

2016-01-01

Background/Aim. It must be decided whether a newly developed diagnostic method is better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard or an imperfect standard test, differences in estimated sensitivity/specificity are calculated from sample information. To generalize these values to the population, however, they should be reported with confidence intervals. The aim of this study is to evaluate, on a clinical application, the confidence interval methods developed for differences between two dependent sensitivity/specificity values. Materials and Methods. Confidence interval methods such as asymptotic intervals, conditional intervals, unconditional intervals, score intervals, and nonparametric methods based on relative effects are used. As the clinical application, data from the diagnostic study by Dickel et al. (2010) are taken as a sample. Results. Results for the alternative confidence interval methods for nickel sulfate, potassium dichromate, and lanolin alcohol are given as a table. Conclusion. In choosing among confidence interval methods, researchers must consider whether the comparison involves a single ratio or differences between dependent binary ratios, the correlation coefficient between the rates in two dependent ratios, and the sample sizes. PMID:27478491

4. Improved central confidence intervals for the ratio of Poisson means

Cousins, R. D.

The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
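
The "standard" interval quoted above, (0.108, 9.245), can be reproduced by the usual conditional construction: given the total count, the first count is binomial in p = rho/(1+rho), and a Clopper-Pearson interval for p maps back to one for the ratio rho. A sketch assuming equal observation times; the improved intervals of the paper are not implemented here:

```python
from scipy.stats import beta

def standard_ratio_ci(x1, x2, cl=0.90):
    """Central CI for rho = lambda1/lambda2 from Poisson counts x1, x2,
    via the conditional binomial and a Clopper-Pearson interval for
    p = rho/(1 + rho); equal observation times assumed."""
    a = (1 - cl) / 2
    n = x1 + x2
    p_lo = beta.ppf(a, x1, n - x1 + 1) if x1 > 0 else 0.0
    p_hi = beta.ppf(1 - a, x1 + 1, n - x1) if x1 < n else 1.0
    lo = p_lo / (1 - p_lo)
    hi = p_hi / (1 - p_hi) if p_hi < 1 else float("inf")
    return lo, hi

print(standard_ratio_ci(2, 2))   # ~(0.108, 9.245), the standard interval above
```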

5. Estimation of confidence intervals for federal waterfowl harvest surveys

USGS Publications Warehouse

Geissler, P.H.

1990-01-01

I developed methods of estimating confidence intervals for the federal waterfowl harvest surveys conducted by the U.S. Fish and Wildlife Service (USFWS). I estimated flyway harvest confidence intervals for mallards (Anas platyrhynchos) (95% CI are ±8% of the estimate), Canada geese (Branta canadensis) (±11%), black ducks (Anas rubripes) (±16%), canvasbacks (Aythya valisineria) (±32%), snow geese (Chen caerulescens) (±43%), and brant (Branta bernicla) (±46%). Differences between annual estimates of 10, 13, 22, 42, 43, and 58% could be detected for mallards, Canada geese, black ducks, canvasbacks, snow geese, and brant, respectively. Estimated confidence intervals for state harvests tended to be much larger than those for the flyway estimates.

6. Inference by Eye: Pictures of Confidence Intervals and Thinking about Levels of Confidence

ERIC Educational Resources Information Center

Cumming, Geoff

2007-01-01

A picture of a 95% confidence interval (CI) implicitly contains pictures of CIs of all other levels of confidence, and information about the "p"-value for testing a null hypothesis. This article discusses pictures, taken from interactive software, that suggest several ways to think about the level of confidence of a CI, "p"-values, and what…

7. Confidence Intervals for Error Rates Observed in Coded Communications Systems

Hamkins, J.

2015-05-01

We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful of codeword errors, if any, can be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser-studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
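
For the error-free-run question the abstract raises, the standard independent-trials answer (which ignores the dependence and abort-bias issues the paper goes on to treat) follows from requiring that zero errors in N trials be implausible if the true rate equaled the requirement. A hedged sketch:

```python
import math

def error_free_run_length(p_req, confidence=0.95):
    """Number of codewords to simulate with zero observed errors so that
    CWER < p_req can be asserted at the given confidence level
    (exact binomial logic; the 'rule of three' when confidence = 0.95)."""
    alpha = 1.0 - confidence
    return math.ceil(math.log(alpha) / math.log(1.0 - p_req))

print(error_free_run_length(1e-6))   # ~3.0 million error-free codewords
```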

8. Analysis of regression confidence intervals and Bayesian credible intervals for uncertainty quantification

Lu, Dan; Ye, Ming; Hill, Mary C.

2012-09-01

Confidence intervals based on classical regression theories augmented to include prior information and credible intervals based on Bayesian theories are conceptually different ways to quantify parametric and predictive uncertainties. Because both confidence and credible intervals are used in environmental modeling, we seek to understand their differences and similarities. This is of interest in part because calculating confidence intervals typically requires tens to thousands of model runs, while Bayesian credible intervals typically require tens of thousands to millions of model runs. Given multi-Gaussian distributed observation errors, our theoretical analysis shows that, for linear or linearized-nonlinear models, confidence and credible intervals are always numerically identical when consistent prior information is used. For nonlinear models, nonlinear confidence and credible intervals can be numerically identical if parameter confidence regions defined using the approximate likelihood method and parameter credible regions estimated using Markov chain Monte Carlo realizations are numerically identical and predictions are a smooth, monotonic function of the parameters. Both occur if intrinsic model nonlinearity is small. While the conditions of Gaussian errors and small intrinsic model nonlinearity are violated by many environmental models, heuristic tests using analytical and numerical models suggest that linear and nonlinear confidence intervals can be useful approximations of uncertainty even under significantly nonideal conditions. In the context of epistemic model error for a complex synthetic nonlinear groundwater problem, the linear and nonlinear confidence and credible intervals for individual models performed similarly enough to indicate that the computationally frugal confidence intervals can be useful in many circumstances. Experiences with these groundwater models are expected to be broadly applicable to many environmental models. We suggest that for

9. Fast and Accurate Construction of Confidence Intervals for Heritability.

PubMed

Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

2016-06-01

Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052

10. Confidence intervals for concentration and brightness from fluorescence fluctuation measurements.

PubMed

Pryse, Kenneth M; Rong, Xi; Whisler, Jordan A; McConnaughey, William B; Jiang, Yan-Fei; Melnykov, Artem V; Elson, Elliot L; Genin, Guy M

2012-09-01

The theory of photon count histogram (PCH) analysis describes the distribution of fluorescence fluctuation amplitudes due to populations of fluorophores diffusing through a focused laser beam and provides a rigorous framework through which the brightnesses and concentrations of the fluorophores can be determined. In practice, however, the brightnesses and concentrations of only a few components can be identified. Brightnesses and concentrations are determined by a nonlinear least-squares fit of a theoretical model to the experimental PCH derived from a record of fluorescence intensity fluctuations. The χ² hypersurface in the neighborhood of the optimum parameter set can have varying degrees of curvature, due to the intrinsic curvature of the model, the specific parameter values of the system under study, and the relative noise in the data. Because of this varying curvature, parameters estimated from the least-squares analysis have varying degrees of uncertainty associated with them. There are several methods for assigning confidence intervals to the parameters, but these methods have different efficacies for PCH data. Here, we evaluate several approaches to confidence interval estimation for PCH data, including asymptotic standard error, likelihood joint-confidence region, likelihood confidence intervals, bias-corrected and accelerated bootstrap (BCa), and Monte Carlo residual resampling methods. We study these with a model two-dimensional membrane system for simplicity, but the principles are applicable as well to fluorophores diffusing in three-dimensional solution. Using simulated fluorescence fluctuation data, we find the BCa method to be particularly well-suited for estimating confidence intervals in PCH analysis, and several other methods to be less so. Using the BCa method and additional simulated fluctuation data, we find that confidence intervals can be reduced dramatically for a specific non-Gaussian beam profile. PMID:23009839
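
Several records in this list (this one, and the coefficient omega and alpha studies above) compare percentile and BCa bootstrap intervals. A minimal, generic sketch using SciPy's built-in BCa implementation on skewed toy data — not the PCH fitting pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=80)   # skewed toy data

res = stats.bootstrap(
    (sample,),              # data is passed as a sequence of samples
    np.mean,                # statistic must accept an `axis` argument
    confidence_level=0.95,
    n_resamples=9999,
    method="BCa",           # bias-corrected and accelerated
    random_state=rng,
)
print(res.confidence_interval)
```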

11. Researchers Misunderstand Confidence Intervals and Standard Error Bars

ERIC Educational Resources Information Center

Belia, Sarah; Fidler, Fiona; Williams, Jennifer; Cumming, Geoff

2005-01-01

Little is known about researchers' understanding of confidence intervals (CIs) and standard error (SE) bars. Authors of journal articles in psychology, behavioral neuroscience, and medicine were invited to visit a Web site where they adjusted a figure until they judged 2 means, with error bars, to be just statistically significantly different (p…

12. Confidence Interval Coverage for Cohen's Effect Size Statistic

ERIC Educational Resources Information Center

Algina, James; Keselman, H. J.; Penfield, Randall D.

2006-01-01

Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and biased-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population effect size…

13. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

ERIC Educational Resources Information Center

Cheung, Mike W. -L.

2009-01-01

Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…

14. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

ERIC Educational Resources Information Center

Oort, Frans J.

2011-01-01

In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

15. Confidence Intervals and Replication: Where Will the Next Mean Fall?

ERIC Educational Resources Information Center

Cumming, Geoff; Maillardet, Robert

2006-01-01

Confidence intervals (CIs) give information about replication, but many researchers have misconceptions about this information. One problem is that the percentage of future replication means captured by a particular CI varies markedly, depending on where in relation to the population mean that CI falls. The authors investigated the distribution of…

16. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

ERIC Educational Resources Information Center

Wagler, Amy E.

2014-01-01

Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

17. Finite sampling corrected 3D noise with confidence intervals.

PubMed

Haefner, David P; Burks, Stephen D

2015-05-20

When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density for noise in imaging systems known as 3D noise. The goal was to decompose the 3D noise process into spatial and temporal components to identify potential sources of origin. To characterize a sensor in terms of its 3D noise values, a finite number of samples is acquired in each of the three dimensions (two spatial, one temporal). In this correspondence, we develop the full sampling-corrected 3D noise measurement and the corresponding confidence bounds. The accuracy of these methods is demonstrated through Monte Carlo simulations. Both the sampling correction and the confidence intervals can be applied a posteriori to the classic 3D noise calculation. The Matlab functions associated with this work can be found on the Mathworks file exchange ["Finite sampling corrected 3D noise with confidence intervals," https://www.mathworks.com/matlabcentral/fileexchange/49657-finite-sampling-corrected-3d-noise-with-confidence-intervals.]. PMID:26192530

18. Confidence intervals in Flow Forecasting by using artificial neural networks

Panagoulia, Dionysia; Tsekouras, George

2014-05-01

One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because, in contrast to classical forecasting methods, the relevant estimation cannot be carried out directly. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated, and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. To apply the confidence intervals, input vectors are used from the Mesochora catchment in western-central Greece. The ANN training algorithm is stochastic back-propagation with decreasing functions of the learning rate and momentum term, for which an optimization process is conducted over the crucial parameter values, such as the number of neurons, the kind of activation functions, the initial values and time parameters of the learning rate and momentum term, etc. Input variables are historical data of previous days, such as flows, nonlinearly weather-related temperatures, and nonlinearly weather-related rainfalls, based on correlation analysis between the flow under prediction and each implicit input
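
A hedged reading of the re-sampling construction described above: sort the training errors, trim the tails symmetrically in probability, and attach the surviving quantile range to a new point forecast. The function name and the additive application to a new prediction are assumptions made for illustration:

```python
import numpy as np

def resampling_interval(y_true, y_pred, new_pred, cl=0.95):
    """Prediction interval from the empirical CDF of training errors:
    sort errors ascending, reject the extreme tails symmetrically in
    probability, and shift the surviving range onto a new forecast."""
    errors = np.sort(np.asarray(y_true) - np.asarray(y_pred))
    a = (1.0 - cl) / 2.0
    lo, hi = np.quantile(errors, [a, 1.0 - a])
    return new_pred + lo, new_pred + hi

# Toy usage with made-up flows (m^3/s)
y_true = np.array([12.0, 15.5, 9.8, 20.1, 14.2, 11.7])
y_pred = np.array([11.4, 16.0, 10.5, 18.9, 13.8, 12.3])
print(resampling_interval(y_true, y_pred, new_pred=13.0, cl=0.90))
```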

19. An Empirical Method for Establishing Positional Confidence Intervals Tailored for Composite Interval Mapping of QTL

Technology Transfer Automated Retrieval System (TEKTRAN)

Improved genetic resolution and availability of sequenced genomes have made positional cloning of moderate-effect QTL (quantitative trait loci) realistic in several systems, emphasizing the need for precise and accurate derivation of positional confidence intervals (CIs). Support interval (SI) meth...

20. Flood frequency analysis: Confidence interval estimation by test inversion bootstrapping

Schendel, Thomas; Thongwichian, Rossukon

2015-09-01

A common approach to estimating extreme flood events is the annual block maxima approach, where for each year the peak streamflow is determined and a distribution (usually the generalized extreme value distribution (GEV)) is fitted to this series of maxima. Eventually this distribution is used to estimate the return level for a defined return period. However, due to the finite sample size, the estimated return levels are associated with a range of uncertainty, usually expressed via confidence intervals. Previous publications have shown that existing bootstrapping methods for estimating the confidence intervals of the GEV yield too-narrow estimates of these uncertainty ranges. Therefore, we present in this article a novel approach based on the less known test inversion bootstrapping, which we adapted especially for complex quantities like the return level. The reliability of this approach is studied and its performance is compared to other bootstrapping methods as well as the Profile Likelihood technique. It is shown that the new approach significantly improves the coverage of confidence intervals compared to other bootstrapping methods and for small sample sizes should even be favoured over the Profile Likelihood.
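
For context, a sketch of the kind of standard parametric bootstrap that this article finds produces too-narrow intervals; its test inversion improvement is not implemented here. The file name is hypothetical, and the GEV fit and return level use SciPy's genextreme:

```python
import numpy as np
from scipy.stats import genextreme

def return_level(params, T):
    # Quantile exceeded on average once every T years
    return genextreme.ppf(1.0 - 1.0 / T, *params)

annual_maxima = np.loadtxt("peaks.txt")      # hypothetical annual peak flows
params = genextreme.fit(annual_maxima)       # (shape, loc, scale) by MLE

rng = np.random.default_rng(0)
boot = [return_level(
            genextreme.fit(genextreme.rvs(*params, size=len(annual_maxima),
                                          random_state=rng)), 100)
        for _ in range(1000)]
print(np.quantile(boot, [0.025, 0.975]))     # percentile bootstrap 95% CI
```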

1. Confidence intervals for expected moments algorithm flood quantile estimates

USGS Publications Warehouse

Cohn, T.A.; Lane, W.L.; Stedinger, J.R.

2001-01-01

Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient "weighting" procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed-form method has been available for quantifying the uncertainty of EMA-based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood-quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25- to 100-year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

2. On Some Confidence Intervals for Estimating the Mean of a Skewed Population

ERIC Educational Resources Information Center

Shi, W.; Kibria, B. M. Golam

2007-01-01

A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…

3. On Efficient Confidence Intervals for the Log-Normal Mean

Chami, Peter; Antoine, Robin; Sahai, Ashok

Data obtained in biomedical research is often skewed. Examples include the incubation period of diseases like HIV/AIDS and the survival times of cancer patients. Such data, especially when they are positive and skewed, is often modeled by the log-normal distribution. If this model holds, then the log transformation produces a normal distribution. We consider the problem of constructing confidence intervals for the mean of the log-normal distribution. Several methods for doing this are known, including at least one estimator that performed better than Cox's method for small sample sizes. We also construct a modified version of Cox's method. Using simulation, we show that, when the sample size exceeds 30, it leads to confidence intervals that have good overall properties and are better than Cox's method. More precisely, the actual coverage probability of our method is closer to the nominal coverage probability than is the case with Cox's method. In addition, the new method is computationally much simpler than other well-known methods.
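
Cox's method, which the article modifies, is commonly stated as the following interval for the log-normal mean exp(mu + sigma^2/2). The sketch below gives the common textbook form, not the authors' modified version; the toy data are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def cox_lognormal_mean_ci(x, cl=0.95):
    """Cox's interval for the log-normal mean exp(mu + sigma^2/2),
    built on the log scale and exponentiated at the end."""
    y = np.log(np.asarray(x, dtype=float))
    n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)
    z = norm.ppf(0.5 + cl / 2.0)
    half = z * np.sqrt(s2 / n + s2**2 / (2.0 * (n - 1)))
    center = ybar + s2 / 2.0
    return np.exp(center - half), np.exp(center + half)

# Toy usage on simulated log-normal survival times (hypothetical)
data = np.random.default_rng(3).lognormal(mean=1.0, sigma=0.8, size=50)
print(cox_lognormal_mean_ci(data))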

4. Covariate-adjusted confidence interval for the intraclass correlation coefficient.

PubMed

Shoukri, Mohamed M; Donner, Allan; El-Dali, Abdelmoneim

2013-09-01

A crucial step in designing a new study is to estimate the required sample size. For a design involving cluster sampling, the appropriate sample size depends on the so-called design effect, which is a function of the average cluster size and the intracluster correlation coefficient (ICC). It is well-known that under the framework of hierarchical and generalized linear models, a reduction in residual error may be achieved by including risk factors as covariates. In this paper we show that the covariate design, indicating whether the covariates are measured at the cluster level or at the within-cluster subject level affects the estimation of the ICC, and hence the design effect. Therefore, the distinction between these two types of covariates should be made at the design stage. In this paper we use the nested-bootstrap method to assess the accuracy of the estimated ICC for continuous and binary response variables under different covariate structures. The codes of two SAS macros are made available by the authors for interested readers to facilitate the construction of confidence intervals for the ICC. Moreover, using Monte Carlo simulations we evaluate the relative efficiency of the estimators and evaluate the accuracy of the coverage probabilities of a 95% confidence interval on the population ICC. The methodology is illustrated using a published data set of blood pressure measurements taken on family members. PMID:23871746

5. Comparing Simultaneous and Pointwise Confidence Intervals for Hydrological Processes

PubMed Central

2016-01-01

Distribution function estimation of the random variable of river flow is an important problem in hydrology. This issue is directly related to quantile estimation, and consequently to return level prediction. The estimation process can be complemented with the construction of confidence intervals (CIs) to perform a probabilistic assessment of the different variables and/or estimated functions. In this work, several methods for constructing CIs using bootstrap techniques, and parametric and nonparametric procedures in the estimation process are studied and compared. In the case that the target is the joint estimation of a vector of values, some new corrections to obtain joint coverage probabilities closer to the corresponding nominal values are also presented. A comprehensive simulation study compares the different approaches, and the application of the different procedures to real data sets from four rivers in the United States and one in Spain complete the paper. PMID:26828651

6. Concept of a (1-α) performance confidence interval

SciTech Connect

Leong, H.H.; Johnson, G.R.; Bechtel, T.N.

1980-01-01

A multi-input, single-output system is assumed to be represented by some model. The distribution functions of the input and the output variables are considered to be at least obtainable through experimental data. Associated with the computer response of the model corresponding to given inputs, a conditional pseudoresponse set is generated. This response can be constructed by means of the model by using the simulated pseudorandom input variates from a neighborhood defined by a preassigned probability allowance. A pair of such pseudoresponse values can then be computed by a procedure corresponding to a (1-α) probability for the conditional pseudoresponse set. The range defined by such a pair is called a (1-α) performance confidence interval with respect to the model. The application of this concept can allow comparison of the merit of two models describing the same system, or it can detect a system change when the current response is out of the performance interval with respect to the previously identified model. 6 figures.

7. Constructing Confidence Intervals for Reliability Coefficients Using Central and Noncentral Distributions.

ERIC Educational Resources Information Center

Weber, Deborah A.

Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…

8. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

ERIC Educational Resources Information Center

Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

2014-01-01

Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

9. Behavior Detection using Confidence Intervals of Hidden Markov Models

SciTech Connect

Griffin, Christopher H

2009-01-01

Markov models are commonly used to analyze real-world problems. Their combination of discrete states and stochastic transitions is suited to applications with deterministic and stochastic components. Hidden Markov Models (HMMs) are a class of Markov model commonly used in pattern recognition. Currently, HMMs recognize patterns using a maximum likelihood approach. One major drawback with this approach is that data observations are mapped to HMMs without considering the number of data samples available. Another problem is that this approach is only useful for choosing between HMMs. It does not provide a criterion for determining whether or not a given HMM adequately matches the data stream. In this work, we recognize complex behaviors using HMMs and confidence intervals. The certainty of a data match increases with the number of data samples considered. Receiver Operating Characteristic curves are used to find the optimal threshold for either accepting or rejecting a HMM description. We present one example using a family of HMMs to show the utility of the proposed approach. A second example using models extracted from a database of consumer purchases provides additional evidence that this approach can perform better than existing techniques.

10. Bootstrap confidence intervals in multi-level simultaneous component analysis.

PubMed

Timmerman, Marieke E; Kiers, Henk A L; Smilde, Age K; Ceulemans, Eva; Stouten, Jeroen

2009-05-01

Multi-level simultaneous component analysis (MLSCA) was designed for the exploratory analysis of hierarchically ordered data. MLSCA specifies a component model for each level in the data, where appropriate constraints express possible similarities between groups of objects at a certain level, yielding four MLSCA variants. The present paper discusses different bootstrap strategies for estimating confidence intervals (CIs) on the individual parameters. In selecting a proper strategy, the main issues to address are the resampling scheme and the non-uniqueness of the parameters. The resampling scheme depends on which level(s) in the hierarchy are considered random, and which fixed. The degree of non-uniqueness depends on the MLSCA variant, and, in two variants, the extent to which the user exploits the transformational freedom. A comparative simulation study examines the quality of bootstrap CIs of different MLSCA parameters. Generally, the quality of bootstrap CIs appears to be good, provided the sample sizes are sufficient at each level that is considered to be random. The latter implies that if more than a single level is considered random, the total number of observations necessary to obtain reliable inferential information increases dramatically. An empirical example illustrates the use of bootstrap CIs in MLSCA. PMID:18086338

11. Exact and Best Confidence Intervals for the Ability Parameter of the Rasch Model.

ERIC Educational Resources Information Center

Klauer, Karl Christoph

1991-01-01

Smallest exact confidence intervals for the ability parameter of the Rasch model are derived and compared to the traditional asymptotically valid intervals based on Fisher information. Tables of exact confidence intervals, termed Clopper-Pearson intervals, can be drawn up with a computer program developed by K. Klauer. (SLD)

12. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

ERIC Educational Resources Information Center

Capraro, Mary Margaret

This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual" (2001)…

13. Using Confidence Intervals and Recurrence Intervals to Determine Precipitation Delivery Mechanisms Responsible for Mass Wasting Events.

Ulizio, T. P.; Bilbrey, C.; Stoyanoff, N.; Dixon, J. L.

2015-12-01

Mass wasting events are geologic hazards that impact human life and property across a variety of landscapes. These movements can be triggered by tectonic activity, anomalous precipitation events, or both, acting to decrease the factor-of-safety ratio on a hillslope to the point of failure. There exists an active hazard landscape in the West Boulder River drainage of Park Co., MT in which the mechanisms of slope failure are unknown. It is known that the region has not seen significant tectonic activity within the last decade, leaving anomalous precipitation events as the likely trigger for slope failures in the landscape. Precipitation can be delivered to a landscape via rainfall or snow; it was the aim of this study to determine the precipitation delivery mechanism most likely responsible for movements in the West Boulder drainage following the Jungle Wildfire of 2006. Data were compiled from four SNOTEL sites in the surrounding area, spanning 33 years, focusing on, but not limited to, maximum snow water equivalent (SWE) values in a water year, median SWE values on the date on which maximum SWE was recorded in a water year, the total precipitation accumulated in a water year, etc. Means were computed and 99% confidence intervals were constructed around these means. Recurrence intervals and exceedance probabilities were computed for maximum SWE values and total precipitation accumulated in a water year to determine water years with anomalous precipitation. It was determined that the water year 2010-2011 received an anomalously high amount of SWE, and snow melt in the spring of this water year likely triggered recent mass wasting movements. This conclusion is further supported by Google Earth imagery showing movements between 2009 and 2011. The return interval for the maximum SWE value in 2010-11 at the Placer Basin SNOTEL site was 34 years, while return intervals for the Box Canyon and Monument Peak SNOTEL sites were 17.5 and 17 years, respectively. Max SWE values lie outside the
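
Recurrence intervals and exceedance probabilities of the kind reported above are conventionally computed from a ranked annual series. A sketch assuming the common Weibull plotting position, RI = (n+1)/rank; the abstract does not say which formula was used, and the SWE values below are made up:

```python
import numpy as np

def recurrence_intervals(annual_series):
    """Weibull plotting-position recurrence intervals RI = (n+1)/rank,
    with rank 1 assigned to the largest value; exceedance prob = 1/RI."""
    x = np.asarray(annual_series, dtype=float)
    n = len(x)
    ranks = n - np.argsort(np.argsort(x))    # descending ranks, 1 = largest
    ri = (n + 1.0) / ranks
    return ri, 1.0 / ri

# Toy usage on hypothetical annual maximum SWE values (inches)
swe_max = [18.2, 22.5, 15.1, 30.7, 19.9, 24.3, 16.8]
ri, p_exc = recurrence_intervals(swe_max)
print(ri.max(), p_exc.min())   # largest year here: RI = 8 years
```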

14. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

ERIC Educational Resources Information Center

Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

2010-01-01

This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile intervals, and…

15. Simultaneous confidence intervals for a steady-state leaky aquifer groundwater flow model

USGS Publications Warehouse

Christensen, S.; Cooley, R.L.

1996-01-01

Using the optimization method of Vecchia & Cooley (1987), nonlinear Scheffé-type confidence intervals were calculated for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km² of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct for the head intervals. Results show that nonlinear effects can cause the nonlinear intervals to be offset from, and either larger or smaller than, the linear approximations. Prior information on some transmissivities helps reduce and stabilize the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

16. Confidence Intervals about Score Reliability Coefficients, Please: An "EPM" Guidelines Editorial.

ERIC Educational Resources Information Center

Fan, Xitao; Thompson, Bruce

2001-01-01

Illustrates a number of ways in which confidence intervals for reliability coefficients can be estimated. Suggests that authors who submit articles to "Educational and Psychological Measurement" report confidence intervals for reliability estimates whenever they report score reliabilities and that they note the interval estimation methods used.…

17. Fixed-Width Confidence Intervals in Linear Regression with Applications to the Johnson-Neyman Technique.

ERIC Educational Resources Information Center

Aitkin, Murray A.

Fixed-width confidence intervals for a population regression line over a finite interval of x have recently been derived by Gafarian. The method is extended to provide fixed-width confidence intervals for the difference between two population regression lines, resulting in a simple procedure analogous to the Johnson-Neyman technique. (Author)

18. Improved confidence intervals when the sample is counted an integer times longer than the blank.

PubMed

Potter, William Edward; Strzelczyk, Jadwiga Jodi

2011-05-01

Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner. PMID:21451310

19. A Comparison of Methods for Estimating Confidence Intervals for Omega-Squared Effect Size

ERIC Educational Resources Information Center

Finch, W. Holmes; French, Brian F.

2012-01-01

Effect size use has been increasing in the past decade in many research areas. Confidence intervals associated with effect sizes are encouraged to be reported. Prior work has investigated the performance of confidence interval estimation with Cohen's d. This study extends this line of work to the analysis of variance case with more than two…

20. "Confidence Intervals for Gamma-family Measures of Ordinal Association": Correction

ERIC Educational Resources Information Center

Psychological Methods, 2008

2008-01-01

Reports an error in "Confidence intervals for gamma-family measures of ordinal association" by Carol M. Woods (Psychological Methods, 2007[Jun], Vol 12[2], 185-204). The note corrects simulation results presented in the article concerning the performance of confidence intervals (CIs) for Spearman's r-sub(s). An error in the author's C++ code…

1. Using Asymptotic Results to Obtain a Confidence Interval for the Population Median

ERIC Educational Resources Information Center

2007-01-01

Almost all introductory and intermediate level statistics textbooks include the topic of confidence interval for the population mean. Almost all these texts introduce the median as a robust measure of central tendency. Only a few of these books, however, cover inference on the population median and in particular confidence interval for the median.…

2. Evaluating Independent Proportions for Statistical Difference, Equivalence, Indeterminacy, and Trivial Difference Using Inferential Confidence Intervals

ERIC Educational Resources Information Center

Tryon, Warren W.; Lewis, Charles

2009-01-01

Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…

3. Confidence Intervals for the Mean: To Bootstrap or Not to Bootstrap

ERIC Educational Resources Information Center

2011-01-01

The results of a simulation conducted by a research team involving undergraduate and high school students indicate that when data is symmetric the Student's "t" confidence interval for a mean is superior to the studied non-parametric bootstrap confidence intervals. When data is skewed and for sample sizes n greater than or equal to 10, the results…

4. Using Screencast Videos to Enhance Undergraduate Students' Statistical Reasoning about Confidence Intervals

ERIC Educational Resources Information Center

Strazzeri, Kenneth Charles

2013-01-01

The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students'…

5. What Confidence Intervals "Really" Do and Why They Are So Important for Middle Grades Educational Research

ERIC Educational Resources Information Center

Skidmore, Susan Troncoso

2009-01-01

Recommendations made by major educational and psychological organizations (American Educational Research Association, 2006; American Psychological Association, 2001) call for researchers to regularly report confidence intervals. The purpose of the present paper is to provide support for the use of confidence intervals. To contextualize this…

6. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number

PubMed Central

Fragkos, Konstantinos C.; Tsagris, Michail; Frangos, Christos C.

2014-01-01

The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is widely used by researchers, its statistical properties are largely unexplored. We first developed statistical theory that allowed us to produce confidence intervals for Rosenthal's fail-safe number, distinguishing whether the number of studies analysed in a meta-analysis is fixed or random; each case produces different variance estimators. For a given number of studies and a given distribution, we provide five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by simulation under different distributional assumptions. The half-normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470
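
For orientation, Rosenthal's fail-safe number itself has a simple closed form, N = (Σz_i)²/z_α² − k. The sketch below computes it with a nonparametric-bootstrap CI of the kind the abstract mentions; the z-values are invented and the paper's five analytic variance estimators are not reproduced here.

    # Hedged sketch: Rosenthal's fail-safe number plus a percentile bootstrap CI.
    import numpy as np

    def fail_safe_n(z, z_alpha=1.645):
        """Unpublished null studies needed to push the combined one-tailed
        p-value above alpha: N = (sum z)^2 / z_alpha^2 - k."""
        z = np.asarray(z, dtype=float)
        return (z.sum() ** 2) / z_alpha ** 2 - len(z)

    rng = np.random.default_rng(0)
    z = np.array([2.1, 1.8, 2.5, 1.2, 2.9, 1.6])   # hypothetical study z-values

    boot = np.array([fail_safe_n(rng.choice(z, size=len(z), replace=True))
                     for _ in range(10_000)])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"N_fs = {fail_safe_n(z):.1f}, 95% bootstrap CI = ({lo:.1f}, {hi:.1f})")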


8. Multiplicative scale uncertainties in the unified approach for constructing confidence intervals

SciTech Connect

Smith, Elton

2009-01-01

We have investigated how uncertainties in the estimation of the detection efficiency affect the 90% confidence intervals in the unified approach for constructing confidence intervals. The study has been conducted for experiments where the number of detected events is large and can be described by a Gaussian probability density function. We also assume the detection efficiency has a Gaussian probability density and study the range of relative uncertainties σ_ε between 0 and 30%. We find that the confidence intervals provide proper coverage and increase smoothly and continuously from the intervals that ignore scale uncertainties, with a quadratic dependence on σ_ε.

9. Evaluation of confidence intervals for a steady-state leaky aquifer model

USGS Publications Warehouse

Christensen, S.; Cooley, R.L.

1999-01-01

The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

10. Neutron multiplicity counting: Confidence intervals for reconstruction parameters

DOE PAGES Beta

Verbeke, Jerome M.

2016-03-09

From nuclear materials accountability to homeland security, the need for improved nuclear material detection, assay, and authentication has grown over the past decades. Starting in the 1940s, neutron multiplicity counting techniques have enabled quantitative evaluation of masses and multiplications of fissile materials. In this paper, we propose a new method to compute uncertainties on these parameters using a model-based sequential Bayesian processor, resulting in credible regions in the fissile material mass and multiplication space. These uncertainties will enable us to evaluate quantitatively proposed improvements to the theoretical fission chain model. Additionally, because the processor can calculate uncertainties in real time, it is a useful tool in applications such as portal monitoring: monitoring can stop as soon as a preset confidence of non-threat is reached.

11. Population forecasts and confidence intervals for Sweden: a comparison of model-based and empirical approaches.

PubMed

Cohen, J E

1986-02-01

This paper compares several methods of generating confidence intervals for forecasts of population size. Two rest on a demographic model for age-structured populations with stochastic fluctuations in vital rates. Two rest on empirical analyses of past forecasts of population sizes of Sweden at five-year intervals from 1780 to 1980 inclusive. Confidence intervals produced by the different methods vary substantially. The relative sizes differ in the various historical periods. The narrowest intervals offer a lower bound on uncertainty about the future. Procedures for estimating a range of confidence intervals are tentatively recommended. A major lesson is that finitely many observations of the past and incomplete theoretical understanding of the present and future can justify at best a range of confidence intervals for population projections. Uncertainty attaches not only to the point forecasts of future population, but also to the estimates of those forecasts' uncertainty. PMID:3484356

12. Estimation and confidence intervals for empirical mixing distributions

USGS Publications Warehouse

1995-01-01

Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.

13. Confidence-interval construction for rate ratio in matched-pair studies with incomplete data.

PubMed

Li, Hui-Qiong; Chan, Ivan S F; Tang, Man-Lai; Tian, Guo-Liang; Tang, Nian-Sheng

2014-01-01

Matched-pair design is often used in clinical trials to increase the efficiency of establishing equivalence between two treatments with binary outcomes. The rate ratio is one of the most frequently used indices for comparing the efficiency of two treatments in clinical trials. In this article, we consider such a design based on the rate ratio in the presence of incomplete data and propose 10 confidence-interval estimators for the rate ratio in incomplete matched-pair designs. A hybrid method that recovers the variance estimates required for the rate ratio from the confidence limits for single proportions is proposed. It is noteworthy that confidence intervals based on this hybrid method have closed-form solutions. The performance of the proposed confidence intervals is evaluated with respect to their exact coverage probability, expected confidence interval width, and distal and mesial noncoverage probability. The results show that the hybrid Agresti-Coull confidence interval based on Fieller's theorem performs satisfactorily for small to moderate sample sizes. Two real examples from clinical trials are used to illustrate the proposed confidence intervals. PMID:24697611
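
For flavor, the sketch below shows the general "hybrid"/MOVER idea of recovering variance information from single-proportion limits (here Agresti-Coull) and combining them into a closed-form ratio interval (the Donner-Zou MOVER-R form for independent proportions). It is not the article's estimator, which additionally handles incomplete matched pairs; all counts are invented.

    # Hedged sketch of a MOVER-R confidence interval for a ratio of proportions.
    import numpy as np

    def agresti_coull(x, n, z=1.96):
        p = (x + z**2 / 2) / (n + z**2)
        half = z * np.sqrt(p * (1 - p) / (n + z**2))
        return p, max(p - half, 0.0), min(p + half, 1.0)

    def mover_ratio(x1, n1, x2, n2, z=1.96):
        """CI for p1/p2 built from the two single-proportion CIs."""
        a, la, ua = agresti_coull(x1, n1, z)
        b, lb, ub = agresti_coull(x2, n2, z)
        low = (a*b - np.sqrt((a*b)**2 - la*ub*(2*a - la)*(2*b - ub))) / (ub*(2*b - ub))
        upp = (a*b + np.sqrt((a*b)**2 - ua*lb*(2*a - ua)*(2*b - lb))) / (lb*(2*b - lb))
        return low, upp

    print(mover_ratio(18, 40, 12, 40))   # hypothetical counts from two arms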

14. Bayesian methods of confidence interval construction for the population attributable risk from cross-sectional studies.

PubMed

Pirikahu, Sarah; Jones, Geoffrey; Hazelton, Martin L; Heuer, Cord

2016-08-15

Population attributable risk measures the public health impact of the removal of a risk factor. To apply this concept to epidemiological data, the calculation of a confidence interval to quantify the uncertainty in the estimate is desirable. However, perhaps because of the confusion surrounding attributable risk measures, there is no standard confidence interval or variance formula given in the literature. In this paper, we implement a fully Bayesian approach to confidence interval construction of the population attributable risk for cross-sectional studies. We show that, in comparison with a number of standard Frequentist methods for constructing confidence intervals (i.e. delta, jackknife and bootstrap methods), the Bayesian approach is superior in terms of percent coverage in all except a few cases. This paper also explores the effect of the chosen prior on the coverage and provides alternatives for particular situations. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26799685

15. Simple table for estimating confidence interval of discrepancy frequencies in microbiological safety evaluation.

PubMed

Lamy, Brigitte; Delignette-Muller, Marie Laure; Baty, Florent; Carret, Gerard

2004-01-01

We provide a simple tool to determine a confidence interval (CI) for discrepancy frequencies in microbiology validation studies, such as the technical accuracy of a qualitative test result. This tool makes it possible to determine an exact (binomial) confidence interval from an observed frequency when the normal approximation is inadequate, that is, in the case of rare events. The tool has daily applications in microbiology, and we present an example of its application to the evaluation of antimicrobial susceptibility systems. PMID:14706759
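
The exact binomial interval underlying such a table is the Clopper-Pearson interval; a minimal sketch in its standard beta-quantile form follows (the example counts are invented).

    # Clopper-Pearson exact CI for a binomial proportion.
    from scipy.stats import beta

    def clopper_pearson(x, n, alpha=0.05):
        lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
        return lo, hi

    # e.g. 2 discrepancies in 180 comparisons, where a normal approximation fails:
    print(clopper_pearson(2, 180))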

16. Evaluation of Jackknife and Bootstrap for Defining Confidence Intervals for Pairwise Agreement Measures

PubMed Central

Severiano, Ana; Carriço, João A.; Robinson, D. Ashley; Ramirez, Mário; Pinto, Francisco R.

2011-01-01

Several research fields frequently deal with the analysis of diverse classification results of the same entities. This should imply an objective detection of overlaps and divergences between the formed clusters. The congruence between classifications can be quantified by clustering agreement measures, including pairwise agreement measures. Several measures have been proposed and the importance of obtaining confidence intervals for the point estimate in the comparison of these measures has been highlighted. A broad range of methods can be used for the estimation of confidence intervals. However, evidence is lacking about what are the appropriate methods for the calculation of confidence intervals for most clustering agreement measures. Here we evaluate the resampling techniques of bootstrap and jackknife for the calculation of the confidence intervals for clustering agreement measures. Contrary to what has been shown for some statistics, simulations showed that the jackknife performs better than the bootstrap at accurately estimating confidence intervals for pairwise agreement measures, especially when the agreement between partitions is low. The coverage of the jackknife confidence interval is robust to changes in cluster number and cluster size distribution. PMID:21611165
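
A minimal sketch of the leave-one-out jackknife interval evaluated in this study, using the plain Rand index as a stand-in pairwise agreement measure (the two partitions are invented):

    # Jackknife CI for a pairwise agreement measure between two partitions.
    import numpy as np
    from itertools import combinations

    def rand_index(a, b):
        pairs = list(combinations(range(len(a)), 2))
        agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
        return agree / len(pairs)

    def jackknife_ci(a, b, z=1.96):
        n = len(a)
        full = rand_index(a, b)
        loo = np.array([rand_index(np.delete(a, i), np.delete(b, i))
                        for i in range(n)])                     # leave-one-out values
        se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
        return full - z * se, full + z * se

    a = np.array([0, 0, 1, 1, 2, 2, 0, 1])   # two hypothetical clusterings
    b = np.array([0, 0, 1, 2, 2, 2, 1, 1])
    print(jackknife_ci(a, b))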

17. Confidence Intervals for True Scores Using the Skew-Normal Distribution

ERIC Educational Resources Information Center

Garcia-Perez, Miguel A.

2010-01-01

A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

18. Quantifying uncertainty in modelled estimates of annual maximum precipitation: confidence intervals

Panagoulia, Dionysia; Economou, Polychronis; Caroni, Chrys

2016-04-01

The possible nonstationarity of the GEV distribution fitted to annual maximum precipitation under climate change is a topic of active investigation. Of particular significance is how best to construct confidence intervals for items of interest arising from stationary/nonstationary GEV models. We are usually not only interested in parameter estimates but also in quantiles of the GEV distribution, and it might be expected that estimates of extreme upper quantiles are far from being normally distributed even for moderate sample sizes. Therefore, we consider constructing confidence intervals for all quantities of interest by bootstrap methods based on resampling techniques. To this end, we examined three bootstrapping approaches to constructing confidence intervals for parameters and quantiles: random-t resampling, fixed-t resampling and the parametric bootstrap. Each approach was used in combination with the normal approximation method, percentile method, basic bootstrap method and bias-corrected method for constructing confidence intervals. We found that all the confidence intervals for the stationary model parameters have similar coverage and mean length. Confidence intervals for the more extreme quantiles tend to become very wide for all bootstrap methods. For nonstationary GEV models with linear time dependence of location or log-linear time dependence of scale, confidence interval coverage probabilities are reasonably accurate for the parameters. For the extreme percentiles, the bias-corrected and accelerated method is best overall, and the fixed-t method also has good average coverage probabilities. Reference: Panagoulia D., Economou P. and Caroni C., Stationary and non-stationary GEV modeling of extreme precipitation over a mountainous area under climate change, Environmetrics, 25 (1), 29-43, 2014.
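
Of the combinations studied, the parametric bootstrap with the percentile method is the simplest to sketch. The version below estimates a 100-year return level from synthetic annual maxima; it is an illustration under those assumptions, not the authors' implementation.

    # Parametric bootstrap percentile CI for a GEV return level (synthetic data).
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(1)
    x = genextreme.rvs(c=-0.1, loc=50, scale=12, size=60, random_state=rng)

    def return_level(sample, T=100):
        c, loc, scale = genextreme.fit(sample)
        return genextreme.ppf(1 - 1 / T, c, loc, scale)

    c, loc, scale = genextreme.fit(x)
    boot = [return_level(genextreme.rvs(c, loc=loc, scale=scale, size=len(x),
                                        random_state=rng)) for _ in range(1000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"100-yr level = {return_level(x):.1f}, 95% CI = ({lo:.1f}, {hi:.1f})")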

19. Confidence intervals for the selected population in randomized trials that adapt the population enrolled

PubMed Central

Rosenblum, Michael

2014-01-01

It is a challenge to design randomized trials when it is suspected that a treatment may benefit only certain subsets of the target population. In such situations, trial designs have been proposed that modify the population enrolled based on an interim analysis, in a preplanned manner. For example, if there is early evidence during the trial that the treatment only benefits a certain subset of the population, enrollment may then be restricted to this subset. At the end of such a trial, it is desirable to draw inferences about the selected population. We focus on constructing confidence intervals for the average treatment effect in the selected population. Confidence interval methods that fail to account for the adaptive nature of the design may fail to have the desired coverage probability. We provide a new procedure for constructing confidence intervals having at least 95% coverage probability, uniformly over a large class Q of possible data generating distributions. Our method involves computing the minimum factor c by which a standard confidence interval must be expanded in order to have, asymptotically, at least 95% coverage probability, uniformly over Q. Computing the expansion factor c is not trivial, since it is not a priori clear, for a given decision rule, which data generating distribution leads to the worst-case coverage probability. We give an algorithm that computes c, and prove an optimality property for the resulting confidence interval procedure. PMID:23553577

20. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

Shin, H.; Heo, J.; Kim, T.; Jung, Y.

2007-12-01

The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there has been little study of the confidence intervals, which indicate the prediction accuracy, for the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.

1. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

USGS Publications Warehouse

Cooley, Richard L.; Vecchia, Aldo V.

1987-01-01

A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.

2. Effective confidence interval estimation of fault-detection process of software reliability growth models

Fang, Chih-Chiang; Yeh, Chun-Wu

2016-09-01

The quantitative evaluation of a software reliability growth model is frequently accompanied by a confidence interval for fault detection. This provides helpful information to software developers and testers undertaking software development and software quality control. However, the explanation of the variance estimation of software fault detection is not transparent in previous studies, and this obscures the derivation of the confidence interval for the mean value function, which the current study addresses. Software engineers in such a case cannot evaluate the potential hazard based on the stochasticity of the mean value function, and this might reduce the practicability of the estimation. Hence, stochastic differential equations are utilised for confidence interval estimation of the software fault-detection process. The proposed model is estimated and validated using real data sets to show its flexibility.

3. A self-normalized confidence interval for the mean of a class of nonstationary processes.

PubMed

Zhao, Zhibiao

2011-01-01

We construct an asymptotic confidence interval for the mean of a class of nonstationary processes with constant mean and time-varying variances. Due to the large number of unknown parameters, traditional approaches based on consistent estimation of the limiting variance of sample mean through moving block or non-overlapping block methods are not applicable. Under a block-wise asymptotically equal cumulative variance assumption, we propose a self-normalized confidence interval that is robust against the nonstationarity and dependence structure of the data. We also apply the same idea to construct an asymptotic confidence interval for the mean difference of nonstationary processes with piecewise constant means. The proposed methods are illustrated through simulations and an application to global temperature series. PMID:24319293


5. Exact confidence intervals for channelized Hotelling observer performance in image quality studies.

PubMed

Wunderlich, Adam; Noo, Frederic; Gallas, Brandon D; Heilbrun, Marta E

2015-02-01

Task-based assessments of image quality constitute a rigorous, principled approach to the evaluation of imaging system performance. To conduct such assessments, it has been recognized that mathematical model observers are very useful, particularly for purposes of imaging system development and optimization. One type of model observer that has been widely applied in the medical imaging community is the channelized Hotelling observer (CHO), which is well-suited to known-location discrimination tasks. In the present work, we address the need for reliable confidence interval estimators of CHO performance. Specifically, we show that the bias associated with point estimates of CHO performance can be overcome by using confidence intervals proposed by Reiser for the Mahalanobis distance. In addition, we find that these intervals are well-defined with theoretically-exact coverage probabilities, which is a new result not proved by Reiser. The confidence intervals are tested with Monte Carlo simulation and demonstrated with two examples comparing X-ray CT reconstruction strategies. Moreover, commonly-used training/testing approaches are discussed and compared to the exact confidence intervals. MATLAB software implementing the estimators described in this work is publicly available at http://code.google.com/p/iqmodelo/. PMID:25265629

6. Comparison of Approaches to Constructing Confidence Intervals for Mediating Effects Using Structural Equation Models

ERIC Educational Resources Information Center

Cheung, Mike W. L.

2007-01-01

Mediators are variables that explain the association between an independent variable and a dependent variable. Structural equation modeling (SEM) is widely used to test models with mediating effects. This article illustrates how to construct confidence intervals (CIs) of the mediating effects for a variety of models in SEM. Specifically, mediating…

7. Assessing Conformance with Benford's Law: Goodness-Of-Fit Tests and Simultaneous Confidence Intervals.

PubMed

Lesperance, M; Reed, W J; Stephens, M A; Tsao, C; Wilton, B

2016-01-01

Benford's Law is a probability distribution for the first significant digits of numbers, for example, the first significant digits of the numbers 871 and 0.22 are 8 and 2 respectively. The law is particularly remarkable because many types of data are considered to be consistent with Benford's Law and scientists and investigators have applied it in diverse areas, for example, diagnostic tests for mathematical models in Biology, Genomics, Neuroscience, image analysis and fraud detection. In this article we present and compare statistically sound methods for assessing conformance of data with Benford's Law, including discrete versions of Cramér-von Mises (CvM) statistical tests and simultaneous confidence intervals. We demonstrate that the common use of many binomial confidence intervals leads to rejection of Benford too often for truly Benford data. Based on our investigation, we recommend that the CvM statistic U_d^2, Pearson's chi-square statistic and 100(1 - α)% Goodman's simultaneous confidence intervals be computed when assessing conformance with Benford's Law. Visual inspection of the data with simultaneous confidence intervals is useful for understanding departures from Benford and the influence of sample size. PMID:27018999
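
A minimal sketch of the recommended Pearson chi-square check against the first-digit law (the CvM statistics and Goodman simultaneous intervals are not reproduced; the lognormal data are synthetic and only approximately Benford):

    # Chi-square conformance check against Benford's first-digit distribution.
    import numpy as np
    from scipy.stats import chisquare

    def first_digits(x):
        x = np.abs(np.asarray(x, dtype=float))
        x = x[x > 0]
        return (x / 10 ** np.floor(np.log10(x))).astype(int)  # values 1..9

    benford = np.log10(1 + 1 / np.arange(1, 10))          # P(d) = log10(1 + 1/d)
    data = np.random.default_rng(2).lognormal(0, 2.5, 5000)
    obs = np.bincount(first_digits(data), minlength=10)[1:]
    stat, p = chisquare(obs, f_exp=benford * obs.sum())
    print(f"chi-square = {stat:.2f}, p = {p:.3f}")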

8. Sample Size Planning for the Standardized Mean Difference: Accuracy in Parameter Estimation via Narrow Confidence Intervals

ERIC Educational Resources Information Center

Kelley, Ken; Rausch, Joseph R.

2006-01-01

Methods for planning sample size (SS) for the standardized mean difference so that a narrow confidence interval (CI) can be obtained via the accuracy in parameter estimation (AIPE) approach are developed. One method plans SS so that the expected width of the CI is sufficiently narrow. A modification adjusts the SS so that the obtained CI is no…

9. Making Subjective Judgments in Quantitative Studies: The Importance of Using Effect Sizes and Confidence Intervals

ERIC Educational Resources Information Center

Callahan, Jamie L.; Reio, Thomas G., Jr.

2006-01-01

At least twenty-three journals in the social sciences purportedly require authors to report effect sizes and, to a much lesser extent, confidence intervals; yet these requirements are rarely clear in the information for contributors. This article reviews some of the literature criticizing the exclusive use of null hypothesis significance testing…


11. SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies

ERIC Educational Resources Information Center

Yurdugul, Halil

2009-01-01

This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of their confidence intervals. SIMREL can be run in two ways. In the first, if SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…

12. Characterizing the Mathematics Anxiety Literature Using Confidence Intervals as a Literature Review Mechanism

ERIC Educational Resources Information Center

Zientek, Linda Reichwein; Yetkiner, Z. Ebrar; Thompson, Bruce

2010-01-01

The authors report the contextualization of effect sizes within mathematics anxiety research, and more specifically within research using the Mathematics Anxiety Rating Scale (MARS) and the MARS for Adolescents (MARS-A). The effect sizes from 45 studies were characterized by graphing confidence intervals (CIs) across studies involving (a) adults…

13. Point Estimates and Confidence Intervals for Variable Importance in Multiple Linear Regression

ERIC Educational Resources Information Center

Thomas, D. Roland; Zhu, PengCheng; Decady, Yves J.

2007-01-01

The topic of variable importance in linear regression is reviewed, and a measure first justified theoretically by Pratt (1987) is examined in detail. Asymptotic variance estimates are used to construct individual and simultaneous confidence intervals for these importance measures. A simulation study of their coverage properties is reported, and an…

14. Confidence Intervals for an Effect Size Measure in Multiple Linear Regression

ERIC Educational Resources Information Center

Algina, James; Keselman, H. J.; Penfield, Randall D.

2007-01-01

The increase in the squared multiple correlation coefficient ([Delta]R[squared]) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. The coverage probability that an asymptotic and percentile bootstrap confidence interval includes [Delta][rho][squared] was investigated. As expected,…

15. A Method for Obtaining Standard Errors and Confidence Intervals of Composite Reliability for Congeneric Items.

ERIC Educational Resources Information Center

Raykov, Tenko

1998-01-01

Proposes a method for obtaining standard errors and confidence intervals of composite reliability coefficients based on bootstrap methods and using a structural-equation-modeling framework for estimating the composite reliability of congeneric measures (T. Raykov, 1997). Demonstrates the approach with simulated data. (SLD)

16. Sample Size for Confidence Interval of Covariate-Adjusted Mean Difference

ERIC Educational Resources Information Center

Liu, Xiaofeng Steven

2010-01-01

This article provides a way to determine adequate sample size for the confidence interval of covariate-adjusted mean difference in randomized experiments. The standard error of adjusted mean difference depends on covariate variance and balance, which are two unknown quantities at the stage of planning sample size. If covariate observations are…

17. Multivariate Effect Size Estimation: Confidence Interval Construction via Latent Variable Modeling

ERIC Educational Resources Information Center

Raykov, Tenko; Marcoulides, George A.

2010-01-01

A latent variable modeling method is outlined for constructing a confidence interval (CI) of a popular multivariate effect size measure. The procedure uses the conventional multivariate analysis of variance (MANOVA) setup and is applicable with large samples. The approach provides a population range of plausible values for the proportion of…

18. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

ERIC Educational Resources Information Center

Doebler, Anna; Doebler, Philipp; Holling, Heinz

2013-01-01

The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

19. Exact confidence intervals for the average causal effect on a binary outcome.

PubMed

Li, Xinran; Ding, Peng

2016-03-15

Based on the physical randomization of completely randomized experiments, in a recent article in Statistics in Medicine, Rigdon and Hudgens propose two approaches to obtaining exact confidence intervals for the average causal effect on a binary outcome. They construct the first confidence interval by combining, with the Bonferroni adjustment, the prediction sets for treatment effects among treatment and control groups, and the second by inverting a series of randomization tests. With sample size n, their second approach requires performing O(n^4) randomization tests. We demonstrate that the physical randomization also justifies other ways of constructing exact confidence intervals that are more computationally efficient. By exploiting recent advances in hypergeometric confidence intervals and the stochastic order information of randomization tests, we propose approaches that either do not need to invoke Monte Carlo or require performing at most O(n^2) randomization tests. We provide technical details and R code in the Supporting Information. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26833798

20. Confidence intervals for intraclass correlation coefficients in a nonlinear dose-response meta-analysis.

PubMed

Demetrashvili, Nino; Van den Heuvel, Edwin R

2015-06-01

This work is motivated by a meta-analysis case study on antipsychotic medications. The Michaelis-Menten curve is employed to model the nonlinear relationship between the dose and D2 receptor occupancy across multiple studies. An intraclass correlation coefficient (ICC) is used to quantify the heterogeneity across studies. To interpret the size of the heterogeneity, an accurate estimate of the ICC and its confidence interval is required. The goal is to apply a recently proposed generic beta-approach, developed for constructing confidence intervals on ICCs in linear mixed effects models, to nonlinear mixed effects models using four estimation methods. These estimation methods are maximum likelihood, second-order generalized estimating equations, and two two-step procedures. The beta-approach is compared with a large-sample normal approximation (delta method) and bootstrapping. The confidence intervals based on the delta method and the nonparametric percentile bootstrap with various resampling strategies failed in our settings. The beta-approach demonstrates good coverage with both two-step estimation methods and, consequently, is recommended for the computation of confidence intervals for ICCs in nonlinear mixed effects models for small studies. PMID:25703393

1. ADEQUACY OF CONFIDENCE INTERVAL ESTIMATES OF YIELD RESPONSES TO OZONE ESTIMATED FROM NCLAN DATA

EPA Science Inventory

Three methods of estimating confidence intervals for the parameters of Weibull nonlinear models are examined. These methods are based on linear approximation theory (Wald), the likelihood ratio test, and Clarke's (1987) procedures. Analyses are based on Weibull dose-response equations…
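
A sketch of the Wald (linear-approximation) method named first in this record, applied to a Weibull dose-response curve of the general form y = a·exp(−(x/b)^c); the doses, yields, and starting values below are invented, not NCLAN data.

    # Wald confidence intervals from a nonlinear least-squares fit.
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull(x, a, b, c):
        return a * np.exp(-(x / b) ** c)

    dose = np.array([0.01, 0.02, 0.04, 0.06, 0.08, 0.10])   # hypothetical ozone doses
    yield_ = np.array([99.0, 97.0, 89.0, 76.0, 62.0, 49.0])  # hypothetical yields

    popt, pcov = curve_fit(weibull, dose, yield_, p0=[100, 0.1, 2])
    se = np.sqrt(np.diag(pcov))                 # linear-approximation standard errors
    for name, est, s in zip("abc", popt, se):
        print(f"{name} = {est:.3f}, Wald 95% CI = ({est - 1.96*s:.3f}, {est + 1.96*s:.3f})")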

2. A Note on Confidence Intervals for Two-Group Latent Mean Effect Size Measures

ERIC Educational Resources Information Center

Choi, Jaehwa; Fan, Weihua; Hancock, Gregory R.

2009-01-01

This note suggests delta method implementations for deriving confidence intervals for a latent mean effect size measure for the case of 2 independent populations. A hypothetical kindergarten reading example using these implementations is provided, as is supporting LISREL syntax. (Contains 1 table.)

3. Note on a Confidence Interval for the Squared Semipartial Correlation Coefficient

ERIC Educational Resources Information Center

Algina, James; Keselman, Harvey J.; Penfield, Randall J.

2008-01-01

A squared semipartial correlation coefficient ([Delta]R[superscript 2]) is the increase in the squared multiple correlation coefficient that occurs when a predictor is added to a multiple regression model. Prior research has shown that coverage probability for a confidence interval constructed by using a modified percentile bootstrap method with…

4. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation

PubMed Central

Kisbu-Sakarya, Yasemin; MacKinnon, David P.; Miočević, Milica

2014-01-01

5. The Direct Integral Method for Confidence Intervals for the Ratio of Two Location Parameters

PubMed Central

Wang, Yanqing; Wang, Suojin; Carroll, Raymond J.

2015-01-01

In a relative risk analysis of colorectal cancer on nutrition intake scores across genders, we show that, surprisingly, when comparing the relative risks for men and women based on the index of a weighted sum of various nutrition scores, the problem reduces to forming a confidence interval for the ratio of two (asymptotically) normal random variables. The latter is an old problem, with a substantial literature. However, our simulation results suggest that existing methods often either give inaccurate coverage probabilities or have a positive probability of producing confidence intervals with infinite length. Motivated by such a problem, we develop a new methodology, which we call the Direct Integral Method for Ratios (DIMER), which, unlike the other methods, is based directly on the distribution of the ratio. In simulations, we compare this method to many others. These simulations show that, generally, DIMER more closely achieves the nominal confidence level, and in those cases where the other methods achieve the nominal levels, DIMER has comparable confidence interval lengths. The methodology is then applied to a real data set, with follow-up simulations. PMID:25939421
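
For context, the classical Fieller interval is one of the competitors DIMER is evaluated against; it solves a quadratic in the ratio and can be unbounded, which is the infinite-length behavior the abstract alludes to. A sketch with invented means and variances (DIMER itself integrates the ratio's distribution directly and is not shown):

    # Fieller interval for the ratio of two independent normal means.
    import numpy as np

    def fieller(m1, v1, m2, v2, z=1.96):
        """Solve (m1 - rho*m2)^2 <= z^2 * (v1 + rho^2 * v2) for rho."""
        a = m2**2 - z**2 * v2
        b = -2 * m1 * m2
        c = m1**2 - z**2 * v1
        disc = b**2 - 4 * a * c
        if a <= 0 or disc < 0:
            return None          # interval is unbounded: not a finite segment
        r = np.sqrt(disc)
        return ((-b - r) / (2 * a), (-b + r) / (2 * a))

    print(fieller(m1=2.0, v1=0.09, m2=1.0, v2=0.04))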

6. Finite sample pointwise confidence intervals for a survival distribution with right-censored data.

PubMed

Fay, Michael P; Brittain, Erica H

2016-07-20

We review and develop pointwise confidence intervals for a survival distribution with right-censored data for small samples, assuming only independence of censoring and survival. When there is no censoring, at each fixed time point, the problem reduces to making inferences about a binomial parameter. In this case, the recently developed beta product confidence procedure (BPCP) gives the standard exact central binomial confidence intervals of Clopper and Pearson. Additionally, the BPCP has been shown to be exact (gives guaranteed coverage at the nominal level) for progressive type II censoring and has been shown by simulation to be exact for general independent right censoring. In this paper, we modify the BPCP to create a 'mid-p' version, which reduces to the mid-p confidence interval for a binomial parameter when there is no censoring. We perform extensive simulations on both the standard and mid-p BPCP using a method of moments implementation that enforces monotonicity over time. All simulated scenarios suggest that the standard BPCP is exact. The mid-p BPCP, like other mid-p confidence intervals, has simulated coverage closer to the nominal level but may not be exact for all survival times, especially in very low censoring scenarios. In contrast, the two asymptotically-based approximations have lower than nominal coverage in many scenarios. This poor coverage is due to the extreme inflation of the lower error rates, although the upper limits are very conservative. Both the standard and the mid-p BPCP methods are available in our bpcp R package. Published 2016. This article is US Government work and is in the public domain in the USA. PMID:26891706

7. The use of latin hypercube sampling for the efficient estimation of confidence intervals

SciTech Connect

Grabaskas, D.; Denning, R.; Aldemir, T.; Nakayama, M. K.

2012-07-01

Latin hypercube sampling (LHS) has long been used as a way of assuring adequate sampling of the tails of distributions in a Monte Carlo analysis and provided the framework for the uncertainty analysis performed in the NUREG-1150 risk assessment. However, this technique has not often been used in the performance of regulatory analyses due to the inability to establish confidence levels on the quantiles of the output distribution. Recent work has demonstrated a method that makes this possible. This method is compared to the procedure of crude Monte Carlo using order statistics, which is currently used to establish confidence levels. The results of several statistical examples demonstrate that the LHS confidence interval method can provide a more accurate and precise solution, but issues remain when applying the technique generally. (authors)
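
A sketch contrasting crude Monte Carlo with Latin hypercube sampling on a toy two-input model, together with the order-statistic upper confidence bound the abstract says is currently used for crude MC (the model and sample sizes are ours):

    # LHS vs. crude Monte Carlo for a 0.95 quantile, with an order-statistic bound.
    import numpy as np
    from scipy.stats import qmc, norm, binom

    def response(u):                     # toy model: sum of two lognormal inputs
        return np.exp(norm.ppf(u)).sum(axis=1)

    n = 2000
    rng = np.random.default_rng(3)
    y_mc = response(rng.uniform(size=(n, 2)))
    y_lhs = response(qmc.LatinHypercube(d=2, seed=3).random(n))

    print(np.quantile(y_mc, 0.95), np.quantile(y_lhs, 0.95))  # point estimates

    # Crude-MC 95%-confidence upper bound on the 0.95 quantile via order statistics:
    # smallest k with P(Binomial(n, 0.95) < k) >= 0.95.
    k = binom.ppf(0.95, n, 0.95).astype(int) + 1
    print(np.sort(y_mc)[k - 1])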

8. Pointwise confidence intervals for a survival distribution with small samples or heavy censoring.

PubMed

Fay, Michael P; Brittain, Erica H; Proschan, Michael A

2013-09-01

We propose a beta product confidence procedure (BPCP) that is a non-parametric confidence procedure for the survival curve at a fixed time for right-censored data assuming independent censoring. In such situations, the Kaplan-Meier estimator is typically used with an asymptotic confidence interval (CI) that can have coverage problems when the number of observed failures is not large, and/or when testing the latter parts of the curve where there are few remaining subjects at risk. The BPCP guarantees central coverage (i.e. ensures that both one-sided error rates are no more than half of the total nominal rate) when there is no censoring (in which case it reduces to the Clopper-Pearson interval) or when there is progressive type II censoring (i.e. when censoring only occurs immediately after failures on fixed proportions of the remaining individuals). For general independent censoring, simulations show that the BPCP maintains central coverage in many situations where competing methods can have very substantial error rate inflation for the lower limit. The BPCP gives asymptotically correct coverage and is asymptotically equivalent to the CI on the Kaplan-Meier estimator using Greenwood's variance. The BPCP may be inverted to create confidence procedures for a quantile of the underlying survival distribution. Because the BPCP is easy to implement, offers protection in settings when other methods fail, and essentially matches other methods when they succeed, it should be the method of choice. PMID:23632624

9. Estimation of confidence intervals of global horizontal irradiance obtained from a weather prediction model

Ohtake, Hideaki; Gari da Silva Fonseca, Joao, Jr.; Takashima, Takumi; Oozeki, Takashi; Yamada, Yoshinori

2014-05-01

Many photovoltaic (PV) systems have been installed in Japan since the introduction of the feed-in tariff. For the energy management of electric power systems that include many PV systems, forecasts of PV power production are a useful technology. Recently, numerical weather predictions have been applied to forecast PV power production, although the forecasted values invariably contain forecast errors for each modeling system, so the forecast data must be used with their errors in mind. In this study, we attempted to estimate confidence intervals for hourly forecasts of global horizontal irradiance (GHI) obtained from a mesoscale model (MSM) developed by the Japan Meteorological Agency. In a recent study, we found that the forecasted GHI values of the MSM have two systematic errors: first, the forecast values of the GHI depend on the clearness indices, defined as the GHI values divided by the extraterrestrial solar irradiance; second, the forecast errors have seasonal variations, with overestimation of the GHI forecasts in winter and underestimation in summer. Information on the errors of the hourly GHI forecasts, that is, the confidence intervals of the forecasts, is of great significance to an electric company planning the energy management of a system that includes many PV systems. Confidence intervals of the GHI forecasts are required both for a pinpoint area and for a relatively large area controlling the power system. For the relatively large area, a spatial-smoothing method is applied to the GHI values for both the observations and the forecasts. The spatial smoothing narrowed the confidence intervals of the hourly GHI forecasts in an extreme event of GHI forecast error over the relatively large area served by the Tokyo electric company (by approximately 68% compared with a pinpoint forecast). For more credible estimation of the confidence…

10. Accuracy in Parameter Estimation for Targeted Effects in Structural Equation Modeling: Sample Size Planning for Narrow Confidence Intervals

ERIC Educational Resources Information Center

Lai, Keke; Kelley, Ken

2011-01-01

In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about…

11. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals

ERIC Educational Resources Information Center

Kelley, Ken; Lai, Keke

2011-01-01

The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively…

12. Students' Conceptual Metaphors Influence Their Statistical Reasoning about Confidence Intervals. WCER Working Paper No. 2008-5

ERIC Educational Resources Information Center

Grant, Timothy S.; Nathan, Mitchell J.

2008-01-01

Confidence intervals are beginning to play an increasing role in the reporting of research findings within the social and behavioral sciences and, consequently, are becoming more prevalent in beginning classes in statistics and research methods. Confidence intervals are an attractive means of conveying experimental results, as they contain a…

13. A Comparison of Various Stress Rupture Life Models for Orbiter Composite Pressure Vessels and Confidence Intervals

NASA Technical Reports Server (NTRS)

Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald

2007-01-01

In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter uncertainties are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.

14. A Comparison of Various Stress Rupture Life Models for Orbiter Composite Pressure Vessels and Confidence Intervals

NASA Technical Reports Server (NTRS)

Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald

2006-01-01

In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter error are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.

15. Amplitude estimation of a sine function based on confidence intervals and Bayes' theorem

Eversmann, D.; Pretz, J.; Rosenthal, M.

2016-05-01

This paper discusses the amplitude estimation using data originating from a sine-like function as probability density function. If a simple least squares fit is used, a significant bias is observed if the amplitude is small compared to its error. It is shown that a proper treatment using the Feldman-Cousins algorithm of likelihood ratios allows one to construct improved confidence intervals. Using Bayes' theorem a probability density function is derived for the amplitude. It is used in an application to show that it leads to better estimates compared to a simple least squares fit.

16. Neural network based load and price forecasting and confidence interval estimation in deregulated power markets

Zhang, Li

With the deregulation of the electric power market in New England, an independent system operator (ISO) has been separated from the New England Power Pool (NEPOOL). The ISO provides a regional spot market, with bids on various electricity-related products and services submitted by utilities and independent power producers. A utility can bid on the spot market and buy or sell electricity via bilateral transactions. Good estimation of market clearing prices (MCP) will help utilities and independent power producers determine bidding and transaction strategies with low risks, and this is crucial for utilities to compete in the deregulated environment. MCP prediction, however, is difficult since bidding strategies used by participants are complicated and MCP is a non-stationary process. The main objective of this research is to provide efficient short-term load and MCP forecasting and corresponding confidence interval estimation methodologies. In this research, the complexity of load and MCP with other factors is investigated, and neural networks are used to model the complex relationship between input and output. With an improved learning algorithm and on-line update features for load forecasting, a neural network based load forecaster was developed, and has been in daily industry use since summer 1998 with good performance. MCP is volatile because of the complexity of market behaviors. In practice, neural network based MCP predictors usually have a cascaded structure, as several key input factors need to be estimated first. In this research, the uncertainties involved in a cascaded neural network structure for MCP prediction are analyzed, and the prediction distribution under the Bayesian framework is developed. A fast algorithm to evaluate the confidence intervals by using the memoryless Quasi-Newton method is also developed. The traditional back-propagation algorithm for neural network learning needs to be improved since MCP is a non-stationary process. The extended Kalman…

17. [Abnormally broad confidence intervals in logistic regression: interpretation of results of statistical programs].

PubMed

de Irala, J; Fernandez-Crehuet Navajas, R; Serrano del Castillo, A

1997-03-01

This study describes the behavior of eight statistical programs (BMDP, EGRET, JMP, SAS, SPSS, STATA, STATISTIX, and SYSTAT) when performing a logistic regression with a simulated data set that contains a numerical problem created by the presence of a cell value equal to zero. The programs respond in different ways to this problem. Most of them give a warning, although many simultaneously present incorrect results, among which are confidence intervals that tend toward infinity. Such results can mislead the user. Various guidelines are offered for detecting these problems in actual analyses, and users are reminded of the importance of critical interpretation of the results of statistical programs. PMID:9162592
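
The numerical problem described is easy to reproduce outside any particular package: with a zero cell, the Woolf log-odds-ratio interval degenerates toward infinity. A sketch with invented counts, including the common Haldane-Anscombe 0.5 correction as one guard:

    # Zero-cell 2x2 table: the Woolf odds-ratio CI blows up without a correction.
    import numpy as np

    def woolf_or_ci(a, b, c, d, z=1.96, corr=0.0):
        a, b, c, d = (np.float64(x) + corr for x in (a, b, c, d))
        log_or = np.log(a * d / (b * c))        # infinite when a cell is zero
        se = np.sqrt(1/a + 1/b + 1/c + 1/d)
        return np.exp(log_or - z * se), np.exp(log_or + z * se)

    # exposed cases=12, exposed controls=8, unexposed cases=0, unexposed controls=20
    print(woolf_or_ci(12, 8, 0, 20))            # degenerate: (nan, inf)
    print(woolf_or_ci(12, 8, 0, 20, corr=0.5))  # finite but very wide interval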

18. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

NASA Technical Reports Server (NTRS)

Murphy, Patrick Charles

1985-01-01

An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

19. Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations.

PubMed

Greenland, Sander; Senn, Stephen J; Rothman, Kenneth J; Carlin, John B; Poole, Charles; Goodman, Steven N; Altman, Douglas G

2016-04-01

Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so-and yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting. PMID:27209009

20. Confidence intervals for demographic projections based on products of random matrices.

PubMed

Heyde, C C; Cohen, J E

1985-04-01

This work is concerned with the growth of age-structured populations whose vital rates vary stochastically in time and with the provision of confidence intervals. In this paper a model Y_{t+1}(ω) = X_{t+1}(ω) Y_t(ω) is considered, where Y_t is the (column) vector of the numbers of individuals in each age class at time t, X is a matrix of vital rates, and ω refers to a particular realization of the process that produces the vital rates. It is assumed that {X_t} is a stationary sequence of random matrices with nonnegative elements and that there is an integer n_0 such that any product X_{j+n_0}...X_{j+1}X_j has all its elements positive with probability one. Then, under mild additional conditions, strong laws of large numbers and central limit results are obtained for the logarithms of the components of Y_t. Large-sample estimators of the parameters in these limit results are derived. From these, confidence intervals on population growth and growth rates can be constructed. Various finite-sample estimators are studied numerically. The estimators are then used to study the growth of the striped bass population breeding in the Potomac River of the eastern United States. PMID:4023951
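A simulation sketch of the CLT-based interval the paper makes rigorous (Python; the two-age-class matrix, vital rates, and run length are invented for illustration, and the naive i.i.d. standard error shown ignores the serial dependence that Heyde and Cohen's estimators are designed to handle):

    import numpy as np

    rng = np.random.default_rng(1)
    Y = np.array([100.0, 50.0])          # individuals in two age classes
    increments = []
    for _ in range(2000):
        f = rng.uniform(0.8, 1.6)        # random fertility of age class 2
        X = np.array([[0.3, f],          # some reproduction by age class 1,
                      [0.7, 0.0]])       # survival 0.7 into age class 2
        Y = X @ Y
        s = Y.sum()
        increments.append(np.log(s))     # one-step log growth of total size
        Y /= s                           # renormalize to avoid under/overflow

    r_hat = np.mean(increments)          # long-run log growth rate estimate
    se = np.std(increments, ddof=1) / np.sqrt(len(increments))
    print(f"log growth rate: {r_hat:.4f} +/- {1.96 * se:.4f}")

The first row of X is kept partly positive so that matrix products eventually have all elements positive, matching the paper's positivity assumption.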

1. Accurate estimation of normal incidence absorption coefficients with confidence intervals using a scanning laser Doppler vibrometer

Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick

2009-06-01

When using optical measurements of the sound fields inside a glass tube, near the material under test, to estimate the reflection and absorption coefficients, not only these acoustical parameters but also confidence intervals can be determined. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results, obtained with this technique, is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible leading to the knowledge of confidence intervals. The use of a multi-sine, constructed on the resonance frequencies of the test tube, shows to be a very good alternative for the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.

2. Another look at confidence intervals: Proposal for a more relevant and transparent approach

Biller, Steven D.; Oser, Scott M.

2015-02-01

The behaviors of various confidence/credible interval constructions are explored, particularly in the region of low event numbers where methods diverge most. We highlight a number of challenges, such as the treatment of nuisance parameters, and common misconceptions associated with such constructions. An informal survey of the literature suggests that confidence intervals are not always defined in relevant ways and are too often misinterpreted and/or misapplied. This can lead to seemingly paradoxical behaviors and flawed comparisons regarding the relevance of experimental results. We therefore conclude that there is a need for a more pragmatic strategy which recognizes that, while it is critical to objectively convey the information content of the data, there is also a strong desire to derive bounds on model parameter values and a natural instinct to interpret things this way. Accordingly, we attempt to put aside philosophical biases in favor of a practical view to propose a more transparent and self-consistent approach that better addresses these issues.

3. Statistical variability and confidence intervals for planar dose QA pass rates

SciTech Connect

Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher; Kumaraswamy, Lalith; Podgorsak, Matthew B.

2011-11-15

Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm{sup 2} uniform grid, a 2 detector/cm{sup 2} uniform grid, and similar random detector grids. For each simulation, %Diff/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization.
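A toy reproduction of the grid-shifting experiment (Python; the synthetic dose plane, 2% noise level, and 3% global criterion are invented for illustration and are far simpler than clinical IMRT planes): subsample a high-density plane at every alignment of a 1 detector/cm^2 grid and look at the spread of pass rates.

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.arange(400) * 0.5                 # 20 cm plane on a 0.5 mm grid
    X, Y = np.meshgrid(x, x)
    calc = 100 * np.exp(-((X - 100) ** 2 + (Y - 100) ** 2) / (2 * 40 ** 2))
    meas = calc * (1 + rng.normal(0, 0.02, calc.shape))  # 2% measurement noise

    step = 20                                # 10 mm spacing = 1 detector/cm^2
    rates = []
    for dx in range(step):                   # realign the sampled grid
        for dy in range(step):
            c = calc[dx::step, dy::step]
            m = meas[dx::step, dy::step]
            passing = np.abs(m - c) <= 0.03 * calc.max()  # 3% global %Diff
            rates.append(100 * passing.mean())

    print(np.mean(rates), np.std(rates, ddof=1))
    print(np.percentile(rates, [2.5, 97.5]))  # spread from grid placement alone

The percentile spread is the confidence-interval idea in the paper: a single low-density measurement yields one draw from this distribution.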

4. Safety evaluation and confidence intervals when the number of observed events is small or zero.

PubMed

Jovanovic, B D; Zalenski, R J

1997-09-01

A common objective in many clinical studies is to determine the safety of a diagnostic test or therapeutic intervention. In these evaluations, serious adverse effects are either rare or not encountered. In this setting, the estimation of the confidence interval (CI) for the unknown proportion of adverse events has special importance. When no adverse events are encountered, commonly used approximate methods for calculating CIs cannot be applied, and such information is not commonly reported. Furthermore, when only a few adverse events are encountered, the approximate methods for calculation of CIs can be applied, but are neither appropriate nor accurate. In both situations, CIs should be computed with the use of the exact binomial distribution. We discuss the need for such estimation and provide correct methods and rules of thumb for quick computations of accurate approximations of the 95% and 99.9% CIs when the observed number of adverse events is zero. PMID:9287891
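A sketch of both recommendations (Python; the sample size of 60 is hypothetical): the exact Clopper-Pearson interval from the beta distribution, and the "rule of three" quick approximation to the one-sided 95% upper bound when zero events are observed among n subjects.

    from scipy.stats import beta

    def clopper_pearson(k, n, conf=0.95):
        """Exact two-sided CI for a binomial proportion."""
        a = 1.0 - conf
        lo = 0.0 if k == 0 else beta.ppf(a / 2, k, n - k + 1)
        hi = 1.0 if k == n else beta.ppf(1 - a / 2, k + 1, n - k)
        return lo, hi

    n = 60                                # hypothetical: 60 patients, 0 events
    print(clopper_pearson(0, n))          # exact two-sided 95% CI
    # With k = 0, the exact one-sided 95% upper bound is 1 - 0.05**(1/n),
    # which the "rule of three" approximates as 3/n:
    print(1 - 0.05 ** (1 / n), 3 / n)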

5. Confidence intervals for the symmetry point: an optimal cutpoint in continuous diagnostic tests.

PubMed

López-Ratón, Mónica; Cadarso-Suárez, Carmen; Molanes-López, Elisa M; Letón, Emilio

2016-01-01

Continuous diagnostic tests are often used for discriminating between healthy and diseased populations. For this reason, it is useful to select an appropriate discrimination threshold. There are several optimality criteria: the North-West corner, the Youden index, the concordance probability and the symmetry point, among others. In this paper, we focus on the symmetry point that maximizes simultaneously the two types of correct classifications. We construct confidence intervals for this optimal cutpoint and its associated specificity and sensitivity indexes using two approaches: one based on the generalized pivotal quantity and the other on empirical likelihood. We perform a simulation study to check the practical behaviour of both methods and illustrate their use by means of three real biomedical datasets on melanoma, prostate cancer and coronary artery disease. PMID:26756550
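A numerical sketch of the symmetry point itself (Python; the two normal test-score distributions are invented, and the percentile bootstrap shown is a simpler stand-in for the paper's generalized-pivotal-quantity and empirical-likelihood intervals):

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    # Hypothetical test: healthy ~ N(0,1), diseased ~ N(2,1); higher = diseased.
    spec = lambda c: norm.cdf(c, 0.0, 1.0)   # P(test < c | healthy)
    sens = lambda c: norm.sf(c, 2.0, 1.0)    # P(test >= c | diseased)
    c_star = brentq(lambda c: sens(c) - spec(c), -10, 10)
    print(c_star, sens(c_star))              # symmetry point; sens = spec here

    # Percentile bootstrap for the cutpoint from finite samples:
    rng = np.random.default_rng(3)
    x0, x1 = rng.normal(0, 1, 100), rng.normal(2, 1, 100)
    grid = np.linspace(-3, 5, 801)
    boot = []
    for _ in range(1000):
        b0 = rng.choice(x0, x0.size)
        b1 = rng.choice(x1, x1.size)
        gap = (b1[:, None] >= grid).mean(0) - (b0[:, None] < grid).mean(0)
        boot.append(grid[np.argmin(np.abs(gap))])
    print(np.percentile(boot, [2.5, 97.5]))  # 95% CI for the cutpoint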

6. BootES: an R package for bootstrap confidence intervals on effect sizes.

PubMed

Kirby, Kris N; Gerlanc, Daniel

2013-12-01

Bootstrap Effect Sizes (bootES; Gerlanc & Kirby, 2012) is a free, open-source software package for R (R Development Core Team, 2012), which is a language and environment for statistical computing. BootES computes both unstandardized and standardized effect sizes (such as Cohen's d, Hedges's g, and Pearson's r) and makes easily available for the first time the computation of their bootstrap confidence intervals (CIs). In this article, we illustrate how to use bootES to find effect sizes for contrasts in between-subjects, within-subjects, and mixed factorial designs and to find bootstrap CIs for correlations and differences between correlations. An appendix gives a brief introduction to R that will allow readers to use bootES without having prior knowledge of R. PMID:23519455
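bootES itself is an R package; the sketch below (Python, with invented data) mirrors its core computation for one case, a percentile bootstrap CI on Cohen's d, to make the mechanics concrete. bootES additionally offers bias-corrected-and-accelerated intervals and many other effect sizes.

    import numpy as np

    def cohens_d(x, y):
        """Standardized mean difference using the pooled SD."""
        nx, ny = len(x), len(y)
        pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                          (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
        return (np.mean(x) - np.mean(y)) / pooled

    rng = np.random.default_rng(4)
    treat = rng.normal(0.5, 1.0, 40)        # hypothetical treatment scores
    ctrl = rng.normal(0.0, 1.0, 40)         # hypothetical control scores

    boot = [cohens_d(rng.choice(treat, treat.size),
                     rng.choice(ctrl, ctrl.size))
            for _ in range(5000)]
    print(cohens_d(treat, ctrl), np.percentile(boot, [2.5, 97.5]))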

7. Maximum likelihood algorithm using an efficient scheme for computing sensitivities and parameter confidence intervals

NASA Technical Reports Server (NTRS)

Murphy, P. C.; Klein, V.

1984-01-01

Improved techniques for estimating airplane stability and control derivatives and their standard errors are presented. A maximum likelihood estimation algorithm is developed which relies on an optimization scheme referred to as a modified Newton-Raphson scheme with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared to integrating the analytically-determined sensitivity equations or using a finite difference scheme. An aircraft estimation problem is solved using real flight data to compare MNRES with the commonly used modified Newton-Raphson technique; MNRES is found to be faster and more generally applicable. Parameter standard errors are determined using a random search technique. The confidence intervals obtained are compared with Cramer-Rao lower bounds at the same confidence level. It is observed that the nonlinearity of the cost function is an important factor in the relationship between Cramer-Rao bounds and the error bounds determined by the search technique.

8. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

USGS Publications Warehouse

Hill, M.C.

1989-01-01

Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author

9. Confidence interval in estimating solute loads from a small forested catchment

2007-12-01

The evaluation of uncertainty in estimating mass flux (load) from catchments plays an important role in the evaluation of chemical weathering, TMDL implementation, and so on. Loads from catchments are estimated with many methods, such as weighted averages, rating curves, regression models, ratio estimators, and the composite method, in combination with an appropriate sampling strategy. Total solute loads for 10 months from a small forested catchment were calculated based on high-temporal-resolution data and used in evaluating the validity of 95% confidence intervals (CIs) of estimated loads. The effect of employing random and flow-stratified sampling methods on 95% CIs was also evaluated. Water quality data of the small forested catchment (12.8 ha) in Japan were collected every 15 minutes during 10 months in 2004 to acquire the 'true values' of solute loads. Those data were measured by monitoring equipment using the FIP (flow injection potentiometry) method with ion-selective electrodes. Measured indices were sodium, potassium, and chloride ions in the stream water. Water quantity (discharge rate) data were measured continuously by the V-notch weir at the catchment outlet. The Beale ratio estimator was employed as the estimation method for solute loads because it is known as an unbiased estimator. The bootstrap method was also used for calculating the 95% confidence intervals of solute loads, with 2,000 bootstrap replications. Both flow-stratified and random sampling were adopted as sampling strategies, extracting sample data sets from the entire set of observations. Discharge rate appeared to be a dominant factor in solute concentration because the catchment was almost undisturbed. The validity of the 95% CIs was evaluated using the number of inclusions of the 'true value' inside the CIs out of 1,000 estimations derived from independently and iteratively extracted sample data sets. The number of samples in each data set was set to 5,500, 950, 470, 230, 40, and 20, equivalent to hourly, 6-hourly, 12
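A condensed sketch of the estimation loop (Python; the synthetic flow and load series, the sample size of 230, and the simplified Beale formula without a finite-population correction are illustrative stand-ins for the study's 15-minute records):

    import numpy as np

    def beale_load(q_s, l_s, total_q):
        """Beale ratio estimator of total load from sampled
        (discharge, load) pairs; finite-population correction omitted."""
        n = len(q_s)
        qbar, lbar = q_s.mean(), l_s.mean()
        s_lq = np.cov(l_s, q_s, ddof=1)[0, 1]
        s_qq = q_s.var(ddof=1)
        bias = (1 + s_lq / (n * lbar * qbar)) / (1 + s_qq / (n * qbar ** 2))
        return total_q * (lbar / qbar) * bias

    rng = np.random.default_rng(5)
    q = rng.lognormal(0.0, 0.6, 28800)                  # 15-min discharge record
    l = 2.0 * q ** 1.2 * rng.lognormal(0, 0.1, q.size)  # load rises with flow

    idx = rng.choice(q.size, 230, replace=False)        # one random sampling
    sq, sl = q[idx], l[idx]
    boot = []
    for _ in range(2000):                               # bootstrap the sample
        j = rng.integers(0, sq.size, sq.size)
        boot.append(beale_load(sq[j], sl[j], q.sum()))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(l.sum(), beale_load(sq, sl, q.sum()), (lo, hi))  # truth, estimate, CI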

10. Confidence intervals after multiple imputation: combining profile likelihood information from logistic regressions.

PubMed

Heinze, Georg; Ploner, Meinhard; Beyea, Jan

2013-12-20

In the logistic regression analysis of a small-sized, case-control study on Alzheimer's disease, some of the risk factors exhibited missing values, motivating the use of multiple imputation. Usually, Rubin's rules (RR) for combining point estimates and variances would then be used to estimate (symmetric) confidence intervals (CIs), on the assumption that the regression coefficients were distributed normally. Yet, rarely is this assumption tested, with or without transformation. In analyses of small, sparse, or nearly separated data sets, such symmetric CI may not be reliable. Thus, RR alternatives have been considered, for example, Bayesian sampling methods, but not yet those that combine profile likelihoods, particularly penalized profile likelihoods, which can remove first order biases and guarantee convergence of parameter estimation. To fill the gap, we consider the combination of penalized likelihood profiles (CLIP) by expressing them as posterior cumulative distribution functions (CDFs) obtained via a chi-squared approximation to the penalized likelihood ratio statistic. CDFs from multiple imputations can then easily be averaged into a combined CDF, CDF_c, allowing confidence limits for a parameter β at level 1 - α to be identified as those β* and β** that satisfy CDF_c(β*) = α/2 and CDF_c(β**) = 1 - α/2. We demonstrate that the CLIP method outperforms RR in analyzing both simulated data and data from our motivating example. CLIP can also be useful as a confirmatory tool, should it show that the simpler RR are adequate for extended analysis. We also compare the performance of CLIP to Bayesian sampling methods using Markov chain Monte Carlo. CLIP is available in the R package logistf. PMID:23873477
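The combination-and-inversion step is easy to see in miniature (Python; the per-imputation CDFs here are stand-in normal curves purely for illustration, whereas CLIP builds them from penalized profile likelihoods, as implemented in the R package logistf):

    import numpy as np
    from scipy.stats import norm

    # Stand-in posterior CDFs for beta from five imputed data sets.
    centers = [0.9, 1.1, 1.4, 1.0, 1.2]     # hypothetical per-imputation fits
    scales = [0.30, 0.35, 0.40, 0.30, 0.33]

    grid = np.linspace(-1.0, 4.0, 2001)
    cdf_c = np.mean([norm.cdf(grid, m, s)    # average the CDFs pointwise
                     for m, s in zip(centers, scales)], axis=0)

    alpha = 0.05                             # invert the combined CDF
    lower = np.interp(alpha / 2, cdf_c, grid)        # CDF_c(beta*)  = alpha/2
    upper = np.interp(1 - alpha / 2, cdf_c, grid)    # CDF_c(beta**) = 1-alpha/2
    print(lower, upper)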

11. A Confidence Interval for the Wallace Coefficient of Concordance and Its Application to Microbial Typing Methods

PubMed Central

Pinto, Francisco R.; Melo-Cristino, José; Ramirez, Mário

2008-01-01

Very diverse research fields frequently deal with the analysis of multiple clustering results, which should imply an objective detection of overlaps and divergences between the formed groupings. The congruence between these multiple results can be quantified by clustering comparison measures such as the Wallace coefficient (W). Since the measured congruence is dependent on the particular sample taken from the population, there is variability in the estimated values relative to those of the true population. In the present work we propose the use of a confidence interval (CI) to account for this variability when W is used. The CI analytical formula is derived assuming a Gaussian sampling distribution and drawing on the algebraic relationship between W and Simpson's index of diversity. This relationship also allows the estimation of the expected Wallace value under the assumption of independence of classifications. We evaluated the CI performance using simulated and published microbial typing data sets. The simulations showed that the CI has the desired 95% coverage when W is greater than 0.5. This behaviour is robust to changes in cluster number, cluster size distributions and sample size. The analysis of the published data sets demonstrated the usefulness of the new CI by objectively validating some of the previous interpretations, while showing that other conclusions lacked statistical support. PMID:19002246
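A sketch of the coefficient and the independence benchmark (Python; the two ten-isolate partitions are invented). W is directional, and its expected value under independent classifications equals one minus the Simpson index of diversity of the second partition, the relationship the CI derivation exploits:

    from collections import Counter
    from itertools import combinations

    def wallace(a, b):
        """W(a -> b): of the pairs co-clustered by a, the fraction
        also co-clustered by b."""
        together_a = together_both = 0
        for i, j in combinations(range(len(a)), 2):
            if a[i] == a[j]:
                together_a += 1
                together_both += (b[i] == b[j])
        return together_both / together_a

    def simpson(p):
        """Simpson's index of diversity of a partition."""
        n = len(p)
        return 1 - sum(c * (c - 1) for c in Counter(p).values()) / (n * (n - 1))

    a = [1, 1, 1, 2, 2, 3, 3, 3, 4, 4]      # typing method A
    b = [1, 1, 2, 2, 2, 3, 3, 4, 4, 4]      # typing method B
    print(wallace(a, b), wallace(b, a))     # directional agreement
    print(1 - simpson(b))                   # expected W(a -> b) if independent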

12. Performance analysis of complex repairable industrial systems using PSO and fuzzy confidence interval based methodology.

PubMed

Garg, Harish

2013-03-01

The main objective of the present paper is to propose a methodology for analyzing the behavior of complex repairable industrial systems. In real-life situations, it is difficult to find the most optimal design policies for MTBF (mean time between failures), MTTR (mean time to repair) and related costs by utilizing available resources and uncertain data. For this, an availability-cost optimization model has been constructed for determining the optimal design parameters for improving the system design efficiency. The uncertainties in the data related to each component of the system are estimated with the help of fuzzy and statistical methodology in the form of triangular fuzzy numbers. Using these data, the various reliability parameters, which affect the system performance, are obtained in the form of fuzzy membership functions by the proposed confidence interval based fuzzy Lambda-Tau (CIBFLT) methodology. The computed results by CIBFLT are compared with the existing fuzzy Lambda-Tau methodology. Sensitivity analysis on the system MTBF has also been addressed. The methodology has been illustrated through a case study of a washing unit, a key component in the paper industry. PMID:23098922

13. Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane

NASA Technical Reports Server (NTRS)

Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.

2008-01-01

an economical manner. Accurately determining the service life of an engine component, with its associated variability, has become increasingly difficult. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties, to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique in combination with woven composite micromechanics, structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Since the measured data for the ceramic matrix composite properties are very limited, obtaining a probabilistic distribution with its corresponding parameters is difficult. In the case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually 90 and 95% confidence intervals are computed for material properties. Failure properties are then computed with the confidence bounds. Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength - stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.

14. A method for establishing absolute full-energy peak efficiency and its confidence interval for HPGe detectors

Rizwan, U.; Chester, A.; Domingo, T.; Starosta, K.; Williams, J.; Voss, P.

2015-12-01

A method is proposed for establishing the absolute efficiency calibration of a HPGe detector including the confidence interval in the energy range of 79.6-3451.2 keV. The calibrations were accomplished with the 133Ba, 60Co, 56Co and 152Eu point-like radioactive sources with only the 60Co source being activity calibrated to an accuracy of 2% at the 90% confidence level. All data sets measured from activity calibrated and uncalibrated sources were fit simultaneously using the linearized least squares method. The proposed fit function accounts for scaling of the data taken with activity uncalibrated sources to the data taken with the high accuracy activity calibrated source. The confidence interval for the fit was found analytically using the covariance matrix. Accuracy of the fit was below 3.5% at the 90% confidence level in the 79.6-3451.2 keV energy range.

15. Constructing bootstrap confidence intervals for principal component loadings in the presence of missing data: a multiple-imputation approach.

PubMed

van Ginkel, Joost R; Kiers, Henk A L

2011-11-01

Earlier research has shown that bootstrap confidence intervals from principal component loadings give a good coverage of the population loadings. However, this only applies to complete data. When data are incomplete, missing data have to be handled before analysing the data. Multiple imputation may be used for this purpose. The question is how bootstrap confidence intervals for principal component loadings should be corrected for multiply imputed data. In this paper, several solutions are proposed. Simulations show that the proposed corrections for multiply imputed data give a good coverage of the population loadings in various situations. PMID:21973098

16. A comparison study of modal parameter confidence intervals computed using the Monte Carlo and Bootstrap techniques

SciTech Connect

Doebling, S.W.; Farrar, C.R.; Cornwell, P.J.

1998-02-01

This paper presents a comparison of two techniques used to estimate the statistical confidence intervals on modal parameters identified from measured vibration data. The first technique is Monte Carlo simulation, which involves the repeated simulation of random data sets based on the statistics of the measured data and an assumed distribution of the variability in the measured data. A standard modal identification procedure is repeatedly applied to the randomly perturbed data sets to form a statistical distribution on the identified modal parameters. The second technique is the Bootstrap approach, where individual Frequency Response Function (FRF) measurements are randomly selected with replacement to form an ensemble average. This procedure, in effect, randomly weights the various FRF measurements. These weighted averages of the FRFs are then put through the modal identification procedure. The modal parameters identified from each randomly weighted data set are then used to define a statistical distribution for these parameters. The basic difference in the two techniques is that the Monte Carlo technique requires the assumption on the form of the distribution of the variability in the measured data, while the bootstrap technique does not. Also, the Monte Carlo technique can only estimate random errors, while the bootstrap statistics represent both random and bias (systematic) variability such as that arising from changing environmental conditions. However, the bootstrap technique requires that every frequency response function be saved for each average during the data acquisition process. Neither method can account for bias introduced during the estimation of the FRFs. This study has been motivated by a program to develop vibration-based damage identification procedures.

17. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

ERIC Educational Resources Information Center

Bonett, Douglas G.; Price, Robert M.

2012-01-01

Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…

18. Sample Size Planning for the Squared Multiple Correlation Coefficient: Accuracy in Parameter Estimation via Narrow Confidence Intervals

ERIC Educational Resources Information Center

Kelley, Ken

2008-01-01

Methods of sample size planning are developed from the accuracy in parameter estimation approach in the multiple regression context in order to obtain a sufficiently narrow confidence interval for the population squared multiple correlation coefficient when regressors are random. Approximate and exact methods are developed that provide necessary sample size…

19. Confidence Intervals for the Probability of Superiority Effect Size Measure and the Area under a Receiver Operating Characteristic Curve

ERIC Educational Resources Information Center

Ruscio, John; Mullen, Tara

2012-01-01

It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…

20. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

NASA Technical Reports Server (NTRS)

Rutledge, Charles K.

1988-01-01

The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.

1. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

ERIC Educational Resources Information Center

Odgaard, Eric C.; Fowler, Robert L.

2010-01-01

Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

2. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

ERIC Educational Resources Information Center

Algina, James; Keselman, H. J.

2008-01-01

Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

3. Effect size, confidence interval and statistical significance: a practical guide for biologists.

PubMed

Nakagawa, Shinichi; Cuthill, Innes C

2007-11-01

Null hypothesis significance testing (NHST) is the dominant statistical approach in biology, although it has many, frequently unappreciated, problems. Most importantly, NHST does not provide us with two crucial pieces of information: (1) the magnitude of an effect of interest, and (2) the precision of the estimate of the magnitude of that effect. All biologists should be ultimately interested in biological importance, which may be assessed using the magnitude of an effect, but not its statistical significance. Therefore, we advocate presentation of measures of the magnitude of effects (i.e. effect size statistics) and their confidence intervals (CIs) in all biological journals. Combined use of an effect size and its CIs enables one to assess the relationships within data more effectively than the use of p values, regardless of statistical significance. In addition, routine presentation of effect sizes will encourage researchers to view their results in the context of previous research and facilitate the incorporation of results into future meta-analysis, which has been increasingly used as the standard method of quantitative review in biology. In this article, we extensively discuss two dimensionless (and thus standardised) classes of effect size statistics: d statistics (standardised mean difference) and r statistics (correlation coefficient), because these can be calculated from almost all study designs and also because their calculations are essential for meta-analysis. However, our focus on these standardised effect size statistics does not mean unstandardised effect size statistics (e.g. mean difference and regression coefficient) are less important. We provide potential solutions for four main technical problems researchers may encounter when calculating effect size and CIs: (1) when covariates exist, (2) when bias in estimating effect size is possible, (3) when data have non-normal error structure and/or variances, and (4) when data are non

4. CONFIDENCE INTERVALS FOR A CROP YIELD LOSS FUNCTION IN NONLINEAR REGRESSION

EPA Science Inventory

Quantifying the relationship between chronic pollutant exposure and the ensuing biological response requires consideration of nonlinear functions that are flexible enough to generate a wide range of response curves. The linear approximation (i.e., Wald's) interval estimates for oz...

5. A numerical approach to 14C wiggle-match dating of organic deposits: best fits and confidence intervals

Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas

2003-06-01

14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).

6. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

PubMed

Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

2012-08-01

This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. PMID:21764476
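The same procedure ports directly to other environments; a compact Python rendering (the Gompertz growth model, noise level, and simulation count are illustrative, not taken from the paper) shows the 'virtual data set' idea: refit replicates of the fitted curve plus resampled noise, then read CIs and parameter correlations from the refits.

    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, a, b, c):               # a common growth-curve model
        return a * np.exp(-np.exp(b - c * t))

    rng = np.random.default_rng(6)
    t = np.linspace(0, 48, 25)
    y = gompertz(t, 9.0, 2.0, 0.15) + rng.normal(0, 0.2, t.size)

    p_hat, _ = curve_fit(gompertz, t, y, p0=(8.0, 1.5, 0.1))
    noise_sd = np.std(y - gompertz(t, *p_hat), ddof=3)

    sims = []
    for _ in range(1000):                    # 'virtual' data sets
        y_virt = gompertz(t, *p_hat) + rng.normal(0, noise_sd, t.size)
        sims.append(curve_fit(gompertz, t, y_virt, p0=p_hat)[0])
    sims = np.array(sims)
    print(np.percentile(sims, [2.5, 97.5], axis=0))  # per-parameter 95% CIs
    print(np.corrcoef(sims.T))                       # parameter correlations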

7. Confidence Intervals for Squared Semipartial Correlation Coefficients: The Effect of Nonnormality

ERIC Educational Resources Information Center

Algina, James; Keselman, H. J.; Penfield, Randall D.

2010-01-01

The increase in the squared multiple correlation coefficient ([delta]R[superscript 2]) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. Algina, Keselman, and Penfield found that intervals based on asymptotic principles were typically very inaccurate, even though the sample size…

8. Replication, "p[subscript rep]," and Confidence Intervals: Comment Prompted by Iverson, Wagenmakers, and Lee (2010); Lecoutre, Lecoutre, and Poitevineau (2010); and Maraun and Gabriel (2010)

ERIC Educational Resources Information Center

Cumming, Geoff

2010-01-01

This comment offers three descriptions of "p[subscript rep]" that start with a frequentist account of confidence intervals, draw on R. A. Fisher's fiducial argument, and do not make Bayesian assumptions. Links are described among "p[subscript rep]," "p" values, and the probability a confidence interval will capture the mean of a replication…

9. Statistical damage detection method for frame structures using a confidence interval

Li, Weiming; Zhu, Hongping; Luo, Hanbin; Xia, Yong

2010-03-01

A novel damage detection method is applied to a 3-story frame structure to obtain statistical quantification control criteria for the existence, location, and identification of damage. The mean, standard deviation, and exponentially weighted moving average (EWMA) are applied to detect damage information according to statistical process control (SPC) theory. It is concluded that detection is insignificant with the mean and EWMA because the structural response is neither independent nor normally distributed. On the other hand, the damage information is detected well with the standard deviation because the influence of the data distribution is not pronounced with this parameter. A suitably moderate confidence level is explored for more significant damage location and quantification, and the impact of noise is investigated to illustrate the robustness of the method.

10. Experimental optimization of the number of blocks by means of algorithms parameterized by confidence interval in popcorn breeding.

PubMed

Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A

2013-01-01

The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian genotypes of popcorn, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa), and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design utilized was a randomized complete block design with 7 repetitions. The Bootstrap method was employed to obtain samples of all possible combinations within the 7 blocks. Subsequently, the confidence intervals of the parameters of interest were calculated for all simulated data sets. The optimal number of repetitions for each trait was taken to be the smallest number for which all estimates of the parameters in question fell within the confidence interval. The estimates of the number of repetitions varied according to the parameter estimated, the variable evaluated, and the environment cultivated, ranging from 2 to 7. Only the expansion capacity traits in the Colégio Agrícola environment (for residual variance and coefficient of variation) and the number of ears per plot in the Itaocara environment (for coefficient of variation) needed 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we can conclude that 6 repetitions are optimal for obtaining high experimental precision. PMID:23913390

11. Solar PV power generation forecasting using hybrid intelligent algorithms and uncertainty quantification based on bootstrap confidence intervals

AlHakeem, Donna Ibrahim

This thesis focuses on short-term photovoltaic forecasting (STPVF) of the power generation of a solar PV system using probabilistic and deterministic forecasts. Uncertainty estimation, in the form of a probabilistic forecast, is emphasized in this thesis to quantify the uncertainties of the deterministic forecasts. Two hybrid intelligent models are proposed in two separate chapters to perform the STPVF. In Chapter 4, the framework of the proposed deterministic hybrid intelligent model is presented: a combination of the wavelet transform (WT), a data filtering technique, and a soft computing model (SCM) based on the generalized regression neural network (GRNN). The combined WT+GRNN model is utilized to conduct 1-hour-ahead forecasts of power generation for two random days in each season. The forecasts are analyzed using accuracy measures to determine the model performance and compared with another SCM. In Chapter 5, the framework of the proposed model is presented: a combination of WT, an SCM based on the radial basis function neural network (RBFNN), and population-based stochastic particle swarm optimization (PSO). Chapter 5 proposes a deterministic model, WT+RBFNN+PSO, and then a probabilistic forecast is conducted utilizing bootstrap confidence intervals to quantify uncertainty from the output of WT+RBFNN+PSO. In Chapter 5, the forecasts extend the tests done in Chapter 4: the power generation of two random days in each season is forecast 1-hour-ahead, 3-hour-ahead, and 6-hour-ahead. Additionally, different types of days are forecasted in each season, such as a sunny day (SD), a cloudy day (CD), and a rainy day (RD). These forecasts are further analyzed using accuracy measures, variance, and uncertainty estimation. The literature that is provided supports that the proposed

12. The Interpretation of Scholars' Interpretations of Confidence Intervals: Criticism, Replication, and Extension of Hoekstra et al. (2014).

PubMed

García-Pérez, Miguel A; Alcalá-Quintana, Rocío

2016-01-01

Hoekstra et al. (Psychonomic Bulletin & Review, 2014, 21:1157-1164) surveyed the interpretation of confidence intervals (CIs) by first-year students, master students, and researchers with six items expressing misinterpretations of CIs. They asked respondents to answer all items, computed the number of items endorsed, and concluded that misinterpretation of CIs is robust across groups. Their design may have produced this outcome artifactually for reasons that we describe. This paper discusses first the two interpretations of CIs and, hence, why misinterpretation cannot be inferred from endorsement of some of the items. Next, a re-analysis of Hoekstra et al.'s data reveals some puzzling differences between first-year and master students that demand further investigation. For that purpose, we designed a replication study with an extended questionnaire including two additional items that express correct interpretations of CIs (to compare endorsement of correct vs. nominally incorrect interpretations) and we asked master students to indicate which items they would have omitted had they had the option (to distinguish deliberate from uninformed endorsement caused by the forced-response format). Results showed that incognizant first-year students endorsed correct and nominally incorrect items identically, revealing that the two item types are not differentially attractive superficially; in contrast, master students were distinctively more prone to endorsing correct items when their uninformed responses were removed, although they admitted to nescience more often than might have been expected. Implications for teaching practices are discussed. PMID:27458424

13. Temperature dependence of the rate and activation parameters for tert-butyl chloride solvolysis: Monte Carlo simulation of confidence intervals

Sung, Dae Dong; Kim, Jong-Youl; Lee, Ikchoon; Chung, Sung Sik; Park, Kwon Ha

2004-07-01

The solvolysis rate constants (k_obs) of tert-butyl chloride were measured in a 20% (v/v) 2-PrOH-H2O mixture at 15 temperatures ranging from 0 to 39 °C. Examination of the temperature dependence of the rate constants by weighted least squares fitting to equations with two to four terms led to the three-term form, ln k_obs = a1 + a2 T^-1 + a3 ln T, as the best expression. The activation parameters, ΔH‡ and ΔS‡, calculated using the three constants a1, a2 and a3, revealed steady decreases of ≈1 kJ mol^-1 per degree and 3.5 J K^-1 mol^-1 per degree, respectively, as the temperature rises. The sign change of ΔS‡ at ≈20.0 °C and the large negative heat capacity of activation derived, ΔCp‡ = -1020 J K^-1 mol^-1, are interpreted to indicate an SN1 mechanism and a net change from water structure breaking to electrostrictive solvation due to the partially ionic transition state. Confidence intervals estimated by the Monte Carlo method are far more precise than those by the conventional method.
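Because the three-term form is linear in a1, a2, and a3, the fit and a Monte Carlo interval estimate take only a few lines (Python; the coefficients and noise level below are invented, not the paper's values). The near-collinearity of T^-1 and ln T over a 39-degree window is what inflates conventional interval estimates and motivates the Monte Carlo treatment.

    import numpy as np

    rng = np.random.default_rng(7)
    T = np.linspace(273.15, 312.15, 15)              # 0-39 deg C
    a1, a2, a3 = -120.0, 2000.0, 18.0                # hypothetical coefficients
    lnk = a1 + a2 / T + a3 * np.log(T) + rng.normal(0, 0.02, T.size)

    # ln k = a1 + a2/T + a3 ln T is linear in the coefficients:
    A = np.column_stack([np.ones_like(T), 1 / T, np.log(T)])
    a_hat, *_ = np.linalg.lstsq(A, lnk, rcond=None)
    s = np.std(lnk - A @ a_hat, ddof=3)              # residual scale

    sims = np.array([np.linalg.lstsq(A, A @ a_hat + rng.normal(0, s, T.size),
                                     rcond=None)[0] for _ in range(2000)])
    print(a_hat)
    print(np.percentile(sims, [2.5, 97.5], axis=0))  # Monte Carlo 95% CIs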

14. The Interpretation of Scholars' Interpretations of Confidence Intervals: Criticism, Replication, and Extension of Hoekstra et al. (2014)

PubMed Central

García-Pérez, Miguel A.; Alcalá-Quintana, Rocío

2016-01-01

Hoekstra et al. (Psychonomic Bulletin & Review, 2014, 21:1157–1164) surveyed the interpretation of confidence intervals (CIs) by first-year students, master students, and researchers with six items expressing misinterpretations of CIs. They asked respondents to answer all items, computed the number of items endorsed, and concluded that misinterpretation of CIs is robust across groups. Their design may have produced this outcome artifactually for reasons that we describe. This paper discusses first the two interpretations of CIs and, hence, why misinterpretation cannot be inferred from endorsement of some of the items. Next, a re-analysis of Hoekstra et al.'s data reveals some puzzling differences between first-year and master students that demand further investigation. For that purpose, we designed a replication study with an extended questionnaire including two additional items that express correct interpretations of CIs (to compare endorsement of correct vs. nominally incorrect interpretations) and we asked master students to indicate which items they would have omitted had they had the option (to distinguish deliberate from uninformed endorsement caused by the forced-response format). Results showed that incognizant first-year students endorsed correct and nominally incorrect items identically, revealing that the two item types are not differentially attractive superficially; in contrast, master students were distinctively more prone to endorsing correct items when their uninformed responses were removed, although they admitted to nescience more often than might have been expected. Implications for teaching practices are discussed. PMID:27458424

15. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

PubMed Central

Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

2016-01-01

Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
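A minimal rendering of the SNR-CI computation (Python; the toy evoked waveform, window placement, noise level, and exclusion criterion are invented for illustration; the paper defines its SNR measure and criterion precisely):

    import numpy as np

    rng = np.random.default_rng(8)
    n_trials = 80
    signal = np.concatenate([np.zeros(100), np.hanning(100), np.zeros(100)])
    trials = signal + rng.normal(0, 3.0, (n_trials, signal.size))

    def snr(erp):
        """RMS in a post-stimulus window over RMS in a pre-stimulus window."""
        return (np.sqrt(np.mean(erp[100:200] ** 2)) /
                np.sqrt(np.mean(erp[:100] ** 2)))

    boot = []
    for _ in range(2000):                    # resample trials with replacement
        idx = rng.integers(0, n_trials, n_trials)
        boot.append(snr(trials[idx].mean(axis=0)))

    snr_lb = np.percentile(boot, 2.5)        # lower bound of the SNR-CI
    print(snr_lb, "exclude" if snr_lb < 2.0 else "keep")  # criterion of 2.0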

16. Five-band microwave radiometer system for noninvasive brain temperature measurement in newborn babies: Phantom experiment and confidence interval

Sugiura, T.; Hirata, H.; Hand, J. W.; van Leeuwen, J. M. J.; Mizushina, S.

2011-10-01

Clinical trials of hypothermic brain treatment for newborn babies are currently hindered by the difficulty in measuring deep brain temperatures. One possible method for noninvasive and continuous temperature monitoring that is completely passive and inherently safe is passive microwave radiometry (MWR). We have developed a five-band microwave radiometer system with a single dual-polarized rectangular waveguide antenna operating within the 1-4 GHz range, and a method for retrieving the temperature profile from five radiometric brightness temperatures. This paper addresses (1) the temperature calibration of the five microwave receivers, (2) a measurement experiment using a phantom model that mimics the temperature profile in a newborn baby, and (3) the feasibility of noninvasive monitoring of deep brain temperatures. Temperature resolutions were 0.103, 0.129, 0.138, 0.105 and 0.111 K for the 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively. The precision of temperature estimation (2σ confidence interval) was about 0.7°C at a 5-cm depth from the phantom surface. Accuracy, defined as the difference between the temperature estimated by this system and that measured by a thermocouple at a depth of 5 cm, was about 2°C. The current result is not yet satisfactory for clinical application, which requires both precision and accuracy better than 1°C at a depth of 5 cm. Since a couple of possible causes of this inaccuracy have been identified, we believe that the system can take a step closer to the clinical application of MWR for hypothermic rescue treatment.

17. Effect of Minimum Cell Sizes and Confidence Interval Sizes for Special Education Subgroups on School-Level AYP Determinations. Synthesis Report 61

ERIC Educational Resources Information Center

Simpson, Mary Ann; Gong, Brian; Marion, Scott

2006-01-01

This study addresses three questions: First, considering the full group of students and the special education subgroup, what is the likely effect of minimum cell size and confidence interval size on school-level Adequate Yearly Progress (AYP) determinations? Second, what effects do the changing minimum cell sizes have on inclusion of special…

18. Confidence Interval Methods for Coefficient Alpha on the Basis of Discrete, Ordinal Response Items: Which One, If Any, Is the Best?

ERIC Educational Resources Information Center

Romano, Jeanine L.; Kromrey, Jeffrey D.; Owens, Corina M.; Scott, Heather M.

2011-01-01

In this study, the authors aimed to examine 8 of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions wherein the underlying item…

19. Confidence Intervals, Power Calculation, and Sample Size Estimation for the Squared Multiple Correlation Coefficient under the Fixed and Random Regression Models: A Computer Program and Useful Standard Tables.

ERIC Educational Resources Information Center

Mendoza, Jorge L.; Stafford, Karen L.

2001-01-01

Introduces a computer package written for Mathematica, the purpose of which is to perform a number of difficult iterative functions with respect to the squared multiple correlation coefficient under the fixed and random models. These functions include computation of the confidence interval upper and lower bounds, power calculation, calculation of…

20. Confidence interval estimation for an empirical model quantifying the effect of soil moisture and plant development on soybean (Glycine max (L.) Merr.) leaf conductance

Technology Transfer Automated Retrieval System (TEKTRAN)

In this work, we address uncertainty analysis for a model, presented in a companion paper, quantifying the effect of soil moisture and plant development on soybean (Glycine max (L.) Merr.) leaf conductance. To achieve this we present several methods for confidence interval estimation. Estimation ...

1. Corn stover semi-mechanistic enzymatic hydrolysis model with tight parameter confidence intervals for model-based process design and optimization.

PubMed

Scott, Felipe; Li, Muyang; Williams, Daniel L; Conejeros, Raúl; Hodge, David B; Aroca, Germán

2015-02-01

Uncertainty associated with the estimated values of the parameters in a model is a key piece of information for decision makers and model users. However, this information is typically not reported, or the confidence intervals are too large to be useful. A semi-mechanistic model for the enzymatic saccharification of dilute-acid-pretreated corn stover is proposed in this work; the model is a modification of an existing one, providing a statistically significant improvement in fit to a set of experimental data that includes varying initial solid loadings (10-25% w/w) and the use of the pretreatment liquor and washed solids with or without supplementation of key inhibitors. A subset of 8 out of 17 parameters was identified that shows sufficiently tight confidence intervals to be used in uncertainty propagation and model analysis, without requiring interval truncation via expert judgment. PMID:25496946

2. CONFIDENCE INTERVALS AND CURVATURE MEASURES IN NONLINEAR REGRESSION USING THE IML AND NLIN PROCEDURES IN SAS SOFTWARE

EPA Science Inventory

Interval estimates for nonlinear parameters using the linear approximation are sensitive to parameter curvature effects. The adequacy of the linear approximation (Wald) interval is determined using the nonlinearity measures of Bates and Watts (1980), and Clarke (1987b), and the pr...

3. The Confidence-Accuracy Relationship for Eyewitness Identification Decisions: Effects of Exposure Duration, Retention Interval, and Divided Attention

ERIC Educational Resources Information Center

Palmer, Matthew A.; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

2013-01-01

Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the…

4. Evaluating the Impact of Guessing and Its Interactions with Other Test Characteristics on Confidence Interval Procedures for Coefficient Alpha

ERIC Educational Resources Information Center

Paek, Insu

2016-01-01

The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test…

5. Using a Nonparametric Bootstrap to Obtain a Confidence Interval for Pearson's "r" with Cluster Randomized Data: A Case Study

ERIC Educational Resources Information Center

Wagstaff, David A.; Elek, Elvira; Kulis, Stephen; Marsiglia, Flavio

2009-01-01

A nonparametric bootstrap was used to obtain an interval estimate of Pearson's "r," and test the null hypothesis that there was no association between 5th grade students' positive substance use expectancies and their intentions to not use substances. The students were participating in a substance use prevention program in which the unit of…

6. On the Proper Estimation of the Confidence Interval for the Design Formula of Blast-Induced Vibrations with Site Records

Yan, W. M.; Yuen, Ka-Veng

2015-01-01

Blast-induced ground vibration has received much engineering and public attention. The vibration is often represented by the peak particle velocity (PPV), and the empirical approach is employed to describe the relationship between the PPV and the scaled distance. Different statistical methods are often used to obtain the confidence level of the prediction. With a known scaled distance, the amount of explosives in a planned blast can then be determined by a blast engineer when the PPV limit and the confidence level of the vibration magnitude are specified. This paper shows that these current approaches do not incorporate the posterior uncertainty of the fitting coefficients. In order to resolve this problem, a Bayesian method is proposed to derive the site-specific fitting coefficients based on a small amount of data collected at an early stage of a blasting project. More importantly, uncertainty of both the fitting coefficients and the design formula can be quantified. Data collected from a site formation project in Hong Kong are used to illustrate the performance of the proposed method. It is shown that the proposed method resolves the underestimation problem in one of the conventional approaches. The proposed approach can be easily conducted using spreadsheet calculation without the need for any additional tools, so it will be particularly welcomed by practicing engineers.
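The underestimation the paper targets is easy to demonstrate with ordinary least squares (Python; the records, coefficients, and design scaled distance are invented, and the frequentist prediction interval shown is only a stand-in for the paper's Bayesian treatment): an upper PPV limit built from the residual scatter alone is tighter than one that also carries the coefficient uncertainty.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    ln_sd = np.log(rng.uniform(5, 60, 12))           # 12 early-stage records
    ln_ppv = 7.0 - 1.6 * ln_sd + rng.normal(0, 0.35, ln_sd.size)

    X = np.column_stack([np.ones_like(ln_sd), ln_sd])
    beta, *_ = np.linalg.lstsq(X, ln_ppv, rcond=None)
    n, p = X.shape
    s2 = np.sum((ln_ppv - X @ beta) ** 2) / (n - p)
    cov = s2 * np.linalg.inv(X.T @ X)                # coefficient covariance

    x0 = np.array([1.0, np.log(30.0)])               # design PPV at SD = 30
    t95 = stats.t.ppf(0.95, n - p)                   # one-sided 95% level
    naive = x0 @ beta + t95 * np.sqrt(s2)            # residual scatter only
    full = x0 @ beta + t95 * np.sqrt(s2 + x0 @ cov @ x0)  # + coefficient term
    print(np.exp(naive), np.exp(full))               # upper PPV limits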

7. A methodology for airplane parameter estimation and confidence interval determination in nonlinear estimation problems. Ph.D. Thesis - George Washington Univ., Apr. 1985

NASA Technical Reports Server (NTRS)

Murphy, P. C.

1986-01-01

An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.
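
As a point of reference for the CR bounds discussed in this record, the sketch below fits a small nonlinear output model by least squares (equivalent to ML under Gaussian noise) and reports Wald intervals from the linearized covariance. It is not the MNRES algorithm; the model, data, and starting values are illustrative assumptions:

    import numpy as np
    from scipy import stats
    from scipy.optimize import curve_fit

    # Hypothetical output data from a nonlinear model y = a*exp(-b*t).
    rng = np.random.default_rng(1)
    t = np.linspace(0, 5, 40)
    y = 2.0 * np.exp(-0.7 * t) + rng.normal(0, 0.05, t.size)

    model = lambda t, a, b: a * np.exp(-b * t)
    # curve_fit returns least-squares (ML for Gaussian noise) estimates and
    # a covariance matrix from the linearization, i.e. a CR-style bound.
    est, cov = curve_fit(model, t, y, p0=[1.0, 1.0])

    z = stats.norm.ppf(0.975)
    for name, th, se in zip(["a", "b"], est, np.sqrt(np.diag(cov))):
        print(f"{name} = {th:.3f}, Wald 95% CI [{th - z*se:.3f}, {th + z*se:.3f}]")

The paper's point is that when the nonlinearity measure is large, intervals of this kind can disagree substantially with search-based error bounds.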

8. Prediction of the distillation temperatures of crude oils using ¹H NMR and support vector regression with estimated confidence intervals.

PubMed

Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J

2015-09-01

This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using ¹H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. PMID:26003712
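
A minimal sketch of the ensemble-interval idea, on hypothetical feature data. Note the assumptions: scikit-learn's SVR stands in for the authors' implementation, and a plain bootstrap (bagging-style) resampling loop replaces the boosting-type ensemble used in the paper:

    import numpy as np
    from sklearn.svm import SVR

    # Hypothetical spectrum-like features X and a property y (e.g. T50%).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 10))
    y = 3.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.3, 80)

    # Fit one SVR per resample; the spread of the ensemble's predictions
    # at a new point gives an empirical prediction interval.
    x_new = rng.normal(size=(1, 10))
    preds = []
    for _ in range(200):
        idx = rng.integers(0, len(y), len(y))
        preds.append(SVR(C=10.0).fit(X[idx], y[idx]).predict(x_new)[0])
    lo, hi = np.percentile(preds, [2.5, 97.5])
    print(f"ensemble 95% prediction interval: [{lo:.2f}, {hi:.2f}]")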

9. Application of non-parametric bootstrap methods to estimate confidence intervals for QTL location in a beef cattle QTL experimental population.

PubMed

Jongjoo, Kim; Davis, Scott K; Taylor, Jeremy F

2002-06-01

Empirical confidence intervals (CIs) for the estimated quantitative trait locus (QTL) location from selective and non-selective non-parametric bootstrap resampling methods were compared for a genome scan involving an Angus x Brahman reciprocal fullsib backcross population. Genetic maps, based on 357 microsatellite markers, were constructed for 29 chromosomes using CRI-MAP V2.4. Twelve growth, carcass composition and beef quality traits (n = 527-602) were analysed to detect QTLs utilizing (composite) interval mapping approaches. CIs were investigated for 28 likelihood ratio test statistic (LRT) profiles for the one QTL per chromosome model. The CIs from the non-selective bootstrap method were largest (87.7 cM average, or 79.2% coverage of test chromosomes). The Selective II procedure produced the smallest CI size (42.3 cM average). However, CI sizes from the Selective II procedure were more variable than those produced by the two LOD drop method. CI ranges from the Selective II procedure were also asymmetrical (relative to the most likely QTL position) due to the bias caused by the tendency for the estimated QTL position to be at a marker position in the bootstrap samples and due to monotonicity and asymmetry of the LRT curve in the original sample. PMID:12220133

10. Factorial-based response-surface modeling with confidence intervals for optimizing thermal-optical transmission analysis of atmospheric black carbon.

PubMed

Conny, J M; Norris, G A; Gould, T R

2009-03-01

Thermal-optical transmission (TOT) analysis measures black carbon (BC) in atmospheric aerosol on a fibrous filter. The method pyrolyzes organic carbon (OC) and employs laser light absorption to distinguish BC from the pyrolyzed OC; however, the instrument does not necessarily separate the two physically. In addition, a comprehensive temperature protocol for the analysis based on the Beer-Lambert Law remains elusive. Here, empirical response-surface modeling was used to show how the temperature protocol in TOT analysis can be modified to distinguish pyrolyzed OC from BC based on the Beer-Lambert Law. We determined the apparent specific absorption cross sections for pyrolyzed OC (σ(Char)) and BC (σ(BC)), which accounted for individual absorption enhancement effects within the filter. Response-surface models of these cross sections were derived from a three-factor central-composite factorial experimental design: temperature and duration of the high-temperature step in the helium phase, and the heating increase in the helium-oxygen phase. The response surface for σ(BC), which varied with instrument conditions, revealed a ridge indicating the correct conditions for OC pyrolysis in helium. The intersection of the σ(BC) and σ(Char) surfaces indicated the conditions where the cross sections were equivalent, satisfying an important assumption upon which the method relies. 95% confidence interval surfaces defined a confidence region for a range of pyrolysis conditions. Analyses of wintertime samples from Seattle, WA revealed a temperature between 830 °C and 850 °C as most suitable for the helium high-temperature step lasting 150 s. However, a temperature as low as 750 °C could not be rejected statistically. PMID:19216871

11. Effect of initial seed and number of samples on simple-random and Latin-Hypercube Monte Carlo probabilities (confidence interval considerations)

SciTech Connect

ROMERO,VICENTE J.

2000-05-04

In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required in comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine if they behave as true 95% CI over a spectrum of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for population sizes of 500 and 10,000 samples where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that estimated 95% CI can be rejected as not being true 95% CI at less than a 40% chance of incorrect rejection. With regard to Latin Hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be conservatively over an order of magnitude more efficient than simple random sampling (SRS) for similar sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
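
The core of the hypothesis-test logic, checking whether estimated 95% CI behave as true 95% CI, can be sketched as a small simulation. The Wald interval and the parameter values below are illustrative, not the report's exact setup:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    p_true, n, trials = 0.3, 500, 2000
    z = stats.norm.ppf(0.975)

    hits = 0
    for _ in range(trials):
        p_hat = rng.binomial(n, p_true) / n            # Monte Carlo estimate
        half = z * np.sqrt(p_hat * (1 - p_hat) / n)    # classical approximate CI
        hits += (p_hat - half <= p_true <= p_hat + half)
    print(f"observed coverage: {hits / trials:.3f} (nominal 0.95)")

Repeating this over a grid of p_true values reproduces the kind of coverage-versus-probability comparison described above.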

12. Nuclear excitation by electron transition rate confidence interval in a Hg201 local thermodynamic equilibrium plasma

Comet, M.; Gosselin, G.; Méot, V.; Morel, P.; Pain, J.-C.; Denis-Petit, D.; Gobet, F.; Hannachi, F.; Tarisien, M.; Versteegen, M.

2015-11-01

Nuclear excitation by electron transition (NEET) is predicted to be the dominant excitation process of the first Hg201 isomeric state in a laser heated plasma. This process may occur when the energy difference between a nuclear transition and an atomic transition is close to zero, provided the quantum selection rules are fulfilled. At local thermodynamic equilibrium, an average atom model may be used, in a first approach, to evaluate the NEET rate in plasma. The statistical nature of the electronic transition spectrum is then described by means of a Gaussian distribution around the average atom configuration. However, using a continuous function to describe the electronic spectrum is questionable in the framework of a resonant process, such as NEET. To assess when such a description can be relied upon to predict a NEET rate in plasma, we present in this paper a NEET rate calculation using a model derived from detailed configuration accounting. This calculation allows us to define a confidence interval of the NEET rate around its average atom mean value, which is the first step in designing a future experiment.

13. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

PubMed

Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

2015-05-01

Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. PMID:25817475
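
The 0D-versus-1D distinction can be made concrete with a small simulation: a pointwise percentile band applies the 0D rule at each node separately, while a simultaneous band widens it using the maximum standardized deviation over the whole domain. The synthetic curves and the max-statistic construction below are illustrative, not taken from the paper:

    import numpy as np

    # Hypothetical sample of 1D trajectories (e.g. force curves), shape (n, T).
    rng = np.random.default_rng(0)
    n, T, B = 20, 101, 1000
    curves = np.sin(np.linspace(0, np.pi, T)) + rng.normal(0, 0.3, (n, T))

    boot_means = np.array([curves[rng.integers(0, n, n)].mean(axis=0)
                           for _ in range(B)])
    mean = curves.mean(axis=0)
    se = boot_means.std(axis=0, ddof=1)
    pw_lo, pw_hi = np.percentile(boot_means, [2.5, 97.5], axis=0)  # pointwise
    max_dev = np.max(np.abs(boot_means - mean) / se, axis=1)
    c = np.percentile(max_dev, 95)          # simultaneous critical value
    sim_lo, sim_hi = mean - c * se, mean + c * se
    print(f"mean pointwise half-width {(pw_hi - pw_lo).mean() / 2:.3f}, "
          f"simultaneous {(sim_hi - sim_lo).mean() / 2:.3f}")

The simultaneous band is wider, which is the bias the abstract describes when 0D procedures are applied to whole trajectories.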

14. Confidence bounds on structural reliability

NASA Technical Reports Server (NTRS)

Mehta, S. R.; Cruse, T. A.; Mahadevan, S.

1993-01-01

Different approaches are described for quantifying the physical, statistical, and model uncertainties associated with the distribution parameters used in determining structural reliability. Confidence intervals on the distribution parameters of the input random variables are estimated using four algorithms to evaluate uncertainty of the response. Design intervals are evaluated using either Monte Carlo simulation or an iterative approach. A first order approach can be used to compute a first approximation of the design interval, but its accuracy is not satisfactory. The regression approach which combines the iterative approach with Monte Carlo simulation is capable of providing good results if the performance function can be accurately represented using regression analysis. It is concluded that the design interval-based approach seems to be quite general and takes into account distribution and model uncertainties.

15. Confidant Relations in Italy.

PubMed

Isaacs, Jenny; Soglian, Francesca; Hoffman, Edward

2015-02-01

Confidants are often described as the individuals with whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant, and if so, to describe various features of the relationship. The vast majority of participants (91%) reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture. PMID:27247641

16. Confidant Relations in Italy

PubMed Central

Isaacs, Jenny; Soglian, Francesca; Hoffman, Edward

2015-01-01

Confidants are often described as the individuals with whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant, and if so, to describe various features of the relationship. The vast majority of participants (91%) reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture. PMID:27247641

17. Interval Estimates of Multivariate Effect Sizes: Coverage and Interval Width Estimates under Variance Heterogeneity and Nonnormality

ERIC Educational Resources Information Center

Hess, Melinda R.; Hogarty, Kristine Y.; Ferron, John M.; Kromrey, Jeffrey D.

2007-01-01

Monte Carlo methods were used to examine techniques for constructing confidence intervals around multivariate effect sizes. Using interval inversion and bootstrapping methods, confidence intervals were constructed around the standard estimate of Mahalanobis distance (D²), two bias-adjusted estimates of D², and Huberty's…

18. Application of Sequential Interval Estimation to Adaptive Mastery Testing

ERIC Educational Resources Information Center

Chang, Yuan-chin Ivan

2005-01-01

In this paper, we apply sequential one-sided confidence interval estimation procedures with beta-protection to adaptive mastery testing. The procedures of fixed-width and fixed proportional accuracy confidence interval estimation can be viewed as extensions of one-sided confidence interval procedures. It can be shown that the adaptive mastery…

19. Interval estimates and their precision

Marek, Luboš; Vrabec, Michal

2015-06-01

A task very often met in practice is the computation of confidence interval bounds for the relative frequency when sampling without replacement. A typical situation includes pre-election estimates and similar tasks. In other words, we build the confidence interval for the parameter value M in the parent population of size N on the basis of a random sample of size n. There are many ways to build this interval. We can use a normal or binomial approximation. More accurate values can be looked up in tables. We consider one more method, based on MS Excel calculations. In our paper we compare these different methods for specific values of M and we discuss when the considered methods are suitable. The aim of the article is not to publish new theoretical methods; it aims to show that there is a very simple way to compute the confidence interval bounds without approximations, without tables and without other software costs.
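
For sampling without replacement the interval can indeed be computed exactly, by inverting hypergeometric tail probabilities; the sketch below uses SciPy rather than MS Excel, and the pre-election-style numbers are made up:

    import numpy as np
    from scipy.stats import hypergeom

    def exact_ci_M(N, n, k, alpha=0.05):
        """Exact CI for the number of successes M in a population of size N,
        given k successes in a sample of size n drawn without replacement."""
        Ms = np.arange(N + 1)
        keep_hi = hypergeom.cdf(k, N, Ms, n) > alpha / 2       # P(X <= k | M)
        keep_lo = hypergeom.sf(k - 1, N, Ms, n) > alpha / 2    # P(X >= k | M)
        return Ms[keep_lo].min(), Ms[keep_hi].max()

    # e.g. 180 favourable answers in a sample of 400 from 10,000 voters
    print(exact_ci_M(10000, 400, 180))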

20. Interval Training.

ERIC Educational Resources Information Center

President's Council on Physical Fitness and Sports, Washington, DC.

Regardless of the type of physical activity used, interval training is simply repeated periods of physical stress interspersed with recovery periods during which activity of a reduced intensity is performed. During the recovery periods, the individual usually keeps moving and does not completely recover before the next exercise interval (e.g.,…

1. Understanding Academic Confidence

ERIC Educational Resources Information Center

Sander, Paul; Sanders, Lalage

2006-01-01

This paper draws on the psychological theories of self-efficacy and the self-concept to understand students' self-confidence in academic study in higher education as measured by the Academic Behavioural Confidence scale (ABC). In doing this, expectancy-value theory and self-efficacy theory are considered and contrasted with self-concept and…

2. Confidence Intervals for Standardized Linear Contrasts of Means

ERIC Educational Resources Information Center

Bonnett, Douglas G.

2008-01-01

Most psychology journals now require authors to report a sample value of effect size along with hypothesis testing results. The sample effect size value can be misleading because it contains sampling error. Authors often incorrectly interpret the sample effect size as if it were the population effect size. A simple solution to this problem is to…

3. Technological Pedagogical Content Knowledge (TPACK) Literature Using Confidence Intervals

ERIC Educational Resources Information Center

Young, Jamaal R.; Young, Jemimah L.; Shaker, Ziad

2012-01-01

The validity and reliability of Technological Pedagogical Content Knowledge (TPACK) as a framework to measure the extent to which teachers can teach with technology hinges on the ability to aggregate results across empirical studies. The results of data collected using the survey of pre-service teacher knowledge of teaching with technology (TKTT)…

4. Epidemiology and the law: courts and confidence intervals.

PubMed Central

Christoffel, T; Teret, S P

1991-01-01

Beginning with the swine flu litigation of the early 1980s, epidemiological evidence has played an increasingly prominent role in helping the nation's courts deal with alleged causal connections between plaintiffs' diseases or other harm and exposure to specific noxious agents (such as asbestos, toxic waste, radiation, and pharmaceuticals). Judicial reliance on epidemiology has highlighted the contrast between the nature of scientific proof and of legal proof. Epidemiologists need to recognize and understand the growing involvement of their profession in complex tort litigation. PMID:1746668

5. Estimation of Confidence Intervals for Multiplication and Efficiency

SciTech Connect

Verbeke, J

2009-07-17

Helium-3 tubes are used to detect thermal neutrons by charge collection using the ³He(n,p) reaction. By analyzing the time sequence of neutrons detected by these tubes, one can determine important features about the constitution of a measured object: Some materials such as Cf-252 emit several neutrons simultaneously, while others such as uranium and plutonium isotopes multiply the number of neutrons to form bursts. This translates into unmistakable signatures. To determine the type of materials measured, one compares the measured count distribution with the one generated by a theoretical fission chain model. When the neutron background is negligible, the theoretical count distributions can be completely characterized by a pair of parameters, the multiplication M and the detection efficiency ε. While the optimal pair of M and ε can be determined by existing codes such as BigFit, the uncertainty on these parameters has not yet been fully studied. The purpose of this work is to precisely compute the uncertainties on the parameters M and ε, given the uncertainties in the count distribution. By considering different lengths of time tagged data, we will determine how the uncertainties on M and ε vary with the different count distributions.

6. Combining one-sample confidence procedures for inference in the two-sample case.

PubMed

Fay, Michael P; Proschan, Michael A; Brittain, Erica

2015-03-01

We present a simple general method for combining two one-sample confidence procedures to obtain inferences in the two-sample problem. Some applications give striking connections to established methods; for example, combining exact binomial confidence procedures gives new confidence intervals on the difference or ratio of proportions that match inferences using Fisher's exact test, and numeric studies show the associated confidence intervals bound the type I error rate. Combining exact one-sample Poisson confidence procedures recreates standard confidence intervals on the ratio, and introduces new ones for the difference. Combining confidence procedures associated with one-sample t-tests recreates the Behrens-Fisher intervals. Other applications provide new confidence intervals with fewer assumptions than previously needed. For example, the method creates new confidence intervals on the difference in medians that do not require shift and continuity assumptions. We create a new confidence interval for the difference between two survival distributions at a fixed time point when there is independent censoring by combining the recently developed beta product confidence procedure for each single sample. The resulting interval is designed to guarantee coverage regardless of sample size or censoring distribution, and produces equivalent inferences to Fisher's exact test when there is no censoring. We show theoretically that when combining intervals asymptotically equivalent to normal intervals, our method has asymptotically accurate coverage. Importantly, all situations studied suggest guaranteed nominal coverage for our new interval whenever the original confidence procedures themselves guarantee coverage. PMID:25274182
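
For the binomial application, the combination can be sketched by Monte Carlo "melding" of the one-sample beta confidence distributions. The beta parameterizations below are the standard exact-binomial forms; the sketch assumes 0 < x < n in both samples (the published procedure also handles the boundaries and uses exact quantiles), and the counts are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)

    def melded_diff_ci(x1, n1, x2, n2, alpha=0.05, B=200_000):
        """Approximate melded CI for p1 - p2 from two binomial samples."""
        L1 = rng.beta(x1, n1 - x1 + 1, B)   # lower confidence distribution, p1
        U1 = rng.beta(x1 + 1, n1 - x1, B)   # upper confidence distribution, p1
        L2 = rng.beta(x2, n2 - x2 + 1, B)
        U2 = rng.beta(x2 + 1, n2 - x2, B)
        lo = np.percentile(L1 - U2, 100 * alpha / 2)
        hi = np.percentile(U1 - L2, 100 * (1 - alpha / 2))
        return lo, hi

    print(melded_diff_ci(8, 20, 3, 25))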

7. Combining One-Sample Confidence Procedures for Inference in the Two-Sample Case

PubMed Central

Fay, Michael P.; Proschan, Michael A.; Brittain, Erica

2016-01-01

Summary We present a simple general method for combining two one-sample confidence procedures to obtain inferences in the two-sample problem. Some applications give striking connections to established methods; for example, combining exact binomial confidence procedures gives new confidence intervals on the difference or ratio of proportions that match inferences using Fisher’s exact test, and numeric studies show the associated confidence intervals bound the type I error rate. Combining exact one-sample Poisson confidence procedures recreates standard confidence intervals on the ratio, and introduces new ones for the difference. Combining confidence procedures associated with one-sample t-tests recreates the Behrens-Fisher intervals. Other applications provide new confidence intervals with fewer assumptions than previously needed. For example, the method creates new confidence intervals on the difference in medians that do not require shift and continuity assumptions. We create a new confidence interval for the difference between two survival distributions at a fixed time point when there is independent censoring by combining the recently developed beta product confidence procedure for each single sample. The resulting interval is designed to guarantee coverage regardless of sample size or censoring distribution, and produces equivalent inferences to Fisher’s exact test when there is no censoring. We show theoretically that when combining intervals asymptotically equivalent to normal intervals, our method has asymptotically accurate coverage. Importantly, all situations studied suggest guaranteed nominal coverage for our new interval whenever the original confidence procedures themselves guarantee coverage. PMID:25274182

8. Neurophysiology of perceived confidence.

PubMed

Graziano, Martin; Parra, Lucas C; Sigman, Mariano

2010-01-01

In a partial report paradigm, subjects observe during a brief presentation a cluttered field and after some time - typically ranging from 100 ms to a second - are asked to report a subset of the presented elements. A vast buffer of information is transiently available to be broadcast which, if not retrieved in time, fades rapidly without reaching consciousness. An interesting feature of this experiment is that objective performance and subjective confidence are decoupled. This makes the paradigm an ideal vehicle for understanding the brain dynamics of the construction of confidence. Here we report a high-density EEG experiment in which we infer elements of the EEG response which are indicative of subjective confidence. We find that an early response during encoding partially correlates with perceived confidence. However, the bulk of the weight of subjective confidence is determined during a late, N400-like waveform, during the retrieval stage. This shows that we can find markers of access to internal, subjective states that are uncoupled from objective response and stimulus properties of the task, and we propose that this can be used with decoding methods of EEG to infer subjective mental states. PMID:21096220

9. Confidence Calculation with AMV+

SciTech Connect

Fossum, A.F.

1999-02-19

The iterative advanced mean value algorithm (AMV+), introduced nearly ten years ago, is now widely used as a cost-effective probabilistic structural analysis tool when the use of sampling methods is cost prohibitive (Wu et al., 1990). The need to establish confidence bounds on calculated probabilities arises because of the presence of uncertainties in measured means and variances of input random variables. In this paper an algorithm is proposed that makes use of the AMV+ procedure and analytically derived probability sensitivities to determine confidence bounds on calculated probabilities.

10. Interbirth intervals

PubMed Central

Haig, David

2014-01-01

Background and objectives: Interbirth intervals (IBIs) mediate a trade-off between child number and child survival. Life history theory predicts that the evolutionarily optimal IBI differs for different individuals whose fitness is affected by how closely a mother spaces her children. The objective of the article is to clarify these conflicts and explore their implications for public health. Methodology: Simple models of inclusive fitness and kin conflict address the evolution of human birth-spacing. Results: Genes of infants generally favor longer intervals than genes of mothers, and infant genes of paternal origin generally favor longer IBIs than genes of maternal origin. Conclusions and implications: The colonization of maternal bodies by offspring cells (fetal microchimerism) raises the possibility that cells of older offspring could extend IBIs by interfering with the implantation of subsequent embryos. PMID:24480612

11. Predicting Systemic Confidence

ERIC Educational Resources Information Center

Falke, Stephanie Inez

2009-01-01

Using a mixed method approach, this study explored which educational factors predicted systemic confidence in master's level marital and family therapy (MFT) students, and whether or not the impact of these factors was influenced by student beliefs and their perception of their supervisor's beliefs about the value of systemic practice. One hundred…

12. SystemConfidence

Energy Science and Technology Software Center (ESTSC)

2012-09-25

SystemConfidence is a benchmark developed at ORNL that measures statistical variation, which the user can plot. The portions of the code that manage the collection of the histograms and compute statistics on the histograms were designed with the intent that these functions could be reused in other codes.

13. Computing Graphical Confidence Bounds

NASA Technical Reports Server (NTRS)

Mezzacappa, M. A.

1983-01-01

Approximation for graphical confidence bounds is simple enough to run on programmable calculator. Approximation is used in lieu of numerical tables not always available, and exact calculations, which often require rather sizable computer resources. Approximation verified for collection of up to 50 data points. Method used to analyze tile-strength data on Space Shuttle thermal-protection system.

14. Adding Confidence to Knowledge

ERIC Educational Resources Information Center

Goodson, Ludwika Aniela; Slater, Don; Zubovic, Yvonne

2015-01-01

A "knowledge survey" and a formative evaluation process led to major changes in an instructor's course and teaching methods over a 5-year period. Design of the survey incorporated several innovations, including: a) using "confidence survey" rather than "knowledge survey" as the title; b) completing an…

15. Reliability and Confidence.

ERIC Educational Resources Information Center

Test Service Bulletin, 1952

1952-01-01

Some aspects of test reliability are discussed. Topics covered are: (1) how high should a reliability coefficient be?; (2) two factors affecting the interpretation of reliability coefficients--range of talent and interval between testings; (3) some common misconceptions--reliability of speed tests, part vs. total reliability, reliability for what…

16. Reclaim your creative confidence.

PubMed

Kelley, Tom; Kelley, David

2012-12-01

Most people are born creative. But over time, a lot of us learn to stifle those impulses. We become warier of judgment, more cautious, more analytical. The world seems to divide into "creatives" and "noncreatives," and too many people resign themselves to the latter category. And yet we know that creativity is essential to success in any discipline or industry. The good news, according to authors Tom Kelley and David Kelley of IDEO, is that we all can rediscover our creative confidence. The trick is to overcome the four big fears that hold most of us back: fear of the messy unknown, fear of judgment, fear of the first step, and fear of losing control. The authors use an approach based on the work of psychologist Albert Bandura in helping patients get over their snake phobias: You break challenges down into small steps and then build confidence by succeeding on one after another. Creativity is something you practice, say the authors, not just a talent you are born with. PMID:23227579

17. Confidence bounds for nonlinear dose-response relationships.

PubMed

Baayen, C; Hougaard, P

2015-11-30

An important aim of drug trials is to characterize the dose-response relationship of a new compound. Such a relationship can often be described by a parametric (nonlinear) function that is monotone in dose. If such a model is fitted, it is useful to know the uncertainty of the fitted curve. It is well known that Wald confidence intervals are based on linear approximations and are often unsatisfactory in nonlinear models. Apart from incorrect coverage rates, they can be unreasonable in the sense that the lower confidence limit of the difference to placebo can be negative, even when an overall test shows a significant positive effect. Bootstrap confidence intervals solve many of the problems of the Wald confidence intervals but are computationally intensive and prone to undercoverage for small sample sizes. In this work, we propose a profile likelihood approach to compute confidence intervals for the dose-response curve. These confidence bounds have better coverage than Wald intervals and are more precise and generally faster than bootstrap methods. Moreover, if monotonicity is assumed, the profile likelihood approach takes this automatically into account. The approach is illustrated using a public dataset and simulations based on the Emax and sigmoid Emax models. PMID:26112765
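
A minimal profile-likelihood sketch for the maximal-effect parameter of an Emax model, on simulated data; the doses, noise level, and grid are illustrative assumptions. For each fixed value of emax the other parameters are re-optimized, and the interval collects the values whose profile deviance stays under the chi-square cutoff:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import chi2

    # Simulated dose-response data: y = e0 + emax*d/(ed50 + d) + noise.
    rng = np.random.default_rng(2)
    dose = np.repeat([0, 5, 10, 20, 40, 80], 6).astype(float)
    y = 1.0 + 3.0 * dose / (15.0 + dose) + rng.normal(0, 0.4, dose.size)
    n = dose.size

    def sse(params):
        e0, emax, log_ed50 = params
        return np.sum((y - e0 - emax * dose / (np.exp(log_ed50) + dose)) ** 2)

    fit = minimize(sse, x0=[0.5, 2.0, np.log(10.0)], method="Nelder-Mead")

    def profile_dev(emax):
        # profile out e0 and ed50 (and implicitly the noise variance)
        r = minimize(lambda q: sse([q[0], emax, q[1]]),
                     x0=[fit.x[0], fit.x[2]], method="Nelder-Mead")
        return n * np.log(r.fun / fit.fun)   # -2 log profile-likelihood ratio

    crit = chi2.ppf(0.95, df=1)
    inside = [e for e in np.linspace(1.0, 6.0, 121) if profile_dev(e) <= crit]
    print(f"95% profile CI for emax: [{min(inside):.2f}, {max(inside):.2f}]")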

18. Simulation integration with confidence

Strelich, Tom; Stalcup, Bruce W.

1999-07-01

Current financial, schedule and risk constraints mandate reuse of software components when building large-scale simulations. While integration of simulation components into larger systems is a well-understood process, it is extremely difficult to do while ensuring that the results are correct. Illgen Simulation Technologies Incorporated and Litton PRC have joined forces to provide tools to integrate simulations with confidence. Illgen Simulation Technologies has developed an extensible and scalable, n-tier, client-server, distributed software framework for integrating legacy simulations, models, tools, utilities, and databases. By utilizing the Internet, Java, and the Common Object Request Brokering Architecture as the core implementation technologies, the framework provides built-in scalability and extensibility.

19. Improved investor confidence

SciTech Connect

Anderson, J.

1995-10-01

Results of a financial ranking survey of power projects show reasonably strong activity when compared to previous surveys. Perhaps the most notable trend is the continued increase in the number of international deals being reported. Nearly 62 percent of the transactions reported were for non-US projects. This increase will likely expand with time as developers and lenders gain confidence in certain regions. For the remainder of 1995 and into 1996 it is likely that financial activity will continue at a steady pace. A number of projects in various markets are poised to reach financial close relatively soon. Developers, investment bankers, and governments are all gaining experience and becoming more comfortable with the process.

20. Optimally combined confidence limits

Janot, P.; Le Diberder, F.

1998-02-01

An analytical and optimal procedure to combine statistically independent sets of confidence levels on a quantity is presented. This procedure does not impose any constraint on the methods followed by each analysis to derive its own limit. It incorporates the a priori statistical power of each of the analyses to be combined, in order to optimize the overall sensitivity. It can, in particular, be used to combine the mass limits obtained by several analyses searching for the Higgs boson in different decay channels, with different selection efficiencies, mass resolution and expected background. It can also be used to combine the mass limits obtained by several experiments (e.g. ALEPH, DELPHI, L3 and OPAL, at LEP 2) independently of the method followed by each of these experiments to derive their own limit. A method to derive the limit set by one analysis is also presented, along with an unbiased prescription to optimize the expected mass limit under the no-signal hypothesis.

1. Ellenore Flood's Skills Confidence Inventory.

ERIC Educational Resources Information Center

Subich, Linda Mezydlo

1998-01-01

Presents background information on the Skills Confidence Inventory (SCI) and the construct it assesses. Interprets the skills confidence and interest profiles of a 29-year-old female high school teacher using the SCI and the Strong Interest Inventory. (MKA)

2. Confidence in Numerical Simulations

SciTech Connect

Hemez, Francois M.

2015-02-23

This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to “forecast,” that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists “think.” This thought process parallels the scientific method, whereby a hypothesis is formulated, often accompanied by simplifying assumptions, and then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. “Confidence” derives, not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

3. Confidence and Cognitive Test Performance

ERIC Educational Resources Information Center

Stankov, Lazar; Lee, Jihyun

2008-01-01

This article examines the nature of confidence in relation to abilities, personality, and metacognition. Confidence scores were collected during the administration of Reading and Listening sections of the Test of English as a Foreign Language Internet-Based Test (TOEFL iBT) to 824 native speakers of English. Those confidence scores were correlated…

4. Monitoring tigers with confidence.

PubMed

Linkie, Matthew; Guillera-Arroita, Gurutzeta; Smith, Joseph; Rayan, D Mark

2010-12-01

With only 5% of the world's wild tigers (Panthera tigris Linnaeus, 1758) remaining since the last century, conservationists urgently need to know whether or not the management strategies currently being employed are effectively protecting these tigers. This knowledge is contingent on the ability to reliably monitor tiger populations, or subsets, over space and time. In this paper, we focus on the 2 seminal methodologies (camera trap and occupancy surveys) that have enabled the monitoring of tiger populations with greater confidence. Specifically, we: (i) describe their statistical theory and application in the field; (ii) discuss issues associated with their survey designs and state variable modeling; and, (iii) discuss their future directions. These methods have had an unprecedented influence on increasing statistical rigor within tiger surveys and, also, surveys of other carnivore species. Nevertheless, only 2 published camera trap studies have gone beyond single baseline assessments and actually monitored population trends. For low density tiger populations (e.g. <1 adult tiger/100 km²) obtaining sufficient precision for state variable estimates from camera trapping remains a challenge because of insufficient detection probabilities and/or sample sizes. Occupancy surveys have overcome this problem by redefining the sampling unit (e.g. grid cells and not individual tigers). Current research is focusing on developing spatially explicit capture-mark-recapture models and estimating abundance indices from landscape-scale occupancy surveys, as well as the use of genetic information for identifying and monitoring tigers. The widespread application of these monitoring methods in the field now enables complementary studies on the impact of the different threats to tiger populations and their response to varying management intervention. PMID:21392352

5. A comparison of approximate interval estimators for the Bernoulli parameter

NASA Technical Reports Server (NTRS)

Leemis, Lawrence; Trivedi, Kishor S.

1993-01-01

The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
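
The two approximations being compared can be written down directly. In this sketch (hypothetical counts) the normal case uses the Wald form and the Poisson case inverts the exact Garwood chi-square interval for a Poisson mean, then rescales by n:

    import numpy as np
    from scipy import stats

    def normal_ci(x, n, alpha=0.05):
        """Normal approximation to the binomial (Wald interval)."""
        p = x / n
        h = stats.norm.ppf(1 - alpha / 2) * np.sqrt(p * (1 - p) / n)
        return p - h, p + h

    def poisson_ci(x, n, alpha=0.05):
        """Poisson approximation: treat x as Poisson(n*p), invert exactly."""
        lo = stats.chi2.ppf(alpha / 2, 2 * x) / 2 if x > 0 else 0.0
        hi = stats.chi2.ppf(1 - alpha / 2, 2 * x + 2) / 2
        return lo / n, hi / n

    # The Poisson form suits small p; the normal form suits moderate p
    # with larger n, which is the trade-off the paper's charts summarize.
    print(normal_ci(4, 1000))
    print(poisson_ci(4, 1000))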

6. Why Aren't They Called Probability Intervals?

ERIC Educational Resources Information Center

Devlin, Thomas F.

2008-01-01

This article offers suggestions for teaching confidence intervals, a fundamental statistical tool often misinterpreted by beginning students. A historical perspective presenting the interpretation given by their inventor is supported with examples and the use of technology. A method for determining confidence intervals for the seldom-discussed…

7. Confidence limits and their errors

SciTech Connect

Rajendran Raja

2002-03-22

Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.

8. Measuring Vaccine Confidence: Introducing a Global Vaccine Confidence Index

PubMed Central

Larson, Heidi J; Schulz, William S; Tucker, Joseph D; Smith, David M D

2015-01-01

Background. Public confidence in vaccination is vital to the success of immunisation programmes worldwide. Understanding the dynamics of vaccine confidence is therefore of great importance for global public health. Few published studies permit global comparisons of vaccination sentiments and behaviours against a common metric. This article presents the findings of a multi-country survey of confidence in vaccines and immunisation programmes in Georgia, India, Nigeria, Pakistan, and the United Kingdom (UK) – these being the first results of a larger project to map vaccine confidence globally. Methods. Data were collected from a sample of the general population and from those with children under 5 years old against a core set of confidence questions. All surveys were conducted in the relevant local language in Georgia, India, Nigeria, Pakistan, and the UK. We examine confidence in immunisation programmes as compared to confidence in other government health services, the relationships between confidence in the system and levels of vaccine hesitancy, reasons for vaccine hesitancy, ultimate vaccination decisions, and their variation based on country contexts and demographic factors. Results. The numbers of respondents by country were: Georgia (n=1000); India (n=1259); Pakistan (n=2609); UK (n=2055); Nigerian households (n=12554); and Nigerian health providers (n=1272). The UK respondents with children under five years of age were more likely to hesitate to vaccinate, compared to other countries. Confidence in immunisation programmes was more closely associated with confidence in the broader health system in the UK (Spearman’s ρ=0.5990), compared to Nigeria (ρ=0.5477), Pakistan (ρ=0.4491), and India (ρ=0.4240), all of which ranked confidence in immunisation programmes higher than confidence in the broader health system. Georgia had the highest rate of vaccine refusals (6 %) among those who reported initial hesitation. In all other countries surveyed most

9. Comparison of confidence procedures for type I censored exponential lifetimes.

PubMed

Sundberg, R

2001-12-01

In the model of type I censored exponential lifetimes, coverage probabilities are compared for a number of confidence interval constructions proposed in literature. The coverage probabilities are calculated exactly for sample sizes up to 50 and for different degrees of censoring and different degrees of intended confidence. If not only a fair two-sided coverage is desired, but also fair one-sided coverages, only a few methods are quite satisfactory. A likelihood-based interval and a third root transformation to normality work almost perfectly, but the χ²-based method that is perfect under no censoring and under type II censoring can also be advocated. PMID:11763546
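
Exact coverage computation requires integrating over the censored-exponential sampling distribution; a simulation gives the same picture more simply. The sketch below checks one common chi-square style construction (total time on test T, r observed failures), a variant of the kind compared in the paper rather than its recommended method; the parameters are illustrative:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    theta, n, tau, trials, a = 10.0, 30, 12.0, 5000, 0.05

    hits = 0
    for _ in range(trials):
        life = rng.exponential(theta, n)
        r = int((life <= tau).sum())            # failures before censoring time
        if r == 0:
            continue                            # one-sided case, skipped here
        T = np.minimum(life, tau).sum()         # total time on test
        lo = 2 * T / stats.chi2.ppf(1 - a / 2, 2 * r + 2)
        hi = 2 * T / stats.chi2.ppf(a / 2, 2 * r)
        hits += (lo <= theta <= hi)
    print(f"two-sided coverage: {hits / trials:.3f} (nominal 0.95)")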

10. Confidant Relations of the Aged.

ERIC Educational Resources Information Center

Tigges, Leann M.; And Others

The confidant relationship is a qualitatively distinct dimension of the emotional support system of the aged, yet the composition of the confidant network has been largely neglected in research on aging. Persons (N=940) 60 years of age and older were interviewed about their socio-environmental setting. From the enumeration of their relatives,…

11. Predicting confidence in flashbulb memories.

PubMed

Day, Martin V; Ross, Michael

2014-01-01

Years after a shocking news event many people confidently report details of their flashbulb memories (e.g., what they were doing). People's confidence is a defining feature of their flashbulb memories, but it is not well understood. We tested a model that predicted confidence in flashbulb memories. In particular we examined whether people's social bond with the target of a news event predicts confidence. At a first session shortly after the death of Michael Jackson participants reported their sense of attachment to Michael Jackson, as well as their flashbulb memories and emotional and other reactions to Jackson's death. At a second session approximately 18 months later they reported their flashbulb memories and confidence in those memories. Results supported our proposed model. A stronger sense of attachment to Jackson was related to reports of more initial surprise, emotion, and rehearsal during the first session. Participants' bond with Michael Jackson predicted their confidence but not the consistency of their flashbulb memories 18 months later. We also examined whether participants' initial forecasts regarding the persistence of their flashbulb memories predicted the durability of their memories. Participants' initial forecasts were more strongly related to participants' subsequent confidence than to the actual consistency of their memories. PMID:23496003

12. Confidence rating for eutrophication assessments.

PubMed

Brockmann, Uwe H; Topcu, Dilek H

2014-05-15

Confidence of monitoring data is dependent on their variability and representativeness of sampling in space and time. Whereas variability can be assessed as statistical confidence limits, representative sampling is related to equidistant sampling, considering gradients or changing rates at sampling gaps. By the proposed method both aspects are combined, resulting in balanced results for examples of total nitrogen concentrations in the German Bight/North Sea. For assessing sampling representativeness surface areas, vertical profiles and time periods are divided into regular sections for which individually the representativeness is calculated. The sums correspond to the overall representativeness of sampling in the defined area/time period. Effects of not sampled sections are estimated along parallel rows by reducing their confidence, considering their distances to next sampled sections and the interrupted gradients/changing rates. Confidence rating of time sections is based on maximum differences of sampling rates at regular time steps and related means of concentrations. PMID:24680718

13. Testing 40 Predictions from the Transtheoretical Model Again, with Confidence

ERIC Educational Resources Information Center

Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.

2013-01-01

Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…

14. QT interval in anorexia nervosa.

PubMed Central

Cooke, R A; Chambers, J B; Singh, R; Todd, G J; Smeeton, N C; Treasure, J; Treasure, T

1994-01-01

OBJECTIVES--To determine the incidence of a long QT interval as a marker for sudden death in patients with anorexia nervosa and to assess the effect of refeeding. To define a long QT interval by linear regression analysis and estimation of the upper limit of the confidence interval (95% CI) and to compare this with the commonly used Bazett rate correction formula. DESIGN--Prospective case control study. SETTING--Tertiary referral unit for eating disorders. SUBJECTS--41 consecutive patients with anorexia nervosa admitted over an 18 month period. 28 age and sex matched normal controls. MAIN OUTCOME MEASURES--Maximum QT interval measured on 12 lead electrocardiograms. RESULTS--43.6% of the variability in the QT interval was explained by heart rate alone (p < 0.00001) and group analysis contributed a further 5.9% (p = 0.004). In 6 (15%) patients the QT interval was above the upper limit of the 95% CI for the prediction based on the control equation (NS). Two patients died suddenly; both had a QT interval at or above the upper limit of the 95% CI. In patients who reached their target weights the QT interval was significantly shorter (median 9.8 ms; p = 0.04) relative to the upper limit of the 60% CI of the control regression line, which best discriminated between patients and controls. The median Bazett rate corrected QT interval (QTc) in patients and controls was 435 v 405 ms·s^-1/2 (p = 0.0004), and before and after refeeding it was 435 v 432 ms·s^-1/2 (NS). In 14 (34%) patients and three (11%) controls the QTc was > 440 ms·s^-1/2 (p = 0.053). CONCLUSIONS--The QT interval was longer in patients with anorexia nervosa than in age and sex matched controls, and there was a significant tendency to reversion to normal after refeeding. The Bazett rate correction formula overestimated the number of patients with QT prolongation and also did not show an improvement with refeeding. PMID:8068473
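
Both thresholds used in this design are easy to compute: the fixed Bazett cutoff and a control-regression upper prediction limit that adapts to heart rate. The control data below are simulated stand-ins, not the study's measurements:

    import numpy as np
    from scipy import stats

    # Simulated control data: heart rate (bpm) and QT (ms).
    rng = np.random.default_rng(3)
    hr = rng.uniform(50, 110, 28)
    qt = 520 - 2.4 * hr + rng.normal(0, 12, hr.size)

    res = stats.linregress(hr, qt)
    n = hr.size
    s = np.sqrt(np.sum((qt - res.intercept - res.slope * hr) ** 2) / (n - 2))

    def upper_limit(hr_new, conf=0.95):
        """One-sided upper prediction limit of the control regression."""
        se = s * np.sqrt(1 + 1 / n +
                         (hr_new - hr.mean()) ** 2 / np.sum((hr - hr.mean()) ** 2))
        return res.intercept + res.slope * hr_new + stats.t.ppf(conf, n - 2) * se

    qt_pat, hr_pat = 430.0, 55.0
    rr = 60.0 / hr_pat                          # RR interval in seconds
    print(f"Bazett QTc: {qt_pat / np.sqrt(rr):.0f} ms*s^-1/2 (cutoff 440)")
    print(f"regression upper limit at HR {hr_pat:.0f}: {upper_limit(hr_pat):.0f} ms")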

15. What Confidence Should Boards Give No-Confidence Votes?

ERIC Educational Resources Information Center

MacTaggart, Terrence

2012-01-01

As boards and presidents are increasingly in the vanguard of change that disturbs the status quo, they may also find themselves the targets of expressions of concern, censure, and no confidence from faculty members who may be averse to a new order of things or to the manner of bringing it about. Since presidents or other chief executives are…

16. Targeting Low Career Confidence Using the Career Planning Confidence Scale

ERIC Educational Resources Information Center

McAuliffe, Garrett; Jurgens, Jill C.; Pickering, Worth; Calliotte, James; Macera, Anthony; Zerwas, Steven

2006-01-01

The authors describe the development and validation of a test of career planning confidence that makes possible the targeting of specific problem issues in employment counseling. The scale, developed using a rational process and the authors' experience with clients, was tested for criterion-related validity against 2 other measures. The scale…

17. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making

PubMed Central

2015-01-01

Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people’s confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality. PMID:26517475

18. Addressing the vaccine confidence gap.

PubMed

Larson, Heidi J; Cooper, Louis Z; Eskola, Juhani; Katz, Samuel L; Ratzan, Scott

2011-08-01

Vaccines--often lauded as one of the greatest public health interventions--are losing public confidence. Some vaccine experts have referred to this decline in confidence as a crisis. We discuss some of the characteristics of the changing global environment that are contributing to increased public questioning of vaccines, and outline some of the specific determinants of public trust. Public decision making related to vaccine acceptance is neither driven by scientific nor economic evidence alone, but is also driven by a mix of psychological, sociocultural, and political factors, all of which need to be understood and taken into account by policy and other decision makers. Public trust in vaccines is highly variable and building trust depends on understanding perceptions of vaccines and vaccine risks, historical experiences, religious or political affiliations, and socioeconomic status. Although provision of accurate, scientifically based evidence on the risk-benefit ratios of vaccines is crucial, it is not enough to redress the gap between current levels of public confidence in vaccines and levels of trust needed to ensure adequate and sustained vaccine coverage. We call for more research not just on individual determinants of public trust, but on what mix of factors are most likely to sustain public trust. The vaccine community demands rigorous evidence on vaccine efficacy and safety and technical and operational feasibility when introducing a new vaccine, but has been negligent in demanding equally rigorous research to understand the psychological, social, and political factors that affect public trust in vaccines. PMID:21664679

19. Confidence-Based Feature Acquisition

NASA Technical Reports Server (NTRS)

Wagstaff, Kiri L.; desJardins, Marie; MacGlashan, James

2010-01-01

Confidence-based Feature Acquisition (CFA) is a novel, supervised learning method for acquiring missing feature values when there is missing data at both training (learning) and test (deployment) time. To train a machine learning classifier, data is encoded with a series of input features describing each item. In some applications, the training data may have missing values for some of the features, which can be acquired at a given cost. A relevant JPL example is that of the Mars rover exploration in which the features are obtained from a variety of different instruments, with different power consumption and integration time costs. The challenge is to decide which features will lead to increased classification performance and are therefore worth acquiring (paying the cost). To solve this problem, CFA, which is made up of two algorithms (CFA-train and CFA-predict), has been designed to greedily minimize total acquisition cost (during training and testing) while aiming for a specific accuracy level (specified as a confidence threshold). With this method, it is assumed that there is a nonempty subset of features that are free; that is, every instance in the data set includes these features initially for zero cost. It is also assumed that the feature acquisition (FA) cost associated with each feature is known in advance, and that the FA cost for a given feature is the same for all instances. Finally, CFA requires that the base-level classifiers produce not only a classification, but also a confidence (or posterior probability).

20. Overconfidence in Interval Estimates: What Does Expertise Buy You?

ERIC Educational Resources Information Center

McKenzie, Craig R. M.; Liersch, Michael J.; Yaniv, Ilan

2008-01-01

People's 90% subjective confidence intervals typically contain the true value about 50% of the time, indicating extreme overconfidence. Previous results have been mixed regarding whether experts are as overconfident as novices. Experiment 1 examined interval estimates from information technology (IT) professionals and UC San Diego (UCSD) students…

1. Test-Retest Reliability and Concurrent Validity of the Expanded Skills Confidence Inventory

ERIC Educational Resources Information Center

Robinson, Carrie H.; Betz, Nancy E.

2004-01-01

This study examined the test-retest reliability and the concurrent validity of the 17-scale Expanded Skills Confidence Inventory in samples of 321 and 175 college students. Retest values over a 3-week interval ranged from .77 to .89, with a median of .85. Using Brown and Gore's C-index, evidence for the concurrent validity of confidence score…

2. Cultural Influences on Confidence: Country and Gender.

ERIC Educational Resources Information Center

Lundeberg, Mary A.; Fox, Paul W.; Brown, Amy C.; Elbedour, Salman

2000-01-01

Investigates gender differences in postsecondary students' (N=551) confidence judgments on exam items answered correctly and incorrectly, across five countries. Large and significant differences were found in overall confidence, confidence when correct, and confidence when wrong, associated primarily with country and culture. In contrast, gender…

3. Programming with Intervals

Matsakis, Nicholas D.; Gross, Thomas R.

Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.
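
The record describes a Java/Scala library; the toy Python sketch below illustrates only the core idea that the programmer constructs the schedule directly, as a graph of intervals with happens-before edges, and any execution respecting those edges is legal. Class and method names are invented for illustration.

    # Toy model of interval-based scheduling: intervals are nodes, happens-before
    # edges are declared explicitly, and execution follows a topological order.
    # The real library runs intervals in parallel and statically checks for
    # deadlocks and data races; none of that is reproduced here.
    from graphlib import TopologicalSorter

    class Interval:
        def __init__(self, name, work):
            self.name, self.work, self.preds = name, work, set()
        def happens_after(self, other):
            self.preds.add(other)           # edge: other must finish first

    def run(intervals):
        order = TopologicalSorter({i: i.preds for i in intervals})
        for interval in order.static_order():
            interval.work()                 # every happens-before edge respected

    produce = Interval("produce", lambda: print("produce"))
    consume = Interval("consume", lambda: print("consume"))
    consume.happens_after(produce)          # a barrier-like dependency
    run([produce, consume])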

4. The integrated model of sport confidence: a canonical correlation and mediational analysis.

PubMed

Koehn, Stefan; Pearce, Alan J; Morris, Tony

2013-12-01

The main purpose of the study was to examine crucial parts of Vealey's (2001) integrated framework hypothesizing that sport confidence is a mediating variable between sources of sport confidence (including achievement, self-regulation, and social climate) and athletes' affect in competition. The sample consisted of 386 athletes, who completed the Sources of Sport Confidence Questionnaire, Trait Sport Confidence Inventory, and Dispositional Flow Scale-2. Canonical correlation analysis revealed a confidence-achievement dimension underlying flow. Bias-corrected bootstrap confidence intervals in AMOS 20.0 were used in examining mediation effects between source domains and dispositional flow. Results showed that sport confidence partially mediated the relationship between achievement and self-regulation domains and flow, whereas no significant mediation was found for social climate. On a subscale level, full mediation models emerged for achievement and flow dimensions of challenge-skills balance, clear goals, and concentration on the task at hand. PMID:24334324

5. Confidence in ASCI scientific simulations

SciTech Connect

Ang, J.A.; Trucano, T.G.; Luginbuhl, D.R.

1998-06-01

The US Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program calls for the development of high-end computing and advanced application simulations as one component of a program to eliminate reliance upon nuclear testing in the US nuclear weapons program. This paper presents results from the ASCI program's examination of needs for focused validation and verification (V and V). These V and V activities will ensure that 100 TeraOP-scale ASCI simulation code development projects apply the appropriate means to achieve high confidence in the use of simulations for stockpile assessment and certification. The authors begin with an examination of the roles of model development and validation in the traditional scientific method. The traditional view is that the scientific method has two foundations, experimental and theoretical. While the traditional scientific method does not acknowledge a role for computing and simulation, this examination establishes a foundation for extending the traditional processes to include verification and scientific software development, resulting in the notional framework known as Sargent's Framework. This framework elucidates the relationships between the processes of scientific model development, computational model verification, and simulation validation. This paper presents a discussion of the methodologies and practices that the ASCI program will use to establish confidence in large-scale scientific simulations. While the effort for a focused program in V and V is just getting started, the ASCI program has been underway for a couple of years. The authors discuss some V and V activities and preliminary results from the ALEGRA simulation code that is under development for ASCI. The breadth of physical phenomena and the advanced computational algorithms that are employed by ALEGRA make it a subject for V and V that should typify what is required for many ASCI simulations.

6. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

PubMed Central

Taylor, Douglas J.; Muller, Keith E.

2013-01-01

The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting uncertainty associated with such point estimates. Previous authors studied an asymptotically unbiased method of obtaining confidence intervals for noncentrality and power of the general linear univariate model in this setting. We provide exact confidence intervals for noncentrality, power, and sample size. Such confidence intervals, particularly one-sided intervals, help in planning a future study and in evaluating existing studies. PMID:24039272
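
The core point, that power computed from an estimated variance is itself uncertain, can be illustrated with a two-sample t-test: a chi-square confidence interval for the variance maps monotonically to a confidence interval for power. This is a simplified stand-in, not the paper's exact intervals for the general linear univariate model; delta, n, s2, and df_est below are illustrative.

    # Sketch: propagate a 95% CI for sigma^2 (chi-square based) through the
    # power function of a two-sample t-test. Power decreases in sigma, so the
    # interval endpoints swap.
    import numpy as np
    from scipy import stats

    def power(sigma, delta=1.0, n=20, alpha=0.05):
        df = 2 * (n - 1)
        nc = delta / (sigma * np.sqrt(2.0 / n))      # noncentrality parameter
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        return (1 - stats.nct.cdf(t_crit, df, nc)
                + stats.nct.cdf(-t_crit, df, nc))

    s2, df_est = 1.3, 30        # variance estimate (and its df) from a prior study
    sigma2_lo = df_est * s2 / stats.chi2.ppf(0.975, df_est)
    sigma2_hi = df_est * s2 / stats.chi2.ppf(0.025, df_est)
    print(power(np.sqrt(sigma2_hi)), power(np.sqrt(sigma2_lo)))  # [low, high] power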

7. Interval polynomial positivity

NASA Technical Reports Server (NTRS)

Bose, N. K.; Kim, K. D.

1989-01-01

It is shown that a univariate interval polynomial is globally positive if and only if two extreme polynomials are globally positive, and that the global positivity of a bivariate interval polynomial is completely determined by four extreme bivariate polynomials. The cardinality of the determining set for k-variate interval polynomials is 2^k. One of many possible generalizations, where the vertex implication for global positivity holds, is made by considering the parameter space to be the set dual of a boxed domain.
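
The univariate vertex argument can be sketched in symbols (a paraphrase, not the paper's notation): at each fixed x, the minimum of the family over the coefficient box is attained at a vertex determined by the signs of the powers of x,

    \min_{a_i \in [a_i^-, a_i^+]} \sum_{i=0}^{n} a_i x^i
        = \sum_{i : x^i \ge 0} a_i^- x^i + \sum_{i : x^i < 0} a_i^+ x^i,

so one extreme polynomial (all lower endpoints) is the pointwise minimum for x >= 0, and a second (lower endpoints on even powers, upper on odd) is the minimum for x < 0; positivity of these two vertex polynomials therefore settles the whole family, and counting sign patterns in k variables gives the 2^k determining vertices.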

8. A Mathematical Framework for Statistical Decision Confidence.

PubMed

Hangya, Balázs; Sanders, Joshua I; Kepecs, Adam

2016-09-01

Decision confidence is a forecast about the probability that a decision will be correct. From a statistical perspective, decision confidence can be defined as the Bayesian posterior probability that the chosen option is correct based on the evidence contributing to it. Here, we used this formal definition as a starting point to develop a normative statistical framework for decision confidence. Our goal was to make general predictions that do not depend on the structure of the noise or a specific algorithm for estimating confidence. We analytically proved several interrelations between statistical decision confidence and observable decision measures, such as evidence discriminability, choice, and accuracy. These interrelationships specify necessary signatures of decision confidence in terms of externally quantifiable variables that can be empirically tested. Our results lay the foundations for a mathematically rigorous treatment of decision confidence that can lead to a common framework for understanding confidence across different research domains, from human and animal behavior to neural representations. PMID:27391683
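
One signature of this framework is easy to check numerically: if confidence is the Bayesian posterior probability of being correct, then among trials with confidence near c, the fraction of correct choices should be near c. Below is a hedged Monte Carlo sketch with Gaussian evidence and equal priors; the parameter values are arbitrary, not from the paper.

    # Confidence as a Bayesian posterior: evidence ~ N(+/-mu, 1) with equal
    # priors, choice = sign(evidence), and P(correct | evidence) has the
    # closed form 1 / (1 + exp(-2 * mu * |evidence|)). Binning by confidence
    # should approximately recover accuracy (calibration).
    import numpy as np

    rng = np.random.default_rng(0)
    mu, n = 1.0, 200_000
    truth = rng.choice([-1.0, 1.0], size=n)        # hidden correct option
    evidence = truth * mu + rng.standard_normal(n)
    choice = np.sign(evidence)
    confidence = 1.0 / (1.0 + np.exp(-2.0 * mu * np.abs(evidence)))

    for lo in (0.5, 0.7, 0.9):
        sel = (confidence >= lo) & (confidence < lo + 0.2)
        print(f"confidence in [{lo:.1f}, {lo + 0.2:.1f}): "
              f"accuracy = {(choice == truth)[sel].mean():.3f}")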

9. Reducing overconfidence in the interval judgments of experts.

PubMed

Speirs-Bridge, Andrew; Fidler, Fiona; McBride, Marissa; Flander, Louisa; Cumming, Geoff; Burgman, Mark

2010-03-01

Elicitation of expert opinion is important for risk analysis when only limited data are available. Expert opinion is often elicited in the form of subjective confidence intervals; however, these are prone to substantial overconfidence. We investigated the influence of elicitation question format, in particular the number of steps in the elicitation procedure. In a 3-point elicitation procedure, an expert is asked for a lower limit, upper limit, and best guess, the two limits creating an interval of some assigned confidence level (e.g., 80%). In our 4-step interval elicitation procedure, experts were also asked for a realistic lower limit, upper limit, and best guess, but no confidence level was assigned; the fourth step was to rate their anticipated confidence in the interval produced. In our three studies, experts made interval predictions of rates of infectious diseases (Study 1, n = 21, and Study 2, n = 24: epidemiologists and public health experts) or marine invertebrate populations (Study 3, n = 34: ecologists and biologists). We combined the results from our studies using meta-analysis, which found average overconfidence of 11.9%, 95% CI [3.5, 20.3] (a hit rate of 68.1% for 80% intervals), a substantial decrease in overconfidence compared with previous studies. Studies 2 and 3 suggest that the 4-step procedure is more likely to reduce overconfidence than the 3-point procedure (Cohen's d = 0.61, [0.04, 1.18]). PMID:20030766
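
The headline figure is simply the gap between the nominal coverage of the elicited intervals and the realized hit rate:

    \text{overconfidence} = \text{assigned confidence} - \text{hit rate} = 80\% - 68.1\% = 11.9\%.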

10. Item-Specific Gender Differences in Confidence.

ERIC Educational Resources Information Center

Foote, Chandra J.

Very little research has examined gender differences in confidence in highly specified situations. More generalized studies consistently suggest that women are less confident than men (e.g., Sadker & Sadker, 1994). The few studies of gender differences in item-specific conditions indicate that men tend to be more confident in…

11. Confidence in Science: The Gender Gap.

ERIC Educational Resources Information Center

Fox, Mary Frank; Firebaugh, Glenn

1992-01-01

Analyses relationship between gender and confidence in science. Argues that, as women form larger part of labor force and tax base, scientific fields must seek to increase women's generally lower levels of confidence in science. Reports no change in trend of confidence in science between 1973 and 1989, but shows significant and widening gap…

12. Interval neural networks

SciTech Connect

Patil, R.B.

1995-05-01

Traditional neural networks such as multi-layered perceptrons (MLPs) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, the input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical back-propagation learning algorithm for interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mappings, such as noise resistance and finding a set of solutions to the function approximation problem.
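
The forward, inference-side half of the idea, propagating interval-valued inputs through a layer, is easy to sketch; the paper's interval back-propagation (training) is not reproduced here, and the weights below are random placeholders.

    # Interval propagation through one dense layer: splitting the weights by
    # sign gives tight bounds for an affine map, and a monotone activation
    # (tanh) maps bounds to bounds.
    import numpy as np

    def interval_dense(lo, hi, W, b):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

    def interval_tanh(lo, hi):              # tanh is monotone increasing
        return np.tanh(lo), np.tanh(hi)

    rng = np.random.default_rng(1)
    W, b = rng.standard_normal((3, 2)), rng.standard_normal(3)
    lo, hi = np.array([0.1, -0.2]), np.array([0.3, 0.1])   # interval inputs
    print(interval_tanh(*interval_dense(lo, hi, W, b)))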

13. Dynamics of postdecisional processing of confidence.

PubMed

Yu, Shuli; Pleskac, Timothy J; Zeigenfuse, Matthew D

2015-04-01

Most cognitive theories assume that confidence and choice happen simultaneously and are based on the same information. The 3 studies presented in this article instead show that confidence judgments can arise, at least in part, from a postdecisional evidence accumulation process. As a result of this process, increasing the time between making a choice and confidence judgment improves confidence resolution. This finding contradicts the notion that confidence judgments are biased by decision makers seeking confirmatory evidence. Further analysis reveals that the improved resolution is due to a reduction in confidence in incorrect responses, while confidence in correct responses remains relatively constant. These results are modeled with a sequential sampling process that allows evidence accumulation to continue after a choice is made and maps the amount of accumulated evidence onto a confidence rating. The cognitive modeling analysis reveals that the rate of evidence accumulation following a choice does slow relative to the rate preceding choice. The analysis also shows that the asymmetry between confidence in correct and incorrect choices is compatible with state-dependent decay in the accumulated evidence: Evidence consistent with the current state results in a deceleration of accumulated evidence and consequently evidence appears to have a decreasing impact on observed confidence. In contrast, evidence inconsistent with the current state results in an acceleration of accumulated evidence toward the opposite direction and consequently evidence appears to have an increasing impact on confidence. Taken together, this process-level understanding of confidence suggests a simple strategy for improving confidence accuracy: take a bit more time to make confidence judgments. PMID:25844627
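
A minimal simulation conveys the mechanism: a random walk accumulates to a bound (the choice), accumulation continues for some post-decision time, and confidence is read from the final state. In this stripped-down version, which omits the paper's state-dependent decay, extra accumulation lowers evidence for the choice on error trials while raising it on correct trials; all parameter values are illustrative.

    # Post-decisional accumulation sketch: choice at the bound, confidence
    # read from the evidence state after `extra` additional samples. Evidence
    # is reported relative to the chosen option (positive supports the choice).
    import numpy as np

    rng = np.random.default_rng(2)

    def trial(drift=0.1, bound=2.0, extra=0):
        x = 0.0
        while abs(x) < bound:
            x += drift + rng.standard_normal()
        choice = np.sign(x)
        for _ in range(extra):              # keep accumulating after choosing
            x += drift + rng.standard_normal()
        return choice == 1.0, x * choice    # (correct?, evidence for choice)

    for extra in (0, 20):
        trials = [trial(extra=extra) for _ in range(5000)]
        ok = np.mean([e for correct, e in trials if correct])
        err = np.mean([e for correct, e in trials if not correct])
        print(f"extra={extra:2d}: evidence for choice, "
              f"correct={ok:.2f}, error={err:.2f}")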

14. Proper Interval Vertex Deletion

Villanger, Yngve

Deleting a minimum number of vertices from a graph to obtain a proper interval graph is an NP-complete problem. At WG 2010, van Bevern et al. gave an O((14k + 14)^{k+1} kn^6) time algorithm by combining iterative compression, branching, and a greedy algorithm. We show that there exists a simple greedy O(n + m) time algorithm that solves the Proper Interval Vertex Deletion problem on {claw, net, tent, C_4, C_5, C_6}-free graphs. Combining this with branching on the forbidden structures claw, net, tent, C_4, C_5, and C_6 enables us to obtain an O(6^k kn^6) time algorithm for Proper Interval Vertex Deletion, where k is the number of deleted vertices.
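
The branching ingredient is simple enough to sketch for a single forbidden structure. The toy code below branches only on claws (K_{1,3}); the actual algorithms also branch on net, tent, C_4, C_5, and C_6 and handle the remaining graphs greedily, so this is just the skeleton of the bounded search tree.

    # Bounded search tree on one forbidden structure: find an induced claw,
    # branch on deleting each of its 4 vertices, and recurse with budget k-1.
    # Graphs are dicts mapping a vertex to its set of neighbors.
    from itertools import combinations

    def find_claw(adj):
        for v, nbrs in adj.items():
            for a, b, c in combinations(nbrs, 3):
                if b not in adj[a] and c not in adj[a] and c not in adj[b]:
                    return (v, a, b, c)     # center plus 3 independent leaves
        return None

    def claw_free_deletion(adj, k):
        """True iff deleting at most k vertices makes the graph claw-free."""
        claw = find_claw(adj)
        if claw is None:
            return True
        if k == 0:
            return False
        for v in claw:                      # some claw vertex must be deleted
            rest = {u: adj[u] - {v} for u in adj if u != v}
            if claw_free_deletion(rest, k - 1):
                return True
        return False

    print(claw_free_deletion({0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}, 1))  # True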

15. A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes

ERIC Educational Resources Information Center

Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.

2008-01-01

Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…

16. Considering Teaching History and Calculating Confidence Intervals in Student Evaluations of Teaching Quality

ERIC Educational Resources Information Center

Fraile, Rubén; Bosch-Morell, Francisco

2015-01-01

Lecturer promotion and tenure decisions are critical both for university management and for the affected lecturers. Therefore, they should be made cautiously and based on reliable information. Student evaluations of teaching quality are among the most used and analysed sources of such information. However, to date little attention has been paid in…

17. Improving Content Validation Studies Using an Asymmetric Confidence Interval for the Mean of Expert Ratings

ERIC Educational Resources Information Center

Penfield, Randall D.; Miller, Jeffrey M.

2004-01-01

As automated scoring of complex constructed-response examinations reaches operational status, the process of evaluating the quality of resultant scores, particularly in contrast to scores of expert human graders, becomes as complex as the data itself. Using a vignette from the Architectural Registration Examination (ARE), this article explores the…

18. A Comparison of Composite Reliability Estimators: Coefficient Omega Confidence Intervals in the Current Literature

ERIC Educational Resources Information Center

2016-01-01

Coefficient omega and coefficient alpha are both measures of the composite reliability of a set of items. Unlike coefficient alpha, coefficient omega remains unbiased for congeneric items with uncorrelated errors. Despite this advantage, coefficient omega is not as widely used and cited in the literature as coefficient alpha. Reasons for coefficient omega's…

19. Joint one-sided and two-sided simultaneous confidence intervals.

PubMed

Braat, S; Gerhard, D; Hothorn, L A

2008-01-01

For the analysis of multiarmed clinical trials, a set consisting of a mixture of one- and two-sided tests can often be preferred over a set of common two-sided hypothesis settings. Here we show the straightforward application of existing multiple comparison procedures for the difference and ratio of normally distributed means to complex trial designs involving one and two test directions. The proposed contrast tests provide a more flexible framework than the existing methods at nearly similar power. An application is illustrated with an example with multiple treatment doses and two active controls; statistical software code is included for R and the SAS System. PMID:18327722
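
The single-step idea behind such joint intervals can be sketched in a large-sample normal setting: simulate the joint distribution of the contrast statistics, pick one critical value that achieves 95% simultaneous coverage for the chosen mix of directions, and read off the intervals. The paper's procedures are multivariate-t based and also cover ratios of means; the numbers below are invented for illustration.

    # Joint one- and two-sided simultaneous 95% intervals via one common
    # critical value c: two-sided contrasts contribute |z|, one-sided ones
    # contribute z, and c is the 95% quantile of the maximum.
    import numpy as np

    rng = np.random.default_rng(3)
    est = np.array([1.2, 0.8, 0.5])          # contrast estimates (illustrative)
    se = np.array([0.30, 0.30, 0.25])
    corr = np.array([[1.0, 0.5, 0.5],
                     [0.5, 1.0, 0.5],
                     [0.5, 0.5, 1.0]])       # correlation between estimates
    two_sided = np.array([True, True, False])  # last contrast: lower bound only

    z = rng.multivariate_normal(np.zeros(3), corr, size=200_000)
    score = np.where(two_sided, np.abs(z), z).max(axis=1)
    c = np.quantile(score, 0.95)             # simultaneous critical value

    for j in range(len(est)):
        lo = est[j] - c * se[j]
        if two_sided[j]:
            print(f"contrast {j}: [{lo:.2f}, {est[j] + c * se[j]:.2f}]")
        else:
            print(f"contrast {j}: [{lo:.2f}, inf)")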

20. An Inferential Confidence Interval Method of Establishing Statistical Equivalence that Corrects Tryon's (2001) Reduction Factor

ERIC Educational Resources Information Center

Tryon, Warren W.; Lewis, Charles

2008-01-01

Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis and failing to reject H₀ is not evidence…

1. Bootstrap Standard Error and Confidence Intervals for the Correlation Corrected for Range Restriction: A Simulation Study

ERIC Educational Resources Information Center

Chan, Wai; Chan, Daniel W.-L.

2004-01-01

The standard Pearson correlation coefficient is a biased estimator of the true population correlation, ρ, when the predictor and the criterion are range restricted. To correct the bias, the correlation corrected for range restriction, r_c, has been recommended, and a standard formula based on asymptotic results for estimating its standard…
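
A hedged sketch of the bootstrap approach the title describes: resample cases from the range-restricted sample, apply Thorndike's Case 2 correction within each resample, and take percentile limits. The unrestricted predictor SD is treated as known, and all numbers are invented for illustration.

    # Percentile bootstrap CI for the range-restriction-corrected correlation
    # r_c = r*u / sqrt(1 + r^2*(u^2 - 1)), u = SD(unrestricted)/SD(restricted).
    import numpy as np

    def corrected_r(x, y, sd_unrestricted):
        r = np.corrcoef(x, y)[0, 1]
        u = sd_unrestricted / x.std(ddof=1)
        return r * u / np.sqrt(1.0 + r * r * (u * u - 1.0))

    rng = np.random.default_rng(4)
    n, sd_pop = 500, 1.0
    x = rng.standard_normal(n)
    y = 0.5 * x + np.sqrt(0.75) * rng.standard_normal(n)   # true rho = 0.5
    keep = x > -0.5                        # direct range restriction on x
    xr, yr = x[keep], y[keep]

    boot = []
    for _ in range(2000):
        idx = rng.integers(0, len(xr), size=len(xr))
        boot.append(corrected_r(xr[idx], yr[idx], sd_pop))
    print(np.percentile(boot, [2.5, 97.5]))   # bootstrap 95% CI for rho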

2. Bootstrap Standard Error and Confidence Intervals for the Difference between Two Squared Multiple Correlation Coefficients

ERIC Educational Resources Information Center

Chan, Wai

2009-01-01

A typical question in multiple regression analysis is to determine if a set of predictors gives the same degree of predictive power in two different populations. Olkin and Finn (1995) proposed two asymptotic-based methods for testing the equality of two population squared multiple correlations, ρ₁² and…

3. Reliability Generalization: The Importance of Considering Sample Specificity, Confidence Intervals, and Subgroup Differences.

ERIC Educational Resources Information Center

Onwuegbuzie, Anthony J.; Daniel, Larry G.

The purposes of this paper are to identify common errors made by researchers when dealing with reliability coefficients and to outline best practices for reporting and interpreting reliability coefficients. Common errors that researchers make are: (1) stating that the instruments are reliable; (2) incorrectly interpreting correlation coefficients;…

4. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

ERIC Educational Resources Information Center

Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

2011-01-01

Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

5. Statistical Significance, Effect Size Reporting, and Confidence Intervals: Best Reporting Strategies

ERIC Educational Resources Information Center

Capraro, Robert M.

2004-01-01

With great interest the author read the May 2002 editorial in the "Journal for Research in Mathematics Education (JRME)" (King, 2002) regarding changes to the 5th edition of the "Publication Manual of the American Psychological Association" (APA, 2001). Of special note to him, and of great import to the field of mathematics education research, are…

6. A recipe for the construction of confidence limits

SciTech Connect

Iain A Bertram et al.

2000-04-12

In this note, the authors present the recipe recommended by the Search Limits Committee for the construction of confidence intervals, for use by the D0 collaboration. In another note, currently in preparation, they present the rationale for this recipe, a critique of the current literature on this topic, and several examples of the use of the method. This note is intended to fill the collaboration's need for an available reference until the more complete note is finished. Section 2 introduces the notation used in this note, and Section 3 contains the suggested recipe.

7. Preservice Educators' Confidence in Addressing Sexuality Education

ERIC Educational Resources Information Center

Wyatt, Tammy Jordan

2009-01-01

This study examined 328 preservice educators' level of confidence in addressing four sexuality education domains and 21 sexuality education topics. Significant differences in confidence levels across the four domains were found for gender, academic major, sexuality education philosophy, and sexuality education knowledge. Preservice educators…

8. Gender, Family Structure, and Adolescents' Primary Confidants

ERIC Educational Resources Information Center

Nomaguchi, Kei M.

2008-01-01

Using data from the National Longitudinal Survey of Youth 1997 (N = 4,190), this study examined adolescents' reports of primary confidants. Results showed that nearly 30% of adolescents aged 16-18 nominated mothers as primary confidants, 25% nominated romantic partners, and 20% nominated friends. Nominating romantic partners or friends was related…

9. Examining Response Confidence in Multiple Text Tasks

ERIC Educational Resources Information Center

List, Alexandra; Alexander, Patricia A.

2015-01-01

Students' confidence in their responses to a multiple text-processing task and their justifications for those confidence ratings were investigated. Specifically, 215 undergraduates responded to two academic questions, differing by type (i.e., discrete and open-ended) and by domain (i.e., developmental psychology and astrophysics), using a digital…

10. Self-Confidence and Metacognitive Processes

ERIC Educational Resources Information Center

Kleitman, Sabina; Stankov, Lazar

2007-01-01

This paper examines the nature of the Self-confidence factor. In particular, we study the relationship between this factor and cognitive, metacognitive, and personality measures. Participants (N=296) were administered a battery of seven cognitive tests that assess three constructs: accuracy, speed, and confidence. Participants were also given the…