Sample records for standard error analysis

  1. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  2. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
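The infinitesimal-jackknife idea described above can be sketched numerically: treat the statistic as a function of case weights and approximate each case's influence value by a small finite-difference step toward that case. This is a minimal illustration only, not the covariance-structure machinery of the article; `ij_se` and `eps` are names chosen here.

```python
import numpy as np

def ij_se(stat, x, eps=1e-6):
    """Infinitesimal-jackknife standard error of stat(x, w).

    stat takes the data and a weight vector summing to 1. The influence
    value for case i is the directional derivative of stat as weight is
    shifted toward that case, approximated by a small step eps; the IJ
    variance is the sum of squared influence values divided by n^2.
    """
    n = len(x)
    w0 = np.full(n, 1.0 / n)
    base = stat(x, w0)
    U = np.empty(n)
    for i in range(n):
        w = (1.0 - eps) * w0   # shrink all weights slightly...
        w[i] += eps            # ...and shift mass eps to case i
        U[i] = (stat(x, w) - base) / eps
    return np.sqrt(np.sum(U ** 2)) / n

# For the weighted mean the influence values are exactly x_i - xbar,
# so the IJ standard error reduces to sqrt(sum((x - xbar)^2)) / n.
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
se = ij_se(lambda d, w: np.dot(w, d), x)
```

Because the weighted mean is linear in the weights, the finite-difference influence values here are exact, which makes the sketch easy to check against the closed form.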

  3. Factor Rotation and Standard Errors in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.

    2015-01-01

    In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…

  4. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
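As a hedged illustration of the interval approach described above (not the INTLAB toolbox itself, which is a MATLAB package), a few lines of interval arithmetic can be compared against first-order error propagation:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """A closed interval [lo, hi] with the basic arithmetic needed here."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product interval is bounded by the extreme endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    @property
    def half_width(self):
        return 0.5 * (self.hi - self.lo)

# z = x * y with x = 2 +/- 0.1 and y = 3 +/- 0.1
x = Interval(1.9, 2.1)
y = Interval(2.9, 3.1)
z = x * y
# First-order propagation for comparison: dz = |y| dx + |x| dy
prop = 3.0 * 0.1 + 2.0 * 0.1
```

For this simple product the two error estimates agree closely; the abstract's point is that for complicated formulas the interval bound comes with far less algebraic effort than deriving the propagation formula.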

  5. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis, correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with the better scaling slope (β = 1, Selby et al.), while the improved standard error 'fails to reject' H0.

  6. Comparative study of standard space and real space analysis of quantitative MR brain data.

    PubMed

    Aribisala, Benjamin S; He, Jiabao; Blamire, Andrew M

    2011-06-01

    To compare the robustness of region of interest (ROI) analysis of magnetic resonance imaging (MRI) brain data in real space with analysis in standard space and to test the hypothesis that standard space image analysis introduces more partial volume effect errors compared to analysis of the same dataset in real space. Twenty healthy adults with no history or evidence of neurological diseases were recruited; high-resolution T1-weighted, quantitative T1, and B0 field-map measurements were collected. Algorithms were implemented to perform analysis in real and standard space and used to apply a simple standard ROI template to quantitative T1 datasets. Regional relaxation values and histograms for both gray and white matter tissue classes were then extracted and compared. Regional mean T1 values for both gray and white matter were significantly lower using real space compared to standard space analysis. Additionally, regional T1 histograms were more compact in real space, with smaller right-sided tails indicating lower partial volume errors compared to standard space analysis. Standard space analysis of quantitative MRI brain data introduces more partial volume effect errors biasing the analysis of quantitative data compared to analysis of the same dataset in real space. Copyright © 2011 Wiley-Liss, Inc.

  7. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Browne, Michael W.

    2010-01-01

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…
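A generic bootstrap standard error, of the kind the article applies where algebraic formulae are hard to obtain, can be sketched for any statistic. This is a simplified illustration using the sample mean, not the dynamic-factor-analysis estimator itself.

```python
import numpy as np

def bootstrap_se(stat, x, n_boot=2000, seed=0):
    """Bootstrap standard error: resample with replacement, recompute
    the statistic on each resample, and take the standard deviation
    of the statistic across replicates."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = [stat(x[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.std(reps, ddof=1)

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
se = bootstrap_se(np.mean, x)
# For the mean, the bootstrap SE should land near the analytic
# value s/sqrt(n), about 0.7 for these data.
```

The appeal of the approach is that `np.mean` can be replaced by any estimator (for example, a full factor-analysis fit) without deriving new standard-error formulae.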

  8. Lexico-Semantic Errors of the Learners of English: A Survey of Standard Seven Keiyo-Speaking Primary School Pupils in Keiyo District, Kenya

    ERIC Educational Resources Information Center

    Jeptarus, Kipsamo E.; Ngene, Patrick K.

    2016-01-01

    The purpose of this research was to study the Lexico-semantic errors of the Keiyo-speaking standard seven primary school learners of English as a Second Language (ESL) in Keiyo District, Kenya. This study was guided by two related theories: Error Analysis Theory/Approach by Corder (1971) which approaches L2 learning through a detailed analysis of…

  9. Computer Programs for the Semantic Differential: Further Modifications.

    ERIC Educational Resources Information Center

    Lawson, Edwin D.; And Others

    The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…

  10. Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei

    2010-01-01

    This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…

  11. New dimension analyses with error analysis for quaking aspen and black spruce

    NASA Technical Reports Server (NTRS)

    Woods, K. D.; Botkin, D. B.; Feiveson, A. H.

    1987-01-01

    Dimension analysis for black spruce in wetland stands and trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.

  12. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.

  13. Errors in quantitative backscattered electron analysis of bone standardized by energy-dispersive x-ray spectrometry.

    PubMed

    Vajda, E G; Skedros, J G; Bloebaum, R D

    1998-10-01

    Backscattered electron (BSE) imaging has proven to be a useful method for analyzing the mineral distribution in microscopic regions of bone. However, an accepted method of standardization has not been developed, limiting the utility of BSE imaging for truly quantitative analysis. Previous work has suggested that BSE images can be standardized by energy-dispersive x-ray spectrometry (EDX). Unfortunately, EDX-standardized BSE images tend to underestimate the mineral content of bone when compared with traditional ash measurements. The goal of this study is to investigate the nature of the deficit between EDX-standardized BSE images and ash measurements. A series of analytical standards, ashed bone specimens, and unembedded bone specimens were investigated to determine the source of the deficit previously reported. The primary source of error was found to be inaccurate ZAF corrections to account for the organic phase of the bone matrix. Conductive coatings, methylmethacrylate embedding media, and minor elemental constituents in bone mineral introduced negligible errors. It is suggested that the errors would remain constant and an empirical correction could be used to account for the deficit. However, extensive preliminary testing of the analysis equipment is essential.

  14. Evaluation of Acoustic Doppler Current Profiler measurements of river discharge

    USGS Publications Warehouse

    Morlock, S.E.

    1996-01-01

    The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.

  15. Translating Radiometric Requirements for Satellite Sensors to Match International Standards.

    PubMed

    Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong

    2014-01-01

    International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework leading to uniform interpretation throughout the development and operation of any satellite instrument.
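For independent components already scaled by their sensitivity coefficients, the propagation-of-uncertainties formula mentioned above reduces to a root-sum-square combination; a minimal sketch with hypothetical requirement values:

```python
import math

def combined_standard_uncertainty(components):
    """Combine independent uncertainty components u_i (each already
    multiplied by its sensitivity coefficient) by root-sum-square,
    per the usual propagation-of-uncertainties formula."""
    return math.sqrt(sum(u * u for u in components))

# Hypothetical example: three requirement terms of 0.3%, 0.4%, and 1.2%
# combine into a single specification of 1.3%.
u_c = combined_standard_uncertainty([0.3, 0.4, 1.2])
```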

  16. Translating Radiometric Requirements for Satellite Sensors to Match International Standards

    PubMed Central

    Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong

    2014-01-01

    International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework leading to uniform interpretation throughout the development and operation of any satellite instrument. PMID:26601032

  17. Cost effectiveness of the US Geological Survey stream-gaging program in Alabama

    USGS Publications Warehouse

    Jeffcoat, H.H.

    1987-01-01

    A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  18. A practical method of estimating standard error of age in the fission track dating method

    USGS Publications Warehouse

    Johnson, N.M.; McGee, V.E.; Naeser, C.W.

    1979-01-01

    A first-order approximation formula for the propagation of error in the fission track age equation is given by PA = C[Ps^2 + Pi^2 + Pφ^2 - 2rPsPi]^(1/2), where PA, Ps, Pi, and Pφ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
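The first-order propagation formula is easy to evaluate directly; a minimal sketch, taking the age-equation constant C = 1 purely for illustration:

```python
import math

def percent_error_of_age(Ps, Pi, Pphi, r, C=1.0):
    """First-order propagated percentage error of a fission-track age,
    given percentage errors of spontaneous track density (Ps), induced
    track density (Pi), and neutron dose (Pphi), and the correlation r
    between spontaneous and induced track densities. C depends on the
    age equation; C = 1 here purely for illustration."""
    return C * math.sqrt(Ps**2 + Pi**2 + Pphi**2 - 2.0 * r * Ps * Pi)

# A positive correlation between the two track densities improves
# (reduces) the propagated error, as the abstract notes.
pa_uncorr = percent_error_of_age(5.0, 4.0, 2.0, r=0.0)
pa_corr = percent_error_of_age(5.0, 4.0, 2.0, r=0.8)
```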

  19. Ad hoc versus standardized admixtures for continuous infusion drugs in neonatal intensive care: cognitive task analysis of safety at the bedside.

    PubMed

    Brannon, Timothy S

    2006-01-01

    Continuous infusion intravenous (IV) drugs in neonatal intensive care are usually prepared based on patient weight so that the dose is readable as a simple multiple of the infusion pump rate. New safety guidelines propose that hospitals switch to using standardized admixtures of these drugs to prevent calculation errors during ad hoc preparation. Extended hierarchical task analysis suggests that switching to standardized admixtures may lead to more errors in programming the pump at the bedside.

  20. Ad Hoc versus Standardized Admixtures for Continuous Infusion Drugs in Neonatal Intensive Care: Cognitive Task Analysis of Safety at the Bedside

    PubMed Central

    Brannon, Timothy S.

    2006-01-01

    Continuous infusion intravenous (IV) drugs in neonatal intensive care are usually prepared based on patient weight so that the dose is readable as a simple multiple of the infusion pump rate. New safety guidelines propose that hospitals switch to using standardized admixtures of these drugs to prevent calculation errors during ad hoc preparation. Extended hierarchical task analysis suggests that switching to standardized admixtures may lead to more errors in programming the pump at the bedside. PMID:17238482

  1. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    PubMed

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
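The same calculation is straightforward outside Excel as well; a minimal sketch using Python's standard library (a normal z of 1.96 is assumed for the 95% interval; for small samples a t quantile is preferable):

```python
import math
import statistics

def se_and_ci(data, z=1.96):
    """Mean, standard error of the mean (s / sqrt(n)), and an
    approximate 95% confidence interval using a normal quantile."""
    n = len(data)
    m = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(n)
    return m, se, (m - z * se, m + z * se)

m, se, ci = se_and_ci([1, 2, 3, 4, 5])
# mean 3.0, SE = sqrt(2.5)/sqrt(5) = sqrt(0.5) ~ 0.707
```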

  2. The impact of response measurement error on the analysis of designed experiments

    DOE PAGES

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2016-11-01

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  3. The impact of response measurement error on the analysis of designed experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  4. Multicollinearity and Regression Analysis

    NASA Astrophysics Data System (ADS)

    Daoud, Jamal I.

    2017-12-01

    In regression analysis it is expected to have correlation between the response and the predictor(s), but correlation among the predictors is undesirable. The number of predictors included in the regression model depends on many factors, among them historical data, experience, etc. In the end, the selection of the most important predictors is left to the judgment of the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; when this happens, the standard errors of the coefficients increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may not be found to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its causes, and its consequences for the reliability of the regression model.
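The standard-error inflation described above is commonly quantified by the variance inflation factor (VIF); a sketch, where the 0.05 noise scale and sample size are arbitrary choices for illustration:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of predictor column j: regress it on
    the remaining columns (plus an intercept) and return 1 / (1 - R^2).
    Large values signal multicollinearity (a common rule of thumb
    flags VIF > 10)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)               # independent predictor
X = np.column_stack([x1, x2, x3])
# vif(X, 0) and vif(X, 1) come out large; vif(X, 2) stays near 1.
```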

  5. Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, A. F.; Jacobs, C. S.

    2011-01-01

    The standard VLBI analysis models measurement noise as purely thermal errors modeled according to uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate. It is therefore expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper will discuss the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We will show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.
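Why correlated weighting matters can be illustrated with a toy generalized-least-squares estimate of a constant signal. The equicorrelated covariance below is an assumed toy model chosen for simplicity, not the Kolmogorov spectrum discussed in the abstract:

```python
import numpy as np

def gls_mean(y, C):
    """Generalized-least-squares estimate of a constant signal under
    noise covariance C: xhat = (1' C^-1 1)^-1 1' C^-1 y, with variance
    (1' C^-1 1)^-1. Off-diagonal terms in C change both the estimate's
    weighting and its reported uncertainty."""
    ones = np.ones(len(y))
    Ci_ones = np.linalg.solve(C, ones)
    var = 1.0 / (ones @ Ci_ones)
    return var * (Ci_ones @ y), var

y = np.array([1.0, 1.2, 0.9, 1.1])
C_uncorr = 0.04 * np.eye(4)                    # independent noise
C_corr = 0.04 * (0.5 * np.eye(4) + 0.5)        # pairwise correlation 0.5
m_u, v_u = gls_mean(y, C_uncorr)
m_c, v_c = gls_mean(y, C_corr)
# Correlated noise yields a larger (more honest) variance: ignoring the
# correlations would overstate the precision of the estimate.
```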

  6. End-of-Kindergarten Spelling Outcomes: How Can Spelling Error Analysis Data Inform Beginning Reading Instruction?

    PubMed

    Lee, Julia Ai Cheng; Otaiba, Stephanie Al

    2017-01-01

    In this article, the authors examined the spelling performance of 430 kindergarteners, which included a high risk sample, to determine the relations between end of kindergarten reading and spelling in a high quality language arts setting. The spelling outcomes including the spelling errors between the good and the poor readers were described, analyzed, and compared. The findings suggest that not all the children have acquired the desired standard as outlined by the Common Core State Standards. In addition, not every good reader is a good speller and that not every poor speller is a poor reader. The study shows that spelling tasks that are accompanied by spelling errors analysis provide a powerful window for making instructional sense of children's spelling errors and for individualizing spelling instructional strategies.

  7. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    ERIC Educational Resources Information Center

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  8. Analyzing Multilevel Data: An Empirical Comparison of Parameter Estimates of Hierarchical Linear Modeling and Ordinary Least Squares Regression

    ERIC Educational Resources Information Center

    Rocconi, Louis M.

    2011-01-01

    Hierarchical linear models (HLM) solve the problems associated with the unit of analysis problem such as misestimated standard errors, heterogeneity of regression and aggregation bias by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…

  9. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated non-linear multiparameter fitting program has been used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.

  10. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and time-series prediction model. The monthly variation of water quality standards has been used to compare statistical mean, median, mode, standard deviation, kurtosis, skewness, coefficient of variation at Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted value and confidence limits. Using auto regressive integrated moving average model, future water quality parameters values have been estimated. It is observed that the predictive model is useful at 95 % confidence limits and the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, water temperature (WT); leptokurtic for chemical oxygen demand, biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization /United States Environmental Protection Agency, and thus water is not fit for drinking, agriculture and industrial use.
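Several of the validation metrics listed above (RMSE, MAE, MAPE) are simple to compute; a minimal sketch with made-up actual and predicted values:

```python
import math

def fit_metrics(actual, predicted):
    """Root mean square error, mean absolute error, and mean absolute
    percentage error between observed and predicted series."""
    errs = [a - p for a, p in zip(actual, predicted)]
    n = len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mae = sum(abs(e) for e in errs) / n
    mape = 100.0 * sum(abs(e) / abs(a) for e, a in zip(errs, actual)) / n
    return rmse, mae, mape

rmse, mae, mape = fit_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```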

  11. End-of-Kindergarten Spelling Outcomes: How Can Spelling Error Analysis Data Inform Beginning Reading Instruction?

    PubMed Central

    Lee, Julia Ai Cheng; Otaiba, Stephanie Al

    2016-01-01

    In this article, the authors examined the spelling performance of 430 kindergarteners, which included a high risk sample, to determine the relations between end of kindergarten reading and spelling in a high quality language arts setting. The spelling outcomes including the spelling errors between the good and the poor readers were described, analyzed, and compared. The findings suggest that not all the children have acquired the desired standard as outlined by the Common Core State Standards. In addition, not every good reader is a good speller and that not every poor speller is a poor reader. The study shows that spelling tasks that are accompanied by spelling errors analysis provide a powerful window for making instructional sense of children’s spelling errors and for individualizing spelling instructional strategies. PMID:28706433

  12. Cost-effectiveness of the streamflow-gaging program in Wyoming

    USGS Publications Warehouse

    Druse, S.A.; Wahl, K.L.

    1988-01-01

    This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate suitability of techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits using the same budget, could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error/station from 11.6 to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)

  13. Medical students' experiences with medical errors: an analysis of medical student essays.

    PubMed

    Martinez, William; Lo, Bernard

    2008-07-01

    This study aimed to examine medical students' experiences with medical errors. In 2001 and 2002, 172 fourth-year medical students wrote an anonymous description of a significant medical error they had witnessed or committed during their clinical clerkships. The assignment represented part of a required medical ethics course. We analysed 147 of these essays using thematic content analysis. Many medical students made or observed significant errors. In either situation, some students experienced distress that seemingly went unaddressed. Furthermore, this distress was sometimes severe and persisted after the initial event. Some students also experienced considerable uncertainty as to whether an error had occurred and how to prevent future errors. Many errors may not have been disclosed to patients, and some students who desired to discuss or disclose errors were apparently discouraged from doing so by senior doctors. Some students criticised senior doctors who attempted to hide errors or avoid responsibility. By contrast, students who witnessed senior doctors take responsibility for errors and candidly disclose errors to patients appeared to recognise the importance of honesty and integrity and said they aspired to these standards. There are many missed opportunities to teach students how to respond to and learn from errors. Some faculty members and housestaff may at times respond to errors in ways that appear to contradict professional standards. Medical educators should increase exposure to exemplary responses to errors and help students to learn from and cope with errors.

  14. Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, A.; Jacobs, C. S.; Ratcliff, J. T.

    2012-01-01

    The standard VLBI analysis models the distribution of measurement noise as Gaussian. Because the price of recording bits is steadily decreasing, data rates will continue to rise and thermal errors will soon no longer dominate. As a result, it is expected that troposphere and instrumentation/clock errors will increasingly dominate. Given that both of these errors have correlated spectra, properly modeling the error distributions will become increasingly relevant for optimal analysis. We discuss the advantages of modeling the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen-flow assumption pioneered by Treuhaft and Lanyi. We then apply these correlated noise spectra to the weighting of VLBI data analysis for two case studies: X/Ka-band global astrometry and Earth orientation. In both cases we see improved results when the analyses are weighted with correlated noise models vs. the standard uncorrelated models. The X/Ka astrometric scatter improved by approx. 10%, and the systematic Δδ vs. δ slope decreased by approx. 50%. The TEMPO Earth orientation results improved by 17% in baseline transverse and 27% in baseline vertical.

  15. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary-least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviation of lead-l ARIMA and TFN forecast errors was generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the lengths of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
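    The composite estimator described above can be sketched as a linearly weighted blend of a forward (TFN) forecast series and a reverse-series (ARIMA) backcast across the estimation gap. The linear weighting scheme and the function name below are illustrative assumptions, not the report's exact method:

```python
import numpy as np

def composite_estimate(forecast, backcast):
    """Blend a lead-l forecast (e.g., from a TFN model) with a backcast
    (e.g., from an ARIMA model fit to the reverse-ordered series) using
    linear weights across the estimation gap.  The linear weighting is an
    illustrative assumption, not the report's exact scheme."""
    forecast = np.asarray(forecast, dtype=float)
    backcast = np.asarray(backcast, dtype=float)
    n = len(forecast)
    # Weight the forecast heavily near the start of the gap and the
    # backcast heavily near the end, transitioning linearly between.
    w = np.linspace(1.0, 0.0, n)
    return w * forecast + (1.0 - w) * backcast
```

    A blend of this form starts at the forecast value, ends at the backcast value, and transitions gradually in between, consistent with the report's note that composite estimates ensure a gradual transition between estimated and measured flows.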

  16. Empirical Synthesis of the Effect of Standard Error of Measurement on Decisions Made within Brief Experimental Analyses of Reading Fluency

    ERIC Educational Resources Information Center

    Burns, Matthew K.; Taylor, Crystal N.; Warmbold-Brann, Kristy L.; Preast, June L.; Hosp, John L.; Ford, Jeremy W.

    2017-01-01

    Intervention researchers often use curriculum-based measurement of reading fluency (CBM-R) with a brief experimental analysis (BEA) to identify an effective intervention for individual students. The current study synthesized data from 22 studies that used CBM-R data within a BEA by computing the standard error of measurement (SEM) for the median data…
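    The SEM referred to above is the classical-test-theory quantity; a minimal sketch assuming the standard formula (the measure's standard deviation times the square root of one minus its reliability):

```python
import math

def standard_error_of_measurement(sd, reliability):
    """Classical-test-theory SEM: the expected spread of observed scores
    around a student's true score.  sd is the standard deviation of the
    measure; reliability (e.g., alternate-form reliability of CBM-R) is
    a value in [0, 1]."""
    return sd * math.sqrt(1.0 - reliability)
```

    For example, a CBM-R measure with SD = 10 words per minute and reliability 0.91 has an SEM of 3 words per minute, which bounds how finely BEA conditions can be distinguished.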

  17. The performance of projective standardization for digital subtraction radiography.

    PubMed

    Mol, André; Dunn, Stanley M

    2003-09-01

    We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Study design: Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R(2) = 0.99; P <.05). The effect of projection error was not significant (general linear model [GLM]: P >.05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P >.05). Operator variability was low for image analysis alone (R(2) = 0.99; P <.05), as well as for the entire procedure (R(2) = 0.98; P <.05). The predicted detection limit was smaller than 1 mg. Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high and anatomical landmarks can be used to establish correspondence.
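    Projective standardization rests on estimating the plane-to-plane projective transform (homography) that maps landmark points in the follow-up image onto the baseline geometry. A minimal direct-linear-transform (DLT) sketch under that assumption; the function name and least-squares formulation are illustrative, not the authors' implementation:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 projective transform mapping landmark points
    src -> dst by the direct linear transform (DLT).  At least 4
    non-collinear point pairs are required; with more, the smallest-
    singular-vector solution is a least-squares fit."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two homogeneous linear equations.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)   # null-space vector = homography entries
    return H / H[2, 2]         # fix the arbitrary scale
```

    Once fitted from anatomical or fiduciary landmarks, the transform can resample the follow-up image into the baseline geometry before subtraction.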

  18. Evaluation of lens distortion errors in video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Wilmington, Robert; Klute, Glenn K.; Micocci, Angelo

    1993-01-01

    In an effort to study lens distortion errors, a grid of points of known dimensions was constructed and videotaped using a standard and a wide-angle lens. Recorded images were played back on a VCR and stored on a personal computer. Using these stored images, two experiments were conducted. Errors were calculated as the difference in distance from the known coordinates of the points to the calculated coordinates. The purposes of this project were as follows: (1) to develop the methodology to evaluate errors introduced by lens distortion; (2) to quantify and compare errors introduced by use of both a 'standard' and a wide-angle lens; (3) to investigate techniques to minimize lens-induced errors; and (4) to determine the most effective use of calibration points when using a wide-angle lens with a significant amount of distortion. It was seen that when using a wide-angle lens, errors from lens distortion could be as high as 10 percent of the size of the entire field of view. Even with a standard lens, there was a small amount of lens distortion. It was also found that the choice of calibration points influenced the lens distortion error. By properly selecting the calibration points and avoiding the outermost regions of a wide-angle lens, the error from lens distortion can be kept below approximately 0.5 percent with a standard lens and 1.5 percent with a wide-angle lens.

  19. Decreasing patient identification band errors by standardizing processes.

    PubMed

    Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie

    2013-04-01

    Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012 with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels in conjunction with other interventions resulted in a statistical decrease in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.

  20. Toward a new culture in verified quantum operations

    NASA Astrophysics Data System (ADS)

    Flammia, Steve

    Measuring error rates of quantum operations has become an indispensable component in any aspiring platform for quantum computation. As the quality of controlled quantum operations increases, the demands on the accuracy and precision with which we measure these error rates also grow. However, well-meaning scientists who report these error measures are faced with a sea of non-standardized methodologies and are often asked during publication for only coarse information about how their estimates were obtained. Moreover, there are serious incentives to use methodologies and measures that will continually produce numbers that improve with time to show progress. These problems will only be exacerbated as our typical error rates go from 1 in 100 to 1 in 1000 or less. This talk will survey existing challenges presented by the current paradigm and offer some suggestions for solutions that can help us move toward fair and standardized methods for error metrology in quantum computing experiments, and toward a culture that values full disclosure of methodologies and higher standards for data analysis.

  1. Analysis of DGPS/INS and MLS/INS final approach navigation errors and control performance data

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.; Spitzer, Cary R.

    1992-01-01

    Flight tests were conducted jointly by NASA Langley Research Center and Honeywell, Inc., on a B-737 research aircraft to record a database for evaluating the performance of a differential GPS (DGPS)/inertial navigation system (INS) which used GPS Coarse/Acquisition code receivers. Estimates from the DGPS/INS and a Microwave Landing System (MLS)/INS, and various aircraft parameter data were recorded in real time aboard the aircraft while flying along the final approach path to landing. This paper presents the mean and standard deviation of the DGPS/INS and MLS/INS navigation position errors computed relative to the laser tracker system and of the difference between the DGPS/INS and MLS/INS velocity estimates. RMS errors are presented for DGPS/INS and MLS/INS guidance errors (localizer and glideslope). The mean navigation position errors and the standard deviation of the x position coordinate of the DGPS/INS and MLS/INS systems were found to be of similar magnitude, while the standard deviations of the y and z position coordinate errors were significantly larger for DGPS/INS compared to MLS/INS.

  2. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets

    NASA Astrophysics Data System (ADS)

    Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.

    2017-08-01

    The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained traction for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Often, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in the subspace projection step, as in problems of total least squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
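    The two-stage, total-least-squares de-biasing described above can be sketched as follows, with X holding snapshots 1 to m-1 as columns and Y holding snapshots 2 to m; the truncation rank r and the variable names are illustrative choices:

```python
import numpy as np

def total_dmd(X, Y, r):
    """De-biased (total) DMD sketch.  Instead of projecting onto a
    subspace built from X alone (which treats the snapshots
    asymmetrically), project both X and Y onto the leading right
    singular subspace of the augmented matrix [X; Y], as in total
    least squares; then run standard DMD on the projected data."""
    Z = np.vstack([X, Y])                     # augmented snapshot matrix
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:r].T @ Vt[:r]                     # rank-r projector (snapshot index space)
    Xp, Yp = X @ P, Y @ P
    # Standard (projected) DMD on the de-biased snapshot pair.
    U, s, Wt = np.linalg.svd(Xp, full_matrices=False)
    U, s, W = U[:, :r], s[:r], Wt[:r].T
    Atilde = U.conj().T @ Yp @ W / s          # division scales columns by 1/s
    eigvals, eigvecs = np.linalg.eig(Atilde)  # DMD eigenvalues
    modes = Yp @ W / s @ eigvecs              # exact DMD modes
    return eigvals, modes
```

    With noise-free snapshots the projector leaves X and Y unchanged, so the routine reduces to standard projected DMD, consistent with the abstract's claim that TDMD reduces to DMD in the absence of snapshot errors.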

  3. Study on Network Error Analysis and Locating based on Integrated Information Decision System

    NASA Astrophysics Data System (ADS)

    Yang, F.; Dong, Z. H.

    2017-10-01

    Integrated information decision system (IIDS) integrates multiple sub-systems developed by many facilities, including almost a hundred kinds of software, which provide various services such as email, short messages, drawing, and sharing. Because the under-layer protocols differ and user standards are not unified, many errors occur during setup, configuration, and operation, which seriously affect usage. Because these errors are varied and may happen in different operation phases, stages, TCP/IP communication protocol layers, and sub-system software, it is necessary to design a network error analysis and locating tool for IIDS to solve the above problems. This paper studies network error analysis and locating based on IIDS, which provides strong theoretical and technical support for the running and communication of IIDS.

  4. Effect of grid transparency and finite collector size on determining ion temperature and density by the retarding potential analyzer

    NASA Technical Reports Server (NTRS)

    Troy, B. E., Jr.; Maier, E. J.

    1975-01-01

    The effects of the grid transparency and finite collector size on the values of thermal ion density and temperature determined by the standard RPA (retarding potential analyzer) analysis method are investigated. The current-voltage curves calculated for varying RPA parameters and a given ion mass, temperature, and density are analyzed by the standard RPA method. It is found that only small errors in temperature and density are introduced for an RPA with typical dimensions, and that even when the density error is substantial for nontypical dimensions, the temperature error remains minimal.

  5. Impact of electronic chemotherapy order forms on prescribing errors at an urban medical center: results from an interrupted time-series analysis.

    PubMed

    Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C

    2013-12-01

    To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
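    The segmented regression behind an interrupted time-series analysis of this kind fits a baseline level and slope plus an immediate level change and a slope change at the intervention. A minimal ordinary-least-squares sketch, not the study's actual model:

```python
import numpy as np

def segmented_regression(rates, t0):
    """Fit the interrupted-time-series model
        y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t + e_t
    where post_t indicates periods at or after the intervention time t0.
    b2 is the immediate level change; b3 is the change in slope.
    A minimal OLS sketch; the study's analysis also reported relative
    risks by error type."""
    y = np.asarray(rates, dtype=float)
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [b0, b1, b2, b3]
```

    Here `rates` would be the monthly prevented prescribing errors per 1000 doses; the fitted b2 and b3 correspond to the immediate drop at implementation and the negative post-implementation slope reported above.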

  6. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    PubMed

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report during both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While prepopulation is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  7. Effects of shape, size, and chromaticity of stimuli on estimated size in normally sighted, severely myopic, and visually impaired students.

    PubMed

    Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching

    2010-06-01

    Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students ages 20 to 24 years (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size: errors were significantly greater when estimating smaller sizes than larger ones. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.

  8. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error on using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These simulations showed that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene gas mixture spectra measured using FT-IR spectrometry with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of SWLS is presented to tackle the bias error from other components. The unmodified SWLS presents the lowest SEP in all cases, but not the lowest bias and RSS. The modified SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
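    A sketch of the CLS/WLS selection idea for a single-component calibration follows; combining both regimes in one weighted fit, and the names used, are illustrative simplifications of the SWLS rule described above:

```python
import numpy as np

def swls_estimate(y, k, sigma, threshold):
    """Sketch of selective weighted least squares for a single-component
    model y = k*c + e.  Wavenumbers whose absorbance is below the
    threshold (where bias errors dominate) get unit CLS-style weights;
    those above it (where heteroscedastic noise dominates) get WLS
    weights 1/sigma**2.  Combining both regimes in one weighted fit is
    an illustrative simplification of the SWLS selection rule."""
    y = np.asarray(y, dtype=float)      # measured absorbances
    k = np.asarray(k, dtype=float)      # pure-component spectrum
    sigma = np.asarray(sigma, dtype=float)  # per-wavenumber noise std
    w = np.where(y >= threshold, 1.0 / sigma**2, 1.0)
    # Closed-form weighted least-squares solution for the scalar concentration c.
    return np.sum(w * k * y) / np.sum(w * k * k)
```

    The abstract's finding that the optimal threshold depends mainly on the ratio of bias error to noise standard deviation corresponds to choosing `threshold` so each wavenumber is handled by the regime whose dominant error it carries.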

  9. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
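    The blinded re-estimation step can be sketched as follows: the one-sample variance is computed on the pooled interim data without unblinding treatment labels, then plugged into the textbook normal-approximation sample-size formula. The formula and the parameter defaults are illustrative simplifications of the paper's exact-distribution treatment:

```python
import math
from statistics import pvariance, NormalDist

def blinded_reestimated_n(pooled_interim_data, delta, alpha=0.05, power=0.9):
    """Blinded re-estimation sketch: the one-sample (lumped) variance is
    computed on pooled interim observations without unblinding treatment
    labels, then used in the standard per-group sample-size formula for a
    two-sample comparison with effect size delta.  The normal-approximation
    formula is a textbook simplification of the paper's exact analysis."""
    s2 = pvariance(pooled_interim_data)        # blinded variance estimate
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)              # two-sided significance quantile
    zb = z.inv_cdf(power)                      # power quantile
    n = 2 * (za + zb) ** 2 * s2 / delta ** 2   # per-group sample size
    return math.ceil(n)
```

    Because the lumped variance ignores the between-group mean difference, it slightly overestimates the within-group variance under the alternative, which is one source of the type I error behavior the paper characterizes.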

  10. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates.
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002


  12. Standardizing Activation Analysis: New Software for Photon Activation Analysis

    NASA Astrophysics Data System (ADS)

    Sun, Z. J.; Wells, D.; Segebade, C.; Green, J.

    2011-06-01

    Photon Activation Analysis (PAA) of environmental, archaeological and industrial samples requires extensive data analysis that is susceptible to error. To save time and manpower and to minimize error, a computer program was designed, built and implemented using SQL, Access 2007 and asp.net technology to automate this process. Based on the peak information of the spectrum and assisted by its PAA library, the program automatically identifies elements in the samples and calculates their concentrations and respective uncertainties. The software can also be operated in browser/server mode, which makes it usable anywhere the internet is accessible. By switching the underlying nuclide library and the related formulas, the new software can be easily extended to neutron activation analysis (NAA), charged particle activation analysis (CPAA) or proton-induced X-ray emission (PIXE). Implementing this would standardize the analysis of nuclear activation data. Results from this software were compared to standard PAA analysis with excellent agreement. With minimum input from the user, the software has proven to be fast, user-friendly and reliable.
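    The abstract does not detail the concentration formulas the program applies; as an illustration, the standard relative (comparator) method computes a sample concentration from the ratio of specific photopeak count rates against a co-irradiated standard of known concentration. Decay, flux, and geometry corrections are omitted here for brevity; a real PAA workflow applies them to each peak area first:

```python
def concentration_relative_method(area_s, mass_s, area_std, mass_std, c_std):
    """Relative (comparator) method sketch for activation analysis:
    the element concentration in the sample follows from the ratio of
    specific photopeak areas (counts per unit mass) of the sample and
    a co-irradiated standard of known concentration c_std.  Decay,
    flux, and geometry corrections are omitted for brevity."""
    specific_s = area_s / mass_s        # sample counts per unit mass
    specific_std = area_std / mass_std  # standard counts per unit mass
    return c_std * specific_s / specific_std
```

    Swapping the nuclide data and corrections behind a calculation like this is what allows the same framework to cover NAA, CPAA, or PIXE, as the abstract notes.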

  13. Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.

    PubMed

    Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D

    2017-06-01

    The Institute of Medicine has called for the development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures was identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.

  14. A fast Monte Carlo EM algorithm for estimation in latent class model analysis with an application to assess diagnostic accuracy for cervical neoplasia in women with AGC

    PubMed Central

    Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan

    2013-01-01

    In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard errors similarly to the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493
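The information-matrix versus bootstrap comparison in this record can be illustrated with a small sketch: a plain EM fit of a two-class latent class model with three conditionally independent binary tests, plus a bootstrap standard error for the estimated prevalence. All data and parameter values below are hypothetical, and plain EM stands in for the paper's fast MCEM algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 3 conditionally independent binary tests (hypothetical values).
n, prev = 500, 0.3
true_sens = np.array([0.9, 0.85, 0.8])
true_spec = np.array([0.95, 0.9, 0.85])
d = rng.random(n) < prev
p = np.where(d[:, None], true_sens, 1 - true_spec)  # P(test + | class)
y = (rng.random((n, 3)) < p).astype(float)

def em_fit(y, iters=200):
    """Plain EM for a 2-class latent class model (a sketch, not MCEM)."""
    pi, se, sp = 0.5, np.full(3, 0.8), np.full(3, 0.8)
    for _ in range(iters):
        # E-step: posterior probability of disease given the test pattern
        l1 = pi * np.prod(se**y * (1 - se)**(1 - y), axis=1)
        l0 = (1 - pi) * np.prod((1 - sp)**y * sp**(1 - y), axis=1)
        w = l1 / (l1 + l0)
        # M-step: weighted proportions
        pi = w.mean()
        se = (w[:, None] * y).sum(0) / w.sum()
        sp = ((1 - w)[:, None] * (1 - y)).sum(0) / (1 - w).sum()
    return pi, se, sp

pi_hat, se_hat, sp_hat = em_fit(y)

# Bootstrap standard error for the prevalence estimate
boot = [em_fit(y[rng.integers(0, n, n)])[0] for _ in range(60)]
se_prev = np.std(boot)
print(pi_hat, se_prev)
```

In the paper the bootstrap is compared against standard errors from an information matrix adjusted by the missing information principle; the bootstrap side of that comparison is all this sketch shows.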

  15. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Abstract Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476

  16. The performance of the standard rate turn (SRT) by student naval helicopter pilots.

    PubMed

    Chapman, F; Temme, L A; Still, D L

    2001-04-01

    During flight training, student naval helicopter pilots learn the use of flight instruments through a prescribed series of simulator training events. The training simulator is a 6-degrees-of-freedom, motion-based, high-fidelity instrument trainer. From the final basic instrument simulator flights of student pilots, we selected for evaluation and analysis their performance of the Standard Rate Turn (SRT), a routine flight maneuver. Performance of the SRT was scored using the average errors of air speed, altitude, and heading from target values, together with their standard deviations. These average errors and standard deviations were used in a Multivariate Analysis of Variance (MANOVA) to evaluate the effects of three independent variables: 1) direction of turn (left vs. right); 2) degree of turn (180 vs. 360 degrees); and 3) segment of turn (roll-in, first 30 s, last 30 s, and roll-out of turn). Only the main effects of the three independent variables were significant; there were no significant interactions. This result greatly reduces the number of different conditions that should be scored separately for the evaluation of SRT performance. The results also showed that the magnitude of the heading and altitude errors at the beginning of the SRT correlated with the magnitude of the heading and altitude errors throughout the turn. This result suggests that for the turn to be well executed, it is important for it to begin with little error in these two response parameters. The observations reported here should be considered when establishing SRT performance norms and comparing student scores. Furthermore, it seems easier for pilots to maintain good performance than to correct poor performance.

  17. Qualitative Examination of Children's Naming Skills through Test Adaptations.

    ERIC Educational Resources Information Center

    Fried-Oken, Melanie

    1987-01-01

    The Double Administration Naming Technique assists clinicians in obtaining qualitative information about a client's visual confrontation naming skills through administration of a standard naming test; readministration of the same test; identification of single and double errors; cuing for double naming errors; and qualitative analysis of naming…

  18. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.

  19. Current Issues in the Design and Information Content of Instrument Approach Charts

    DOT National Transportation Integrated Search

    1995-03-01

    This report documents an analysis and interview effort conducted to identify common operational errors made using current Instrument Approach Plates (IAP), Standard Terminal Arrival Route (STAR) charts, Standard Instrument Departure (SID) charts,...

  20. Application of the precipitation-runoff modeling system to the Ah- shi-sle-pah Wash watershed, San Juan County, New Mexico

    USGS Publications Warehouse

    Hejl, H.R.

    1989-01-01

    The precipitation-runoff modeling system was applied to the 8.21 sq-mi drainage area of the Ah-shi-sle-pah Wash watershed in northwestern New Mexico. The calibration periods were May to September of 1981 and 1982, and the verification period was May to September 1983. Twelve storms were available for calibration and 8 storms were available for verification. For calibration A (hydraulic conductivity estimated from onsite data and other storm-mode parameters optimized), the computed standard error of estimate was 50% for runoff volumes and 72% for peak discharges. Calibration B included hydraulic conductivity in the optimization, which reduced the standard error of estimate to 28% for runoff volumes and 50% for peak discharges. Optimized values for hydraulic conductivity resulted in reductions from 1.00 to 0.26 in/h and 0.20 to 0.03 in/h for the two general soil groups in the calibrations. Simulated runoff volumes using 7 of 8 storms occurring during the verification period had a standard error of estimate of 40% for verification A and 38% for verification B. Simulated peak discharge had a standard error of estimate of 120% for verification A and 56% for verification B. Including the eighth storm, which had a relatively small magnitude, in the verification analysis more than doubled the standard errors of estimate for volumes and peaks. (USGS)
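As a rough illustration of the percent standard error of estimate reported in this record, here is one common hydrologic convention (an assumption on my part, not taken from the report): compute the standard deviation of natural-log residuals between simulated and observed values and convert it to an equivalent percentage. The runoff values below are hypothetical.

```python
import numpy as np

def percent_standard_error(observed, simulated):
    """Percent standard error of estimate from natural-log residuals.

    One common hydrologic convention (an assumption here): take the
    sample standard deviation of ln(simulated/observed) and convert it
    to an equivalent percentage.
    """
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    s = np.std(np.log(sim / obs), ddof=1)
    return 100.0 * np.sqrt(np.exp(s**2) - 1.0)

# Hypothetical storm runoff volumes (not the report's data)
obs = np.array([1.2, 0.8, 2.5, 0.4, 1.9, 3.1, 0.6])
sim = np.array([1.0, 1.1, 2.0, 0.5, 2.4, 2.6, 0.9])
print(round(percent_standard_error(obs, sim), 1))  # a few tens of percent
```

Halving such a figure, as calibration B did relative to calibration A, corresponds to substantially tighter clustering of simulated values around the observations.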

  1. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    PubMed

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
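The correction this abstract argues for can be sketched as follows: a linear two-stage residual inclusion (control-function) estimate whose standard error is obtained by bootstrapping both stages, so that first-stage uncertainty is not ignored. The data-generating values are hypothetical, and this is a sketch of the general technique rather than the paper's code or its Newey/Terza variance formulas.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
z = rng.integers(0, 3, n).astype(float)   # genotype instrument (0/1/2)
u = rng.normal(size=n)                    # unmeasured confounder
x = 0.5 * z + u + rng.normal(size=n)      # exposure
y = 0.3 * x + u + rng.normal(size=n)      # outcome; true causal effect 0.3

def tsri(z, x, y):
    """Linear two-stage residual inclusion (a sketch, not the paper's code)."""
    Z1 = np.column_stack([np.ones_like(z), z])
    b1 = np.linalg.lstsq(Z1, x, rcond=None)[0]
    r = x - Z1 @ b1                       # first-stage residual
    X2 = np.column_stack([np.ones_like(x), x, r])
    b2 = np.linalg.lstsq(X2, y, rcond=None)[0]
    return b2[1]                          # coefficient on the exposure

beta = tsri(z, x, y)

# Bootstrap SE that re-runs BOTH stages, so first-stage uncertainty
# propagates into the standard error (the correction the abstract calls for).
boots = []
for _ in range(200):
    i = rng.integers(0, n, n)
    boots.append(tsri(z[i], x[i], y[i]))
se_boot = np.std(boots)
print(beta, se_boot)
```

A naive analysis would instead report the second-stage OLS standard error on `x`, which treats the estimated residual as fixed.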

  2. ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers

    PubMed Central

    Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.

    2009-01-01

    Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211

  3. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). 
Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737

  4. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). 
Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.
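The version-comparison step described in this record reduces to a simple diff of projections against the +/-5% materiality threshold. A minimal sketch, with hypothetical projection values and version names:

```python
# Flag any projection that differs materially between parallel model
# versions (the abstract's +/-5% threshold). All numbers are hypothetical.

def pct_diff(a, b):
    return 100.0 * (a - b) / b

def compare_versions(projections, threshold=5.0):
    """projections: {version_name: {outcome: value}}. Returns flagged cells
    relative to the first version listed."""
    names = list(projections)
    ref = projections[names[0]]
    flags = []
    for other in names[1:]:
        for outcome, value in projections[other].items():
            d = pct_diff(value, ref[outcome])
            if abs(d) > threshold:
                flags.append((other, outcome, round(d, 1)))
    return flags

projections = {
    "named_matrices":  {"in_care": 1000, "on_treatment": 820},
    "named_cells":     {"in_care": 1030, "on_treatment": 900},
    "column_row_refs": {"in_care": 1310, "on_treatment": 820},
}
flags = compare_versions(projections)
print(flags)
```

Concordant output across all versions is treated as error-free; any flagged cell points to an unintentional error in at least one implementation.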

  5. Mean annual runoff and peak flow estimates based on channel geometry of streams in northeastern and western Montana

    USGS Publications Warehouse

    Parrett, Charles; Omang, R.J.; Hull, J.A.

    1983-01-01

    Equations for estimating mean annual runoff and peak discharge from measurements of channel geometry were developed for western and northeastern Montana. The study area was divided into two regions for the mean annual runoff analysis, and separate multiple-regression equations were developed for each region. The active-channel width was determined to be the most important independent variable in each region. The standard error of estimate for the estimating equation using active-channel width was 61 percent in the Northeast Region and 38 percent in the West Region. The study area was divided into six regions for the peak discharge analysis, and multiple-regression equations relating channel geometry and basin characteristics to peak discharges having recurrence intervals of 2, 5, 10, 25, 50 and 100 years were developed for each region. The standard errors of estimate for the regression equations using only channel width as an independent variable ranged from 35 to 105 percent. The standard errors improved in four regions as basin characteristics were added to the estimating equations. (USGS)

  6. Data Assimilation in the Presence of Forecast Bias: The GEOS Moisture Analysis

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.; Todling, Ricardo

    1999-01-01

    We describe the application of the unbiased sequential analysis algorithm developed by Dee and da Silva (1998) to the GEOS DAS moisture analysis. The algorithm estimates the persistent component of model error using rawinsonde observations and adjusts the first-guess moisture field accordingly. Results of two seasonal data assimilation cycles show that moisture analysis bias is almost completely eliminated in all observed regions. The improved analyses cause a sizable reduction in the 6h-forecast bias and a marginal improvement in the error standard deviations.

  7. Autonomous satellite navigation by stellar refraction

    NASA Technical Reports Server (NTRS)

    Gounley, R.; White, R.; Gai, E.

    1983-01-01

    This paper describes an error analysis of an autonomous navigator using refraction measurements of starlight passing through the upper atmosphere. The analysis is based on a discrete linear Kalman filter. The filter generated steady-state values of navigator performance for a variety of test cases. Results of these simulations show that in low-earth orbit position-error standard deviations of less than 0.100 km may be obtained using only 40 star sightings per orbit.
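The steady-state filter-covariance idea in this record can be shown on a scalar toy system: iterate the discrete Kalman covariance recursion until the error variance settles. The system values below are illustrative, not the paper's navigator model.

```python
import numpy as np

# Iterate the discrete Kalman covariance recursion to steady state for a
# scalar system (illustrative values, not the paper's navigator model).
phi, q, h, r = 1.0, 0.01, 1.0, 0.25  # transition, process noise, obs., meas. noise

p = 1.0                              # initial error variance
for _ in range(1000):
    p_pred = phi * p * phi + q       # time update
    k = p_pred * h / (h * p_pred * h + r)
    p = (1 - k * h) * p_pred         # measurement update

steady_sigma = np.sqrt(p)
print(steady_sigma)
```

The paper's analysis does the same thing with a full multi-state filter driven by refraction measurements, reading off steady-state position-error standard deviations per test case.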

  8. Student Distractor Choices on the Mathematics Virginia Standards of Learning Middle School Assessments

    ERIC Educational Resources Information Center

    Lewis, Virginia Vimpeny

    2011-01-01

    Number Concepts; Measurement; Geometry; Probability; Statistics; and Patterns, Functions and Algebra. Procedural Errors were further categorized into the following content categories: Computation; Measurement; Statistics; and Patterns, Functions, and Algebra. The results of the analysis showed the main sources of error for 6th, 7th, and 8th…

  9. Computation Error Analysis: Students with Mathematics Difficulty Compared to Typically Achieving Students

    ERIC Educational Resources Information Center

    Nelson, Gena; Powell, Sarah R.

    2018-01-01

    Though proficiency with computation is highly emphasized in national mathematics standards, students with mathematics difficulty (MD) continue to struggle with computation. To learn more about the differences in computation error patterns between typically achieving students and students with MD, we assessed 478 third-grade students on a measure…

  10. Modeling error distributions of growth curve models through Bayesian methods.

    PubMed

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
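The efficiency point in this abstract (treating heavy-tailed errors as normal costs precision) can be illustrated with a Monte Carlo sketch comparing OLS slopes with a simple robust alternative under t-distributed errors. This is a toy setup with hypothetical values, not the ECLS-K analysis or the proposed Bayesian machinery.

```python
import numpy as np

rng = np.random.default_rng(2)

# Growth data with heavy-tailed (t, 3 df) errors. We compare the sampling
# spread of OLS slopes (a normality-motivated estimator) against a simple
# robust fit (median of pairwise slopes, Theil-Sen style).
t = np.arange(5, dtype=float)        # 5 measurement occasions
ols_slopes, robust_slopes = [], []
for _ in range(400):
    y = 1.0 + 0.5 * t + rng.standard_t(3, size=5)
    ols_slopes.append(np.polyfit(t, y, 1)[0])        # OLS slope
    ps = [(y[j] - y[i]) / (t[j] - t[i])
          for i in range(5) for j in range(i + 1, 5)]
    robust_slopes.append(np.median(ps))              # pairwise-slope median

print(np.std(ols_slopes), np.std(robust_slopes))
```

Under heavy-tailed errors the OLS slopes typically scatter more widely than the robust ones, which is the efficiency loss the abstract's correctly specified error distribution avoids.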

  11. Comparative study of anatomical normalization errors in SPM and 3D-SSP using digital brain phantom.

    PubMed

    Onishi, Hideo; Matsutake, Yuki; Kawashima, Hiroki; Matsutomo, Norikazu; Amijima, Hizuru

    2011-01-01

    In single photon emission computed tomography (SPECT) cerebral blood flow studies, two major algorithms are widely used: statistical parametric mapping (SPM) and three-dimensional stereotactic surface projections (3D-SSP). The aim of this study is to compare an SPM algorithm-based easy Z score imaging system (eZIS) and a 3D-SSP system in terms of anatomical standardization errors, using 3D digital brain phantom images. We developed a 3D digital brain phantom based on MR images to simulate the effects of head tilt, perfusion defective region size, and count value reduction rate on the SPECT images. This digital phantom was used to compare the errors of anatomical standardization by the eZIS and the 3D-SSP algorithms. While the eZIS allowed accurate standardization of the images of the phantom simulating a head in rotation, lateroflexion, anteflexion, or retroflexion without angle dependency, the standardization by 3D-SSP was not accurate enough at approximately 25° or more of head tilt. When the simulated head contained perfusion defective regions, one of the 3D-SSP images showed an error of 6.9% from the true value, whereas one of the eZIS images showed an error as large as 63.4%, a significant underestimation. When required to evaluate regions with decreased perfusion due to such causes as hemodynamic cerebral ischemia, 3D-SSP is preferable. In statistical image analysis, the image must always be reconfirmed after anatomical standardization.

  12. Improving patient safety through quality assurance.

    PubMed

    Raab, Stephen S

    2006-05-01

    Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. To review some of the anatomic pathology laboratory patient safety quality assurance practices. Different standards and measures in anatomic pathology quality assurance and patient safety were reviewed. Frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and use of data for error reduction initiatives. Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed that the mean laboratory error frequency was 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories even for governmentally mandated quality assurance practices, such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives, using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.

  13. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    PubMed Central

    Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

    2015-01-01

    Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedures in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered a goal and the detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied to estimate human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in the PTW system. Suggestions for reducing the likelihood of errors, particularly by modifying the performance shaping factors and dependencies among tasks, are provided. PMID:27014485
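The SPAR-H quantification step can be sketched from the published method: a nominal human error probability (HEP) is multiplied by performance shaping factor (PSF) multipliers, with an adjustment that keeps the probability below 1 when several negative PSFs apply. The multipliers below are hypothetical, not the case study's worksheet values.

```python
def spar_h_hep(nhep, psf_multipliers, adjust=True):
    """SPAR-H style human error probability (a sketch of the published
    method, not the case study's worksheet). nhep is the nominal HEP
    (e.g., 0.01 for diagnosis or 0.001 for action in SPAR-H);
    psf_multipliers are the performance shaping factor multipliers.
    """
    comp = 1.0
    for m in psf_multipliers:
        comp *= m
    hep = nhep * comp
    if adjust and comp > 1.0:
        # Adjustment used when negative PSFs dominate, bounding HEP below 1
        hep = nhep * comp / (nhep * (comp - 1.0) + 1.0)
    return hep

# Hypothetical action task under time pressure and high stress
print(spar_h_hep(0.001, [10, 2, 1, 1]))
```

Per-task probabilities computed this way can then be aggregated across the hierarchical task analysis to identify the dominant contributors, such as the flammable gas testing step in this study.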

  14. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors in the receiver side. The analysis presented here assumes a unified expression for the PDF of channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using Q-function approximation. Further, the presented results are supported by the Monte Carlo simulations.
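The PDF-averaging idea in this record can be approximated numerically: compute the AWGN symbol error rate of square M-QAM at the instantaneous SNR and average over draws of the channel gain. Here a unit-mean gamma model is a simple stand-in for the paper's M-distributed turbulence with pointing errors, so this is a hedged Monte Carlo sketch, not the paper's closed-form power series.

```python
import math
import random

def qfunc(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ser_mqam_awgn(snr, M=16):
    """Exact symbol error rate of square M-QAM in AWGN."""
    p_axis = 2.0 * (1.0 - 1.0 / math.sqrt(M)) \
        * qfunc(math.sqrt(3.0 * snr / (M - 1.0)))
    return 1.0 - (1.0 - p_axis) ** 2

random.seed(0)
avg_snr = 100.0                      # 20 dB average SNR
sers = []
for _ in range(20000):
    # Unit-mean gamma fading as a stand-in for the paper's combined
    # turbulence and misalignment channel model.
    g = random.gammavariate(2.0, 0.5)
    sers.append(ser_mqam_awgn(avg_snr * g, M=16))
avg_ser = sum(sers) / len(sers)
print(avg_ser)
```

The paper instead expresses the Q-function via its Meijer G-function equivalent and integrates against the unified channel PDF analytically; Monte Carlo averaging like this is the check it uses to support those results.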

  15. Determination of longitudinal aerodynamic derivatives using flight data from an icing research aircraft

    NASA Technical Reports Server (NTRS)

    Ranaudo, R. J.; Batterson, J. G.; Reehorst, A. L.; Bond, T. H.; Omara, T. M.

    1989-01-01

    A flight test was performed with the NASA Lewis Research Center's DH-6 icing research aircraft. The purpose was to employ a flight test procedure and data analysis method, to determine the accuracy with which the effects of ice on aircraft stability and control could be measured. For simplicity, flight testing was restricted to the short period longitudinal mode. Two flights were flown in a clean (baseline) configuration, and two flights were flown with simulated horizontal tail ice. Forty-five repeat doublet maneuvers were performed in each of four test configurations, at a given trim speed, to determine the ensemble variation of the estimated stability and control derivatives. Additional maneuvers were also performed in each configuration, to determine the variation in the longitudinal derivative estimates over a wide range of trim speeds. Stability and control derivatives were estimated by a Modified Stepwise Regression (MSR) technique. A measure of the confidence in the derivative estimates was obtained by comparing the standard error for the ensemble of repeat maneuvers, to the average of the estimated standard errors predicted by the MSR program. A multiplicative relationship was determined between the ensemble standard error, and the averaged program standard errors. In addition, a 95 percent confidence interval analysis was performed for the elevator effectiveness estimates, C_m_delta_e. This analysis identified the speed range where changes in C_m_delta_e could be attributed to icing effects. The magnitude of icing effects on the derivative estimates was strongly dependent on flight speed and aircraft wing flap configuration. With wing flaps up, the estimated derivatives were degraded most at lower speeds corresponding to that configuration. With wing flaps extended to 10 degrees, the estimated derivatives were degraded most at the higher corresponding speeds. 
The effects of icing on the changes in longitudinal stability and control derivatives were adequately determined by the flight test procedure and the MSR analysis method discussed herein.

  16. Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.

    PubMed

    Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng

    2018-04-15

    This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.

  17. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) together form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
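An error budget like this typically combines independent 1-sigma contributors in root-sum-square fashion; a minimal sketch (Python/NumPy, with hypothetical contributor values rather than the report's actual budget):

```python
import numpy as np

# Independent 1-sigma error contributors combine in root-sum-square (RSS)
# fashion. The values below are invented for illustration, not the
# star tracker / COAS / IMU budget from the report.
contributors = np.array([40.0, 35.0, 30.0, 25.0])  # arc seconds, per axis

total = np.sqrt((contributors ** 2).sum())  # RSS combination
print(round(total, 1))                      # total 1-sigma alignment error
```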

  18. The Influence of Dimensionality on Estimation in the Partial Credit Model.

    ERIC Educational Resources Information Center

    De Ayala, R. J.

    1995-01-01

The effect of multidimensionality on partial credit model parameter estimation was studied with noncompensatory and compensatory data. Analysis results, consisting of root mean square error, bias, Pearson product-moment correlations, standardized root mean squared differences, standardized differences between means, and descriptive statistics…

  19. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance underestimation of conventional Cox-based standard errors. However, the two-step bootstrap method overestimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
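The cluster-bootstrap idea can be sketched outside the Cox setting (Python/NumPy, using the SE of a simple mean on simulated clustered data rather than a hazard-ratio estimate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy clustered data: 40 clusters of 10; a cluster-level random effect
# induces within-cluster correlation (a stand-in for clustered event times).
n_clusters, m = 40, 10
cluster_effect = rng.normal(0, 1, n_clusters)
y = cluster_effect[:, None] + rng.normal(0, 1, (n_clusters, m))

def cluster_bootstrap_se(y, n_boot=2000, rng=rng):
    """SE of the overall mean, resampling whole clusters with replacement."""
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, y.shape[0], y.shape[0])  # resample cluster ids
        stats.append(y[idx].mean())
    return np.std(stats, ddof=1)

naive_se = y.std(ddof=1) / np.sqrt(y.size)  # ignores clustering
cluster_se = cluster_bootstrap_se(y)
print(naive_se, cluster_se)  # the cluster-aware SE is markedly larger
```

The naive SE understates the true sampling variability for exactly the reason the abstract gives: the latent cluster-level effects are ignored.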

  20. Establishment of gold-quartz standard GQS-1

    USGS Publications Warehouse

    Millard, Hugh T.; Marinenko, John; McLane, John E.

    1969-01-01

A homogeneous gold-quartz standard, GQS-1, was prepared from a heterogeneous gold-bearing quartz by chemical treatment. The concentration of gold in GQS-1 was determined by both instrumental neutron activation analysis and radioisotope dilution analysis to be 2.61 ± 0.10 parts per million. Analysis of 10 samples of the standard by both instrumental neutron activation analysis and radioisotope dilution analysis failed to reveal heterogeneity within the standard. The precision of the analytical methods, expressed as standard error, was approximately 0.1 part per million. The analytical data were also used to estimate the average size of gold particles. The chemical treatment apparently reduced the average diameter of the gold particles by at least an order of magnitude and increased the concentration of gold grains by a factor of at least 4,000.

  1. 75 FR 17604 - Federal Motor Vehicle Safety Standards; Roof Crush Resistance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-07

    ... Safety Analysis & Forensic Engineering, LLC (SAFE) brought to our attention errors in the preamble that incorrectly attributed to it the comments of another organization, Safety Analysis, Inc. Both of these... Safety Analysis, Inc. SAFE noted that there is no affiliation between SAFE and Safety Analysis, Inc. and...

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Z. J.; Wells, D.; Green, J.

Photon Activation Analysis (PAA) of environmental, archaeological and industrial samples requires extensive data analysis that is susceptible to error. For the purpose of saving time, manpower and minimizing error, a computer program was designed, built and implemented using SQL, Access 2007 and asp.net technology to automate this process. Based on the peak information of the spectrum and assisted by its PAA library, the program automatically identifies elements in the samples and calculates their concentrations and respective uncertainties. The software can also be operated in browser/server mode, which makes it usable anywhere the internet is accessible. By switching the nuclide library and the related formulas behind it, the new software can be easily expanded to neutron activation analysis (NAA), charged particle activation analysis (CPAA) or proton-induced X-ray emission (PIXE). Implementation of this would standardize the analysis of nuclear activation data. Results from this software were compared to standard PAA analysis with excellent agreement. With minimum input from the user, the software has proven to be fast, user-friendly and reliable.

  3. WE-H-BRC-05: Catastrophic Error Metrics for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, S; Molloy, J

Purpose: Intuitive evaluation of complex radiotherapy treatments is impractical, while data transfer anomalies create the potential for catastrophic treatment delivery errors. Contrary to prevailing wisdom, logical scrutiny can be applied to patient-specific machine settings. Such tests can be automated, applied at the point of treatment delivery and can be dissociated from prior states of the treatment plan, potentially revealing errors introduced early in the process. Methods: Analytical metrics were formulated for conventional and intensity modulated RT (IMRT) treatments. These were designed to assess consistency between monitor unit settings, wedge values, prescription dose and leaf positioning (IMRT). Institutional metric averages for 218 clinical plans were stratified over multiple anatomical sites. Treatment delivery errors were simulated using a commercial treatment planning system and metric behavior assessed via receiver-operator-characteristic (ROC) analysis. A positive result was returned if the erred plan metric value exceeded a given number of standard deviations, e.g. 2. The finding was declared true positive if the dosimetric impact exceeded 25%. ROC curves were generated over a range of metric standard deviations. Results: Data for the conventional treatment metric indicated standard deviations of 3%, 12%, 11%, 8%, and 5% for brain, pelvis, abdomen, lung and breast sites, respectively. Optimum error declaration thresholds yielded true positive rates (TPR) between 0.7 and 1, and false positive rates (FPR) between 0 and 0.2. Two proposed IMRT metrics possessed standard deviations of 23% and 37%. The superior metric returned TPR and FPR of 0.7 and 0.2, respectively, when both leaf position and MU errors were modelled. Isolation to only leaf position errors yielded TPR and FPR values of 0.9 and 0.1. Conclusion: Logical tests can reveal treatment delivery errors and prevent large, catastrophic errors. Analytical metrics are able to identify errors in monitor units, wedging and leaf positions with favorable sensitivity and specificity. In part by Varian.
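The error-declaration rule described in the Methods can be sketched as a threshold test in units of the institutional standard deviation (Python/NumPy; the metric, its institutional distribution, and the erred value are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy consistency metric (e.g., MU per unit prescribed dose), with an
# institutional mean/SD learned from 218 clinical plans, as in the abstract.
clinical = rng.normal(100, 8, 218)  # ~8% spread, similar to the lung site
mu, sd = clinical.mean(), clinical.std(ddof=1)

def flag(metric_value, k=2.0):
    """Declare an error when the plan metric departs from the
    institutional average by more than k standard deviations."""
    return abs(metric_value - mu) > k * sd

erred_plan = 140.0  # simulated transfer error with large dosimetric impact
print(flag(erred_plan), flag(100.0))  # flagged vs. not flagged
```

Sweeping the threshold k and tallying true/false positives against the simulated dosimetric impact is what generates the ROC curves the abstract reports.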

  4. Error modelling of quantum Hall array resistance standards

    NASA Astrophysics Data System (ADS)

    Marzano, Martina; Oe, Takehiko; Ortolano, Massimo; Callegaro, Luca; Kaneko, Nobu-Hisa

    2018-04-01

    Quantum Hall array resistance standards (QHARSs) are integrated circuits composed of interconnected quantum Hall effect elements that allow the realization of virtually arbitrary resistance values. In recent years, techniques were presented to efficiently design QHARS networks. An open problem is that of the evaluation of the accuracy of a QHARS, which is affected by contact and wire resistances. In this work, we present a general and systematic procedure for the error modelling of QHARSs, which is based on modern circuit analysis techniques and Monte Carlo evaluation of the uncertainty. As a practical example, this method of analysis is applied to the characterization of a 1 MΩ QHARS developed by the National Metrology Institute of Japan. Software tools are provided to apply the procedure to other arrays.
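A Monte Carlo evaluation of array accuracy in the spirit of the abstract can be sketched for a deliberately tiny network (Python/NumPy; the topology and the link-resistance distribution are assumptions for illustration, not the 1 MΩ QHARS analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
R_H = 12906.4035  # quantized Hall resistance R_K/2 at the i=2 plateau, ohms

def array_resistance(r_link):
    """Toy QHARS: two parallel branches, each with two QHE elements in
    series, and a wire/contact resistance r_link in series with every
    element. Nominal value (r_link = 0) is R_H itself."""
    branch = 2 * (R_H + r_link)
    return branch / 2  # two identical branches in parallel

# Monte Carlo over uncertain link resistances (assumed 1 mOhm +/- 50%):
r = rng.uniform(0.5e-3, 1.5e-3, 100_000)
rel_err = (array_resistance(r) - R_H) / R_H
print(rel_err.mean(), rel_err.std())  # relative deviation from nominal
```

The paper's procedure does this systematically for arbitrary array topologies via circuit analysis; the sketch only shows how wire and contact resistances propagate into a relative error and its Monte Carlo uncertainty.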

  5. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    PubMed

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI, -0.03 to 0.32 D; p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28 D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
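The gain from accounting for inter-eye correlation in a between-eye contrast can be sketched directly (Python/NumPy on simulated data; the paired SE stands in for the mixed/marginal model here, and all parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated refractive error: a shared person-level component makes the
# two eyes of one person correlated, as in the CNV vs. fellow-eye study.
n = 100
person = rng.normal(0, 1, n)                  # shared between both eyes
cnv = person + 0.15 + rng.normal(0, 0.5, n)   # CNV eye, true diff 0.15 D
fellow = person + rng.normal(0, 0.5, n)       # unaffected fellow eye

# Ignoring inter-eye correlation: treat eyes as independent samples.
se_indep = np.sqrt(cnv.var(ddof=1) / n + fellow.var(ddof=1) / n)

# Accounting for the pairing (analogous to a mixed/marginal model):
d = cnv - fellow
se_paired = d.std(ddof=1) / np.sqrt(n)
print(se_indep, se_paired)  # the paired SE is much smaller
```

This mirrors the abstract's finding that the point estimate is unchanged but the CI narrows once the inter-eye correlation is modeled.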

  6. Analyzing communication errors in an air medical transport service.

    PubMed

    Dalto, Joseph D; Weir, Charlene; Thomas, Frank

    2013-01-01

Poor communication can result in adverse events. Presently, no standards exist for classifying and analyzing air medical communication errors. This study sought to determine the frequency and types of communication errors reported within an air medical quality and safety assurance reporting system. Of 825 quality assurance reports submitted in 2009, 278 were randomly selected and analyzed for communication errors. Each communication error was classified and mapped to Clark's communication level hierarchy (ie, levels 1-4). Descriptive statistics were performed, and comparisons were evaluated using chi-square analysis. Sixty-four communication errors were identified in 58 reports (21% of 278). Of the 64 identified communication errors, only 18 (28%) were classified by the staff to be communication errors. Communication errors occurred most often at level 1 (n = 42/64, 66%) followed by level 4 (n = 21/64, 33%). Level 2 and 3 communication failures were rare (<1%). Communication errors were found in a fifth of quality and safety assurance reports. The reporting staff identified less than a third of these errors. Nearly all communication errors (99%) occurred at either the lowest level of communication (level 1, 66%) or the highest level (level 4, 33%). An air medical communication ontology is necessary to improve the recognition and analysis of communication errors. Copyright © 2013 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  7. Three-dimensional quantitative structure-activity relationship studies on novel series of benzotriazine based compounds acting as Src inhibitors using CoMFA and CoMSIA.

    PubMed

    Gueto, Carlos; Ruiz, José L; Torres, Juan E; Méndez, Jefferson; Vivas-Reyes, Ricardo

    2008-03-01

Comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on a series of benzotriazine derivatives acting as Src inhibitors. Ligand molecular superimposition on the template structure was performed by the database alignment method. A statistically significant model was established from 72 molecules and validated with a test set of six compounds. The CoMFA model yielded a q(2)=0.526, non-cross-validated R(2) of 0.781, F value of 88.132, bootstrapped R(2) of 0.831, standard error of prediction=0.587, and standard error of estimate=0.351, while the CoMSIA model yielded the best predictive model with a q(2)=0.647, non-cross-validated R(2) of 0.895, F value of 115.906, bootstrapped R(2) of 0.953, standard error of prediction=0.519, and standard error of estimate=0.178. The contour maps obtained from the 3D-QSAR studies were appraised for activity trends for the molecules analyzed. Results indicate that small steric volumes in the hydrophobic region, electron-withdrawing groups next to the aryl linker region, and atoms close to the solvent accessible region increase the Src inhibitory activity of the compounds. In fact, by adding substituents at positions 5, 6, and 8 of the benzotriazine nucleus, new compounds having a higher predicted activity were generated. The data generated from the present study will further help to design novel, potent, and selective Src inhibitors as anticancer therapeutic agents.
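The cross-validated q(2) reported for these models is the leave-one-out statistic q² = 1 − PRESS/SS; a minimal sketch on a toy one-descriptor regression (Python/NumPy, not actual CoMFA field variables):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy QSAR-style data: one descriptor x, activity y.
n = 30
x = rng.uniform(0, 1, n)
y = 2.0 * x + rng.normal(0, 0.2, n)

# Leave-one-out cross-validation: refit without each compound, predict it,
# and accumulate the predictive residual sum of squares (PRESS).
press = 0.0
for i in range(n):
    mask = np.arange(n) != i
    b, a = np.polyfit(x[mask], y[mask], 1)   # slope, intercept
    press += (y[i] - (b * x[i] + a)) ** 2

ss = ((y - y.mean()) ** 2).sum()             # total sum of squares
q2 = 1 - press / ss                          # cross-validated q^2
print(round(q2, 3))
```

A q² near 1 indicates good predictive ability; values around 0.5-0.65, as in the abstract, are commonly taken as acceptable for 3D-QSAR models.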

  8. Cost-effectiveness of the stream-gaging program in New Jersey

    USGS Publications Warehouse

    Schopp, R.D.; Ulery, R.L.

    1984-01-01

The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources; applies, at selected stations, alternative less costly methods (that is, flow routing and regression analysis) for furnishing the data; and defines a strategy for operating the program which minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)

  9. Error analysis of mechanical system and wavelength calibration of monochromator

    NASA Astrophysics Data System (ADS)

    Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong

    2018-02-01

    This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
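The sine-drive relation underlying the error model can be sketched in first order (Python; the grating period, sine-bar length, and screw position are assumed values, and the paper's full mechanical error model contains more terms than this one):

```python
# Sine-drive monochromator, minimal first-order model: the grating angle
# satisfies sin(theta) = x / L, where x is the screw displacement and L the
# sine-bar length, so for first diffraction order
#     lambda = 2 * d * sin(theta) = 2 * d * x / L.
d = 1e-3 / 1200   # grating period for a 1200 lines/mm grating, metres
L = 0.2           # nominal sine-bar length, metres (assumed value)

def wavelength(x, L):
    return 2 * d * x / L

x = 0.06                      # screw position chosen to give 500 nm
lam = wavelength(x, L)
# A sine-bar length error dL maps to a wavelength error of about -lam*dL/L,
# which is why adjusting the sine-bar length calibrates the wavelength:
dL = 100e-6
d_lam = wavelength(x, L + dL) - lam
print(lam * 1e9, d_lam * 1e9)  # nm: nominal wavelength and its shift
```

This first-order sensitivity is what makes the paper's calibration-by-length-adjustment effective: correcting L removes a wavelength error proportional to the wavelength itself.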

  10. SSC Geopositional Assessment of the Advanced Wide Field Sensor

    NASA Technical Reports Server (NTRS)

    Ross, Kenton

    2006-01-01

    The geopositional accuracy of the standard geocorrected product from the Advanced Wide Field Sensor (AWiFS) was evaluated using digital orthophoto quarter quadrangles and other reference sources of similar accuracy. Images were analyzed from summer 2004 through spring 2005. Forty to fifty check points were collected manually per scene and analyzed to determine overall circular error, estimates of horizontal bias, and other systematic errors. Measured errors were somewhat higher than the specifications for the data, but they were consistent with the analysis of the distributing vendor.

  11. Measuring quality in anatomic pathology.

    PubMed

    Raab, Stephen S; Grzybicki, Dana Marie

    2008-06-01

    This article focuses mainly on diagnostic accuracy in measuring quality in anatomic pathology, noting that measuring any quality metric is complex and demanding. The authors discuss standardization and its variability within and across areas of care delivery and efforts involving defining and measuring error to achieve pathology quality and patient safety. They propose that data linking error to patient outcome are critical for developing quality improvement initiatives targeting errors that cause patient harm in addition to using methods of root cause analysis, beyond those traditionally used in cytologic-histologic correlation, to assist in the development of error reduction and quality improvement plans.

  12. Living systematic reviews: 3. Statistical methods for updating meta-analyses.

    PubMed

    Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian

    2017-11-01

A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods, the law of the iterated logarithm and the Shuster method, control primarily for inflation of type I error; two other methods, trial sequential analysis and sequential meta-analysis, control for both type I and type II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.
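The type I error inflation from repeatedly testing an updated meta-analysis can be checked by simulation (Python/NumPy; independent unit-variance observations under the null, a deliberate simplification of a real meta-analysis):

```python
import numpy as np

rng = np.random.default_rng(6)

# Monte Carlo: under a true null effect, test the cumulative z statistic
# after each of 10 "updates" and count reviews ever declared significant.
n_sim, n_updates, n_per = 4000, 10, 50
ever_sig = 0
for _ in range(n_sim):
    draws = rng.normal(0, 1, (n_updates, n_per))   # new data at each update
    k = np.arange(1, n_updates + 1)
    cum_n = k * n_per
    z = np.cumsum(draws.sum(axis=1)) / np.sqrt(cum_n)  # cumulative z
    ever_sig += np.any(np.abs(z) > 1.96)               # naive 5% test each time

rate = ever_sig / n_sim
print(rate)  # well above the nominal 0.05
```

This is the inflation that the law-of-the-iterated-logarithm and sequential methods in the paper are designed to control.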

  13. Panel positioning error and support mechanism for a 30-m THz radio telescope

    NASA Astrophysics Data System (ADS)

    Yang, De-Hua; Okoh, Daniel; Zhou, Guo-Hua; Li, Ai-Hua; Li, Guo-Ping; Cheng, Jing-Quan

    2011-06-01

A 30-m TeraHertz (THz) radio telescope is proposed to operate at 200 μm with an active primary surface. This paper presents a sensitivity analysis of active surface panel positioning errors with optical performance in terms of the Strehl ratio. Based on Ruze's surface error theory and using a Monte Carlo simulation, the effects of six rigid panel positioning errors, namely piston, tip, tilt, radial, azimuthal and twist displacements, were directly derived. The optical performance of the telescope was then evaluated using the standard Strehl ratio. We graphically illustrated the various panel error effects by presenting simulations of complete ensembles of full reflector surface errors for the six different rigid panel positioning errors. Study of the panel error sensitivity analysis revealed that the piston and tilt/tip errors are dominant while the other rigid errors are much less important. Furthermore, as indicated by the results, we conceived of an alternative Master-Slave Concept-based (MSC-based) active surface by implementing a special Series-Parallel Concept-based (SPC-based) hexapod as the active panel support mechanism. A new 30-m active reflector based on the two concepts was demonstrated to achieve correction for all six rigid panel positioning errors in an economically feasible way.
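Ruze's surface-error theory, on which the sensitivity analysis builds, relates an RMS surface error ε to an efficiency (Strehl-like) loss of exp(−(4πε/λ)²); a minimal sketch at the 200 μm operating wavelength (Python/NumPy):

```python
import numpy as np

lam = 200e-6  # operating wavelength: 200 micrometres

def strehl(rms):
    """Ruze's formula: efficiency loss from a random surface error with
    RMS value `rms` (same units as lam)."""
    return np.exp(-(4 * np.pi * rms / lam) ** 2)

# RMS surface error that still gives Strehl = 0.8 (a common criterion):
rms_80 = lam / (4 * np.pi) * np.sqrt(-np.log(0.8))
print(rms_80 * 1e6, strehl(rms_80))  # micrometres, and the check value 0.8
```

The micron-scale tolerance this implies at 200 μm is why per-panel piston and tip/tilt errors dominate the budget and why an active surface is needed.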

  14. Ion diffusion may introduce spurious current sources in current-source density (CSD) analysis.

    PubMed

    Halnes, Geir; Mäki-Marttunen, Tuomo; Pettersen, Klas H; Andreassen, Ole A; Einevoll, Gaute T

    2017-07-01

    Current-source density (CSD) analysis is a well-established method for analyzing recorded local field potentials (LFPs), that is, the low-frequency part of extracellular potentials. Standard CSD theory is based on the assumption that all extracellular currents are purely ohmic, and thus neglects the possible impact from ionic diffusion on recorded potentials. However, it has previously been shown that in physiological conditions with large ion-concentration gradients, diffusive currents can evoke slow shifts in extracellular potentials. Using computer simulations, we here show that diffusion-evoked potential shifts can introduce errors in standard CSD analysis, and can lead to prediction of spurious current sources. Further, we here show that the diffusion-evoked prediction errors can be removed by using an improved CSD estimator which accounts for concentration-dependent effects. NEW & NOTEWORTHY Standard CSD analysis does not account for ionic diffusion. Using biophysically realistic computer simulations, we show that unaccounted-for diffusive currents can lead to the prediction of spurious current sources. This finding may be of strong interest for in vivo electrophysiologists doing extracellular recordings in general, and CSD analysis in particular. Copyright © 2017 the American Physiological Society.
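The standard (purely ohmic) CSD estimator that the paper takes as its starting point is the negative second spatial derivative of the LFP scaled by the extracellular conductivity; a minimal sketch (Python/NumPy, with a toy LFP profile and an assumed conductivity value):

```python
import numpy as np

sigma = 0.3   # extracellular conductivity, S/m (typical assumed value)
h = 100e-6    # electrode spacing along the probe, m
z = np.arange(16) * h                                  # electrode depths
phi = 1e-3 * np.exp(-((z - z.mean()) / (3 * h)) ** 2)  # toy LFP profile, V

# Standard CSD estimate: CSD = -sigma * d^2(phi)/dz^2, discretized as the
# second central difference over the interior channels.
csd = -sigma * (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2
print(csd.shape)  # interior channels only (first and last are lost)
```

Because this estimator assumes all currents are ohmic, any diffusion-evoked contribution to phi appears in csd as a spurious source or sink, which is the error the paper's concentration-aware estimator removes.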

15. Use of streamflow data to estimate base flow/ground-water recharge for Wisconsin

    USGS Publications Warehouse

    Gebert, W.A.; Radloff, M.J.; Considine, E.J.; Kennedy, J.L.

    2007-01-01

The average annual base flow/recharge was determined for streamflow-gaging stations throughout Wisconsin by base-flow separation. A map of the State was prepared that shows the average annual base flow for the period 1970-99 for watersheds at 118 gaging stations. Trend analysis was performed on 22 of the 118 streamflow-gaging stations that had long-term records, unregulated flow, and provided aerial coverage of the State. The analysis found that a statistically significant increasing trend was occurring for watersheds where the primary land use was agriculture. Most gaging stations where the land cover was forest had no significant trend. A method to estimate the average annual base flow at ungaged sites was developed by multiple-regression analysis using basin characteristics. The equation with the lowest standard error of estimate, 9.5%, has drainage area, soil infiltration and base flow factor as independent variables. To determine the average annual base flow for smaller watersheds, estimates were made at low-flow partial-record stations in 3 of the 12 major river basins in Wisconsin. Regression equations were developed for each of the three major river basins using basin characteristics. Drainage area, soil infiltration, basin storage and base-flow factor were the independent variables in the regression equations with the lowest standard error of estimate. The standard error of estimate ranged from 17% to 52% for the three river basins. © 2007 American Water Resources Association.

  16. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    DOE PAGES

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  17. Measurement errors in voice-key naming latency for Hiragana.

    PubMed

    Yamada, Jun; Tamaoka, Katsuo

    2003-12-01

This study makes explicit the limitations and possibilities of voice-key naming latency research on single hiragana symbols (a Japanese syllabic script) by examining three sets of voice-key naming data against Sakuma, Fushimi, and Tatsumi's 1997 speech-analyzer voice-waveform data. Analysis showed that voice-key measurement errors can be substantial in standard procedures, as they may conceal the true effects of significant variables involved in hiragana-naming behavior. While one can avoid voice-key measurement errors to some extent by applying Sakuma et al.'s deltas and by excluding initial phonemes that induce measurement errors, such errors may be ignored when test items are words and other higher-level linguistic materials.

  18. National suicide rates a century after Durkheim: do we know enough to estimate error?

    PubMed

    Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W

    2010-06-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.

  19. The effect of covariate mean differences on the standard error and confidence interval for the comparison of treatment means.

    PubMed

    Liu, Xiaofeng Steven

    2011-05-01

The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T(2) statistic. Using this Hotelling's T(2) statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
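For a single covariate, the inflation described here is visible directly in the textbook ANCOVA variance formula, where the covariate mean difference enters as an additional term (Python sketch with made-up numbers, not the article's multivariate Hotelling's T(2) treatment):

```python
import numpy as np

# SE of the covariate-adjusted mean difference with one covariate (ANCOVA):
#   Var = sigma^2 * (1/n1 + 1/n2 + (xbar1 - xbar2)^2 / Sxx)
# The third term is the inflation caused by covariate mean differences;
# it vanishes only when the covariate means are balanced.
def adjusted_se(sigma2, n1, n2, xbar_diff, sxx):
    return np.sqrt(sigma2 * (1 / n1 + 1 / n2 + xbar_diff**2 / sxx))

sigma2, n1, n2, sxx = 1.0, 25, 25, 50.0
print(adjusted_se(sigma2, n1, n2, 0.0, sxx))  # balanced covariate means
print(adjusted_se(sigma2, n1, n2, 2.0, sxx))  # inflated by the mean difference
```

Whether adjustment helps overall depends on how much sigma² itself shrinks relative to this inflation term, which is the article's point about the reduction being neither guaranteed nor uniform.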

  20. "A Doubt is at Best an Unsafe Standard": Measuring Sugar in the Early Bureau of Standards.

    PubMed

    Singerman, David

    2007-01-01

In 1900, measuring the purity of sugar was a problem with serious economic consequences, and Congress created the Bureau of Standards in part to create accurate standards for saccharimetry. To direct the Polarimetry Section, Director Stratton hired the young chemist Frederick Bates, who went on to make significant contributions to the discipline of sugar chemistry. This paper explores four of Bates's greatest accomplishments: identifying the error caused by the clarifying agent lead acetate, inventing the remarkable quartz-compensating saccharimeter with adjustable sensibility, discovering the significant error in the prevailing Ventzke saccharimetric scale, and reviving the International Commission for Uniform Methods of Sugar Analysis to unify the international community of chemists after the tensions of World War One. It also shows how accomplishments in saccharimetry reflected the growing importance and confidence of the Bureau of Standards, and how its scientific success smoothed the operation of American commerce.

  1. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.

    PubMed

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-11-14

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
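Under the multinomial assumption, a standard error for a norm statistic can also be approximated by resampling the score frequency table; a sketch for the percentile rank of one score (Python/NumPy; the authors derive analytic standard errors, and their R implementation is the check.norms function in the mokken package):

```python
import numpy as np

rng = np.random.default_rng(4)

scores = rng.integers(0, 41, 500)           # toy test scores on a 0-40 scale
counts = np.bincount(scores, minlength=41)  # score frequency table
n = counts.sum()

def pr20(c):
    """Percentile rank of score 20 (mid-point convention)."""
    return 100 * (c[:20].sum() + c[20] / 2) / c.sum()

# Multinomial bootstrap: resample the frequency table (the only modeling
# assumption is that scores are multinomially distributed), then recompute.
boots = [pr20(rng.multinomial(n, counts / n)) for _ in range(2000)]
print(pr20(counts), np.std(boots, ddof=1))  # statistic and its bootstrap SE
```

The same resampling scheme applies to the other norm statistics in the abstract (standard deviation, stanine boundaries, Z-scores), since each is a function of the frequency table.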

  2. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  3. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.

    2013-01-01

    Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

  4. Telemetry Standards, RCC Standard 106-17, Annex A.1, Pulse Amplitude Modulation Standards

    DTIC Science & Technology

    2017-07-01

    conform to either of two figures whose cross-references are corrupted in the source ("Error! No text of specified style in document."). 50 percent duty cycle PAM with amplitude synchronization. A 20-25 percent deviation reserved for pulse synchronization is recommended. Telemetry Standards, RCC Standard 106-17, Annex A.1, July 2017.

  5. Three-dimensional quantitative structure-activity relationship CoMSIA/CoMFA and LeapFrog studies on novel series of bicyclo [4.1.0] heptanes derivatives as melanin-concentrating hormone receptor R1 antagonists.

    PubMed

    Morales-Bayuelo, Alejandro; Ayazo, Hernan; Vivas-Reyes, Ricardo

    2010-10-01

    Comparative molecular similarity indices analysis (CoMSIA) and comparative molecular field analysis (CoMFA) were performed on a series of bicyclo [4.1.0] heptanes derivatives as melanin-concentrating hormone receptor R1 antagonists (MCHR1 antagonists). Molecular superimposition of the antagonists on the template structure was performed by the database alignment method. A statistically significant model was established on sixty-five molecules and validated with a test set of ten molecules. The CoMSIA model yielded the best predictive model with q(2) = 0.639, non-cross-validated R(2) of 0.953, F value of 92.802, bootstrapped R(2) of 0.971, standard error of prediction = 0.402, and standard error of estimate = 0.146, while the CoMFA model yielded q(2) = 0.680, non-cross-validated R(2) of 0.922, F value of 114.351, bootstrapped R(2) of 0.925, standard error of prediction = 0.364, and standard error of estimate = 0.180. CoMFA analysis maps were employed to generate a pseudo cavity for the LeapFrog calculation. The contour maps obtained from the 3D-QSAR studies were appraised for activity trends in the molecules analyzed. The results show the variability of the steric and electrostatic contributions that determine the activity of the MCHR1 antagonists. On the basis of these results, we propose new antagonists that may be more potent than those previously reported; these novel antagonists were designed by adding highly electronegative groups to the di(i-C(3)H(7))N- substituent of the bicyclo[4.1.0]heptanes, using the CoMFA model, which was also employed for molecular design with the LeapFrog technique. The data generated from the present study will further help to design novel, potent, and selective MCHR1 antagonists. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.
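
    The q(2) statistic quoted in such QSAR studies is the leave-one-out cross-validated coefficient, q^2 = 1 - PRESS / SS_total. A generic sketch with hypothetical descriptors and activities (not the CoMSIA/CoMFA fields):

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2 for ordinary least squares:
    q^2 = 1 - PRESS / SS_total."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        # fit OLS (with intercept) on all samples except i
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = coef[0] + X[i] @ coef[1:]
        press += (y[i] - pred) ** 2
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - press / ss_tot

# hypothetical descriptor matrix and activities
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -0.5, 0.2]) + rng.normal(scale=0.1, size=30)
q2 = loo_q2(X, y)
```

A q^2 well below the non-cross-validated R^2 signals overfitting, which is why both are reported.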

  6. Cost-effectiveness of the U.S. Geological Survey stream-gaging program in Indiana

    USGS Publications Warehouse

    Stewart, J.A.; Miller, R.L.; Butch, G.K.

    1986-01-01

    Analysis of the stream gaging program in Indiana was divided into three phases. The first phase involved collecting information concerning the data need and the funding source for each of the 173 surface water stations in Indiana. The second phase used alternate methods to produce streamflow records at selected sites. Statistical models were used to generate stream flow data for three gaging stations. In addition, flow routing models were used at two of the sites. Daily discharges produced from models did not meet the established accuracy criteria and, therefore, these methods should not replace stream gaging procedures at those gaging stations. The third phase of the study determined the uncertainty of the rating and the error at individual gaging stations, and optimized travel routes and frequency of visits to gaging stations. The annual budget, in 1983 dollars, for operating the stream gaging program in Indiana is $823,000. The average standard error of instantaneous discharge for all continuous record gaging stations is 25.3%. A budget of $800,000 could maintain this level of accuracy if stream gaging stations were visited according to phase III results. A minimum budget of $790,000 is required to operate the gaging network. At this budget, the average standard error of instantaneous discharge would be 27.7%. A maximum budget of $1,000,000 was simulated in the analysis and the average standard error of instantaneous discharge was reduced to 16.8%. (Author's abstract)

  7. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data

    PubMed Central

    Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-01-01

    Purpose To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741
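
    The core point, that treating two correlated eyes as independent observations understates standard errors, can be reproduced with a toy simulation (made-up data; a sketch of the general phenomenon, not the SAS analyses in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n_subj = 200
# hypothetical data: two eyes per subject sharing a subject-level effect,
# so the two eyes of one person are positively correlated
subj = rng.normal(0.0, 1.0, n_subj)                  # between-subject component
eyes = subj[:, None] + rng.normal(0.0, 0.5, (n_subj, 2))
y = eyes.ravel()

# naive SE of the mean treats all 2*n_subj eyes as independent
se_naive = y.std(ddof=1) / np.sqrt(y.size)

# cluster-aware SE: collapse to one value per subject first
subj_means = eyes.mean(axis=1)
se_cluster = subj_means.std(ddof=1) / np.sqrt(n_subj)
```

The naive standard error comes out smaller than the cluster-aware one, which is exactly the underestimation (and resulting too-small P-values) the abstract warns about.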

  8. Galilean-invariant preconditioned central-moment lattice Boltzmann method without cubic velocity errors for efficient steady flow simulations

    NASA Astrophysics Data System (ADS)

    Hajabdollahi, Farzaneh; Premnath, Kannan N.

    2018-05-01

    Lattice Boltzmann (LB) models used for the computation of fluid flows represented by the Navier-Stokes (NS) equations on standard lattices can lead to non-Galilean-invariant (GI) viscous stress involving cubic velocity errors. This arises from the dependence of their third-order diagonal moments on the first-order moments for standard lattices, and strategies have recently been introduced to restore Galilean invariance without such errors using a modified collision operator involving corrections to either the relaxation times or the moment equilibria. Convergence acceleration in the simulation of steady flows can be achieved by solving the preconditioned NS equations, which contain a preconditioning parameter that can be used to tune the effective sound speed, and thereby alleviating the numerical stiffness. In the present paper, we present a GI formulation of the preconditioned cascaded central-moment LB method used to solve the preconditioned NS equations, which is free of cubic velocity errors on a standard lattice, for steady flows. A Chapman-Enskog analysis reveals the structure of the spurious non-GI defect terms and it is demonstrated that the anisotropy of the resulting viscous stress is dependent on the preconditioning parameter, in addition to the fluid velocity. It is shown that partial correction to eliminate the cubic velocity defects is achieved by scaling the cubic velocity terms in the off-diagonal third-order moment equilibria with the square of the preconditioning parameter. Furthermore, we develop additional corrections based on the extended moment equilibria involving gradient terms with coefficients dependent locally on the fluid velocity and the preconditioning parameter. Such parameter dependent corrections eliminate the remaining truncation errors arising from the degeneracy of the diagonal third-order moments and fully restore Galilean invariance without cubic defects for the preconditioned LB scheme on a standard lattice. 
Several conclusions are drawn from the analysis of the structure of the non-GI errors and the associated corrections, with particular emphasis on their dependence on the preconditioning parameter. The GI preconditioned central-moment LB method is validated for a number of complex flow benchmark problems and its effectiveness to achieve convergence acceleration and improvement in accuracy is demonstrated.

  9. A Critical Review of the Massachusetts Next Generation Science and Technology/Engineering Standards. Policy Brief

    ERIC Educational Resources Information Center

    Metzenberg, Stan

    2015-01-01

    Stan Metzenberg offers a critical analysis of the draft "Massachusetts Science and Technology/Engineering Standards," which are for pre-Kindergarten to Grade 8 and introductory high school courses. Metzenberg claims that the document reveals significant, unacceptable gaps in science content, as well as some notable errors and…

  10. Medication safety--reliability of preference cards.

    PubMed

    Dawson, Anthony; Orsini, Michael J; Cooper, Mary R; Wollenburg, Karol

    2005-09-01

    A clinical analysis of surgeons' preference cards was initiated in one hospital as part of a comprehensive analysis to reduce medication-error risks by standardizing and simplifying the intraoperative medication-use process specific to the sterile field. The preference card analysis involved two subanalyses: a review of the information as it appeared on the cards and a failure mode and effects analysis of the process involved in using and maintaining the cards. The analysis found that the preference card system in use at this hospital is outdated. Variations and inconsistencies within the preference card system indicate that the use of preference cards as guides for medication selection for surgical procedures presents an opportunity for medication errors to occur.

  11. Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation

    NASA Astrophysics Data System (ADS)

    Lychak, Oleh V.; Holyns'kiy, Ivan S.

    2016-03-01

    The use of the Williams’ series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of the random errors of the Williams’ series parameters, obtained from the measured components of the stress field. A criterion for choosing the number of terms in the truncated Williams’ series that yields parameters with minimal errors is also proposed. The method was used to evaluate the Williams’ parameters obtained from stress-field data measured by the digital image correlation technique on a three-point bending specimen.
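
    For parameters obtained by linear least squares from measured field data, standard deviations of the random errors follow from the parameter covariance matrix. A generic sketch (the basis functions and data here are hypothetical stand-ins, not the actual Williams' series terms):

```python
import numpy as np

rng = np.random.default_rng(3)
r = rng.uniform(1.0, 5.0, 100)                            # measurement points
A = np.column_stack([r**-0.5, r**0.5, np.ones_like(r)])   # design matrix
true = np.array([2.0, 0.3, -1.0])                         # "true" coefficients
obs = A @ true + rng.normal(scale=0.05, size=100)         # noisy observations

coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
dof = len(obs) - A.shape[1]
sigma2 = ((obs - A @ coef) ** 2).sum() / dof              # residual variance
cov = sigma2 * np.linalg.inv(A.T @ A)                     # parameter covariance
se = np.sqrt(np.diag(cov))                                # std. errors of coefs
```

Adding more series terms than the data support inflates these standard errors, which is the trade-off behind choosing an optimal truncation.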

  12. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in southeast Alaska.

    Treesearch

    Willem W.S. van Hees

    2002-01-01

    Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
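
    The two estimators being compared can be sketched generically: the traditional linearization variance of a ratio-of-means, and a bootstrap approximation from resampled plots. The data below are invented, not the southeast Alaska inventory:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60
x = rng.gamma(2.0, 1.0, n)            # hypothetical auxiliary variable per plot
y = 5.0 * x + rng.normal(0, 2.0, n)   # hypothetical response per plot

ratio = y.mean() / x.mean()           # ratio-of-means (ROM) estimator

# traditional (linearization) variance estimator for the ROM
resid = y - ratio * x
var_lin = (resid**2).sum() / ((n - 1) * n * x.mean()**2)
se_lin = np.sqrt(var_lin)

# bootstrap approximation: resample plots with replacement
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(y[idx].mean() / x[idx].mean())
se_boot = np.std(boots, ddof=1)
```

With well-behaved data the two standard errors agree closely; divergence between them is what comparisons like the one above are designed to detect.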

  13. Human error and the search for blame

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Human error is a frequent topic in discussions about risks in using computer systems. A rational analysis of human error leads through the consideration of mistakes to standards that designers use to avoid mistakes that lead to known breakdowns. The irrational side, however, is more interesting. It conditions people to think that breakdowns are inherently wrong and that there is ultimately someone who is responsible. This leads to a search for someone to blame which diverts attention from: learning from the mistakes; seeing the limitations of current engineering methodology; and improving the discourse of design.

  14. Using aggregate data to estimate the standard error of a treatment-covariate interaction in an individual patient data meta-analysis.

    PubMed

    Kovalchik, Stephanie A; Cumberland, William G

    2012-05-01

    Subgroup analyses are important to medical research because they shed light on the heterogeneity of treatment effects. A treatment-covariate interaction in an individual patient data (IPD) meta-analysis is the most reliable means to estimate how a subgroup factor modifies a treatment's effectiveness. However, owing to the challenges in collecting participant data, an approach based on aggregate data might be the only option. In these circumstances, it would be useful to assess the relative efficiency and power loss of a subgroup analysis without patient-level data. We present methods that use aggregate data to estimate the standard error of an IPD meta-analysis' treatment-covariate interaction for regression models of a continuous or dichotomous patient outcome. Numerical studies indicate that the estimators have good accuracy. An application to a previously published meta-regression illustrates the practical utility of the methodology. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. The two errors of using the within-subject standard deviation (WSD) as the standard error of a reliable change index.

    PubMed

    Maassen, Gerard H

    2010-08-01

    In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, because WSD is the standard error of measurement of only a single assessment, it is too small when practice effects are absent; too many individuals will then be designated reliably changed. Second, WSD can grow without bound to the extent that differential practice effects occur, which can even make RCI(WSD) unable to detect any reliable change.
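
    For contrast, the conventional reliable change index scales the retest difference by a standard error of the difference derived from test reliability, S_diff = sqrt(2) * SEM. A sketch with hypothetical numbers:

```python
import math

def rci_jacobson_truax(x1, x2, sd1, r_xx):
    """Classic reliable change index: difference between two assessments
    scaled by S_diff = sqrt(2) * SEM, with SEM = sd1 * sqrt(1 - r_xx)."""
    sem = sd1 * math.sqrt(1.0 - r_xx)
    s_diff = math.sqrt(2.0) * sem
    return (x2 - x1) / s_diff

# hypothetical retest: baseline 50, follow-up 58, SD 10, reliability 0.90
rci = rci_jacobson_truax(50, 58, 10.0, 0.9)
changed = abs(rci) > 1.96       # conventional 95% cutoff
```

Substituting WSD for S_diff changes the denominator, which is exactly where the two errors described in the abstract enter.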

  16. Accuracy of a Radiological Evaluation Method for Thoracic and Lumbar Spinal Curvatures Using Spinous Processes.

    PubMed

    Marchetti, Bárbara V; Candotti, Cláudia T; Raupp, Eduardo G; Oliveira, Eduardo B C; Furlanetto, Tássia S; Loss, Jefferson F

    The purpose of this study was to assess a radiographic method for spinal curvature evaluation in children, based on spinous processes, and identify its normality limits. The sample consisted of 90 radiographic examinations of the spines of children in the sagittal plane. Thoracic and lumbar curvatures were evaluated using angular (apex angle [AA]) and linear (sagittal arrow [SA]) measurements based on the spinous processes. The same curvatures were also evaluated using the Cobb angle (CA) method, which is considered the gold standard. For concurrent validity (AA vs CA), Pearson's product-moment correlation coefficient, root-mean-square error, Pitman-Morgan test, and Bland-Altman analysis were used. For reproducibility (AA, SA, and CA), the intraclass correlation coefficient, standard error of measurement, and minimal detectable change measurements were used. A significant correlation was found between CA and AA measurements, as was a low root-mean-square error. The mean difference between the measurements was 0° for thoracic and lumbar curvatures, and the mean standard deviations of the differences were ±5.9° and 6.9°, respectively. The intraclass correlation coefficients of AA and SA were similar to or higher than the gold standard (CA). The standard error of measurement and minimal detectable change of the AA were always lower than the CA. This study determined the concurrent validity, as well as intra- and interrater reproducibility, of the radiographic measurements of kyphosis and lordosis in children. Copyright © 2017. Published by Elsevier Inc.
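
    The reliability statistics used here follow the standard relations SEM = SD * sqrt(1 - ICC) and MDC = 1.96 * sqrt(2) * SEM. A minimal sketch with hypothetical values, not the study's data:

```python
import math

def sem_and_mdc(sd, icc, z=1.96):
    """Standard error of measurement and minimal detectable change as
    conventionally derived from a reliability coefficient:
    SEM = SD * sqrt(1 - ICC);  MDC = z * sqrt(2) * SEM."""
    sem = sd * math.sqrt(1.0 - icc)
    mdc = z * math.sqrt(2.0) * sem
    return sem, mdc

# hypothetical values: between-subject SD of 4 degrees, ICC of 0.91
sem, mdc = sem_and_mdc(4.0, 0.91)
```

A higher ICC directly shrinks both SEM and MDC, which is why a measurement with ICCs at or above the gold standard's also shows lower SEM and MDC.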

  17. Cost effectiveness of stream-gaging program in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.

    1985-01-01

    This report documents the results of a study of the cost effectiveness of the stream-gaging program in Michigan. Data uses and funding sources were identified for the 129 continuous gaging stations being operated in Michigan as of 1984. One gaging station was identified as having insufficient reason to continue its operation. Several stations were identified for reactivation, should funds become available, because of insufficiencies in the data network. Alternative methods of developing streamflow information based on routing and regression analyses were investigated for 10 stations. However, no station records were reproduced with sufficient accuracy to replace conventional gaging practices. A cost-effectiveness analysis of the data-collection procedure for the ice-free season was conducted using a Kalman-filter analysis. To define missing-record characteristics, cross-correlation coefficients and coefficients of variation were computed at stations on the basis of daily mean discharge. Discharge-measurement data were used to describe the stage-discharge rating stability at each station. The results of the cost-effectiveness analysis for a 9-month ice-free season show that the current policy of visiting most stations on a fixed servicing schedule once every 6 weeks results in an average standard error of 12.1 percent for the current $718,100 budget. By adopting a flexible servicing schedule, the average standard error could be reduced to 11.1 percent. Alternatively, the budget could be reduced to $700,200 while maintaining the current level of accuracy. A minimum budget of $680,200 is needed to operate the 129-gaging-station program; a budget less than this would not permit proper service and maintenance of stations. At the minimum budget, the average standard error would be 14.4 percent. A budget of $789,900 (the maximum analyzed) would result in a decrease in the average standard error to 9.07 percent.
Owing to continual changes in the composition of the network and the changes in the uncertainties of streamflow accuracy at individual stations, the cost-effectiveness analysis will need to be updated regularly if it is to be used as a management tool. Costs of these updates need to be considered in decisions concerning the feasibility of flexible servicing schedules.

  18. On a more rigorous gravity field processing for future LL-SST type gravity satellite missions

    NASA Astrophysics Data System (ADS)

    Daras, I.; Pail, R.; Murböck, M.

    2013-12-01

    In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument of 1μm accuracy will be complemented by an inter-satellite ranging instrument of several nm accuracy. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether the standard processing techniques suffice to fully exploit the new sensor standards. We achieve that by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor for taking full advantage of the new generation of sensors that future satellite missions will carry. Therefore, we have created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off system errors. Results using the enhanced precision show a large reduction of the system errors that were present in the standard-precision processing, even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions.
As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care in assessing error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and in modeling them consistently within the adjustment process.

  19. Short version of the Depression Anxiety Stress Scale-21: is it valid for Brazilian adolescents?

    PubMed Central

    da Silva, Hítalo Andrade; dos Passos, Muana Hiandra Pereira; de Oliveira, Valéria Mayaly Alves; Palmeira, Aline Cabral; Pitangui, Ana Carolina Rodarti; de Araújo, Rodrigo Cappato

    2016-01-01

    Objective To evaluate the interday reproducibility, agreement, and construct validity of the short version of the Depression Anxiety Stress Scale-21 applied to adolescents. Methods The sample consisted of adolescents of both sexes, aged between 10 and 19 years, who were recruited from schools and sports centers. Construct validity was assessed by exploratory factor analysis, and reliability was calculated for each construct using the intraclass correlation coefficient, standard error of measurement, and minimum detectable change. Results The factor analysis, combining the items corresponding to anxiety and stress into a single factor and depression into a second factor, showed a better fit for all 21 items, with higher factor loadings on their respective constructs. The reproducibility values for depression were: intraclass correlation coefficient, 0.86; standard error of measurement, 0.80; and minimum detectable change, 2.22. For anxiety/stress, they were: intraclass correlation coefficient, 0.82; standard error of measurement, 1.80; and minimum detectable change, 4.99. Conclusion The short version of the Depression Anxiety Stress Scale-21 showed excellent reliability and strong internal consistency. The two-factor model, with condensation of the anxiety and stress constructs into a single factor, was the most acceptable for the adolescent population. PMID:28076595

  20. Absolute color scale for improved diagnostics with wavefront error mapping.

    PubMed

    Smolek, Michael K; Klyce, Stephen D

    2007-11-01

    Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. Retrospective analysis of wavefront error data. Historic ophthalmic medical records. Topographic modeling system topographical examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters expressed in millimeters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. Higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of +/-6.5 microm and a contour interval of 0.5 microm. All aberrations in the categorical database were plotted with no loss of clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. 
When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.

  1. Synthetic Spike-in Standards Improve Run-Specific Systematic Error Analysis for DNA and RNA Sequencing

    PubMed Central

    Zook, Justin M.; Samarov, Daniel; McDaniel, Jennifer; Sen, Shurjo K.; Salit, Marc

    2012-01-01

    While the importance of random sequencing errors decreases at higher DNA or RNA sequencing depths, systematic sequencing errors (SSEs) dominate at high sequencing depths and can be difficult to distinguish from biological variants. These SSEs can cause base quality scores to underestimate the probability of error at certain genomic positions, resulting in false positive variant calls, particularly in mixtures such as samples with RNA editing, tumors, circulating tumor cells, bacteria, mitochondrial heteroplasmy, or pooled DNA. Most algorithms proposed for correction of SSEs require a data set used to calculate association of SSEs with various features in the reads and sequence context. This data set is typically either from a part of the data set being “recalibrated” (Genome Analysis ToolKit, or GATK) or from a separate data set with special characteristics (SysCall). Here, we combine the advantages of these approaches by adding synthetic RNA spike-in standards to human RNA, and use GATK to recalibrate base quality scores with reads mapped to the spike-in standards. Compared to conventional GATK recalibration that uses reads mapped to the genome, spike-ins improve the accuracy of Illumina base quality scores by a mean of 5 Phred-scaled quality score units, and by as much as 13 units at CpG sites. In addition, since the spike-in data used for recalibration are independent of the genome being sequenced, our method allows run-specific recalibration even for the many species without a comprehensive and accurate SNP database. We also use GATK with the spike-in standards to demonstrate that the Illumina RNA sequencing runs overestimate quality scores for AC, CC, GC, GG, and TC dinucleotides, while SOLiD has less dinucleotide SSEs but more SSEs for certain cycles. We conclude that using these DNA and RNA spike-in standards with GATK improves base quality score recalibration. PMID:22859977
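
    The quality scores involved are Phred-scaled, Q = -10 log10(p). Comparing a reported score with the empirical quality observed against a known truth, such as a spike-in standard, is the essence of recalibration. A toy sketch with invented counts, not the GATK implementation:

```python
import math

def phred(p_error):
    """Phred-scaled quality from an error probability: Q = -10*log10(p)."""
    return -10.0 * math.log10(p_error)

def empirical_quality(n_mismatches, n_bases):
    """Empirical quality from mismatches against a known truth sequence;
    recalibration adjusts reported scores toward this quantity."""
    return phred(n_mismatches / n_bases)

reported_q = 30                           # reported: 1 error per 1000 bases
emp_q = empirical_quality(50, 10_000)     # observed: 5 errors per 1000 bases
bias = reported_q - emp_q                 # positive -> scores overestimated
```

A positive bias of several Phred units, as found here for certain dinucleotides, means the instrument claims far fewer errors than actually occur at those positions.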

  2. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables, and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, the one-standard-error rule was more likely than the other tree-selection rules to choose the correct model 1) with a strong relationship and equally important explanatory variables; 2) with weaker relationships and equally important explanatory variables; and 3) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
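
    The 'one standard error' tree-selection rule can be sketched generically: pick the simplest model whose cross-validated error is within one standard error of the minimum CV error. The CV results below are hypothetical, not the study's simulations:

```python
import numpy as np

def one_standard_error_rule(cv_error, cv_se, complexity):
    """Return the index of the least complex model whose CV error is
    within one standard error of the minimum CV error."""
    cv_error = np.asarray(cv_error, float)
    best = cv_error.argmin()
    threshold = cv_error[best] + cv_se[best]
    eligible = np.flatnonzero(cv_error <= threshold)
    # among the eligible models, return the simplest one
    return eligible[np.asarray(complexity)[eligible].argmin()]

# hypothetical CV results for candidate trees of increasing size
complexity = [1, 2, 3, 4, 5]
cv_error   = [0.50, 0.30, 0.26, 0.25, 0.27]
cv_se      = [0.02, 0.02, 0.02, 0.02, 0.02]
pick = one_standard_error_rule(cv_error, cv_se, complexity)
```

Here the minimum CV error is at complexity 4, but the rule prefers the complexity-3 tree because its error lies within one standard error of that minimum.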

  3. Integrating Map Algebra and Statistical Modeling for Spatio- Temporal Analysis of Monthly Mean Daily Incident Photosynthetically Active Radiation (PAR) over a Complex Terrain.

    PubMed

    Evrendilek, Fatih

    2007-12-12

    This study aims at quantifying spatio-temporal dynamics of monthly mean daily incident photosynthetically active radiation (PAR) over a vast and complex terrain such as Turkey. The spatial interpolation method of universal kriging, and the combination of multiple linear regression (MLR) models and map algebra techniques, were implemented to generate surface maps of PAR with a grid resolution of 500 x 500 m as a function of five geographical and 14 climatic variables. Performance of the geostatistical and MLR models was compared using mean prediction error (MPE), root-mean-square prediction error (RMSPE), average standard prediction error (ASE), mean standardized prediction error (MSPE), root-mean-square standardized prediction error (RMSSPE), and adjusted coefficient of determination (R² adj.). The best-fit MLR- and universal kriging-generated models of monthly mean daily PAR were validated against an independent 37-year observed dataset of 35 climate stations derived from 160 stations across Turkey by the jackknifing method. The spatial variability patterns of monthly mean daily incident PAR were more accurately reflected in the surface maps created by the MLR-based models than in those created by the universal kriging method, in particular for spring (May) and autumn (November). The MLR-based spatial interpolation algorithms of PAR described in this study indicated the significance of the multifactor approach to understanding and mapping spatio-temporal dynamics of PAR for a complex terrain over meso-scales.
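
    The first two comparison metrics, MPE and RMSPE, can be computed directly from observed and predicted values. A minimal sketch with invented numbers:

```python
import numpy as np

def prediction_error_metrics(obs, pred):
    """Mean prediction error (MPE) and root-mean-square prediction error
    (RMSPE) for a set of cross-validation predictions."""
    err = np.asarray(pred, float) - np.asarray(obs, float)
    return err.mean(), np.sqrt((err**2).mean())

# hypothetical observed and predicted PAR values at validation stations
obs  = [10.0, 12.0, 9.0, 11.0]
pred = [10.5, 11.5, 9.5, 11.5]
mpe, rmspe = prediction_error_metrics(obs, pred)
```

MPE near zero indicates little systematic bias, while RMSPE summarizes overall prediction accuracy; the standardized variants divide each error by its kriging prediction standard error.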

  4. GUM Analysis for TIMS and SIMS Isotopic Ratios in Graphite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heasler, Patrick G.; Gerlach, David C.; Cliff, John B.

    2007-04-01

    This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up, and currently consist of various ratios of U, Pu, and Boron impurities in the graphite samples. The GUM calculation is a propagation of error methodology that assigns uncertainties (in the form of standard error and confidence bound) to the final estimates.
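The propagation-of-error methodology described above can be illustrated for a single isotopic ratio r = a/b (a first-order, GUM-style sketch assuming independent inputs; this is not the report's actual calculation):

```python
import math

def ratio_with_uncertainty(a, u_a, b, u_b, k=2):
    """First-order (GUM-style) propagation of uncertainty for r = a/b,
    assuming independent inputs. Returns the ratio, its combined standard
    uncertainty, and a coverage interval with coverage factor k."""
    r = a / b
    # from the partial derivatives of r = a/b:
    # (u_r / r)^2 = (u_a / a)^2 + (u_b / b)^2
    u_r = abs(r) * math.sqrt((u_a / a) ** 2 + (u_b / b) ** 2)
    return r, u_r, (r - k * u_r, r + k * u_r)

# example with invented numbers: 1% relative uncertainty on each input
r, u_r, ci = ratio_with_uncertainty(2.0, 0.02, 4.0, 0.04)
print(r, u_r)  # 0.5 with ~1.4% relative standard uncertainty
```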

  5. Analysis of Mars Express Ionogram Data via a Multilayer Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Wilkinson, Collin; Potter, Arron; Palmer, Greg; Duru, Firdevs

    2017-01-01

    Mars Advanced Radar for Subsurface and Ionospheric Sounding (MARSIS), which is a low frequency radar on the Mars Express (MEX) spacecraft, can provide electron plasma densities of the ionosphere local to the spacecraft in addition to densities obtained with remote sounding. The local electron densities are obtained, with a standard error of about 2%, by measuring the electron plasma frequencies with an electronic ruler on ionograms, which are plots of echo intensity as a function of time and frequency. This is done by using a tool created at the University of Iowa (Duru et al., 2008). This approach is time consuming due to the rapid accumulation of ionogram data. In 2013, results from an algorithm-based analysis of ionograms were reported by Andrews et al., but this method did not improve on the human error. In the interest of fast, accurate data interpretation, a neural network (NN) has been created based on the Fast Artificial Neural Network C libraries. This NN consists of artificial neurons arranged in 4 layers of 12960, 10000, 1000 and 1 neurons, respectively. This network was trained using 40 iterations of 1000 orbits. The algorithm-based method of Andrews et al. had a standard error of 40%, while the neural network has achieved error on the order of 20%.

  6. Flavour and identification threshold detection overview of Slovak adepts for certified testing.

    PubMed

    Vietoris, Vladimír; Barborova, Petra; Jancovicova, Jana; Eliasova, Lucia; Karvaj, Marian

    2016-07-01

    During the certification process for sensory assessors by the Slovak certification body, we obtained results on basic taste thresholds and lifestyle habits. 500 adults with a food-industry background were screened during the experiment. For the analysis of basic and non-basic tastes, we used the standardized procedure of ISO 8586-1:1993. In the flavour test experiment, the 26-35 y.o. group produced the lowest error ratio (1.438) and the 56+ y.o. group the highest (2.0). The average error value was 1.510 for women, compared to 1.477 for men. People with allergies had an average error ratio of 1.437, compared to 1.511 for people without allergies. Non-smokers produced fewer errors (1.484) than smokers (1.576). Another flavour threshold identification test detected differences among age groups (values increased with age). In metallic taste, the error rate for men (24%) was about the same as for women (22%). Men made more errors than women in salty taste (19% against 10%). The analysis detected some differences between the allergic/non-allergic and smoker/non-smoker groups.

  7. Error analysis on squareness of multi-sensor integrated CMM for the multistep registration method

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Wang, Yiwen; Ye, Xiuling; Wang, Zhong; Fu, Luhua

    2018-01-01

    The multistep registration (MSR) method in [1] registers two different classes of sensors deployed on the z-arm of a CMM (coordinate measuring machine): a video camera and a tactile probe sensor. In general, it is difficult to obtain a very precise registration result with a single common standard; instead, this method measures two different standards, fixed on a steel plate with a constant distance between them. Although many factors have been considered, such as the measuring ability of the sensors, the uncertainty of the machine, and the number of data pairs, there has been no exact analysis of the squareness between the x-axis and the y-axis on the xy plane. For this reason, an error analysis on the squareness of the multi-sensor integrated CMM for the multistep registration method is carried out to examine the validity of the MSR method. Synthetic experiments on the squareness on the xy plane for the simplified MSR with an inclination rotation are simulated, which lead to a regular result. Experiments have been carried out with the multi-standard device also designed in [1]; in addition, inspections on the xy plane with the help of a laser interferometer have been carried out. The final results conform to the simulations, and the squareness errors of the MSR method are similar to the results of the interferometer. In other words, the MSR method can also be used to verify the squareness of a CMM.

  8. “A Doubt is at Best an Unsafe Standard”: Measuring Sugar in the Early Bureau of Standards

    PubMed Central

    Singerman, David

    2007-01-01

    In 1900, measuring the purity of sugar was a problem with serious economic consequences, and Congress created the Bureau of Standards in part to create accurate standards for saccharimetry. To direct the Polarimetry Section, Director Stratton hired the young chemist Frederick Bates, who went on to make significant contributions to the discipline of sugar chemistry. This paper explores four of Bates’s greatest accomplishments: identifying the error caused by clarifying lead acetate, inventing the remarkable quartz-compensating saccharimeter with adjustable sensibility, discovering the significant error in the prevailing Ventzke saccharimetric scale, and reviving the International Commission for Uniform Methods of Sugar Analysis to unify the international community of chemists after the tensions of World War One. It also shows how accomplishments in saccharimetry reflected the growing importance and confidence of the Bureau of Standards, and how its scientific success smoothed the operation of American commerce. PMID:27110454

  9. A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.

    2008-01-01

    Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…

  10. Evaluating the prevalence and impact of examiner errors on the Wechsler scales of intelligence: A meta-analysis.

    PubMed

    Styck, Kara M; Walsh, Shana M

    2016-01-01

    The purpose of the present investigation was to conduct a meta-analysis of the literature on examiner errors for the Wechsler scales of intelligence. Results indicate that a mean of 99.7% of protocols contained at least 1 examiner error when studies that included a failure to record examinee responses as an error were combined and a mean of 41.2% of protocols contained at least 1 examiner error when studies that ignored errors of omission were combined. Furthermore, graduate student examiners were significantly more likely to make at least 1 error on Wechsler intelligence test protocols than psychologists. However, psychologists made significantly more errors per protocol than graduate student examiners regardless of the inclusion or exclusion of failure to record examinee responses as errors. On average, 73.1% of Full-Scale IQ (FSIQ) scores changed as a result of examiner errors, whereas 15.8%-77.3% of scores on the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), and Processing Speed Index changed as a result of examiner errors. In addition, results suggest that examiners tend to overestimate FSIQ scores and underestimate VCI scores. However, no strong pattern emerged for the PRI and WMI. It can be concluded that examiner errors occur frequently and impact index and FSIQ scores. Consequently, current estimates for the standard error of measurement of popular IQ tests may not adequately capture the variance due to the examiner. (c) 2016 APA, all rights reserved.

  11. Integrity modelling of tropospheric delay models

    NASA Astrophysics Data System (ADS)

    Rózsa, Szabolcs; Bastiaan Ober, Pieter; Mile, Máté; Ambrus, Bence; Juni, Ildikó

    2017-04-01

    The effect of the neutral atmosphere on signal propagation is routinely estimated by various tropospheric delay models in satellite navigation. Although numerous studies can be found in the literature investigating the accuracy of these models, for safety-of-life applications it is crucial to study and model the worst-case performance of these models at very low recurrence frequencies. The main objective of the INTegrity of TROpospheric models (INTRO) project, funded by the ESA PECS programme, is to establish a model (or models) of the residual error of existing tropospheric delay models for safety-of-life applications. Such models are required to overbound rare tropospheric delays and should thus include the tails of the error distributions. Their use should lead to safe error bounds on the user position and should allow computation of protection levels for the horizontal and vertical position errors. The current tropospheric model from the RTCA SBAS Minimum Operational Performance Standards has an associated residual error of 0.12 meters in the vertical direction. This value is derived by simply extrapolating the observed distribution of the residuals into the tail (where no data are present) and then taking the point where the cumulative distribution reaches an exceedance level of 10⁻⁷. While the resulting standard deviation is much higher than the estimated standard deviation that best fits the data (0.05 meters), it surely is conservative for most applications. In the context of the INTRO project, some widely used and newly developed tropospheric delay models (e.g. RTCA MOPS, ESA GALTROPO and GPT2W) were tested using 16 years of daily ERA-INTERIM reanalysis numerical weather model data and the raytracing technique. The results showed that the performance of some of the widely applied models has a clear seasonal dependency and is also affected by geographical position. In order to provide a more realistic, but still conservative, estimation of the residual error of tropospheric delays, the mathematical formulation of the overbounding models is currently under development. This study introduces the main findings of the residual error analysis of the studied tropospheric delay models, and discusses the preliminary analysis of the integrity model development for safety-of-life applications.

  12. Flight calibration of compensated and uncompensated pitot-static airspeed probes and application of the probes to supersonic cruise vehicles

    NASA Technical Reports Server (NTRS)

    Webb, L. D.; Washington, H. P.

    1972-01-01

    Static pressure position error calibrations for a compensated and an uncompensated XB-70 nose boom pitot static probe were obtained in flight. The methods (Pacer, acceleration-deceleration, and total temperature) used to obtain the position errors over a Mach number range from 0.5 to 3.0 and an altitude range from 25,000 feet to 70,000 feet are discussed. The error calibrations are compared with the position error determined from wind tunnel tests, theoretical analysis, and a standard NACA pitot static probe. Factors which influence position errors, such as angle of attack, Reynolds number, probe tip geometry, static orifice location, and probe shape, are discussed. Also included are examples showing how the uncertainties caused by position errors can affect the inlet controls and vertical altitude separation of a supersonic transport.

  13. Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing.

    PubMed

    Ram, Farangis; Wright, Stuart; Singh, Saransh; De Graef, Marc

    2017-10-01

    The efficacy of the dictionary approach to Electron Back-Scatter Diffraction (EBSD) indexing was evaluated through the analysis of the error in the retrieved crystal orientations. EBSPs simulated by the Callahan-De Graef forward model were used for this purpose. Patterns were noised, distorted, and binned prior to dictionary indexing. Patterns with a high level of noise, with optical distortions, and with a 25 × 25 pixel size, when the error in projection center was 0.7% of the pattern width and the error in specimen tilt was 0.8°, were indexed with a 0.8° mean error in orientation. The same patterns, but 60 × 60 pixel in size, were indexed by the standard 2D Hough transform based approach with almost the same orientation accuracy. Optimal detection parameters in the Hough space were obtained by minimizing the orientation error. It was shown that if the error in detector geometry can be reduced to 0.1% in projection center and 0.1° in specimen tilt, the dictionary approach can retrieve a crystal orientation with a 0.2° accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.

  15. The Effect of Random Error on Diagnostic Accuracy Illustrated with the Anthropometric Diagnosis of Malnutrition

    PubMed Central

    2016-01-01

    Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations; the standard deviation increased with each error, and exponentially so with the magnitude of the error. The potential magnitude of the resulting error in the reported prevalence of malnutrition was compared with published international data and found to be of sufficient magnitude to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions The effect of random error in public health surveys, and in the data upon which diagnostic cut-off points are derived to define “health”, has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments; measurer selection, training and supervision; routine estimation of the likely magnitude of errors using standardization tests; use of the statistical likelihood of error to exclude data from analysis; and full reporting of these procedures, in order to judge the reliability of survey reports. PMID:28030627
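The simulation logic described above can be reproduced in miniature (the cutoff and error magnitude below are invented for illustration): adding zero-mean random error widens the distribution, so more values fall beyond a fixed diagnostic cutoff and the apparent prevalence is inflated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# "true" measurements, e.g. a z-scored anthropometric index
truth = rng.normal(0.0, 1.0, n)
cutoff = -2.0                         # classified as malnourished if below -2 SD
true_prev = (truth < cutoff).mean()   # ~2.3% by construction

# the same population measured with zero-mean random error (SD = 0.5)
measured = truth + rng.normal(0.0, 0.5, n)
obs_prev = (measured < cutoff).mean()

# the error does not shift the mean, but it widens the spread,
# so the observed prevalence exceeds the true prevalence
print(true_prev, obs_prev)
```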

  16. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information.

    PubMed

    Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-04-30

    Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Performance of Modified Test Statistics in Covariance and Correlation Structure Analysis under Conditions of Multivariate Nonnormality.

    ERIC Educational Resources Information Center

    Fouladi, Rachel T.

    2000-01-01

    Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…

  18. Using Robust Standard Errors to Combine Multiple Regression Estimates with Meta-Analysis

    ERIC Educational Resources Information Center

    Williams, Ryan T.

    2012-01-01

    Combining multiple regression estimates with meta-analysis has continued to be a difficult task. A variety of methods have been proposed and used to combine multiple regression slope estimates with meta-analysis, however, most of these methods have serious methodological and practical limitations. The purpose of this study was to explore the use…

  19. Automated Hypothesis Tests and Standard Errors for Nonstandard Problems with Description of Computer Package: A Draft.

    ERIC Educational Resources Information Center

    Lord, Frederic M.; Stocking, Martha

    A general computer program is described that will compute asymptotic standard errors and carry out significance tests for an endless variety of (standard and) nonstandard large-sample statistical problems, without requiring the statistician to derive asymptotic standard error formulas. The program assumes that the observations have a multinormal…

  20. Previous Estimates of Mitochondrial DNA Mutation Level Variance Did Not Account for Sampling Error: Comparing the mtDNA Genetic Bottleneck in Mice and Humans

    PubMed Central

    Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.

    2010-01-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
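For normally distributed data, the standard error of a sample variance has a simple closed form, which illustrates the record's point that variance is poorly estimated from small samples (this normal-theory sketch is for illustration only; the paper's own analysis is more general):

```python
import math

def variance_standard_error(sample_variance, n):
    """Approximate standard error of a sample variance s^2 from n
    observations of normally distributed data:
        SE(s^2) = s^2 * sqrt(2 / (n - 1))
    With small n, the SE is a large fraction of the variance itself."""
    return sample_variance * math.sqrt(2.0 / (n - 1))

# with n = 10 the SE is ~47% of the variance; with n = 50 it is ~20%,
# consistent with the conclusion that <20 measurements are unreliable
print(variance_standard_error(1.0, 10))   # ≈ 0.471
print(variance_standard_error(1.0, 50))   # ≈ 0.202
```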

  1. Error monitoring issues for common channel signaling

    NASA Astrophysics Data System (ADS)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring, with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS), as well as for developing procedures for the high-speed SS7 links currently under consideration by standards bodies.

  2. Single molecule counting and assessment of random molecular tagging errors with transposable giga-scale error-correcting barcodes.

    PubMed

    Lau, Billy T; Ji, Hanlee P

    2017-09-21

    RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification, especially at low input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We described the first study to use transposable molecular barcodes and its use for studying random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.

  3. Evaluation of HACCP Plans of Food Industries: Case Study Conducted by the Servizio di Igiene degli Alimenti e della Nutrizione (Food and Nutrition Health Service) of the Local Health Authority of Foggia, Italy

    PubMed Central

    Panunzio, Michele F.; Antoniciello, Antonietta; Pisano, Alessandra; Rosa, Giovanna

    2007-01-01

    With respect to food safety, many works have studied the effectiveness of self-monitoring plans of food companies, designed using the Hazard Analysis and Critical Control Point (HACCP) method. On the other hand, in-depth research has not been made concerning the adherence of the plans to HACCP standards. During our research, we evaluated 116 self-monitoring plans adopted by food companies located in the territory of the Local Health Authority (LHA) of Foggia, Italy. The general errors (terminology, philosophy and redundancy) and the specific errors (transversal plan, critical limits, hazard specificity, and lack of procedures) were standardized. Concerning the general errors, terminological errors pertain to half the plans examined, 47% include superfluous elements and 60% have repetitive subjects. With regards to the specific errors, 77% of the plans examined contained specific errors. The evaluation has pointed out the lack of comprehension of the HACCP system by the food companies and has allowed the Servizio di Igiene degli Alimenti e della Nutrizione (Food and Nutrition Health Service), in its capacity as a control body, to intervene with the companies in order to improve designing HACCP plans. PMID:17911662

  4. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    PubMed

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

    Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
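Of the three computational approaches named above, bootstrapping is the easiest to sketch without package-specific code (the paper provides Stata and LIMDEP code, which is not reproduced here; the model and data below are invented for illustration):

```python
import numpy as np

def bootstrap_se(x, y, new_x, n_boot=2000, seed=0):
    """Bootstrap standard error of a function of estimated parameters:
    here, the predicted value from a simple linear regression at new_x.
    Resample (x, y) pairs, refit, recompute the function, and take the
    standard deviation of the bootstrap replicates."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    preds = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(x), len(x))        # resample with replacement
        slope, intercept = np.polyfit(x[idx], y[idx], 1)
        preds[b] = intercept + slope * new_x         # function of the estimates
    return preds.std(ddof=1)
```

The same recipe applies to any function of the estimated parameters, which is the point of the record above: the variability of the function, not just of the raw coefficients, is what the standard error must capture.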

  6. Regionalization of harmonic-mean streamflows in Kentucky

    USGS Publications Warehouse

    Martin, Gary R.; Ruhl, Kevin J.

    1993-01-01

    Harmonic-mean streamflow (Qh), defined as the reciprocal of the arithmetic mean of the reciprocal daily streamflow values, was determined for selected stream sites in Kentucky. Daily mean discharges for the available period of record through the 1989 water year at 230 continuous record streamflow-gaging stations located in and adjacent to Kentucky were used in the analysis. Periods of record affected by regulation were identified and analyzed separately from periods of record unaffected by regulation. Record-extension procedures were applied to short-term stations to reduce time-sampling error and, thus, improve estimates of the long-term Qh. Techniques to estimate the Qh at ungaged stream sites in Kentucky were developed. A regression model relating Qh to total drainage area and streamflow-variability index was presented with example applications. The regression model has a standard error of estimate of 76 percent and a standard error of prediction of 78 percent.
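The definition of Qh quoted above translates directly into code (the exclusion of zero flows is an assumption of this sketch, since the reciprocal is undefined there; it is not taken from the report):

```python
def harmonic_mean_flow(daily_flows):
    """Harmonic-mean streamflow Qh: the reciprocal of the arithmetic mean
    of the reciprocal daily streamflow values."""
    flows = [q for q in daily_flows if q > 0]   # reciprocals of zero flows are undefined
    return len(flows) / sum(1.0 / q for q in flows)

print(harmonic_mean_flow([10.0, 10.0, 10.0]))  # 10.0 for constant flow
print(harmonic_mean_flow([1.0, 100.0]))        # ≈ 1.98: dominated by the low flow
```

The second example shows why Qh is used for low-flow characterization: unlike the arithmetic mean (50.5 here), the harmonic mean is pulled strongly toward the lowest daily flows.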

  7. Determination of nutritional parameters of yoghurts by FT Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Czaja, Tomasz; Baranowska, Maria; Mazurek, Sylwester; Szostak, Roman

    2018-05-01

    FT-Raman quantitative analysis of the nutritional parameters of yoghurts was performed with the help of partial least squares models. The relative standard errors of prediction for fat, lactose and protein determination in the quantified commercial samples were 3.9, 3.2 and 3.6%, respectively. Models based on attenuated total reflectance spectra of the liquid yoghurt samples and of dried yoghurt films collected with the single reflection diamond accessory showed relative standard errors of prediction of 1.6-5.0% and 2.7-5.2%, respectively, for the analysed components. Despite a relatively low signal-to-noise ratio in the obtained spectra, Raman spectroscopy, combined with chemometrics, constitutes a fast and powerful tool for macronutrient quantification in yoghurts. Errors for the attenuated total reflectance method were found to be relatively higher than those for Raman spectroscopy due to inhomogeneity of the analysed samples.
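The relative standard error of prediction reported above is commonly computed as the RMSE of the predictions relative to the mean reference value (one common definition; the authors may use a variant):

```python
import math

def rsep_percent(reference, predicted):
    """Relative standard error of prediction (RSEP, %): the root-mean-square
    prediction error expressed as a percentage of the mean reference value.
    A common chemometric definition, used here for illustration."""
    n = len(reference)
    rmse = math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)
    return 100.0 * rmse / (sum(reference) / n)

# invented reference vs. predicted concentrations (e.g. % fat)
print(rsep_percent([2.0, 2.0, 2.0], [2.1, 1.9, 2.0]))  # ≈ 4.08
```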

  8. Reducing number entry errors: solving a widespread, serious problem.

    PubMed

    Thimbleby, Harold; Cairns, Paul

    2010-10-06

    Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may go unnoticed and be processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. "To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact." (Charles Darwin 1879 [2008], p. 229).
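    A toy sketch of the two ideas (the regular expression, function names and tolerance are illustrative, not the authors' demonstration interface): a strict entry grammar rejects malformed keying outright, and an explicit ratio check flags a likely 'out by 10' slip against an intended value:

```python
import re

# Illustration only: digits with at most one decimal point, digits on both sides.
NUMBER_RE = re.compile(r"^\d+(\.\d+)?$")

def validate_entry(text):
    """Return the value, or None for malformed keying such as '5..0' or '.5'."""
    if not NUMBER_RE.fullmatch(text):
        return None
    return float(text)

def out_by_10(entered, intended, tol=1e-9):
    """True if the entered value is a factor of 10 off the intended one."""
    if entered <= 0 or intended <= 0:
        return False
    ratio = entered / intended
    return abs(ratio - 10.0) < tol or abs(ratio - 0.1) < tol
```

    Blocking strings such as '5..0' at entry time, rather than letting them be parsed arbitrarily, is the kind of low-cost interface change the paper argues for.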

  9. GUM Analysis for SIMS Isotopic Ratios in BEP0 Graphite Qualification Samples, Round 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerlach, David C.; Heasler, Patrick G.; Reid, Bruce D.

    2009-01-01

    This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up, and currently consist of various ratios of U, Pu, and boron impurities in the graphite samples. The GUM calculation is a propagation-of-error methodology that assigns uncertainties (in the form of standard errors and confidence bounds) to the final estimates.

  10. An analysis of four error detection and correction schemes for the proposed Federal standard 1024 (land mobile radio)

    NASA Astrophysics Data System (ADS)

    Lohrmann, Carol A.

    1990-03-01

    Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs; namely, the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard decision Golay and soft decision Golay), tested on three FORTRAN programmed channel simulations (INMARSAT, Gaussian and constant burst width), compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that a best code can be recommended. Out of the four codes under study, the soft decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.

  11. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

    USGS Publications Warehouse

    Hoos, Anne B.; Patel, Anant R.

    1996-01-01

    Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.

  12. Cost effectiveness of the US Geological Survey's stream-gaging programs in New Hampshire and Vermont

    USGS Publications Warehouse

    Smath, J.A.; Blackey, F.E.

    1986-01-01

    Data uses and funding sources were identified for the 73 continuous stream gages currently (1984) being operated. Eight stream gages were identified as having insufficient reason to continue their operation. Parts of New Hampshire and Vermont were identified as needing additional hydrologic data. New gages should be established in these regions as funds become available. Alternative methods for providing hydrologic data at the stream gaging stations currently being operated were found to lack the accuracy that is required for their intended use. The current policy for operation of the stream gages requires a net budget of $297,000/yr. The average standard error of estimation of the streamflow records is 17.9%. This overall level of accuracy could be maintained with a budget of $285,000 if resources were redistributed among gages. Cost-effectiveness analysis indicates that with the present budget, the average standard error could be reduced to 16.6%. A minimum budget of $278,000 is required to operate the present stream gaging program. Below this level, the gages and recorders would not receive the proper service and maintenance. At the minimum budget, the average standard error would be 20.4%. The loss of correlative data is a significant component of the error in streamflow records, especially at lower budgetary levels. (Author's abstract)

  13. Use of modeling to identify vulnerabilities to human error in laparoscopy.

    PubMed

    Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra

    2010-01-01

    This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.

  14. A Comparison of Three Methods for Computing Scale Score Conditional Standard Errors of Measurement. ACT Research Report Series, 2013 (7)

    ERIC Educational Resources Information Center

    Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu

    2013-01-01

    Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…

  15. Population viability analysis with species occurrence data from museum collections.

    PubMed

    Skarpaas, Olav; Stabbetorp, Odd E

    2011-06-01

    The most comprehensive data on many species come from scientific collections. Thus, we developed a method of population viability analysis (PVA) in which this type of occurrence data can be used. In contrast to classical PVA, our approach accounts for the inherent observation error in occurrence data and allows the estimation of the population parameters needed for viability analysis. We tested the sensitivity of the approach to spatial resolution of the data, length of the time series, sampling effort, and detection probability with simulated data and conducted PVAs for common, rare, and threatened species. We compared the results of these PVAs with results of standard method PVAs in which observation error is ignored. Our method provided realistic estimates of population growth terms and quasi-extinction risk in cases in which the standard method without observation error could not. For low values of any of the sampling variables we tested, precision decreased, and in some cases biased estimates resulted. The results of our PVAs with the example species were consistent with information in the literature on these species. Our approach may facilitate PVA for a wide range of species of conservation concern for which demographic data are lacking but occurrence data are readily available. ©2011 Society for Conservation Biology.

  16. Matrix effect and correction by standard addition in quantitative liquid chromatographic-mass spectrometric analysis of diarrhetic shellfish poisoning toxins.

    PubMed

    Ito, Shinya; Tsukada, Katsuo

    2002-01-11

    An evaluation of the feasibility of liquid chromatography-mass spectrometry (LC-MS) with atmospheric pressure ionization was made for quantitation of four diarrhetic shellfish poisoning toxins, okadaic acid, dinophysistoxin-1, pectenotoxin-6 and yessotoxin in scallops. When LC-MS was applied to the analysis of scallop extracts, large signal suppressions were observed due to coeluting substances from the column. To compensate for these matrix signal suppressions, the standard addition method was applied. First, the sample was analyzed alone; then it was analyzed again after addition of the calibration standards. Although this method requires two LC-MS runs per analysis, it corrected the quantitative errors effectively.
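    The two-run scheme described above is single-point standard addition. A minimal sketch, assuming the signal is linear in concentration with a matrix-dependent slope (function and variable names are illustrative):

```python
def standard_addition(signal_sample, signal_spiked, conc_added):
    """Single-point standard addition. With signal = k * concentration and a
    matrix-dependent slope k, the analyte concentration is
    C = S0 * C_add / (S_spiked - S0); k cancels, so matrix suppression of the
    signal does not bias the estimate."""
    delta = signal_spiked - signal_sample
    if delta <= 0:
        raise ValueError("spiked signal must exceed the unspiked signal")
    return signal_sample * conc_added / delta
```

    For example, if spiking 10 concentration units triples the signal, the original sample must have contained 5 units, whatever the suppression factor was.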

  17. Precision analysis for standard deviation measurements of immobile single fluorescent molecule images.

    PubMed

    DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M

    2010-03-29

    Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image, which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.

  18. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the standard empirical state error covariance matrix. 
This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
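    A rough sketch of the idea, not the report's exact algorithm: in weighted least squares, scaling the theoretical covariance (A^T W A)^-1 by the average weighted residual variance folds the actual measurement residuals back into the state covariance:

```python
import numpy as np

def wls_with_empirical_cov(A, y, w):
    """Weighted least squares with a residual-scaled ('empirical') covariance.

    The theoretical covariance inv(A^T W A) reflects only the assumed
    observation errors; multiplying it by the average weighted residual
    variance s2 lets the actual residuals inflate (or deflate) it.
    Simplified sketch of the abstract's idea, not its formal derivation."""
    W = np.diag(w)
    N = A.T @ W @ A                      # normal matrix
    x = np.linalg.solve(N, A.T @ W @ y)  # state estimate
    r = y - A @ x                        # measurement residuals
    m, n = A.shape
    s2 = (r @ W @ r) / (m - n)           # average weighted residual variance
    cov_emp = s2 * np.linalg.inv(N)
    return x, cov_emp
```

    With perfectly consistent data the residuals vanish and so does the empirical covariance; mismodeled errors show up as an inflated s2.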

  19. Evolving geometrical heterogeneities of fault trace data

    NASA Astrophysics Data System (ADS)

    Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari

    2010-08-01

    We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
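    The circular standard deviation used above has a standard definition via the mean resultant length R of the unit direction vectors; a minimal sketch (the circular standard error is then often approximated as this value divided by sqrt(n), an assumption not spelled out in the abstract):

```python
import math

def circular_stats(angles_deg):
    """Circular mean and circular standard deviation (degrees) of directions.
    R is the mean resultant length; circular std = sqrt(-2 ln R)."""
    n = len(angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg) / n
    s = sum(math.sin(math.radians(a)) for a in angles_deg) / n
    R = min(math.hypot(c, s), 1.0)   # guard against rounding slightly above 1
    mean = math.degrees(math.atan2(s, c)) % 360.0
    std = math.degrees(math.sqrt(-2.0 * math.log(R)))
    return mean, std
```

    Perfectly aligned segments give R = 1 and zero dispersion; the more scattered the segment orientations, the smaller R and the larger the circular standard deviation, which is the disorder measure the study tracks against cumulative slip.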

  20. The impacts of observing flawed and flawless demonstrations on clinical skill learning.

    PubMed

    Domuracki, Kurt; Wong, Arthur; Olivieri, Lori; Grierson, Lawrence E M

    2015-02-01

    Clinical skills expertise can be advanced through accessible and cost-effective video-based observational practice activities. Previous findings suggest that the observation of performances of skills that include flaws can be beneficial to trainees. Observing the scope of variability within a skilled movement allows learners to develop strategies to manage the potential for and consequences associated with errors. This study tests this observational learning approach on the development of the skills of central line insertion (CLI). Medical trainees with no CLI experience (n = 39) were randomised to three observational practice groups: a group which viewed and assessed videos of an expert performing a CLI without any errors (F); a group which viewed and assessed videos that contained a mix of flawless and errorful performances (E); and a group which viewed the same videos as the E group but were also given information concerning the correctness of their assessments (FA). All participants interacted with their observational videos each day for 4 days. Following this period, participants returned to the laboratory and performed a simulation-based insertion, which was assessed using a standard checklist and a global rating scale for the skill. These ratings served as the dependent measures for analysis. The checklist analysis revealed no differences between observational learning groups (grand mean ± standard error: [20.3 ± 0.7]/25). However, the global rating analysis revealed a main effect of group (F(2,36) = 4.51, p = 0.018), which describes better CLI performance in the FA group, compared with the F and E groups. Observational practice that includes errors improves the global performance aspects of clinical skill learning as long as learners are given confirmation that what they are observing is errorful. 
These findings provide a refined perspective on the optimal organisation of skill education programmes that combine physical and observational practice activities. © 2015 John Wiley & Sons Ltd.

  1. Validation of the firefighter WFI treadmill protocol for predicting VO2 max.

    PubMed

    Dolezal, B A; Barr, D; Boland, D M; Smith, D L; Cooper, C B

    2015-03-01

    The Wellness-Fitness Initiative submaximal treadmill exercise test (WFI-TM) is recommended by the US National Fire Protection Agency to assess aerobic capacity (VO2 max) in firefighters. However, predicting VO2 max from submaximal tests can result in errors leading to erroneous conclusions about fitness. The aim was to investigate the level of agreement between VO2 max predicted from the WFI-TM and its direct measurement using exhaled gas analysis. The WFI-TM was performed to volitional fatigue. Differences between estimated VO2 max (derived from the WFI-TM equation) and direct measurement (exhaled gas analysis) were compared by paired t-test and agreement was determined using Pearson Product-Moment correlation and Bland-Altman analysis. Statistical significance was set at P < 0.05. Fifty-nine men performed the WFI-TM. Mean (standard deviation) values for estimated and measured VO2 max were 44.6 (3.4) and 43.6 (7.9) ml/kg/min, respectively (P < 0.01). The mean bias by which WFI-TM overestimated VO2 max was 0.9 ml/kg/min with a 95% prediction interval of ±13.1. Prediction errors for 22% of subjects were within ±5%; 36% had errors greater than or equal to ±15% and 7% had greater than ±30% errors. The correlation between predicted and measured VO2 max was r = 0.55 (standard error of the estimate = 2.8 ml/kg/min). WFI-TM predicts VO2 max with 11% error. There is a tendency to overestimate aerobic capacity in less fit individuals and to underestimate it in more fit individuals leading to a clustering of values around 42 ml/kg/min, a criterion used by some fire departments to assess fitness for duty. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
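    The bias and prediction interval reported above come from a Bland-Altman analysis: the mean of the paired differences, with agreement limits at roughly ±1.96 standard deviations of those differences. A minimal sketch with hypothetical paired values:

```python
import math

def bland_altman(measured, predicted):
    """Mean bias (predicted - measured) and approximate 95% limits of
    agreement (bias +/- 1.96 * SD of the paired differences)."""
    diffs = [p - m for m, p in zip(measured, predicted)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

    A small mean bias with wide limits, as in this study (0.9 ± 13.1 ml/kg/min), signals that the predictor is unbiased on average yet unreliable for individuals.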

  2. Methods for estimating streamflow at mountain fronts in southern New Mexico

    USGS Publications Warehouse

    Waltemeyer, S.D.

    1994-01-01

    The infiltration of streamflow is potential recharge to alluvial-basin aquifers at or near mountain fronts in southern New Mexico. Data for 13 streamflow-gaging stations were used to determine a relation between mean annual streamflow and basin and climatic conditions. Regression analysis was used to develop an equation that can be used to estimate mean annual streamflow on the basis of drainage areas and mean annual precipitation. The average standard error of estimate for this equation is 46 percent. Regression analysis also was used to develop an equation to estimate mean annual streamflow on the basis of active-channel width. Measurements of the width of active channels were determined for 6 of the 13 gaging stations. The average standard error of estimate for this relation is 29 percent. Streamflow estimates made using a regression equation based on channel geometry are considered more reliable than estimates made from an equation based on regional relations of basin and climatic conditions. The sample size used to develop these relations was small, however, and the reported standard error of estimate may not represent that of the entire population. Active-channel-width measurements were made at 23 ungaged sites along the Rio Grande upstream from Elephant Butte Reservoir. Data for additional sites would be needed for a more comprehensive assessment of mean annual streamflow in southern New Mexico.

  3. Effect of Correlated Precision Errors on Uncertainty of a Subsonic Venturi Calibration

    NASA Technical Reports Server (NTRS)

    Hudson, S. T.; Bordelon, W. J., Jr.; Coleman, H. W.

    1996-01-01

    An uncertainty analysis performed in conjunction with the calibration of a subsonic venturi for use in a turbine test facility produced some unanticipated results that may have a significant impact in a variety of test situations. Precision uncertainty estimates using the preferred propagation techniques in the applicable American National Standards Institute/American Society of Mechanical Engineers standards were an order of magnitude larger than precision uncertainty estimates calculated directly from a sample of results (discharge coefficient) obtained at the same experimental set point. The differences were attributable to the effect of correlated precision errors, which previously have been considered negligible. An analysis explaining this phenomenon is presented. The article is not meant to document the venturi calibration, but rather to give a real example of results where correlated precision terms are important. The significance of the correlated precision terms could apply to many test situations.
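    The effect described can be seen in first-order error propagation: the variance of a result includes a covariance term, 2*(df/dx)(df/dy)*rho*sx*sy, that standard practice often drops. For a difference-like result with positively correlated precision errors, that term cancels most of the uncertainty, which is why the direct sample estimate and the "uncorrelated" propagated estimate can differ by an order of magnitude. A generic two-variable sketch (the default sensitivities model a simple difference f = x - y; all numbers are illustrative):

```python
import math

def propagated_sd(sx, sy, rho, dfdx=1.0, dfdy=-1.0):
    """First-order propagated standard deviation of f(x, y):
    var = (df/dx * sx)^2 + (df/dy * sy)^2 + 2*(df/dx)*(df/dy)*rho*sx*sy."""
    var = (dfdx * sx) ** 2 + (dfdy * sy) ** 2 + 2 * dfdx * dfdy * rho * sx * sy
    return math.sqrt(var)
```

    With rho = 0 the two unit errors add in quadrature (sqrt(2)); with rho = 1 and a difference-like result, they cancel completely, so assuming negligible correlation can badly overstate the precision uncertainty.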

  4. Addressing Common Student Technical Errors in Field Data Collection: An Analysis of a Citizen-Science Monitoring Project.

    PubMed

    Philippoff, Joanna; Baumgartner, Erin

    2016-03-01

    The scientific value of citizen-science programs is limited when the data gathered are inconsistent, erroneous, or otherwise unusable. Long-term monitoring studies, such as Our Project In Hawai'i's Intertidal (OPIHI), have clear and consistent procedures and are thus a good model for evaluating the quality of participant data. The purpose of this study was to examine the kinds of errors made by student researchers during OPIHI data collection and factors that increase or decrease the likelihood of these errors. Twenty-four different types of errors were grouped into four broad error categories: missing data, sloppiness, methodological errors, and misidentification errors. "Sloppiness" was the most prevalent error type. Error rates decreased with field trip experience and student age. We suggest strategies to reduce data collection errors applicable to many types of citizen-science projects including emphasizing neat data collection, explicitly addressing and discussing the problems of falsifying data, emphasizing the importance of using standard scientific vocabulary, and giving participants multiple opportunities to practice to build their data collection techniques and skills.

  5. Evaluation of Hand Written and Computerized Out-Patient Prescriptions in Urban Part of Central Gujarat

    PubMed Central

    Buch, Jatin; Kothari, Nitin; Shah, Nishal

    2016-01-01

    Introduction: Prescription order is an important therapeutic transaction between physician and patient. A good quality prescription is an extremely important factor for minimizing errors in dispensing medication, and it should adhere to guidelines for prescription writing for the benefit of the patient. Aim: To evaluate the frequency and types of prescription errors in outpatient prescriptions and determine whether prescription writing abides by WHO standards. Materials and Methods: A cross-sectional observational study was conducted at Anand city. Allopathic private practitioners of different specialities practising in Anand city were included in the study. Collection of prescriptions started a month after consent was obtained, to minimize bias in prescription writing. The prescriptions were collected from local pharmacy stores of Anand city over a period of six months. Prescriptions were analysed for errors in standard information, according to the WHO guide to good prescribing. Statistical Analysis: Descriptive analysis was performed to estimate the frequency of errors; data were expressed as numbers and percentages. Results: A total of 749 (549 handwritten and 200 computerized) prescriptions were collected. Abundant omission errors were identified in handwritten prescriptions: e.g., the OPD number was mentioned in only 6.19%, the patient’s age in 25.50%, gender in 17.30%, address in 9.29% and weight in 11.29%, while only 2.97% of drugs were prescribed by generic name. Route and dosage form were mentioned in 77.35%-78.15%, dose in 47.25%, unit in 13.91%, regimen in 72.93% and signa (directions for drug use) in 62.35%. A total of 4384 errors in the 549 handwritten prescriptions and 501 errors in the 200 computerized prescriptions were found in clinician and patient details, while in drug item details the numbers of errors identified were 5015 and 621 in handwritten and computerized prescriptions, respectively. 
Conclusion: Compared with handwritten prescriptions, computerized prescriptions were associated with lower rates of error. Since outpatient prescription errors are abundant and often occur in handwritten prescriptions, prescribers should adopt computerized prescription order entry in their daily practice. PMID:27504305

  6. Cost effectiveness of the US Geological Survey's stream-gaging program in New York

    USGS Publications Warehouse

    Wolcott, S.W.; Gannon, W.B.; Johnston, W.H.

    1986-01-01

    The U.S. Geological Survey conducted a 5-year nationwide analysis to define and document the most cost effective means of obtaining streamflow data. This report describes the stream gaging network in New York and documents the cost effectiveness of its operation; it also identifies data uses and funding sources for the 174 continuous-record stream gages currently operated (1983). Those gages as well as 189 crest-stage, stage-only, and groundwater gages are operated with a budget of $1.068 million. One gaging station was identified as having insufficient reason for continuous operation and was converted to a crest-stage gage. Current operation of the 363-station program requires a budget of $1.068 million/yr. The average standard error of estimation of continuous streamflow data is 13.4%. Results indicate that this degree of accuracy could be maintained with a budget of approximately $1.006 million if the gaging resources were redistributed among the gages. The average standard error for 174 stations was calculated for five hypothetical budgets. A minimum budget of $970,000 would be needed to operate the 363-gage program; a budget less than this does not permit proper servicing and maintenance of the gages and recorders. Under the restrictions of a minimum budget, the average standard error would be 16.0%. The maximum budget analyzed was $1.2 million, which would decrease the average standard error to 9.4%. (Author's abstract)

  7. Computing tools for implementing standards for single-case designs.

    PubMed

    Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E

    2015-11-01

    In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse: Procedures and Standards Handbook (the WWC standards). These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were tested to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.

  8. A General Approach for Estimating Scale Score Reliability for Panel Survey Data

    ERIC Educational Resources Information Center

    Biemer, Paul P.; Christ, Sharon L.; Wiesen, Christopher A.

    2009-01-01

    Scale score measures are ubiquitous in the psychological literature and can be used as both dependent and independent variables in data analysis. Poor reliability of scale score measures leads to inflated standard errors and/or biased estimates, particularly in multivariate analysis. Reliability estimation is usually an integral step to assess…

  9. A new SAS program for behavioral analysis of Electrical Penetration Graph (EPG) data

    USDA-ARS?s Scientific Manuscript database

    A new program is introduced that uses SAS software to duplicate the output of descriptive statistics from the Sarria Excel workbook for EPG waveform analysis. Not only are publishable means and standard errors or deviations output, but the user is also guided through four relatively simple sub-programs for ...

  10. ALT space shuttle barometric altimeter altitude analysis

    NASA Technical Reports Server (NTRS)

    Killen, R.

    1978-01-01

    The accuracy of the barometric altimeters onboard the space shuttle orbiter was analyzed. Altitude estimates from the air data systems, including the operational instrumentation and the developmental flight instrumentation, were obtained for each of the approach and landing test flights. By comparing the barometric altitude estimates to altitudes derived from radar tracking data filtered through a Kalman filter and fully corrected for atmospheric refraction, the errors in the barometric altitudes were shown to be 4 to 5 percent of the Kalman altitudes. By comparing the altitude determined from the true atmosphere derived from weather balloon data to the altitude determined from the U.S. Standard Atmosphere of 1962, it was determined that the assumptions underlying the Standard Atmosphere equations contribute roughly 75 percent of the total error in the barometric estimates. After correcting the barometric altitude estimates using an average summer model atmosphere computed for the average latitude of the space shuttle landing sites, the residual error in the altitude estimates was reduced to less than 373 feet. This corresponds to an error of less than 1.5 percent for altitudes above 4000 feet for all flights.
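
    The Standard Atmosphere contribution described above comes from mapping a measured static pressure to altitude with fixed model constants. A minimal sketch of the tropospheric pressure-altitude relation assumed by such standard atmospheres (the constants are the standard sea-level values; the function name and example pressure are illustrative, not taken from the report):

```python
import math

# Standard sea-level constants for the troposphere (below the 11 km tropopause).
T0 = 288.15      # sea-level temperature, K
L = 0.0065       # temperature lapse rate, K/m
P0 = 101325.0    # sea-level pressure, Pa
G = 9.80665      # gravitational acceleration, m/s^2
R = 287.053      # specific gas constant for dry air, J/(kg K)

def pressure_altitude(p_pa):
    """Geopotential altitude (m) implied by a measured static pressure."""
    return (T0 / L) * (1.0 - (p_pa / P0) ** (R * L / G))

# Half sea-level pressure sits near 5.5 km in the standard atmosphere.
print(round(pressure_altitude(P0 / 2)))
```

    A real atmosphere warmer than the model places the same pressure level at a higher true altitude, which is the kind of systematic error the report reduces by substituting a seasonal model atmosphere.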

  11. Monte Carlo simulation of errors in the anisotropy of magnetic susceptibility - A second-rank symmetric tensor. [for grains in sedimentary and volcanic rocks

    NASA Technical Reports Server (NTRS)

    Lienert, Barry R.

    1991-01-01

    Monte Carlo perturbations of synthetic tensors are used to evaluate the Hext/Jelinek elliptical confidence regions for anisotropy of magnetic susceptibility (AMS) eigenvectors. When the perturbations are 33 percent of the minimum anisotropy, both the shapes and probability densities of the resulting eigenvector distributions agree with the elliptical distributions predicted by the Hext/Jelinek equations. When the perturbation size is increased to 100 percent of the minimum eigenvalue difference, the major axis of the 95 percent confidence ellipse underestimates the observed eigenvector dispersion by about 10 deg. The observed distributions of the principal susceptibilities (eigenvalues) are close to normal, with standard errors that agree well with the calculated Hext/Jelinek errors. The Hext/Jelinek ellipses are also able to describe the AMS dispersions due to instrumental noise and provide reasonable limits for the AMS dispersions observed in two Hawaiian basaltic dikes. It is concluded that the Hext/Jelinek method provides a satisfactory description of the errors in AMS data and should be a standard part of any AMS data analysis.
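
    The perturbation experiment described above can be sketched in a few lines: symmetrically perturb a synthetic susceptibility tensor, re-extract the eigenvectors, and measure their angular scatter. All tensor values and the perturbation fraction below are illustrative, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic AMS tensor: principal susceptibilities on the diagonal,
# so the true maximum-susceptibility axis is the x axis.
eigvals = np.array([1.05, 1.00, 0.95])
K = np.diag(eigvals)

# Perturbation scaled to a fraction of the minimum eigenvalue difference,
# loosely mimicking the study's setup (the fraction is chosen for illustration).
sigma = 0.33 * np.min(np.diff(np.sort(eigvals)))

angles = []
for _ in range(2000):
    E = rng.normal(0.0, sigma, size=(3, 3))
    Kp = K + (E + E.T) / 2.0                 # keep the perturbed tensor symmetric
    w, v = np.linalg.eigh(Kp)
    vmax = v[:, np.argmax(w)]                # perturbed maximum-susceptibility axis
    cosang = abs(vmax[0])                    # |dot product with true axis (1, 0, 0)|
    angles.append(np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0))))

print(f"mean angular deviation of the k_max axis: {np.mean(angles):.1f} deg")
```

    Comparing the empirical scatter of `angles` against an analytical confidence ellipse is the essence of the validation the abstract describes.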

  12. Anatomy of emotion: a 3D study of facial mimicry.

    PubMed

    Ferrario, V F; Sforza, C

    2007-01-01

    Alterations in facial motion severely impair patients' quality of life and social interaction, and an objective grading of facial function is necessary. A method for the non-invasive detection of 3D facial movements was developed. Sequences of six standardized facial movements (maximum smile; free smile; surprise with closed mouth; surprise with open mouth; right-side eye closure; left-side eye closure) were recorded in 20 healthy young adults (10 men, 10 women) using an optoelectronic motion analyzer. For each subject, 21 cutaneous landmarks were identified by 2-mm reflective markers, and their 3D movements during each facial animation were computed. Three repetitions of each expression were recorded (within-session error), and four separate sessions were used (between-session error). To assess the within-session error, the technical error of the measurement (random error, TEM) was computed separately for each sex, movement and landmark. To assess the between-session repeatability, the standard deviation among the mean displacements of each landmark (four independent sessions) was computed for each movement. TEM for individual landmarks ranged between 0.3 and 9.42 mm (within-session error). The sex- and movement-related differences were statistically significant (two-way analysis of variance, p=0.003 for the sex comparison, p=0.009 for the six movements, p<0.001 for the sex x movement interaction). Among the four independent sessions, the left eye closure had the worst repeatability and the right eye closure the best; the differences among movements were statistically significant (one-way analysis of variance, p=0.041). In conclusion, the current protocol demonstrated sufficient repeatability for future clinical application. Great care should be taken to ensure consistent marker positioning across subjects.
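
    The within-session random error above is commonly summarized by the technical error of measurement. A sketch using Dahlberg's duplicate-measurement form, sqrt(sum d^2 / 2n); the paper's exact variant for three repetitions may differ, and the landmark displacement values below are hypothetical:

```python
import math

def technical_error_of_measurement(rep1, rep2):
    """Dahlberg's TEM for duplicate measurements: sqrt(sum d^2 / (2n)).

    One common definition of the random (within-session) technical error;
    a study with three repetitions may use a generalized variant.
    """
    if len(rep1) != len(rep2):
        raise ValueError("replicate lists must have equal length")
    sq = sum((a - b) ** 2 for a, b in zip(rep1, rep2))
    return math.sqrt(sq / (2 * len(rep1)))

# Hypothetical landmark displacements (mm) from two repetitions of one movement.
session1 = [10.2, 8.7, 12.1, 9.9, 11.4]
session2 = [10.0, 9.1, 11.8, 10.3, 11.0]
print(round(technical_error_of_measurement(session1, session2), 3))
```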

  13. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.

  14. Estimating the Standard Error of Robust Regression Estimates.

    DTIC Science & Technology

    1987-03-01

    error is O(n^(4/5)). In another Monte Carlo study, McKean and Schrader (1984) found that the tests resulting from studentizing by ... with d = O(n^(4/5)) ... Sheather, S. J. and McKean, J. W. (1987). A comparison of testing and ... Wiley, New York. Welsch, R. E. (1980). Regression Sensitivity Analysis and Bounded-Influence Estimation, in Evaluation of Econometric Models, eds. J

  15. Research on Standard Errors of Equating Differences. Research Report. ETS RR-10-25

    ERIC Educational Resources Information Center

    Moses, Tim; Zhang, Wenmin

    2010-01-01

    In this paper, the "standard error of equating difference" (SEED) is described in terms of originally proposed kernel equating functions (von Davier, Holland, & Thayer, 2004) and extended to incorporate traditional linear and equipercentile functions. These derivations expand on prior developments of SEEDs and standard errors of equating and…

  16. Evaluation of Satellite and Model Precipitation Products Over Turkey

    NASA Astrophysics Data System (ADS)

    Yilmaz, M. T.; Amjad, M.

    2017-12-01

    Satellite-based remote sensing, gauge stations, and models are the three major platforms for acquiring precipitation datasets. Among them, satellites and models have the advantage of retrieving spatially and temporally continuous and consistent datasets, while uncertainty estimates of these retrievals are often required in hydrological studies to understand the source and magnitude of the uncertainty in hydrological response parameters. In this study, satellite and model precipitation data products are validated over various temporal scales (daily, 3-daily, 7-daily, 10-daily and monthly) using in-situ precipitation observations from a network of 733 gauges from all over Turkey. Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 version 7 and European Centre for Medium-Range Weather Forecasts (ECMWF) model estimates (daily, 3-daily, 7-daily and 10-daily accumulated forecasts) are used in this study. Retrievals are evaluated for their mean and standard deviation, and their accuracies are evaluated via bias, root mean square error, error standard deviation and correlation coefficient statistics. Intensity vs. frequency analysis and contingency-table statistics such as percent correct, probability of detection, false alarm ratio and critical success index are determined using daily time-series. Both ECMWF forecasts and TRMM observations, on average, overestimate precipitation compared to gauge estimates; the wet biases are 10.26 mm/month and 8.65 mm/month for ECMWF and TRMM, respectively. RMSE values of ECMWF forecasts and TRMM estimates are 39.69 mm/month and 41.55 mm/month, respectively. Monthly correlations between Gauges-ECMWF, Gauges-TRMM and ECMWF-TRMM are 0.76, 0.73 and 0.81, respectively. The model and satellite error statistics are further compared against the gauge error statistics based on an inverse distance weighting (IDW) analysis. Both the model and satellite data have smaller IDW errors (14.72 mm/month and 10.75 mm/month, respectively) than the gauge IDW error (21.58 mm/month). These results show that, on average, ECMWF forecast data have higher skill than TRMM observations. Overall, both ECMWF forecast data and TRMM observations show good potential for catchment-scale hydrological analysis.
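
    The daily verification statistics named above (bias, RMSE, and the contingency-table scores POD, FAR and CSI) can be sketched as follows; the gauge/model values and the wet-day threshold are invented for illustration:

```python
import math

def verification_stats(obs, fcst, threshold=0.1):
    """Precipitation verification: bias, RMSE, and contingency-table scores
    (POD, FAR, CSI). The threshold (mm) separating wet from dry days is
    illustrative, not the study's choice."""
    n = len(obs)
    bias = sum(f - o for o, f in zip(obs, fcst)) / n
    rmse = math.sqrt(sum((f - o) ** 2 for o, f in zip(obs, fcst)) / n)
    hits = sum(1 for o, f in zip(obs, fcst) if o >= threshold and f >= threshold)
    misses = sum(1 for o, f in zip(obs, fcst) if o >= threshold and f < threshold)
    false_al = sum(1 for o, f in zip(obs, fcst) if o < threshold and f >= threshold)
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_al / (hits + false_al) if hits + false_al else float("nan")
    csi = hits / (hits + misses + false_al) if hits + misses + false_al else float("nan")
    return bias, rmse, pod, far, csi

# Hypothetical gauge observations vs. model forecasts (mm/day).
gauge = [0.0, 5.2, 0.0, 12.4, 3.1, 0.0, 8.8]
model = [0.4, 4.1, 0.0, 15.0, 0.0, 1.2, 9.5]
print(verification_stats(gauge, model))
```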

  17. Analysis of Performance of Stereoscopic-Vision Software

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation- window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
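
    The quoted 0.32-pixel disparity error maps into down-range error through the stereo geometry Z = b f / d, whose first-order error propagation gives dZ = Z^2 / (b f) * dd. A sketch with an assumed baseline and focal length (not the parameters of the analyzed cameras):

```python
def downrange_error(range_m, baseline_m, focal_px, disparity_err_px):
    """First-order down-range error of a stereo pair: dZ = Z^2/(b*f) * dd.

    Derived from Z = b*f/d with f in pixels; the rig parameters used below
    are illustrative, not those of the system in the report.
    """
    return (range_m ** 2) / (baseline_m * focal_px) * disparity_err_px

# Hypothetical rover-class stereo rig: 0.3 m baseline, 1000 px focal length,
# and the 0.32 px disparity standard deviation quoted in the abstract.
for z in (2.0, 5.0, 10.0):
    err_cm = downrange_error(z, 0.3, 1000.0, 0.32) * 100.0
    print(f"{z:5.1f} m -> {err_cm:.1f} cm down-range error")
```

    The quadratic growth with range is why stereo down-range accuracy degrades much faster than cross-range accuracy as the terrain point recedes.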

  18. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
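
    For the linear case, the standard errors discussed above follow from the parameter covariance matrix of the least-squares solution, cov = s^2 (X^T X)^(-1), and the standard error of the fitted function at a point is sqrt(x_row cov x_row^T). A sketch for an ordinary straight-line fit on synthetic data (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: a straight line y = 2 + 0.7 x plus Gaussian noise
# (all values are illustrative).
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.7 * x + rng.normal(0.0, 0.5, x.size)

# Ordinary least squares for y = a + b*x via the design matrix X.
X = np.column_stack([np.ones_like(x), x])
beta, res, _, _ = np.linalg.lstsq(X, y, rcond=None)
dof = x.size - X.shape[1]
s2 = res[0] / dof                        # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)        # parameter covariance matrix
se_params = np.sqrt(np.diag(cov))        # standard errors of a and b

# Standard error of the fitted function at each x: sqrt(x_row @ cov @ x_row).
se_fit = np.sqrt(np.einsum("ij,jk,ik->i", X, cov, X))
print("parameters:", beta, "standard errors:", se_params)
```

    The same quadratic-form expression gives the error band of the fitted function for any polynomial degree; only the columns of X change.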

  19. Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.

    PubMed

    Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H

    2014-01-01

    Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are innovative approaches to characterize the relationships between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves, which can cause inconsistent interpretations of demand curves, and then we provide methodological suggestions to address those analytical issues. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model that results in an inefficient use of the available data and a lack of assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed effects model based on multivariate error structures that has not been used previously to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the relevant standard errors of derived values such as Pmax, Omax, and elasticity. The proposed model stabilizes the derived values regardless of using different added increments and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing.
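
    The sensitivity to the added constant used to handle zero consumption can be seen directly: regressing log10(Q + c) on log price with different values of c changes the estimated slope, a crude stand-in for elasticity. The price and consumption values below are invented for illustration:

```python
import math

# Hypothetical consumption (units purchased) at increasing unit prices,
# including zero consumption at the highest price.
prices = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
quantity = [10.0, 8.0, 5.0, 2.0, 1.0, 0.0]

def slope_after_log(c):
    """OLS slope of log10(Q + c) on log10(price), showing how the
    added constant c drives the apparent elasticity."""
    xs = [math.log10(p) for p in prices]
    ys = [math.log10(q + c) for q in quantity]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

for c in (0.01, 0.1, 1.0):
    print(c, round(slope_after_log(c), 3))
```

    A small c makes the zero-consumption point an extreme outlier on the log scale and steepens the slope dramatically, which is the instability the article's mixed-effects approach is designed to avoid.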

  20. Noise-induced errors in geophysical parameter estimation from retarding potential analyzers in low Earth orbit

    NASA Astrophysics Data System (ADS)

    Debchoudhury, Shantanab; Earle, Gregory

    2017-04-01

    Retarding Potential Analyzers (RPA) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables in the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to the simulated data derived from the existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.

  1. The Calibration of Gloss Reference Standards

    NASA Astrophysics Data System (ADS)

    Budde, W.

    1980-04-01

    In present international and national standards for the measurement of specular gloss, the primary and secondary reference standards are defined for monochromatic radiation. However, the specified glossmeter uses polychromatic radiation (CIE Standard Illuminant C) and the CIE Standard Photometric Observer. This produces errors in practical gloss measurements of up to 0.5%. Although this may be considered small compared to the accuracy of most practical gloss measurements, such an error should not be tolerated in the calibration of secondary standards. Corrections for such errors are presented, and various alternatives for amendments of the existing documentary standards are discussed.

  2. Combined proportional and additive residual error models in population pharmacokinetic modelling.

    PubMed

    Proost, Johannes H

    2017-11-15

    In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a purely proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from the literature and simulations based on these datasets, the methods are compared using NONMEM. The three codings of method VAR yield identical results. Using method SD, the values of the parameters describing the residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of method. Both methods are valid approaches in combined proportional and additive residual error modelling, and selection may be based on OFV. When the result of an analysis is used for simulation purposes, it is essential that the simulation tool use the same method as was used during analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
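
    The two parameterizations compared above differ only in how the additive component a and the proportional component b are combined into a residual standard deviation. A sketch (parameter values illustrative):

```python
import math

def sd_var(pred, a, b):
    """Method VAR: the variance is the sum of independent additive and
    proportional components, so sd = sqrt(a^2 + (b*pred)^2)."""
    return math.sqrt(a ** 2 + (b * pred) ** 2)

def sd_sd(pred, a, b):
    """Method SD: the standard deviation itself is the sum, sd = a + b*pred."""
    return a + b * pred

# Illustrative values: additive component a (concentration units) and
# proportional component b (fraction of the prediction).
a, b = 0.1, 0.15
for conc in (0.5, 2.0, 10.0):
    print(conc, round(sd_var(conc, a, b), 4), round(sd_sd(conc, a, b), 4))
```

    Because a + b*f >= sqrt(a^2 + (b*f)^2) for any prediction f, method SD implies a larger total residual SD for the same (a, b), which is consistent with the lower fitted error-parameter values the abstract reports for that method.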

  3. Simplified Approach Charts Improve Data Retrieval Performance

    PubMed Central

    Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.

    2016-01-01

    The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009

  4. Voxel-based statistical analysis of uncertainties associated with deformable image registration

    NASA Astrophysics Data System (ADS)

    Li, Shunshan; Glide-Hurst, Carri; Lu, Mei; Kim, Jinkoo; Wen, Ning; Adams, Jeffrey N.; Gordon, James; Chetty, Indrin J.; Zhong, Hualiang

    2013-09-01

    Deformable image registration (DIR) algorithms have inherent uncertainties in their displacement vector fields (DVFs). The purpose of this study is to develop an optimal metric to estimate DIR uncertainties. Six computational phantoms have been developed from the CT images of lung cancer patients using a finite element method (FEM). The FEM generated DVFs were used as a standard for registrations performed on each of these phantoms. A mechanics-based metric, unbalanced energy (UE), was developed to evaluate these registration DVFs. The potential correlation between UE and DIR errors was explored using multivariate analysis, and the results were validated by landmark approach and compared with two other error metrics: DVF inverse consistency (IC) and image intensity difference (ID). Landmark-based validation was performed using the POPI-model. The results show that the Pearson correlation coefficient between UE and DIR error is rUE-error = 0.50. This is higher than rIC-error = 0.29 for IC and DIR error and rID-error = 0.37 for ID and DIR error. The Pearson correlation coefficient between UE and the product of the DIR displacements and errors is rUE-error × DVF = 0.62 for the six patients and rUE-error × DVF = 0.73 for the POPI-model data. It has been demonstrated that UE has a strong correlation with DIR errors, and the UE metric outperforms the IC and ID metrics in estimating DIR uncertainties. The quantified UE metric can be a useful tool for adaptive treatment strategies, including probability-based adaptive treatment planning.

  5. SBKF Modeling and Analysis Plan: Buckling Analysis of Compression-Loaded Orthogrid and Isogrid Cylinders

    NASA Technical Reports Server (NTRS)

    Lovejoy, Andrew E.; Hilburger, Mark W.

    2013-01-01

    This document outlines a Modeling and Analysis Plan (MAP) to be followed by the SBKF analysts. It includes instructions on modeling and analysis formulation and execution, model verification and validation, identifying sources of error and uncertainty, and documentation. The goal of this MAP is to provide a standardized procedure that ensures uniformity and quality of the results produced by the project and corresponding documentation.

  6. Standard Errors of Equating for the Percentile Rank-Based Equipercentile Equating with Log-Linear Presmoothing

    ERIC Educational Resources Information Center

    Wang, Tianyou

    2009-01-01

    Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…

  7. Standard deviation and standard error of the mean.

    PubMed

    Lee, Dong Kyu; In, Junyong; Lee, Sangseok

    2015-06-01

    In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage between the SD and SEM in medical literature. Because the process of calculating the SD and SEM includes different statistical inferences, each of them has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. However, the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
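
    The distinction above reduces to a single line of arithmetic: the SEM is the sample SD divided by sqrt(n). A sketch with hypothetical measurements:

```python
import math

def sd_and_sem(sample):
    """Sample standard deviation (n-1 denominator) and the standard error
    of the mean, SEM = SD / sqrt(n)."""
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return sd, sd / math.sqrt(n)

# Hypothetical measurements: the SD describes the spread of the data
# themselves, while the SEM describes the precision of the estimated mean.
data = [4.8, 5.1, 5.0, 4.7, 5.4, 5.2, 4.9, 5.3]
sd, sem = sd_and_sem(data)
print(round(sd, 3), round(sem, 3))
```

    Because the SEM shrinks with sqrt(n) while the SD does not, quoting the SEM as if it were a measure of data spread understates variability, which is the muddle the article warns against.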

  8. Standard deviation and standard error of the mean

    PubMed Central

    In, Junyong; Lee, Sangseok

    2015-01-01

    In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage between the SD and SEM in medical literature. Because the process of calculating the SD and SEM includes different statistical inferences, each of them has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. However, the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results. PMID:26045923

  9. The use of a single multielement standard for trace analysis in biological materials by external beam PIXE

    NASA Astrophysics Data System (ADS)

    Biswas, S. K.; Khaliquzzaman, M.; Islam, M. M.; Khan, A. H.

    1984-04-01

    The validity of the use of a single multielement standard for mass calibration in thick-target external beam PIXE analysis of biological materials has been investigated. In this study, the NBS orchard leaf, SRM 1571, was used as the basic standard for trace element analysis in other biological materials. Using the present procedure, the concentrations of K, Ca, Mn, Fe, Ni, Cu, Zn, Br, Rb and Sr were determined in several NBS reference materials such as bovine liver, spinach, rice flour, etc., generally in 20 μC irradiations with 2.0 MeV protons. The analytical results are compared with certified values of the NBS as well as with other measurements and the sources of errors are discussed.

  10. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

    PubMed Central

    Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

    2015-01-01

    We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698

  11. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons

    PubMed Central

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2012-01-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379
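
    The mean value coordinates analyzed above admit a compact closed form (Floater's construction): w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - x|, normalized to sum to one, where a_i is the angle subtended at x by edge i. A sketch for strictly interior points of a convex polygon (this is a generic implementation for illustration, not the authors' code):

```python
import math

def mean_value_coords(poly, x):
    """Mean value coordinates of a point x strictly inside a convex polygon,
    via Floater's formula; poly is a counter-clockwise list of (px, py)."""
    n = len(poly)
    # Vectors and distances from x to each vertex.
    vecs = [(px - x[0], py - x[1]) for px, py in poly]
    r = [math.hypot(vx, vy) for vx, vy in vecs]
    # Angle a_i subtended at x by edge (v_i, v_{i+1}).
    ang = []
    for i in range(n):
        j = (i + 1) % n
        dot = vecs[i][0] * vecs[j][0] + vecs[i][1] * vecs[j][1]
        ang.append(math.acos(max(-1.0, min(1.0, dot / (r[i] * r[j])))))
    # Unnormalized weights; ang[i - 1] wraps to the last edge for i = 0.
    w = [(math.tan(ang[i - 1] / 2) + math.tan(ang[i] / 2)) / r[i] for i in range(n)]
    s = sum(w)
    return [wi / s for wi in w]

# Unit square: at the centre all four coordinates should be 1/4.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(mean_value_coords(square, (0.5, 0.5)))
```

    Like the coordinates studied in the paper, these weights reproduce linear functions exactly, so the weighted average of the vertices returns the evaluation point itself.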

  12. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    PubMed

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.

  13. On the Validity and Sensitivity of the Phonics Screening Check: Erratum and Further Analysis

    ERIC Educational Resources Information Center

    Gilchrist, James M.; Snowling, Margaret J.

    2018-01-01

    Duff, Mengoni, Bailey and Snowling ("Journal of Research in Reading," 38: 109-123; 2015) evaluated the sensitivity and specificity of the phonics screening check against two reference standards. This report aims to correct a minor data error in the original article and to present further analysis of the data. The methods used are…

  14. A Test Reliability Analysis of an Abbreviated Version of the Pupil Control Ideology Form.

    ERIC Educational Resources Information Center

    Gaffney, Patrick V.

    A reliability analysis was conducted of an abbreviated, 10-item version of the Pupil Control Ideology Form (PCI), using the Cronbach's alpha technique (L. J. Cronbach, 1951) and the computation of the standard error of measurement. The PCI measures a teacher's orientation toward pupil control. Subjects were 168 preservice teachers from one private…

  15. Novel conformal technique to reduce staircasing artifacts at material boundaries for FDTD modeling of the bioheat equation.

    PubMed

    Neufeld, E; Chavannes, N; Samaras, T; Kuster, N

    2007-08-07

    The modeling of thermal effects, often based on the Pennes Bioheat Equation, is becoming increasingly popular. The FDTD technique commonly used in this context suffers considerably from staircasing errors at boundaries. A new conformal technique is proposed that can easily be integrated into existing implementations without requiring a special update scheme. It scales fluxes at interfaces with factors derived from the local surface normal. The new scheme is validated using an analytical solution, and an error analysis is performed to understand its behavior. The new scheme behaves considerably better than the standard scheme. Furthermore, in contrast to the standard scheme, it is possible to obtain with it more accurate solutions by increasing the grid resolution.

  16. Representations of Intervals and Optimal Error Bounds.

    DTIC Science & Technology

    1980-07-01

    Unclassified. MRC TSR-2098. Keywords: geometric and harmonic means, excess width. Work Unit Number 3 (Numerical Analysis and Computer Science). Sponsored by the United States Army under ... an example in the next section, following which the general theory will be discussed. 3. An example of an optimal point and error bound. A simple

  17. Zonal average earth radiation budget measurements from satellites for climate studies

    NASA Technical Reports Server (NTRS)

    Ellis, J. S.; Haar, T. H. V.

    1976-01-01

    Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean monthly, seasonal, and annual zonally averaged meridional profiles. The individual months comprising the 29-month set were selected as representing the best available total flux data for compositing into large-scale statistics for climate studies. A discussion of the spatial resolution of the measurements is presented, along with an error analysis that includes both the uncertainty and the standard error of the mean.

  18. Analysis on optical heterodyne frequency error of full-field heterodyne interferometer

    NASA Astrophysics Data System (ADS)

    Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli

    2017-06-01

    The full-field heterodyne interferometric measurement technique is increasingly being applied, with low-frequency heterodyne acousto-optic modulators replacing complex electro-mechanical scanning devices. The optical element surface can be acquired directly by synchronously detecting the received signal phase at each pixel, because standard matrix detectors such as CCD and CMOS cameras can be used in a heterodyne interferometer. Instead of the traditional four-step phase-shifting calculation, a Fourier spectral analysis method is used for phase extraction, which brings lower sensitivity to sources of uncertainty and higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described and their advantages and disadvantages specified. A heterodyne interferometer has to combine two beams of different frequency to produce interference, which introduces a variety of optical heterodyne frequency errors. Frequency mixing error and beat frequency error are two unavoidable kinds of heterodyne frequency error. The effects of frequency mixing error on surface measurement are derived, and the relationship between phase extraction accuracy and the errors is calculated. The tolerance of the extinction ratio of the polarization splitting prism and the signal-to-noise ratio of stray light is given. The phase extraction error caused by beat frequency shifting in the Fourier analysis is derived and calculated. We also propose an improved phase extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm with a Hanning window is used to correct the heterodyne signal phase extraction. The simulation results show that this method can effectively suppress the degradation of phase extraction caused by beat frequency error and reduce the measurement uncertainty of the full-field heterodyne interferometer.
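
    The Fourier-based phase extraction the abstract describes can be sketched for a single pixel. All parameter values below (sampling rate, beat frequency, phase) are illustrative assumptions, not taken from the paper: the idea is to read the complex FFT bin at the beat frequency to obtain the phase.

```python
import numpy as np

fs = 1000.0        # sampling rate, Hz (illustrative)
f_beat = 50.0      # heterodyne beat frequency, Hz (an exact FFT bin here)
true_phase = 0.7   # surface-related phase to recover, rad

t = np.arange(1000) / fs
pixel_signal = np.cos(2 * np.pi * f_beat * t + true_phase)

window = np.hanning(len(pixel_signal))      # Hanning window, as in the abstract
spectrum = np.fft.rfft(pixel_signal * window)
k = int(np.argmax(np.abs(spectrum)))        # bin of the beat frequency
phase_est = float(np.angle(spectrum[k]))    # recovered phase
```

    When the beat frequency is not an exact bin, spectral leakage biases the phase, which is the degradation the paper's spectrum-correction algorithm addresses.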

  19. The Applicability of Standard Error of Measurement and Minimal Detectable Change to Motor Learning Research-A Behavioral Study.

    PubMed

    Furlan, Leonardo; Sterr, Annette

    2018-01-01

    Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the relative contributions of real modifications in performance and of random measurement error. One way of differentiating between real change and random measurement error is through the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value can be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, that is, a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task, and Study 2 consisted of a typical motor learning study in which participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task, and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC and thereby caused mostly by random measurement error rather than by learning. We therefore suggest that motor learning studies complement their p-value-based analyses of difference with statistics such as SEM and MDC in order to inform as to the likely cause or origin of any reported changes in performance.
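
    The SEM and MDC recipe described above reduces to two lines of arithmetic. A minimal sketch, with illustrative numbers rather than the study's values:

```python
import math

baseline_sd = 12.0    # SD of performance scores at baseline (illustrative)
reliability = 0.85    # test-retest reliability index, e.g. an ICC (illustrative)

sem = baseline_sd * math.sqrt(1 - reliability)   # standard error of measurement
mdc95 = sem * 1.96 * math.sqrt(2)                # minimal detectable change, 95%

observed_change = 10.2
real_change_likely = observed_change > mdc95     # False: change within error
```

    Here an observed improvement of 10.2 points would be statistically detectable by a t-test with enough participants, yet still falls below the MDC, exactly the scenario the study warns about.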

  20. A Note on Standard Deviation and Standard Error

    ERIC Educational Resources Information Center

    Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth

    2010-01-01

    Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
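
    The distinction the note addresses fits in one computation: the standard deviation (SD) describes the spread of individual observations, while the standard error of the mean (SE) = SD / sqrt(n) describes the uncertainty of the sample mean. Synthetic data for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=100.0, scale=15.0, size=400)

sd = data.std(ddof=1)               # spread of individual values; stable as n grows
se = sd / np.sqrt(data.size)        # uncertainty of the mean; shrinks as n grows
```

    Report the SD when describing variability between subjects, and the SE when describing how precisely the mean is known.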

  1. Reliability of drivers in urban intersections.

    PubMed

    Gstalter, Herbert; Fastenmeier, Wolfgang

    2010-01-01

    The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability, the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions, and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE task analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as the aptitudes of groups of drivers or of individual drivers. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary with both driver age group and type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which exactly matches the corresponding ratings of task complexity from the SAFE analysis. A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation as well as driver training.
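
    The HRA-style estimate described above is a ratio: human error probability = errors committed / opportunities for that error, and reliability is its complement. A sketch with illustrative counts (not the study's data):

```python
# Observed error counts per driving task (all numbers are made up for illustration).
observed = {
    "non-signalised intersection": {"errors": 41, "opportunities": 180},
    "roundabout":                  {"errors": 35, "opportunities": 160},
    "signalised intersection":     {"errors": 12, "opportunities": 200},
}

# Reliability = 1 - error probability, per task.
reliability = {
    task: 1.0 - c["errors"] / c["opportunities"]
    for task, c in observed.items()
}
```

    Comparing these per-task reliabilities is exactly how the study contrasts intersection types or driver groups.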

  2. Conditional Standard Errors, Reliability and Decision Consistency of Performance Levels Using Polytomous IRT.

    ERIC Educational Resources Information Center

    Wang, Tianyou; And Others

    M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated the ways of using nonlinear transformation from number-correct raw score to scale score to equalize the conditional standard error along…

  3. Cirrus Cloud Retrieval Using Infrared Sounding Data: Multilevel Cloud Errors.

    NASA Astrophysics Data System (ADS)

    Baum, Bryan A.; Wielicki, Bruce A.

    1994-01-01

    In this study we perform an error analysis for cloud-top pressure retrieval using the High-Resolution Infrared Radiometric Sounder (HIRS/2) 15-µm CO2 channels for the two-layer case of transmissive cirrus overlying an overcast, opaque stratiform cloud. This analysis includes standard deviation and bias errors due to instrument noise and the presence of two cloud layers, the lower of which is opaque. Instantaneous cloud pressure retrieval errors are determined for a range of cloud amounts (0.1–1.0) and cloud-top pressures (850–250 mb). Large cloud-top pressure retrieval errors are found to occur when a lower opaque layer is present underneath an upper transmissive cloud layer in the satellite field of view (FOV). Errors tend to increase with decreasing upper-cloud effective cloud amount and with decreasing cloud height (increasing pressure). Errors in retrieved upper-cloud pressure result in corresponding errors in derived effective cloud amount. For the case in which a HIRS FOV has two distinct cloud layers, the difference between the retrieved and actual cloud-top pressure is positive in all cases, meaning that the retrieved upper-cloud height is lower than the actual upper-cloud height. In addition, errors in retrieved cloud pressure are found to depend upon the lapse rate between the low-level cloud top and the surface. We examined which sounder channel combinations would minimize the total errors in derived cirrus cloud height caused by instrument noise and by the presence of a lower-level cloud. We find that while the sounding channels that peak between 700 and 1000 mb minimize random errors, the sounding channels that peak at 300–500 mb minimize bias errors. For a cloud climatology, the bias errors are most critical.

  4. SU-E-T-144: Effective Analysis of VMAT QA Generated Trajectory Log Files for Medical Accelerator Predictive Maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Able, CM; Baydush, AH; Nguyen, C

    Purpose: To determine the effectiveness of SPC analysis for a model predictive maintenance process that uses accelerator-generated parameter and performance data contained in trajectory log files. Methods: Each trajectory file is decoded and a total of 131 axis positions are recorded (collimator jaw position, gantry angle, each MLC, etc.). This raw data is processed, and either axis positions are extracted at critical points during the delivery or positional change over time is used to determine axis velocity. The focus of our analysis is the accuracy, reproducibility, and fidelity of each axis. A reference positional trace of the gantry and each MLC is used as a motion baseline for cross-correlation (CC) analysis. A total of 494 parameters (482 MLC related) were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and parameter/system specifications. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on TG-142 and published analyses of VMAT delivery accuracy. Results: All errors introduced were detected. Synthetic positional errors of 2 mm for collimator jaw and MLC carriage exceeded the chart limits. Gantry speed and each MLC speed are analyzed at two different points in the delivery. Simulated gantry speed error (0.2 deg/sec) and MLC speed error (0.1 cm/sec) exceeded the speed chart limits. A gantry position error of 0.2 deg was detected by the CC maximum value charts. The MLC position error of 0.1 cm was detected by the CC maximum value location charts for every MLC. Conclusion: SPC I/MR evaluation of trajectory log file parameters may be effective in providing an early warning of performance degradation or component failure for medical accelerator systems.
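
    An Individuals / Moving Range chart of the kind used above can be sketched with the standard 3-sigma constants (d2 = 1.128 for a moving range of subgroup size 2). The hybrid spec-based limits described in the abstract are not reproduced here, and the data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.02, size=50)    # e.g. a jaw-position parameter, cm (synthetic)
x[-1] += 0.2                          # inject a synthetic 2 mm positional error

mr_bar = np.abs(np.diff(x)).mean()    # average moving range of consecutive points
sigma_hat = mr_bar / 1.128            # d2 constant for subgroup size 2
center = x.mean()
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

flagged = np.where((x > ucl) | (x < lcl))[0]   # indices breaching the chart limits
```

    The injected 2 mm shift lands far outside the 3σ limits, mirroring the paper's finding that synthetic positional errors of that size are reliably detected.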

  5. Quantifying uncertainty in carbon and nutrient pools of coarse woody debris

    NASA Astrophysics Data System (ADS)

    See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.

    2016-12-01

    Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of the error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards and Technology. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
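
    The Monte Carlo propagation described above can be sketched for a carbon pool computed as volume × wood density × carbon fraction, with each input drawn from an assumed error distribution. All distributions below are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000   # Monte Carlo draws

volume = rng.normal(2.5, 0.3, n)      # m^3, includes sampling + volume-model error
density = rng.normal(350.0, 40.0, n)  # kg m^-3, decay-class specific (assumed)
c_frac = rng.normal(0.48, 0.02, n)    # carbon fraction of dry mass (assumed)

carbon = volume * density * c_frac    # kg C per plot, one value per draw
mean_c = carbon.mean()
lo, hi = np.percentile(carbon, [2.5, 97.5])   # 95% uncertainty interval
```

    Zeroing one input's uncertainty and re-running isolates that source's contribution, which is how the relative importance of each error source can be ranked.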

  6. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    ERIC Educational Resources Information Center

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within-subject standard deviation.…
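
    The repeatability statistic described above, sketched on illustrative data: 2.77 (≈ 1.96 × sqrt(2)) times the within-subject standard deviation of repeated measurements, which for paired repeats can be computed from the within-pair differences:

```python
import numpy as np

pairs = np.array([     # two repeated measurements per subject (illustrative)
    [12.1, 12.4],
    [15.0, 14.6],
    [ 9.8, 10.1],
    [13.3, 13.0],
])

diffs = pairs[:, 0] - pairs[:, 1]
within_sd = np.sqrt((diffs ** 2).sum() / (2 * len(diffs)))   # within-subject SD
repeatability = 2.77 * within_sd   # 95% limit for the difference of two repeats
```

    Two measurements on the same person are expected to differ by less than the repeatability 95% of the time.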

  7. Toward developing a standardized Arabic continuous text reading chart.

    PubMed

    Alabdulkader, Balsam; Leat, Susan Jennifer

    Near visual acuity is an essential measurement during an oculo-visual assessment. Short-duration continuous text reading charts measure reading acuity and other aspects of reading performance. There is no standardized version of such a chart in Arabic. The aim of this study is to create sentences of equal readability to use in the development of a standardized Arabic continuous text reading chart. Initially, 109 Arabic pairs of sentences were created for use in constructing a chart with a similar layout to the Colenbrander chart. They were created to have the same grade level of difficulty and physical length. Fifty-three adults and sixteen children were recruited to validate the sentences. Reading speed in correct words per minute (CWPM) and standard length words per minute (SLWPM) was measured and errors were counted. Criteria based on reading speed and errors made in each sentence pair were used to exclude sentence pairs with more outlying characteristics, and to select the final group of sentence pairs. Forty-five sentence pairs were selected according to the elimination criteria. For adults, the average reading speed for the final sentences was 166 CWPM and 187 SLWPM, and the average number of errors per sentence pair was 0.21. Children's average reading speed for the final group of sentences was 61 CWPM and 72 SLWPM. Their average error rate was 1.71. The reliability analysis showed that the final 45 sentence pairs are highly comparable. They will be used in constructing an Arabic short-duration continuous text reading chart. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  8. Lost in Translation: the Case for Integrated Testing

    NASA Technical Reports Server (NTRS)

    Young, Aaron

    2017-01-01

    The building of a spacecraft is complex and often involves multiple suppliers and companies that have their own designs and processes. Standards have been developed across the industry to reduce the chances for critical flight errors at the system level, but the spacecraft is still vulnerable to the introduction of critical errors during integration of these systems. Critical errors can occur at any time during the process, and in many cases human reliability analysis (HRA) identifies human error as a risk driver. Most programs have a test plan in place that is intended to catch these errors, but it is not uncommon for schedule and cost stress to result in less testing than initially planned. Therefore, integrated testing, or "testing as you fly," is essential as a final check on the design and assembly to catch any errors prior to the mission. This presentation will outline the unique benefits of integrated testing in catching critical flight errors that can otherwise go undetected, discuss HRA methods used to identify opportunities for human error, and review lessons learned and challenges over ownership of testing.

  9. Improving the accuracy of hyaluronic acid molecular weight estimation by conventional size exclusion chromatography.

    PubMed

    Shanmuga Doss, Sreeja; Bhatt, Nirav Pravinbhai; Jayaraman, Guhan

    2017-08-15

    There is an unreasonably high variation in the literature reports on the molecular weight of hyaluronic acid (HA) estimated using conventional size exclusion chromatography (SEC). This variation is most likely due to errors in estimation. Working with commercially available HA molecular weight standards, this work examines the extent of error in molecular weight estimation due to two factors: use of non-HA-based calibration and the concentration of sample injected into the SEC column. We develop a multivariate regression correlation to correct for the concentration effect. Our analysis showed that SEC calibration based on non-HA standards like polyethylene oxide and pullulan led to approximately 2- and 10-fold overestimation, respectively, when compared to HA-based calibration. Further, we found that injected sample concentration has an effect on molecular weight estimation. Even at 1 g/l injected sample concentration, HA molecular weight standards of 0.7 and 1.64 MDa showed appreciable underestimation of 11-24%. The multivariate correlation developed was found to reduce estimation error at 1 g/l to <4%. The correlation was also successfully applied to accurately estimate the molecular weight of HA produced by a recombinant Lactococcus lactis fermentation. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO₃ standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "chi-squared matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s.
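
    The fitting idea described above can be sketched as an errors-in-variables least-squares problem: treat the standard masses as extra fit parameters, and weight both the response residuals (by the system error) and the mass residuals (by the 0.2% mass uncertainty). The linear calibration model, parameter values, and data below are illustrative assumptions, not the report's:

```python
import numpy as np
from scipy.optimize import least_squares

m_nominal = np.array([0.1, 0.25, 0.5, 0.75, 1.0])   # mg, stated standard masses
sigma_m = 0.002 * m_nominal                          # 0.2% mass uncertainty
sigma_y = 5.0                                        # counts, assumed system error

def response(m, a, b):
    """Assumed calibration model (linear for illustration)."""
    return a + b * m

rng = np.random.default_rng(3)
y_obs = response(m_nominal, 20.0, 1000.0) + rng.normal(0, sigma_y, m_nominal.size)

def residuals(p):
    a, b = p[0], p[1]
    m_true = p[2:]                                   # masses as fit parameters
    r_y = (y_obs - response(m_true, a, b)) / sigma_y  # system-error-weighted
    r_m = (m_true - m_nominal) / sigma_m              # mass-error-weighted
    return np.concatenate([r_y, r_m])

p0 = np.concatenate([[0.0, 500.0], m_nominal])
fit = least_squares(residuals, p0)
a_hat, b_hat = fit.x[0], fit.x[1]
```

    Because the mass residuals are penalized by their known uncertainty, the fitted masses stay consistent with the gravimetric values while the curve parameters absorb the system error.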

  11. Does a better model yield a better argument? An info-gap analysis

    NASA Astrophysics Data System (ADS)

    Ben-Haim, Yakov

    2017-04-01

    Theories, models and computations underlie reasoned argumentation in many areas. The possibility of error in these arguments, though of low probability, may be highly significant when the argument is used in predicting the probability of rare high-consequence events. This implies that the choice of a theory, model or computational method for predicting rare high-consequence events must account for the probability of error in these components. However, error may result from lack of knowledge or surprises of various sorts, and predicting the probability of error is highly uncertain. We show that the putatively best, most innovative and sophisticated argument may not actually have the lowest probability of error. Innovative arguments may entail greater uncertainty than more standard but less sophisticated methods, creating an innovation dilemma in formulating the argument. We employ info-gap decision theory to characterize and support the resolution of this problem and present several examples.

  12. Measurement Error and Environmental Epidemiology: A Policy Perspective

    PubMed Central

    Edwards, Jessie K.; Keil, Alexander P.

    2017-01-01

    Purpose of review: Measurement error threatens public health by producing bias in estimates of the population impact of environmental exposures. Quantitative methods to account for measurement bias can improve public health decision making. Recent findings: We summarize traditional and emerging methods to improve inference under a standard perspective, in which the investigator estimates an exposure-response function, and a policy perspective, in which the investigator directly estimates the population impact of a proposed intervention. Summary: Under a policy perspective, the analysis must be sensitive to errors in measurement of factors that modify the effect of exposure on outcome, must consider whether policies operate on the true or measured exposures, and may increasingly need to account for potentially dependent measurement error of two or more exposures affected by the same policy or intervention. Incorporating approaches that account for measurement error into such a policy perspective will increase the impact of environmental epidemiology. PMID:28138941

  13. Bootstrap Methods: A Very Leisurely Look.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Winstead, Wayland H.

    The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
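
    The bootstrap idea from the abstract, sketched in Python rather than SAS: resample the data with replacement many times, recompute the statistic on each resample, and take the standard deviation across resamples as its standard error.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(50.0, 10.0, size=100)   # illustrative sample

boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()   # one resample
    for _ in range(2000)
])

bootstrap_se = boot_means.std(ddof=1)
analytic_se = data.std(ddof=1) / np.sqrt(data.size)   # closed form, for comparison
```

    For the mean the closed form exists, so the two agree; the bootstrap's value is that the same loop works for statistics with no simple standard-error formula, such as the regression and factor coefficients the paper mentions.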

  14. Comparative Cost-Effectiveness Analysis of Three Different Automated Medication Systems Implemented in a Danish Hospital Setting.

    PubMed

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2018-02-01

    Automated medication systems have been found to reduce errors in the medication process, but little is known about the cost-effectiveness of such systems. The objective of this study was to perform a model-based indirect cost-effectiveness comparison of three different, real-world automated medication systems compared with current standard practice. The considered automated medication systems were a patient-specific automated medication system (psAMS), a non-patient-specific automated medication system (npsAMS), and a complex automated medication system (cAMS). The economic evaluation used original effect and cost data from prospective, controlled, before-and-after studies of medication systems implemented at a Danish hematological ward and an acute medical unit. Effectiveness was described as the proportion of clinical and procedural error opportunities that were associated with one or more errors. An error was defined as a deviation from the electronic prescription, from standard hospital policy, or from written procedures. The cost assessment was based on 6-month standardization of observed cost data. The model-based comparative cost-effectiveness analyses were conducted with system-specific assumptions of the effect size and costs in scenarios with consumptions of 15,000, 30,000, and 45,000 doses per 6-month period. With 30,000 doses the cost-effectiveness model showed that the cost-effectiveness ratio expressed as the cost per avoided clinical error was €24 for the psAMS, €26 for the npsAMS, and €386 for the cAMS. Comparison of the cost-effectiveness of the three systems in relation to different valuations of an avoided error showed that the psAMS was the most cost-effective system regardless of error type or valuation. The model-based indirect comparison against the conventional practice showed that psAMS and npsAMS were more cost-effective than the cAMS alternative, and that psAMS was more cost-effective than npsAMS.

  15. Advanced error diagnostics of the CMAQ and Chimere modelling systems within the AQMEII3 model evaluation framework

    NASA Astrophysics Data System (ADS)

    Solazzo, Efisio; Hogrefe, Christian; Colette, Augustin; Garcia-Vivanco, Marta; Galmarini, Stefano

    2017-09-01

    The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone of two modelling systems, Chimere for Europe and CMAQ for North America. The evaluation strategy outlined over the three phases of the AQMEII activity, aimed at building up a diagnostic methodology for model evaluation, is pursued here and novel diagnostic methods are proposed. In addition to evaluating the base case simulation, in which all model components are configured in their standard mode, the analysis also makes use of sensitivity simulations in which the models have been applied by altering and/or zeroing lateral boundary conditions, emissions of anthropogenic precursors, and ozone dry deposition. To help understand the causes of model deficiencies, the error components (bias, variance, and covariance) of the base case and of the sensitivity runs are analysed in conjunction with timescale considerations and error modelling using the available error fields of temperature, wind speed, and NOx concentration. The results reveal the effectiveness and diagnostic power of the methods devised (which remains the main scope of this study), allowing the detection of the timescales and the fields that the two models are most sensitive to. The representation of planetary boundary layer (PBL) dynamics is pivotal to both models. In particular, (i) the fluctuations slower than ~1.5 days account for 70-85 % of the mean square error of the full (undecomposed) ozone time series; (ii) a recursive, systematic error with daily periodicity is detected, responsible for 10-20 % of the quadratic total error; (iii) errors in representing the timing of the daily transition between stability regimes in the PBL are responsible for a covariance error as large as 9 ppb (as much as the standard deviation of the network-average ozone observations in summer in both Europe and North America); (iv) the CMAQ ozone error has a weak/negligible dependence on the errors in NO2, while the error in NO2 significantly impacts the ozone error produced by Chimere; (v) the response of the models to variations of anthropogenic emissions and boundary conditions shows a pronounced spatial heterogeneity, while the seasonal variability of the response is found to be less marked. Only during the winter season does the zeroing of boundary values for North America produce a spatially uniform deterioration of the model accuracy across the majority of the continent.
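
    The bias/variance/covariance error decomposition used in these diagnostics is the exact identity MSE = bias² + (σ_mod − σ_obs)² + 2·σ_mod·σ_obs·(1 − r). A sketch on synthetic "ozone" series (the data are illustrative, not AQMEII's):

```python
import numpy as np

rng = np.random.default_rng(5)
obs = 30 + 10 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 3, 500)
mod = 0.9 * obs + 5 + rng.normal(0, 4, 500)   # synthetic model: bias + extra noise

bias2 = (mod.mean() - obs.mean()) ** 2            # bias component
var_term = (mod.std() - obs.std()) ** 2           # variance (amplitude) component
r = np.corrcoef(mod, obs)[0, 1]
cov_term = 2 * mod.std() * obs.std() * (1 - r)    # covariance (phase/timing) component

mse = np.mean((mod - obs) ** 2)   # equals the sum of the three components
```

    The covariance term is the one driven by timing errors, which is why mistimed PBL transitions show up there in the paper's analysis.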

  16. Determination of fiber volume in graphite/epoxy materials using computer image analysis

    NASA Technical Reports Server (NTRS)

    Viens, Michael J.

    1990-01-01

    The fiber volume of graphite/epoxy specimens was determined by analyzing optical images of cross sectioned specimens using image analysis software. Test specimens were mounted and polished using standard metallographic techniques and examined at 1000 times magnification. Fiber volume determined using the optical imaging agreed well with values determined using the standard acid digestion technique. The results were found to agree within 5 percent over a fiber volume range of 45 to 70 percent. The error observed is believed to arise from fiber volume variations within the graphite/epoxy panels themselves. The determination of ply orientation using image analysis techniques is also addressed.
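
    The image-analysis step can be sketched as thresholding a grayscale cross-section so that bright pixels count as fiber, then reporting the fiber area fraction. The synthetic "micrograph" below stands in for a real image; the gray levels and 60% fiber fraction are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)
image = rng.normal(80.0, 10.0, size=(200, 200))       # epoxy matrix gray levels
fiber = rng.random((200, 200)) < 0.6                  # ~60% of pixels are fiber
image[fiber] = rng.normal(180.0, 10.0, fiber.sum())   # fibers image brighter

threshold = 130.0                                     # between the two gray-level modes
fiber_fraction = float((image > threshold).mean())    # fiber area (≈ volume) fraction
```

    For well-aligned composites the area fraction of a cross-section approximates the volume fraction, which is the quantity compared against acid digestion in the paper.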

  17. Effects of auditory radio interference on a fine, continuous, open motor skill.

    PubMed

    Lazar, J M; Koceja, D M; Morris, H H

    1995-06-01

    The effects of human speech on a fine, continuous, and open motor skill were examined. A tape of human radio traffic was injected into a tank gunnery simulator during each training session over 4 wk. of training at 3 hr. per week. The dependent variables were identification time, fire time, kill time, systems errors, and acquisition errors, as measured by the Unit Conduct Of Fire Trainer (UCOFT). The interference was injected into the UCOFT Tank Table VIII gunnery test. A Solomon four-group design was used, and a 2 x 2 analysis of variance assessed whether gunnery training under interference resulted in improvements in interference posttest scores. During the first three weeks of training, the interference group committed 106% more systems errors and 75% more acquisition errors than the standard group. The interference training condition was associated with a significant improvement of 44% in overall UCOFT scores from pre- to posttest; standard training, by contrast, did not improve performance significantly over the same period. It was concluded that auditory radio interference degrades performance of this fine, continuous, open motor skill, and that interference training appears to abate the effects of this degradation.

  18. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
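    The contrast between theoretical and empirical (Monte Carlo) standard errors evaluated in this study can be illustrated in a deliberately simplified setting: the slope of an ordinary least-squares regression. This is only a sketch of the mechanics; the network model's constrained regression with positivity restrictions is omitted, and all data below are invented.

    ```python
    import math
    import random
    import statistics

    random.seed(0)
    n, beta, sigma = 50, 2.0, 1.0
    x = [i / n for i in range(n)]

    def fit_slope(y):
        """Ordinary least-squares slope of y on the fixed design x."""
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx

    # theoretical standard error of the slope: sigma / sqrt(Sxx)
    mx = sum(x) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    se_theoretical = sigma / math.sqrt(sxx)

    # empirical standard error: spread of the estimate over Monte Carlo replicates
    slopes = []
    for _ in range(2000):
        y = [beta * xi + random.gauss(0.0, sigma) for xi in x]
        slopes.append(fit_slope(y))
    se_empirical = statistics.stdev(slopes)
    ```

    With enough replicates the two estimates agree closely, which is the kind of check the Monte Carlo evaluation in the study performs for the network parameters.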

  19. Data Quality Control and Maintenance for the Qweak Experiment

    NASA Astrophysics Data System (ADS)

    Heiner, Nicholas; Spayde, Damon

    2014-03-01

    The Qweak collaboration seeks to quantify the weak charge of the proton through the analysis of the parity-violating electron asymmetry in elastic electron-proton scattering. The asymmetry is calculated by measuring how many electrons scatter from a hydrogen target at the chosen scattering angle for aligned and anti-aligned electron spins, then evaluating the difference between the numbers of scattering events for the two polarization states. The weak charge can then be extracted from these data. Knowing the weak charge will allow us to calculate the electroweak mixing angle for the particular Q2 value of the chosen electrons, for which the Standard Model makes a firm prediction. Any significant deviation from this prediction would be a prime indicator of the existence of physics beyond what the Standard Model describes. After the experiment was conducted at Jefferson Lab, the collected data were stored in a MySQL database for further analysis. I will present an overview of the database and its functions as well as a demonstration of the quality checks and maintenance performed on the data itself. These checks include an analysis of errors occurring throughout the experiment, specifically data acquisition errors within the main detector array, and an analysis of data cuts.
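    The asymmetry described above, together with its counting-statistics uncertainty, can be sketched in a few lines. The helicity-state counts below are invented for illustration.

    ```python
    import math

    def asymmetry(n_plus, n_minus):
        """Raw asymmetry between the two electron-spin (helicity) states,
        with its counting-statistics uncertainty."""
        total = n_plus + n_minus
        a = (n_plus - n_minus) / total
        sigma = math.sqrt((1.0 - a * a) / total)  # binomial counting error
        return a, sigma

    # invented counts for the aligned and anti-aligned spin states
    a, sigma = asymmetry(1000280, 999720)
    ```

    For counts of this size the statistical error (about 7e-4 here) dwarfs the asymmetry itself, which is why parity-violation experiments accumulate enormous numbers of events.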

  20. Goldmann Tonometer Prism with an Optimized Error Correcting Applanation Surface.

    PubMed

    McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko; Schwiegerling, Jim

    2016-09-01

    We evaluate solutions for an applanating surface modification to the Goldmann tonometer prism, which substantially negates the errors due to patient variability in biomechanics. A modified Goldmann or correcting applanation tonometry surface (CATS) prism is presented which was optimized to minimize the intraocular pressure (IOP) error due to corneal thickness, stiffness, curvature, and tear film. Mathematical modeling with finite element analysis (FEA) and manometric IOP referenced cadaver eyes were used to optimize and validate the design. Mathematical modeling of the optimized CATS prism indicates an approximate 50% reduction in each of the corneal biomechanical and tear film errors. Manometric IOP referenced pressure in cadaveric eyes demonstrates substantial equivalence to GAT in nominal eyes with the CATS prism as predicted by modeling theory. A CATS modified Goldmann prism is theoretically able to significantly improve the accuracy of IOP measurement without changing Goldmann measurement technique or interpretation. Clinical validation is needed but the analysis indicates a reduction in CCT error alone to less than ±2 mm Hg using the CATS prism in 100% of a standard population compared to only 54% less than ±2 mm Hg error with the present Goldmann prism. This article presents an easily adopted novel approach and critical design parameters to improve the accuracy of a Goldmann applanating tonometer.

  1. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student’s t-distribution*

    PubMed Central

    Leão, William L.; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor’s 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model. PMID:29333210

  2. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student's t-distribution.

    PubMed

    Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.

  3. Troposphere Delay Raytracing Applied in VLBI Analysis

    NASA Astrophysics Data System (ADS)

    Eriksson, David; MacMillan, Daniel; Gipson, John

    2014-12-01

    Tropospheric delay modeling error is one of the largest sources of error in VLBI analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from European Centre for Medium Range Forecasting (ECMWF) data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption does not reflect reality, we have instead determined the raytrace delay along the signal path through the three-dimensional troposphere refractivity field for each VLBI quasar observation. We calculated the troposphere refractivity fields from the pressure, temperature, specific humidity, and geopotential height fields of the NASA GSFC GEOS-5 numerical weather model. We discuss results using raytrace delay in the analysis of the CONT11 R&D sessions. When applied in VLBI analysis, baseline length repeatabilities were better for 70% of baselines with raytraced delays than with VMF1 mapping functions. Vertical repeatabilities were better for 2/3 of all stations. The reference frame scale bias error was 0.02 ppb for raytracing versus 0.08 ppb and 0.06 ppb for VMF1 and NMF, respectively.

  4. Application of a spectrum standardization method for carbon analysis in coal using laser-induced breakdown spectroscopy (LIBS).

    PubMed

    Li, Xiongwei; Wang, Zhe; Fu, Yangting; Li, Zheng; Liu, Jianmin; Ni, Weidou

    2014-01-01

    Measurement of coal carbon content using laser-induced breakdown spectroscopy (LIBS) is limited by its low precision and accuracy. A modified spectrum standardization method was proposed to achieve both reproducible and accurate results for the quantitative analysis of carbon content in coal using LIBS. The proposed method used the molecular emissions of diatomic carbon (C2) and cyanide (CN) to compensate for the diminution of atomic carbon emissions in high volatile content coal samples caused by matrix effect. The compensated carbon line intensities were further converted into an assumed standard state with standard plasma temperature, electron number density, and total number density of carbon, under which the carbon line intensity is proportional to its concentration in the coal samples. To obtain better compensation for fluctuations of total carbon number density, the segmental spectral area was used and an iterative algorithm was applied that is different from our previous spectrum standardization calculations. The modified spectrum standardization model was applied to the measurement of carbon content in 24 bituminous coal samples. The results demonstrate that the proposed method has superior performance over the generally applied normalization methods. The average relative standard deviation was 3.21%, the coefficient of determination was 0.90, the root mean square error of prediction was 2.24%, and the average maximum relative error for the modified model was 12.18%, showing an overall improvement over the corresponding values for the normalization with segmental spectrum area, 6.00%, 0.75, 3.77%, and 15.40%, respectively.
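    The figures of merit quoted above (root mean square error of prediction, average relative error) are standard and easy to compute. A minimal pure-Python sketch, using invented certified and predicted carbon contents rather than the paper's 24 coal samples:

    ```python
    import math

    def evaluation_metrics(reference, predicted):
        """RMSEP and average relative error, two of the figures of merit above."""
        n = len(reference)
        rmsep = math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)
        avg_rel_err = 100.0 * sum(abs(r - p) / r for r, p in zip(reference, predicted)) / n
        return rmsep, avg_rel_err

    # hypothetical certified vs LIBS-predicted carbon contents (wt%)
    ref = [60.0, 65.0, 70.0, 75.0]
    pred = [58.5, 66.0, 69.0, 76.5]
    rmsep, rel = evaluation_metrics(ref, pred)
    ```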

  5. Error in total ozone measurements arising from aerosol attenuation

    NASA Technical Reports Server (NTRS)

    Thomas, R. W. L.; Basher, R. E.

    1979-01-01

    A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.

  6. Analyzing Complex Survey Data.

    ERIC Educational Resources Information Center

    Rodgers-Farmer, Antoinette Y.; Davis, Diane

    2001-01-01

    Uses data from the 1994 AIDS Knowledge and Attitudes Supplement to the National Health Interview Survey (NHIS) to illustrate that biased point estimates, inappropriate standard errors, and misleading tests of significance can result from using traditional software packages, such as SPSS or SAS, for complex survey analysis. (BF)

  7. Blood gas testing and related measurements: National recommendations on behalf of the Croatian Society of Medical Biochemistry and Laboratory Medicine.

    PubMed

    Dukić, Lora; Kopčinović, Lara Milevoj; Dorotić, Adrijana; Baršić, Ivana

    2016-10-15

    Blood gas analysis (BGA) is exposed to risks of errors caused by improper sampling, transport and storage conditions. The Clinical and Laboratory Standards Institute (CLSI) generated documents with recommendations for avoidance of potential errors caused by sample mishandling. Two main documents related to BGA issued by the CLSI are GP43-A4 (former H11-A4) Procedures for the collection of arterial blood specimens; approved standard - fourth edition, and C46-A2 Blood gas and pH analysis and related measurements; approved guideline - second edition. Practices related to processing of blood gas samples are not standardized in the Republic of Croatia. Each institution has its own protocol for ordering, collection and analysis of blood gases. Although many laboratories use state of the art analyzers, still many preanalytical procedures remain unchanged. The objective of the Croatian Society of Medical Biochemistry and Laboratory Medicine (CSMBLM) is to standardize the procedures for BGA based on CLSI recommendations. The Working Group for Blood Gas Testing as part of the Committee for the Scientific Professional Development of the CSMBLM prepared a set of recommended protocols for sampling, transport, storage and processing of blood gas samples based on relevant CLSI documents, relevant literature search and on the results of Croatian survey study on practices and policies in acid-base testing. Recommendations are intended for laboratory professionals and all healthcare workers involved in blood gas processing.

  8. Blood gas testing and related measurements: National recommendations on behalf of the Croatian Society of Medical Biochemistry and Laboratory Medicine

    PubMed Central

    Dukić, Lora; Kopčinović, Lara Milevoj; Dorotić, Adrijana; Baršić, Ivana

    2016-01-01

    Blood gas analysis (BGA) is exposed to risks of errors caused by improper sampling, transport and storage conditions. The Clinical and Laboratory Standards Institute (CLSI) generated documents with recommendations for avoidance of potential errors caused by sample mishandling. Two main documents related to BGA issued by the CLSI are GP43-A4 (former H11-A4) Procedures for the collection of arterial blood specimens; approved standard – fourth edition, and C46-A2 Blood gas and pH analysis and related measurements; approved guideline – second edition. Practices related to processing of blood gas samples are not standardized in the Republic of Croatia. Each institution has its own protocol for ordering, collection and analysis of blood gases. Although many laboratories use state of the art analyzers, still many preanalytical procedures remain unchanged. The objective of the Croatian Society of Medical Biochemistry and Laboratory Medicine (CSMBLM) is to standardize the procedures for BGA based on CLSI recommendations. The Working Group for Blood Gas Testing as part of the Committee for the Scientific Professional Development of the CSMBLM prepared a set of recommended protocols for sampling, transport, storage and processing of blood gas samples based on relevant CLSI documents, relevant literature search and on the results of Croatian survey study on practices and policies in acid-base testing. Recommendations are intended for laboratory professionals and all healthcare workers involved in blood gas processing. PMID:27812301

  9. Reduction of medication errors related to sliding scale insulin by the introduction of a standardized order sheet.

    PubMed

    Harada, Saki; Suzuki, Akio; Nishida, Shohei; Kobayashi, Ryo; Tamai, Sayuri; Kumada, Keisuke; Murakami, Nobuo; Itoh, Yoshinori

    2017-06-01

    Insulin is frequently used for glycemic control. Medication errors related to insulin are a common problem for medical institutions. Here, we prepared a standardized sliding scale insulin (SSI) order sheet and assessed the effect of its introduction. Observations before and after the introduction of the standardized SSI template were conducted at Gifu University Hospital. The incidence of medication errors, hyperglycemia, and hypoglycemia related to SSI were obtained from the electronic medical records. The introduction of the standardized SSI order sheet significantly reduced the incidence of medication errors related to SSI compared with that prior to its introduction (12/165 [7.3%] vs 4/159 [2.1%], P = .048). However, the incidence of hyperglycemia (≥250 mg/dL) and hypoglycemia (≤50 mg/dL) in patients who received SSI was not significantly different between the 2 groups. The introduction of the standardized SSI order sheet reduced the incidence of medication errors related to SSI. © 2016 John Wiley & Sons, Ltd.

  10. A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2002-01-01

    The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data can be created using the standard that are significantly in error. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions where the nonlinear error will remain below 5%.

  11. Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data

    PubMed Central

    Yang, Yan; Simpson, Douglas

    2010-01-01

    Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
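    As a sketch of the numerical-Hessian route to standard errors mentioned above, the following uses a one-parameter Poisson log-likelihood with toy data. The inflated-mixture likelihood in the paper is more involved; this only illustrates the mechanics of inverting the observed information after optimization.

    ```python
    import math

    data = [0, 1, 2, 2, 3, 1, 0, 4, 2, 1]

    def negloglik(lam):
        """Poisson negative log-likelihood (the constant log y! terms are dropped)."""
        return sum(lam - y * math.log(lam) for y in data)

    lam_hat = sum(data) / len(data)  # the Poisson MLE is the sample mean

    # observed information from a central second difference of the negative log-likelihood
    h = 1e-3
    info = (negloglik(lam_hat + h) - 2.0 * negloglik(lam_hat) + negloglik(lam_hat - h)) / h ** 2
    se = 1.0 / math.sqrt(info)  # standard error = sqrt of inverse observed information
    ```

    For this toy model the analytic answer is sqrt(lam_hat / n), so the numerical result can be checked directly; in a multi-parameter mixture model the same idea applies to the full Hessian matrix.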

  12. Methods for estimating the magnitude and frequency of peak streamflows at ungaged sites in and near the Oklahoma Panhandle

    USGS Publications Warehouse

    Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.

    2015-09-28

    Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
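    Regional regression equations of this kind are typically power laws in drainage area, fitted in log space, with the standard error of prediction derived from the log-space residuals. A sketch with invented data (the published equations and errors are not reproduced here):

    ```python
    import math

    # invented drainage areas (square miles) and peak flows (cubic feet per second)
    area = [10.0, 50.0, 120.0, 400.0, 900.0, 2000.0]
    flow = [150.0, 520.0, 980.0, 2600.0, 4800.0, 9000.0]

    # fit log10(Q) = log10(a) + b * log10(A) by ordinary least squares
    lx = [math.log10(v) for v in area]
    ly = [math.log10(v) for v in flow]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
    log_a = my - b * mx

    def predict(a_sq_mi):
        """Peak-flow estimate from the fitted power-law regression."""
        return 10.0 ** (log_a + b * math.log10(a_sq_mi))

    # standard error of the log-space residuals, converted to a percent error
    resid = [v - (log_a + b * u) for u, v in zip(lx, ly)]
    s_log = math.sqrt(sum(r * r for r in resid) / (n - 2))
    se_percent = 100.0 * math.sqrt(math.exp((s_log * math.log(10.0)) ** 2) - 1.0)
    ```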

  13. Evaluation of Eight Methods for Aligning Orientation of Two Coordinate Systems.

    PubMed

    Mecheri, Hakim; Robert-Lachaine, Xavier; Larue, Christian; Plamondon, André

    2016-08-01

    The aim of this study was to evaluate eight methods for aligning the orientation of two different local coordinate systems. Alignment is very important when combining two different systems of motion analysis. Two of the methods were developed specifically for biomechanical studies, and because there have been at least three decades of algorithm development in robotics, it was decided to include six methods from this field. To compare these methods, an Xsens sensor and two Optotrak clusters were attached to a Plexiglas plate. The first optical marker cluster was fixed on the sensor and 20 trials were recorded. The error of alignment was calculated for each trial, and the mean, the standard deviation, and the maximum values of this error over all trials were reported. One-way repeated measures analysis of variance revealed that the alignment error differed significantly across the eight methods. Post-hoc tests showed that the alignment error from the methods based on angular velocities was significantly lower than for the other methods. The method using angular velocities performed the best, with an average error of 0.17 ± 0.08 deg. We therefore recommend this method, which is easy to perform and provides accurate alignment.

  14. Experimental determination of the navigation error of the 4-D navigation, guidance, and control systems on the NASA B-737 airplane

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1978-01-01

    Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.

  15. Merging gauge and satellite rainfall with specification of associated uncertainty across Australia

    NASA Astrophysics Data System (ADS)

    Woldemeskel, Fitsum M.; Sivakumar, Bellie; Sharma, Ashish

    2013-08-01

    Accurate estimation of spatial rainfall is crucial for modelling hydrological systems and planning and management of water resources. While spatial rainfall can be estimated either using rain gauge-based measurements or using satellite-based measurements, such estimates are subject to uncertainties due to various sources of errors in either case, including interpolation and retrieval errors. The purpose of the present study is twofold: (1) to investigate the benefit of merging rain gauge measurements and satellite rainfall data for Australian conditions and (2) to produce a database of retrospective rainfall along with a new uncertainty metric for each grid location at any timestep. The analysis involves four steps: First, a comparison of rain gauge measurements and the Tropical Rainfall Measuring Mission (TRMM) 3B42 data at such rain gauge locations is carried out. Second, gridded monthly rain gauge rainfall is determined using thin plate smoothing splines (TPSS) and modified inverse distance weight (MIDW) method. Third, the gridded rain gauge rainfall is merged with the monthly accumulated TRMM 3B42 using a linearised weighting procedure, the weights at each grid being calculated based on the error variances of each dataset. Finally, cross validation (CV) errors at rain gauge locations and standard errors at gridded locations for each timestep are estimated. The CV error statistics indicate that merging of the two datasets improves the estimation of spatial rainfall, and more so where the rain gauge network is sparse. The provision of spatio-temporal standard errors with the retrospective dataset is particularly useful for subsequent modelling applications where input error knowledge can help reduce the uncertainty associated with modelling outcomes.
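    At a single grid cell, the linearised weighting procedure described above reduces to inverse-error-variance weights on the two datasets. A minimal sketch, assuming independent errors; the rainfall totals and variances below are invented:

    ```python
    def merge(gauge, satellite, var_gauge, var_satellite):
        """Inverse-error-variance weighted merge of two rainfall estimates
        at one grid cell, assuming independent errors."""
        w_gauge = (1.0 / var_gauge) / (1.0 / var_gauge + 1.0 / var_satellite)
        merged = w_gauge * gauge + (1.0 - w_gauge) * satellite
        # variance of the merged estimate
        var_merged = 1.0 / (1.0 / var_gauge + 1.0 / var_satellite)
        return merged, var_merged

    # invented monthly totals (mm) and error variances for one grid cell
    m, v = merge(100.0, 120.0, 4.0, 16.0)
    ```

    The merged variance is always smaller than either input variance, which is the formal counterpart of the finding that merging helps most where the gauge network is sparse (and the gauge variance correspondingly large).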

  16. Single-sample method for the estimation of glomerular filtration rate in children

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tauxe, W.N.; Bagchi, A.; Tepe, P.G.

    1987-03-01

    A method for the determination of the glomerular filtration rate (GFR) in children which involves the use of a single-plasma sample (SPS) after the injection of a radioactive indicator such as radioiodine labeled diatrizoate (Hypaque) has been developed. This is analogous to previously published SPS techniques of effective renal plasma flow (ERPF) in adults and children and GFR SPS techniques in adults. As a reference standard, GFR has been calculated from compartment analysis of injected radiopharmaceuticals (Sapirstein Method). Theoretical volumes of distribution were calculated at various times after injection (Vt) by dividing the total injected counts (I) by the plasma concentration (Ct) expressed in liters, determined by counting an aliquot of plasma in a well-type scintillation counter. Errors of predicting GFR from the various Vt values were determined as the standard error of estimate (Sy.x) in ml/min. They were found to be relatively high early after injection and to fall to a nadir of 3.9 ml/min at 91 min. The Sy.x Vt relationship was examined in linear, quadratic, and exponential form, but the simpler linear relationship was found to yield the lowest error. Other data calculated from the compartment analysis of the reference plasma disappearance curves are presented, but at this time have apparently little clinical relevance.
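    The single-sample construction above can be sketched as follows. The calibration coefficients are invented for illustration and are not the published regression against the Sapirstein reference.

    ```python
    # apparent volume of distribution: total injected counts divided by the
    # plasma concentration (counts per litre) from a single sample
    def apparent_volume(injected_counts, plasma_counts_per_litre):
        return injected_counts / plasma_counts_per_litre

    # invented linear calibration GFR = m * Vt + c against the
    # compartment-analysis reference; illustrative, not the published fit
    m, c = 4.5, -20.0
    vt = apparent_volume(1.0e7, 8.0e5)  # litres
    gfr_estimate = m * vt + c           # ml/min
    ```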

  17. Heavy flavor decay of Zγ at CDF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timothy M. Harrington-Taber

    2013-01-01

    Diboson production is an important and frequently measured parameter of the Standard Model. This analysis considers the previously neglected pp̄ → Zγ → bb̄ channel, as measured at the Collider Detector at Fermilab. Using the entire Tevatron Run II dataset, the measured result is consistent with Standard Model predictions, but the statistical error associated with this method of measurement limits the strength of this correlation.

  18. Cost effectiveness of a pharmacist-led information technology intervention for reducing rates of clinically important errors in medicines management in general practices (PINCER).

    PubMed

    Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J

    2014-06-01

    We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59% probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
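    The ICER reported above is simply the incremental cost divided by the incremental QALYs. A minimal sketch with hypothetical figures; as in the PINCER result, a negative ICER arising from lower cost and higher QALYs indicates that the new intervention dominates.

    ```python
    def icer(cost_new, cost_old, qaly_new, qaly_old):
        """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    # hypothetical figures: the new intervention costs 500 GBP less per practice
    # and yields 0.1 more QALYs, so it dominates (negative ICER)
    ratio = icer(9500.0, 10000.0, 5.1, 5.0)
    ```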

  19. Increasing point-count duration increases standard error

    USGS Publications Warehouse

    Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.

    1998-01-01

    We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
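    The suggestion that a square-root transformation may stabilize the variance can be checked on simulated Poisson-like counts, a common model for point-count data. The means below are invented; this is a sketch, not a reanalysis of the survey data.

    ```python
    import math
    import random
    import statistics

    random.seed(1)

    def poisson(lam):
        """Knuth's Poisson sampler; adequate for the small means used here."""
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= limit:
                return k
            k += 1

    # raw counts have variance roughly equal to their mean, while sqrt(count)
    # has an approximately constant variance (around 0.25-0.3)
    results = {}
    for mean in (4.0, 16.0):
        counts = [poisson(mean) for _ in range(5000)]
        results[mean] = (
            statistics.variance(counts),
            statistics.variance([math.sqrt(c) for c in counts]),
        )
    ```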

  20. Estimates of monthly streamflow characteristics at selected sites in the upper Missouri River basin, Montana, base period water years 1937-86

    USGS Publications Warehouse

    Parrett, Charles; Johnson, D.R.; Hull, J.A.

    1989-01-01

    Estimates of streamflow characteristics (monthly mean flow that is exceeded 90, 80, 50, and 20 percent of the time for all years of record and mean monthly flow) were made and are presented in tabular form for 312 sites in the Missouri River basin in Montana. Short-term gaged records were extended to the base period of water years 1937-86, and were used to estimate monthly streamflow characteristics at 100 sites. Data from 47 gaged sites were used in regression analysis relating the streamflow characteristics to basin characteristics and to active-channel width. The basin-characteristics equations, with standard errors of 35% to 97%, were used to estimate streamflow characteristics at 179 ungaged sites. The channel-width equations, with standard errors of 36% to 103%, were used to estimate characteristics at 138 ungaged sites. Streamflow measurements were correlated with concurrent streamflows at nearby gaged sites to estimate streamflow characteristics at 139 ungaged sites. In a test using 20 pairs of gages, the standard errors ranged from 31% to 111%. At 139 ungaged sites, the estimates from two or more of the methods were weighted and combined in accordance with the variance of individual methods. When estimates from three methods were combined, the standard errors ranged from 24% to 63%. A drainage-area-ratio adjustment method was used to estimate monthly streamflow characteristics at seven ungaged sites. The reliability of the drainage-area-ratio adjustment method was estimated to be about equal to that of the basin-characteristics method. The estimates were checked for reliability. Estimates of monthly streamflow characteristics from gaged records were considered to be most reliable, and estimates at sites with actual flow record from 1937-86 were considered to be completely reliable (zero error). Weighted-average estimates were considered to be the most reliable estimates made at ungaged sites. (USGS)

  1. In Vivo Precision of Digital Topological Skeletonization Based Individual Trabecula Segmentation (ITS) Analysis of Trabecular Microstructure at the Distal Radius and Tibia by HR-pQCT.

    PubMed

    Zhou, Bin; Zhang, Zhendong; Wang, Ji; Yu, Y Eric; Liu, Xiaowei Sherry; Nishiyama, Kyle K; Rubin, Mishaela R; Shane, Elizabeth; Bilezikian, John P; Guo, X Edward

    2016-06-01

    Trabecular plate and rod microstructure plays a dominant role in the apparent mechanical properties of trabecular bone. With high-resolution computed tomography (CT) images, digital topological analysis (DTA) including skeletonization and topological classification was applied to transform the trabecular three-dimensional (3D) network into surface and curve skeletons. Using the DTA-based topological analysis and a new reconstruction/recovery scheme, individual trabecula segmentation (ITS) was developed to segment individual trabecular plates and rods and quantify the trabecular plate- and rod-related morphological parameters. High-resolution peripheral quantitative computed tomography (HR-pQCT) is an emerging in vivo imaging technique to visualize 3D bone microstructure. Based on HR-pQCT images, ITS was applied to various HR-pQCT datasets to examine trabecular plate- and rod-related microstructure and has demonstrated great potential in cross-sectional and longitudinal clinical applications. However, the reproducibility of ITS has not been fully determined. The aim of the current study was to quantify the precision errors of ITS plate-rod microstructural parameters. In addition, we applied three frequently used contour techniques to separate trabecular and cortical bone and evaluated their effect on ITS measurements. Overall, good reproducibility was found for the standard HR-pQCT parameters, with precision errors between 0.2% and 2.0% for volumetric BMD and bone size and between 4.9% and 6.7% for trabecular bone microstructure at the radius and tibia. High reproducibility was also achieved for ITS measurements using all three contour techniques.
For example, using automatic contour technology, low precision errors were found for plate and rod trabecular number (pTb.N, rTb.N, 0.9% and 3.6%), plate and rod trabecular thickness (pTb.Th, rTb.Th, 0.6% and 1.7%), plate trabecular surface (pTb.S, 3.4%), rod trabecular length (rTb.ℓ, 0.8%), and plate-plate junction density (P-P Junc.D, 2.3%) at the tibia. The precision errors at the radius were similar to those at the tibia. In addition, precision errors were affected by the contour technique. At the tibia, precision error by the manual contour method was significantly different from automatic and standard contour methods for pTb.N, rTb.N and rTb.Th. Precision error using the manual contour method was also significantly different from the standard contour method for rod trabecular number (rTb.N), rod trabecular thickness (rTb.Th), rod-rod and plate-rod junction densities (R-R Junc.D and P-R Junc.D) at the tibia. At the radius, the precision error was similar between the three different contour methods. Image quality was also found to significantly affect the ITS reproducibility. We concluded that ITS parameters are highly reproducible, giving assurance that future cross-sectional and longitudinal clinical HR-pQCT studies are feasible in the context of limited sample sizes.

  2. A Swiss cheese error detection method for real-time EPID-based quality assurance and error prevention.

    PubMed

    Passarge, Michelle; Fix, Michael K; Manser, Peter; Stampanoni, Marco F M; Siebers, Jeffrey V

    2017-04-01

    To develop a robust and efficient process that detects relevant dose errors (≥5%) in external beam radiation therapy and directly indicates the origin of the error. The process is illustrated in the context of electronic portal imaging device (EPID)-based angle-resolved volumetric-modulated arc therapy (VMAT) quality assurance (QA), particularly as would be implemented in a real-time monitoring program. A Swiss cheese error detection (SCED) method was created as a paradigm for a cine EPID-based during-treatment QA. For VMAT, the method compares a treatment plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The process utilizes a sequence of independent, consecutively executed error detection tests: an aperture check that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; a global image alignment check to examine whether rotation, scaling, and translation are within tolerances; and a pixel intensity check containing the standard gamma evaluation (3%, 3 mm) and pixel intensity deviation checks including and excluding high-dose-gradient regions. Tolerances for each check were determined. To test the SCED method, 12 different types of errors were selected to modify the original plan. A series of angle-resolved predicted EPID images was artificially generated for each test case, resulting in a sequence of precalculated frames for each modified treatment plan. The SCED method was applied multiple times for each test case to assess the ability to detect introduced plan variations. To compare the performance of the SCED process with that of a standard gamma analysis, both error detection methods were applied to the generated test cases with realistic noise variations. Averaged over ten test runs, 95.1% of all plan variations that resulted in relevant patient dose errors were detected within 2° and 100% within 14° (<4% of patient dose delivery).
Including cases that led to slightly modified but clinically equivalent plans, 89.1% were detected by the SCED method within 2°. Based on the type of check that detected the error, determination of error sources was achieved. With noise ranging from no random noise to four times the established noise value, the averaged relevant dose error detection rate of the SCED method was between 94.0% and 95.8% and that of gamma between 82.8% and 89.8%. An EPID-frame-based error detection process for VMAT deliveries was successfully designed and tested via simulations. The SCED method was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of relevant dose errors. Compared to a typical (3%, 3 mm) gamma analysis, the SCED method produced a higher detection rate for all introduced dose errors, identified errors in an earlier stage, displayed a higher robustness to noise variations, and indicated the error source. © 2017 American Association of Physicists in Medicine.
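
    The (3%, 3 mm) gamma evaluation that the SCED method embeds as one of its checks can be sketched in one dimension as follows. This is a minimal global-gamma illustration, not the authors' implementation; the dose profile is invented, and a full clinical gamma analysis operates on 2D/3D dose grids with interpolation.

```python
import math

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Fraction of reference points passing a 1D global gamma test.

    dose_tol is a fraction of the reference maximum (global normalization);
    dist_tol_mm is the distance-to-agreement criterion.
    """
    d_max = max(ref)
    n_pass = 0
    for i, d_ref in enumerate(ref):
        best = math.inf
        for j, d_meas in enumerate(meas):
            dist_term = ((i - j) * spacing_mm / dist_tol_mm) ** 2
            dose_term = ((d_meas - d_ref) / (dose_tol * d_max)) ** 2
            best = min(best, dist_term + dose_term)
        if best <= 1.0:  # gamma <= 1 means the point passes
            n_pass += 1
    return n_pass / len(ref)

profile = [1.0, 2.0, 3.0, 2.0, 1.0]  # invented dose profile
rate_identical = gamma_pass_rate(profile, profile, spacing_mm=1.0)  # 1.0
```

    Identical profiles pass everywhere; a 10% dose error at an isolated point (well beyond the 3% criterion and too far in dose from neighbors for the 3 mm search to rescue it) fails that point.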

  3. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, with workmanship error accounting for 29%. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to standardized procedures, training, and certification to reduce human error and failure rates. This study made several recommendations to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of space transportation passengers.

  4. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
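
    The core idea of iteratively reweighted least squares can be illustrated in its simplest form, a robust location estimate with Huber weights, in which each case is down-weighted according to its distance from the current fit. This is a toy sketch of the weighting principle, not the authors' mean-and-covariance-structure estimator; the data are invented.

```python
def huber_mean(xs, c=1.345, iters=50):
    """Robust location via iteratively reweighted least squares:
    re-estimate the mean with Huber weights and a MAD-based scale."""
    mu = sum(xs) / len(xs)
    for _ in range(iters):
        devs = sorted(abs(x - mu) for x in xs)
        n = len(devs)
        mad = devs[n // 2] if n % 2 else 0.5 * (devs[n // 2 - 1] + devs[n // 2])
        s = mad / 0.6745 if mad > 0 else 1.0  # consistency-corrected scale
        # Huber weights: full weight inside c*s, shrinking weight outside.
        w = [1.0 if abs(x - mu) <= c * s else c * s / abs(x - mu) for x in xs]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu

data = [1.0, 2.0, 3.0, 4.0, 100.0]  # one gross outlier
robust = huber_mean(data)
```

    The ordinary mean of these data is 22.0; the IRLS estimate stays near the bulk of the sample because the outlier receives a small weight.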

  5. Physics Notes

    ERIC Educational Resources Information Center

    School Science Review, 1972

    1972-01-01

    Short articles describe the production, photography, and analysis of diffraction patterns using a small laser, a technique for measuring electrical resistance without a standard resistor, a demonstration of a thermocouple effect in a galvanometer with a built-in light source, and a common error in deriving the expression for centripetal force. (AL)

  6. [Phenylephrine dosing error in Intensive Care Unit. Case of the trimester].

    PubMed

    2013-01-01

    A real clinical case reported to SENSAR is presented. A patient admitted to the surgical intensive care unit following a lung resection suffered arterial hypotension. The nurse was asked to give the patient 1 mL of phenylephrine. A few seconds afterwards, the patient experienced a hypertensive crisis, which resolved spontaneously without damage. Thereafter, the nurse was interviewed and a dosing error was identified: she had mistakenly given the patient 1 mg of phenylephrine (1 mL) instead of 100 mcg (1 mL of the standard dilution, 1 mg in 10 mL). The incident analysis revealed latent factors (event triggers) due to the lack of protocols and standard operating procedures, communication errors among team members (physician-nurse), suboptimal training, and an underdeveloped safety culture. In order to prevent similar incidents in the future, two actions were implemented in the surgical intensive care unit: a protocol for bolus and short-lived infusions (<30 min) was developed, and communication techniques were adopted to close the communication gap. The protocol was designed by physicians and nurses to standardize the administration of drugs with high potential for errors. To close the communication gap, a "closed loop" technique was proposed, in which the listener repeats back what was said and confirms understanding. Labeling syringes with the drug dilution was also recommended. Copyright © 2013 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España. All rights reserved.

  7. Application of human reliability analysis to nursing errors in hospitals.

    PubMed

    Inoue, Kayoko; Koizumi, Akio

    2004-12-01

    Adverse events in hospitals, such as in surgery, anesthesia, radiology, intensive care, internal medicine, and pharmacy, are of worldwide concern, and it is important, therefore, to learn from such incidents. There are currently no appropriate tools based on state-of-the-art models available for the analysis of large bodies of medical incident reports. In this study, a new model was developed to facilitate medical error analysis in combination with quantitative risk assessment. This model enables detection of the organizational factors that underlie medical errors, and the expedition of decision making in terms of necessary action. Furthermore, it defines medical tasks as module practices and uses a unique coding system to describe incidents. This coding system has seven vectors for error classification: patient category, working shift, module practice, linkage chain (error type, direct threat, and indirect threat), medication, severity, and potential hazard. This mathematical formulation permitted us to derive two parameters: error rates for module practices and weights for the aforementioned seven elements. The error rate of each module practice was calculated by dividing the annual number of incident reports for each module practice by the annual number of the corresponding module practices. The weight of a given element was calculated by summation of the incident report error rates for the element of interest. This model was applied specifically to nursing practices in six hospitals over a year; 5,339 incident reports, with a total of 63,294,144 module practices conducted, were analyzed. Quality assurance (QA) of our model was introduced by checking the records of quantities of practices and the reproducibility of the analysis of medical incident reports. For both items, QA guaranteed the legitimacy of our model. Error rates for all module practices were of the order of 10⁻⁴ in all hospitals. Three major organizational factors were found to underlie medical errors: "violation of rules" with a weight of 826 × 10⁻⁴, "failure of labor management" with a weight of 661 × 10⁻⁴, and "defects in the standardization of nursing practices" with a weight of 495 × 10⁻⁴.
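
    The two parameters the model derives are simple ratios and sums, and can be sketched as follows; the practice names and counts below are invented for illustration, not taken from the study.

```python
def error_rate(n_incident_reports, n_practices):
    """Annual incident reports divided by annual practices performed."""
    return n_incident_reports / n_practices

# Hypothetical annual counts for two module practices:
rates = {
    "medication": error_rate(120, 1_200_000),  # 1.0e-4
    "transfer": error_rate(45, 300_000),       # 1.5e-4
}

# Weight of an organizational factor: the sum of the error rates of the
# module practices implicated in its incident reports (here, both).
weight = rates["medication"] + rates["transfer"]  # 2.5e-4
```

    The invented rates land at the 10⁻⁴ order of magnitude reported in the abstract, which is what makes the weights comparable across factors.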

  8. Improvements in Electron-Probe Microanalysis: Applications to Terrestrial, Extraterrestrial, and Space-Grown Materials

    NASA Technical Reports Server (NTRS)

    Carpenter, Paul; Armstrong, John

    2004-01-01

    Improvement in the accuracy of electron-probe microanalysis (EPMA) has been accomplished by critical assessment of standards, correction algorithms, and mass absorption coefficient data sets. Experimental measurement of relative x-ray intensities at multiple accelerating potentials highlights errors in the absorption coefficients. The factor method has been applied to the evaluation of systematic errors in the analysis of semiconductors and silicate minerals. Accurate EPMA of Martian soil simulant is necessary in studies that build on Martian rover data in anticipation of missions to Mars.

  9. Audio-frequency analysis of inductive voltage dividers based on structural models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avramov, S.; Oldham, N.M.; Koffman, A.D.

    1994-12-31

    A Binary Inductive Voltage Divider (BIVD) is compared with a Decade Inductive Voltage Divider (DIVD) in an automatic IVD bridge. New detection and injection circuitry was designed and used to evaluate the IVDs with either the input or output tied to ground potential. In the audio frequency range the DIVD and BIVD error patterns are characterized for both in-phase and quadrature components. Differences between results obtained using a new error decomposition scheme based on structural modeling, and measurements using conventional IVD standards are reported.

  10. A Simulation Analysis of Errors in the Measurement of Standard Electrochemical Rate Constants from Phase-Selective Impedance Data.

    DTIC Science & Technology

    1987-09-30

    …of the AC current, including the time dependence at a growing DME, at a given fixed potential either in the presence or the absence of an… the relative error in k_ob(app) is relatively small for k_s(true) ≤ 0.5 cm s⁻¹, and increases rapidly for larger rate constants as k_ob reaches the…

  11. Extraction of the proton radius from electron-proton scattering data

    DOE PAGES

    Lee, Gabriel; Arrington, John R.; Hill, Richard J.

    2015-07-27

    We perform a new analysis of electron-proton scattering data to determine the proton electric and magnetic radii, enforcing model-independent constraints from form factor analyticity. A wide-ranging study of possible systematic effects is performed. An improved analysis is developed that rebins data taken at identical kinematic settings and avoids a scaling assumption of systematic errors with statistical errors. Employing standard models for radiative corrections, our improved analysis of the 2010 Mainz A1 Collaboration data yields a proton electric radius r E = 0.895(20) fm and magnetic radius r M = 0.776(38) fm. A similar analysis applied to world data (excluding Mainz data) implies r E = 0.916(24) fm and r M = 0.914(35) fm. The Mainz and world values of the charge radius are consistent, and a simple combination yields a value r E = 0.904(15) fm that is 4σ larger than the CREMA Collaboration muonic hydrogen determination. The Mainz and world values of the magnetic radius differ by 2.7σ, and a simple average yields r M = 0.851(26) fm. Finally, the circumstances under which published muonic hydrogen and electron scattering data could be reconciled are discussed, including a possible deficiency in the standard radiative correction model which requires further analysis.
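
    The quoted 4σ discrepancy is simple arithmetic: the difference between the two radii divided by their combined uncertainty. A sketch, in which the muonic-hydrogen value 0.84087(39) fm is an assumption quoted from the published CREMA result rather than from this abstract:

```python
import math

def tension_sigma(v1, s1, v2, s2):
    """Discrepancy between two independent measurements, in combined sigmas."""
    return abs(v1 - v2) / math.sqrt(s1 ** 2 + s2 ** 2)

# Electron-scattering combination 0.904(15) fm vs. the assumed muonic
# value 0.84087(39) fm; the tiny muonic error barely affects the result.
t = tension_sigma(0.904, 0.015, 0.84087, 0.00039)
```

    The result is about 4.2σ, consistent with the "4σ larger" statement in the abstract; the electron-scattering uncertainty dominates the denominator.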

  12. Preanalytical errors in medical laboratories: a review of the available methodologies of data collection and analysis.

    PubMed

    West, Jamie; Atherton, Jennifer; Costelloe, Seán J; Pourmahram, Ghazaleh; Stretton, Adam; Cornes, Michael

    2017-01-01

    Preanalytical errors have previously been shown to contribute a significant proportion of errors in laboratory processes and contribute to a number of patient safety risks. Accreditation against ISO 15189:2012 requires that laboratory Quality Management Systems consider the impact of preanalytical processes in areas such as the identification and control of non-conformances, continual improvement, internal audit and quality indicators. Previous studies have shown that there is a wide variation in the definition, repertoire and collection methods for preanalytical quality indicators. The International Federation of Clinical Chemistry Working Group on Laboratory Errors and Patient Safety has defined a number of quality indicators for the preanalytical stage, and the adoption of harmonized definitions will support interlaboratory comparisons and continual improvement. There are a variety of data collection methods, including audit, manual recording processes, incident reporting mechanisms and laboratory information systems. Quality management processes such as benchmarking, statistical process control, Pareto analysis and failure mode and effect analysis can be used to review data and should be incorporated into clinical governance mechanisms. In this paper, The Association for Clinical Biochemistry and Laboratory Medicine PreAnalytical Specialist Interest Group review the various data collection methods available. Our recommendation is the use of laboratory information management systems as the recording mechanism for preanalytical errors, as this provides the easiest and most standardized mechanism of data capture.
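
    Of the quality management processes the review names, Pareto analysis is the most mechanical: rank error categories by frequency and keep the "vital few" that account for most of the errors. A minimal sketch, with invented category names and counts:

```python
def pareto_vital_few(counts, threshold=0.8):
    """Smallest set of categories (by descending count) covering at
    least `threshold` of all recorded errors."""
    total = sum(counts.values())
    selected, cumulative = [], 0
    for name, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        cumulative += n
        if cumulative / total >= threshold:
            break
    return selected

# Invented preanalytical error tallies:
errors = {
    "hemolysed sample": 420,
    "mislabeled sample": 310,
    "insufficient volume": 150,
    "wrong tube type": 80,
    "clotted sample": 40,
}
vital_few = pareto_vital_few(errors)
```

    With these invented counts, three of the five categories account for 88% of the errors, which is exactly the kind of prioritization Pareto analysis is meant to surface.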

  13. Failure mode and effective analysis ameliorate awareness of medical errors: a 4-year prospective observational study in critically ill children.

    PubMed

    Daverio, Marco; Fino, Giuliana; Luca, Brugnaro; Zaggia, Cristina; Pettenazzo, Andrea; Parpaiola, Antonella; Lago, Paola; Amigoni, Angela

    2015-12-01

    Errors are estimated to occur with an incidence of 3.7-16.6% in hospitalized patients. The application of systems for the detection of adverse events is becoming a widespread reality in healthcare. Incident reporting (IR) and failure mode and effective analysis (FMEA) are strategies widely used to detect errors, but no studies have combined them in the setting of a pediatric intensive care unit (PICU). The aim of our study was to describe the trend of IR in a PICU and evaluate the effect of FMEA application on the number and severity of the errors detected. With this prospective observational study, we evaluated the frequency of IRs documented on standard IR forms completed from January 2009 to December 2012 in the PICU of the Woman's and Child's Health Department of Padova. On the basis of their severity, errors were classified as: without outcome (55%), with minor outcome (16%), with moderate outcome (10%), and with major outcome (3%); 16% of reported incidents were 'near misses'. We compared the data before and after the introduction of FMEA. Sixty-nine errors were registered, 59 (86%) concerning drug therapy (83% during prescription). Compared to 2009-2010, in 2011-2012 we noted an increase in reported errors (43 vs 26) with a reduction in their severity (21% vs 8% 'near misses' and 65% vs 38% errors with no outcome). With the introduction of FMEA, we obtained increased awareness in error reporting. Application of these systems will improve the quality of healthcare services. © 2015 John Wiley & Sons Ltd.

  14. Effect of Contact Damage on the Strength of Ceramic Materials.

    DTIC Science & Technology

    1982-10-01

    …variables that are important to erosion, and a multivariate linear regression analysis is used to fit the data to the dimensional analysis. The… of Equations 7 and 8 by a multivariable regression analysis (room-temperature data)…

  15. Statistical power for detecting trends with applications to seabird monitoring

    USGS Publications Warehouse

    Hatch, Shyla A.

    2003-01-01

    Power analysis is helpful in defining goals for ecological monitoring and evaluating the performance of ongoing efforts. I examined detection standards proposed for population monitoring of seabirds using two programs (MONITOR and TRENDS) specially designed for power analysis of trend data. Neither program models within- and among-years components of variance explicitly and independently, thus an error term that incorporates both components is an essential input. Residual variation in seabird counts consisted of day-to-day variation within years and unexplained variation among years in approximately equal parts. The appropriate measure of error for power analysis is the standard error of estimation (S.E.est) from a regression of annual means against year. Replicate counts within years are helpful in minimizing S.E.est but should not be treated as independent samples for estimating power to detect trends. Other issues include a choice of assumptions about variance structure and selection of an exponential or linear model of population change. Seabird count data are characterized by strong correlations between S.D. and mean, thus a constant CV model is appropriate for power calculations. Time series were fit about equally well with exponential or linear models, but log transformation ensures equal variances over time, a basic assumption of regression analysis. Using sample data from seabird monitoring in Alaska, I computed the number of years required (with annual censusing) to detect trends of -1.4% per year (50% decline in 50 years) and -2.7% per year (50% decline in 25 years). At α=0.05 and a desired power of 0.9, estimated study intervals ranged from 11 to 69 years depending on species, trend, software, and study design. Power to detect a negative trend of 6.7% per year (50% decline in 10 years) is suggested as an alternative standard for seabird monitoring that achieves a reasonable match between statistical and biological significance.
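
    The error term recommended here, the standard error of estimation from a regression of (log-transformed) annual means against year, can be sketched as follows; the counts are invented, constructed as a noise-free 2.7%-per-year decline so the residuals vanish.

```python
import math

def se_of_estimation(years, values):
    """Standard error of estimation from an ordinary least-squares
    regression of (log-transformed) annual means against year."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(values) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, values)) / sxx
    intercept = ybar - slope * xbar
    residuals = [y - (intercept + slope * x) for x, y in zip(years, values)]
    # Residual standard error with n - 2 degrees of freedom.
    return math.sqrt(sum(r * r for r in residuals) / (n - 2))

# Invented annual mean counts declining 2.7% per year with no noise;
# the log transform makes the exponential trend exactly linear.
years = list(range(2000, 2010))
log_counts = [math.log(1000.0 * 0.973 ** (y - 2000)) for y in years]
se_est = se_of_estimation(years, log_counts)
```

    With real monitoring data the residuals would carry both within-year and among-year variation, which is exactly why the abstract argues S.E.est is the appropriate error input for power calculations.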

  16. Virtual occlusal definition for orthognathic surgery.

    PubMed

    Liu, X J; Li, Q Q; Zhang, Z; Li, T T; Xie, Z; Zhang, Y

    2016-03-01

    Computer-assisted surgical simulation is being used increasingly in orthognathic surgery. However, occlusal definition is still undertaken using model surgery with subsequent digitization via surface scanning or cone beam computed tomography. A software tool has been developed and a workflow set up in order to achieve a virtual occlusal definition. The results of a validation study carried out on 60 models of normal occlusion are presented. Inter- and intra-user correlation tests were used to investigate the reproducibility of the manual setting point procedure. The errors between the virtually set positions (test) and the digitized manually set positions (gold standard) were compared. The consistency of virtual set positions performed by three individual users was investigated by a one-way analysis of variance test. Inter- and intra-observer correlation coefficients for manual setting points were all greater than 0.95. Overall, the median error between the test and gold standard positions was 1.06 mm. Errors did not differ among teeth (F=0.371, P>0.05). The errors were not significantly different from 1 mm (P>0.05). There were no significant differences in the errors made by the three independent users (P>0.05). In conclusion, this workflow for virtual occlusal definition was found to be reliable and accurate. Copyright © 2015 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  17. Multiple imputation to account for measurement error in marginal structural models

    PubMed Central

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  18. Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Bryant, Larry

    2014-01-01

    Command File Errors (CFEs) are managed using standard risk management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors for the purpose of reducing CFE rates. More recently, probabilistic modeling techniques have been used for more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the Mission System, which is composed of the hardware, software, and procedures used for the generation, verification and validation, and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root causes and sensitivities of the various components of the mission system and to develop recommendations and techniques for addressing them. The customization of these multi-mission models to a sample interplanetary spacecraft is done for this purpose.

  19. Biases and Standard Errors of Standardized Regression Coefficients

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Chan, Wai

    2011-01-01

    The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular text books are consistent only when the population value of the regression coefficient is zero. The sample…

  20. Multiple window spatial registration error of a gamma camera: 133Ba point source as a replacement of the NEMA procedure.

    PubMed

    Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M

    2008-12-09

    The accuracy of multiple window spatial registration characterises the performance of a gamma camera for dual isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source with gamma energies close to those of 67Ga and a single-bore lead collimator were used to measure the multiple window spatial registration error. Calculation of the positions of the point source in the images used the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a liquid 67Ga source with collimation. Of the source-collimator configurations under investigation, an optimum collimator geometry, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore, was selected. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
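
    The Bland-Altman analysis used to establish interchangeability reduces to a mean difference (bias) and 95% limits of agreement. A minimal sketch, with invented registration errors in millimetres for the two methods:

```python
import math

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement for two methods."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented multiple window spatial registration errors (mm):
ba133 = [0.42, 0.55, 0.31, 0.48, 0.60]
ga67 = [0.40, 0.57, 0.35, 0.45, 0.62]
bias, loa_low, loa_high = bland_altman(ba133, ga67)
```

    Interchangeability is argued when the bias is near zero and the limits of agreement are narrow relative to clinically meaningful differences.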

  1. Reproducibility of 3D kinematics and surface electromyography measurements of mastication.

    PubMed

    Remijn, Lianne; Groen, Brenda E; Speyer, Renée; van Limbeek, Jacques; Nijhuis-van der Sanden, Maria W G

    2016-03-01

    The aim of this study was to determine the measurement reproducibility of a procedure evaluating the mastication process and to estimate the smallest detectable differences of 3D kinematic and surface electromyography (sEMG) variables. Kinematics of mandible movements and sEMG activity of the masticatory muscles were obtained over two sessions with four conditions: two food textures (biscuit and bread) of two sizes (small and large). Twelve healthy adults (mean age 29.1 years) completed the study. The second to fifth chewing cycles of five bites were used for the analyses. The reproducibility per outcome variable was calculated with an intraclass correlation coefficient (ICC), and a Bland-Altman analysis was applied to determine the standard error of measurement, relative error of measurement, and smallest detectable differences of all variables. ICCs ranged from 0.71 to 0.98 for all outcome variables. The outcome variables consisted of four bite and fourteen chewing cycle variables. The relative standard error of measurement of the bite variables was up to 17.3% for 'time-to-swallow', 'time-to-transport' and 'number of chewing cycles', but ranged from 31.5% to 57.0% for 'change of chewing side'. The relative standard error of measurement ranged from 4.1% to 24.7% for chewing cycle variables and was smaller for kinematic variables than for sEMG variables. In general, 3D kinematics and sEMG are reproducible techniques for assessing the mastication process. The duration of the chewing cycle and the frequency of chewing were the most reproducible measurements; change of chewing side was not reproducible. The published measurement errors and smallest detectable differences will aid the interpretation of the results of future clinical studies using the same study variables. Copyright © 2015 Elsevier Inc. All rights reserved.
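
    The link between the ICC, the standard error of measurement, and the smallest detectable difference follows the standard formulas SEM = SD·√(1 − ICC) and SDD = 1.96·√2·SEM. A sketch with invented numbers (the SD and ICC below are not from the study):

```python
import math

def sem_and_sdd(sd, icc):
    """Standard error of measurement, SEM = SD * sqrt(1 - ICC), and the
    smallest detectable difference, SDD = 1.96 * sqrt(2) * SEM."""
    sem = sd * math.sqrt(1.0 - icc)
    sdd = 1.96 * math.sqrt(2.0) * sem
    return sem, sdd

# Invented values: between-subject SD of 0.12 s and ICC of 0.95 for
# chewing-cycle duration.
sem, sdd = sem_and_sdd(0.12, 0.95)
```

    The √2 factor reflects that a detectable change is judged from the difference of two measurements, each carrying its own measurement error.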

  2. Command Process Modeling & Risk Analysis

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila

    2011-01-01

Commanding errors may arise from a variety of root causes, and it is important to understand the relative significance of each of these causes in order to make institutional investment decisions. One of these causes is the lack of standardized processes and procedures for command and control. We mitigate this problem by building periodic tables and models corresponding to key functions within the command and control process. These models include simulation analysis and probabilistic risk assessment models.

  3. Hospital-based transfusion error tracking from 2005 to 2010: identifying the key errors threatening patient transfusion safety.

    PubMed

    Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie

    2014-01-01

    This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).

  4. SU-E-T-789: Validation of 3DVH Accuracy On Quantifying Delivery Errors Based On Clinical Relevant DVH Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, T; Kumaraswamy, L

Purpose: Detection of treatment delivery errors is important in radiation therapy, but accurate quantification of those errors is equally important. This study evaluates the 3DVH software's ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors of up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted), and (3) gantry angle shift error (3 degree uniform shift). 2D and 3D gamma evaluations were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases, evaluating, with the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation appeared to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, indicating that 3DVH is fairly accurate in quantifying delivery errors. Interestingly, even though the gamma pass rates for the error plans were high, the DVHs showed significant differences between the original and error-induced plans in both Eclipse and 3DVH analyses. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, where conventional gamma-based pre-treatment QA might not necessarily detect the error.

  5. An emerging network storage management standard: Media error monitoring and reporting information (MEMRI) - to determine optical tape data integrity

    NASA Technical Reports Server (NTRS)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.

  6. MR Imaging Radiomics Signatures for Predicting the Risk of Breast Cancer Recurrence as Given by Research Versions of MammaPrint, Oncotype DX, and PAM50 Gene Assays.

    PubMed

    Li, Hui; Zhu, Yitan; Burnside, Elizabeth S; Drukker, Karen; Hoadley, Katherine A; Fan, Cheng; Conzen, Suzanne D; Whitman, Gary J; Sutton, Elizabeth J; Net, Jose M; Ganott, Marie; Huang, Erich; Morris, Elizabeth A; Perou, Charles M; Ji, Yuan; Giger, Maryellen L

    2016-11-01

Purpose To investigate relationships between computer-extracted breast magnetic resonance (MR) imaging phenotypes and multigene assays of MammaPrint, Oncotype DX, and PAM50 to assess the role of radiomics in evaluating the risk of breast cancer recurrence. Materials and Methods Analysis was conducted on an institutional review board-approved retrospective data set of 84 deidentified, multi-institutional breast MR examinations from the National Cancer Institute Cancer Imaging Archive, along with clinical, histopathologic, and genomic data from The Cancer Genome Atlas. The data set of biopsy-proven invasive breast cancers included 74 (88%) ductal, eight (10%) lobular, and two (2%) mixed cancers. Of these, 73 (87%) were estrogen receptor positive, 67 (80%) were progesterone receptor positive, and 19 (23%) were human epidermal growth factor receptor 2 positive. For each case, computerized radiomics of the MR images yielded computer-extracted tumor phenotypes of size, shape, margin morphology, enhancement texture, and kinetic assessment. Regression and receiver operating characteristic analysis were conducted to assess the predictive ability of the MR radiomics features relative to the multigene assay classifications. Results Multiple linear regression analyses demonstrated significant associations (R² = 0.25-0.32, r = 0.5-0.56, P < .0001) between radiomics signatures and multigene assay recurrence scores. Important radiomics features included tumor size and enhancement texture, which indicated tumor heterogeneity.
Use of radiomics in the task of distinguishing between good and poor prognosis yielded area under the receiver operating characteristic curve values of 0.88 (standard error, 0.05), 0.76 (standard error, 0.06), 0.68 (standard error, 0.08), and 0.55 (standard error, 0.09) for MammaPrint, Oncotype DX, PAM50 risk of relapse based on subtype, and PAM50 risk of relapse based on subtype and proliferation, respectively, with all but the latter showing statistical difference from chance. Conclusion Quantitative breast MR imaging radiomics shows promise for image-based phenotyping in assessing the risk of breast cancer recurrence. © RSNA, 2016 Online supplemental material is available for this article.
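The "AUC (standard error)" pairs reported above are conventionally computed with the Hanley-McNeil formula for the standard error of the area under an ROC curve. A minimal sketch on synthetic scores (not the study's radiomics data):

```python
import numpy as np

# Sketch: AUC with its Hanley-McNeil standard error. Synthetic scores.
rng = np.random.default_rng(2)
labels = np.array([0] * 40 + [1] * 40)
scores = np.concatenate([rng.normal(0, 1, 40), rng.normal(1, 1, 40)])

pos, neg = scores[labels == 1], scores[labels == 0]
# AUC as the probability that a positive outranks a negative (ties count half)
grid = pos[:, None] - neg[None, :]
auc = np.mean(grid > 0) + 0.5 * np.mean(grid == 0)

n1, n2 = len(pos), len(neg)
q1, q2 = auc / (2 - auc), 2 * auc**2 / (1 + auc)
se = np.sqrt((auc * (1 - auc) + (n1 - 1) * (q1 - auc**2)
              + (n2 - 1) * (q2 - auc**2)) / (n1 * n2))
```

An AUC more than about 1.96 standard errors above 0.5 differs from chance at the 5% level, which is the kind of check behind the abstract's statement that all but one classifier showed a statistical difference from chance.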

  7. Utilization of aerial laser scanning data in investigations of modern fortifications complexes in Poland. (Polish Title: Wykorzystanie danych lotniczego skaningu laserowego w metodyce badawczej zespołów fortyfikacji nowszej w Polsce)

    NASA Astrophysics Data System (ADS)

    Zawieska, D.; Ostrowski, W.; Antoszewski, M.

    2013-12-01

Due to its turbulent history, Poland holds exceptionally rich and unique resources of military architecture (modern fortification complexes). The paper presents an analysis of the use of aerial laser scanning data for the identification and visualization of forts in Poland, based on the point cloud from the ISOK Project. Two types of areas are distinguished in this project, covered by products of different standards: standard II, laser scanning of increased density (12 points per sq. m), and standard I, laser scanning of basic density (4 points per sq. m). The quality of geospatial data classification was investigated with respect to further topographic analysis of fortifications. These investigations were performed for four test sites, two per standard. Objects were selected such that the fortifications were sufficiently well preserved and that, for each standard, at least one site was located in forest and one in an open area. The preliminary verification of classification correctness was performed with the ArcGIS 10.1 software package, based on a shaded Digital Elevation Model (DEM) and Digital Fortification Model (DFM), an orthophotomap, and the analysis of cross-sections of the point cloud. Changes to the point cloud classification were introduced with the TerraSolid software package. Based on the performed analysis, two groups of point cloud classification errors were detected: in the first group, fragments of fortification objects were misclassified; in the second, entire elements of fortifications were misclassified or remained unclassified.
The first error type, which occurs in the majority of cases, results in location errors of 2-4 meters and in elevation variations of the affected DFM fragments of up to 14 m. At present, the fortifications are partially or entirely covered by forest or invasive vegetation. The influence of land cover and terrain slope on the quality of the DEM obtained from lidar data should therefore be considered when evaluating the potential of ISOK data for topographic investigation of fortifications. Studies elsewhere have shown that dense, 70-year-old forest in which no clearance is performed may cause a twofold decrease in the accuracy of the resulting DTM compared to open areas. In summary, the experimental work demonstrated the high usefulness of ISOK laser scanning data for identifying and visualizing forms of fortification. In contrast to conventional information acquisition methods (field inventory combined with historical documents), laser scanning provides a new generation of geospatial data and creates the possibility of developing new technology for the protection and inventory of military architectural objects in Poland.

  8. Text Classification for Assisting Moderators in Online Health Communities

    PubMed Central

    Huh, Jina; Yetisgen-Yildiz, Meliha; Pratt, Wanda

    2013-01-01

Objectives Patients increasingly visit online health communities to get help on managing health. The large scale of these online communities makes it impossible for the moderators to engage in all conversations; yet, some conversations need their expertise. Our work explores low-cost text classification methods for this new domain of determining whether a thread in an online health forum needs moderators’ help. Methods We employed a binary classifier on WebMD’s online diabetes community data. To train the classifier, we considered three feature types: (1) word unigrams, (2) sentiment analysis features, and (3) thread length. We applied feature selection methods based on χ2 statistics and undersampling to account for unbalanced data. We then performed a qualitative error analysis to investigate the appropriateness of the gold standard. Results Using sentiment analysis features, feature selection methods, and balanced training data increased the AUC value up to 0.75 and the F1-score up to 0.54, compared to the baseline of using word unigrams with no feature selection methods on unbalanced data (0.65 AUC and 0.40 F1-score). The error analysis uncovered additional reasons why moderators respond to patients’ posts. Discussion We showed how feature selection methods and balanced training data can improve overall classification performance. We present implications of weighing precision versus recall for assisting moderators of online health communities. Our error analysis uncovered social, legal, and ethical issues around addressing community members’ needs. We also note challenges in producing a gold standard, and discuss potential solutions for addressing these challenges. Conclusion Social media environments provide popular venues in which patients gain health-related information. Our work contributes to understanding scalable solutions for providing moderators’ expertise in these large-scale social media environments. PMID:24025513
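The two preprocessing ingredients named in the abstract, χ2-based feature selection and undersampling of the majority class, can be sketched without any ML library. The tiny binary document-term matrix and labels below are synthetic stand-ins for the forum data:

```python
import numpy as np

# Sketch: chi-square feature scoring plus random undersampling for
# unbalanced binary classification data. Synthetic data.
rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(200, 30))          # binary unigram features
y = (X[:, 0] | X[:, 1]).astype(int)             # label depends on features 0 and 1

def chi2_score(x, y):
    """Chi-square statistic of a binary feature against a binary label."""
    table = np.array([[np.sum((x == i) & (y == j)) for j in (0, 1)] for i in (0, 1)])
    expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / table.sum()
    return np.sum((table - expected) ** 2 / expected)

scores = np.array([chi2_score(X[:, j], y) for j in range(X.shape[1])])
top_features = np.argsort(scores)[::-1][:10]    # keep the 10 highest-scoring terms

# Undersample the majority class so both classes are equally represented
minority = min(np.bincount(y))
idx = np.concatenate([rng.choice(np.where(y == c)[0], minority, replace=False)
                      for c in (0, 1)])
X_bal, y_bal = X[idx][:, top_features], y[idx]
```

Because the synthetic label is constructed from features 0 and 1, those two columns receive the highest χ2 scores, mirroring how informative unigrams would survive feature selection in the real data.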

  9. Assessment of Metronidazole Susceptibility in Helicobacter pylori: Statistical Validation and Error Rate Analysis of Breakpoints Determined by the Disk Diffusion Test

    PubMed Central

    Chaves, Sandra; Gadanho, Mário; Tenreiro, Rogério; Cabrita, José

    1999-01-01

Metronidazole susceptibility of 100 Helicobacter pylori strains was assessed by determining the inhibition zone diameters by disk diffusion test and the MICs by agar dilution and PDM Epsilometer test (E test). Linear regression analysis was performed, allowing the definition of significant linear relations, and revealed correlations of disk diffusion results with both E-test and agar dilution results (r² = 0.88 and 0.81, respectively). No significant differences (P = 0.84) were found between MICs defined by E test and those defined by agar dilution, taken as a standard. Reproducibility comparison between E-test and disk diffusion tests showed that they are equivalent and with good precision. Two interpretative susceptibility schemes (with or without an intermediate class) were compared by an interpretative error rate analysis method. The susceptibility classification scheme that included the intermediate category was retained, and breakpoints were assessed for diffusion assay with 5-μg metronidazole disks. Strains with inhibition zone diameters less than 16 mm were defined as resistant (MIC > 8 μg/ml), those with zone diameters equal to or greater than 16 mm but less than 21 mm were considered intermediate (4 μg/ml < MIC ≤ 8 μg/ml), and those with zone diameters of 21 mm or greater were regarded as susceptible (MIC ≤ 4 μg/ml). Error rate analysis applied to this classification scheme showed occurrence frequencies of 1% for major errors and 7% for minor errors, when the results were compared to those obtained by agar dilution. No very major errors were detected, suggesting that disk diffusion might be a good alternative for determining the metronidazole sensitivity of H. pylori strains. PMID:10203543
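The breakpoints and error categories stated in the abstract translate directly into code. The sketch below encodes the reported zone-diameter breakpoints and the conventional grading of disagreements against the agar dilution reference (very major = false susceptibility, major = false resistance, minor = any disagreement involving the intermediate class):

```python
# Sketch: interpretative scheme and error-rate grading from the abstract.

def category_from_zone(zone_mm):
    """5-ug metronidazole disk breakpoints reported in the abstract."""
    if zone_mm < 16:
        return "resistant"         # MIC > 8 ug/ml
    if zone_mm < 21:
        return "intermediate"      # 4 ug/ml < MIC <= 8 ug/ml
    return "susceptible"           # MIC <= 4 ug/ml

def category_from_mic(mic):
    """Reference classification from the agar dilution MIC."""
    if mic > 8:
        return "resistant"
    if mic > 4:
        return "intermediate"
    return "susceptible"

def error_type(disk_cat, ref_cat):
    """Grade a disagreement between disk diffusion and the reference."""
    if disk_cat == ref_cat:
        return None
    if disk_cat == "susceptible" and ref_cat == "resistant":
        return "very major"        # false susceptibility: most dangerous
    if disk_cat == "resistant" and ref_cat == "susceptible":
        return "major"             # false resistance
    return "minor"                 # disagreement involving intermediate
```

Counting `error_type` results over all strains and dividing by the number tested yields the 1% major and 7% minor error frequencies the study reports.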

  10. Strategies for Teaching Fractions: Using Error Analysis for Intervention and Assessment

    ERIC Educational Resources Information Center

    Spangler, David B.

    2011-01-01

    Many students struggle with fractions and must understand them before learning higher-level math. Veteran educator David B. Spangler provides research-based tools that are aligned with NCTM and Common Core State Standards. He outlines powerful diagnostic methods for analyzing student work and providing timely, specific, and meaningful…

  11. Choice of the Metric for Effect Size in Meta-analysis.

    ERIC Educational Resources Information Center

    McGaw, Barry; Glass, Gene V.

    1980-01-01

    There are difficulties in expressing effect sizes on a common metric when some studies use transformed scales to express group differences, or use factorial designs or covariance adjustments to obtain a reduced error term. A common metric on which effect sizes may be standardized is described. (Author/RL)
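The common metric the abstract refers to amounts to dividing the raw mean difference by a pooled within-group standard deviation, so that studies on different scales become comparable. A minimal sketch with synthetic groups:

```python
import numpy as np

# Sketch: standardized mean difference, d = (mean_E - mean_C) / s_pooled.
# Synthetic experimental and control groups.
rng = np.random.default_rng(4)
experimental = rng.normal(0.5, 1.0, 50)
control = rng.normal(0.0, 1.0, 50)

n_e, n_c = len(experimental), len(control)
s_pooled = np.sqrt(((n_e - 1) * experimental.var(ddof=1)
                    + (n_c - 1) * control.var(ddof=1)) / (n_e + n_c - 2))
effect_size = (experimental.mean() - control.mean()) / s_pooled
```

The difficulty the abstract notes arises because a covariance adjustment or factorial design shrinks the error term, so standardizing by that reduced term inflates the effect size relative to studies standardized by the full within-group standard deviation.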

  12. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  13. Tests of Mediation: Paradoxical Decline in Statistical Power as a Function of Mediator Collinearity

    ERIC Educational Resources Information Center

    Beasley, T. Mark

    2014-01-01

    Increasing the correlation between the independent variable and the mediator ("a" coefficient) increases the effect size ("ab") for mediation analysis; however, increasing a by definition increases collinearity in mediation models. As a result, the standard error of product tests increase. The variance inflation caused by…

  14. Sensitivity of grass and alfalfa reference evapotranspiration to weather station sensor accuracy

    USDA-ARS?s Scientific Manuscript database

    A sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1991 to 2008 from an autom...

  15. Multiple imputation of missing fMRI data in whole brain analysis

    PubMed Central

    Vaden, Kenneth I.; Gebregziabher, Mulugeta; Kuchinsky, Stefanie E.; Eckert, Mark A.

    2012-01-01

    Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact, in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions that are of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets for both real and simulated data. Available case analysis, neighbor replacement, and regression based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. 
Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limits the extent of whole brain coverage and the interpretation of results. PMID:22500925
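The abstract does not spell out how the m imputed analyses are combined; the standard step is Rubin's rules, which give multiple imputation the variance behavior noted above (pooled variance similar to that of fully observed voxels). A sketch with synthetic per-imputation estimates:

```python
import numpy as np

# Sketch: pooling m multiply imputed analyses with Rubin's rules.
# Per-imputation estimates and variances are synthetic.
rng = np.random.default_rng(5)
m = 5
estimates = rng.normal(2.0, 0.1, m)      # effect estimate from each imputed set
variances = rng.uniform(0.04, 0.06, m)   # within-imputation variances

pooled_estimate = estimates.mean()
within = variances.mean()                         # W: average within-imputation var
between = estimates.var(ddof=1)                   # B: between-imputation var
total_var = within + (1 + 1 / m) * between        # Rubin's total variance
pooled_se = np.sqrt(total_var)
```

The `(1 + 1/m) * between` term charges the analysis for uncertainty about the missing values themselves, which is what single imputation (e.g. mean replacement) omits.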

  16. Evaluation of MLACF based calculated attenuation brain PET imaging for FDG patient studies

    NASA Astrophysics Data System (ADS)

    Bal, Harshali; Panin, Vladimir Y.; Platsch, Guenther; Defrise, Michel; Hayden, Charles; Hutton, Chloe; Serrano, Benjamin; Paulmier, Benoit; Casey, Michael E.

    2017-04-01

Calculating attenuation correction for brain PET imaging rather than using CT presents opportunities for low radiation dose applications such as pediatric imaging and serial scans to monitor disease progression. Our goal is to evaluate the iterative time-of-flight based maximum-likelihood activity and attenuation correction factors estimation (MLACF) method for clinical FDG brain PET imaging. FDG PET/CT brain studies were performed in 57 patients using the Biograph mCT (Siemens) four-ring scanner. The time-of-flight PET sinograms were acquired using the standard clinical protocol consisting of a CT scan followed by 10 min of single-bed PET acquisition. Images were reconstructed using CT-based attenuation correction (CTAC) and used as a gold standard for comparison. Two methods were compared with respect to CTAC: a calculated brain attenuation correction (CBAC) and MLACF based PET reconstruction. Plane-by-plane scaling was performed for MLACF images in order to correct the variable axial scaling observed. The noise structure of the MLACF images differed from that of the CTAC images, and the reconstruction required a higher number of iterations to obtain comparable image quality. To analyze the pooled data, each dataset was registered to a standard template and standard regions of interest were extracted. An SUVr analysis of the brain regions of interest showed that CBAC and MLACF were each well correlated with CTAC SUVrs. A plane-by-plane error analysis indicated that there were local differences for both CBAC and MLACF images with respect to CTAC. The mean relative error in the standard regions of interest was less than 5% for both methods, and the mean absolute relative errors of the two methods were similar (3.4% ± 3.1% for CBAC and 3.5% ± 3.1% for MLACF). However, the MLACF method recovered activity adjoining the frontal sinus regions more accurately than the CBAC method.
The use of plane-by-plane scaling of MLACF images was found to be a crucial step in obtaining improved activity estimates. The presence of local errors in both MLACF- and CBAC-based reconstructions would require the use of a normal database for clinical assessment. However, further work is required to assess the clinical advantage of the MLACF method over the CBAC-based method.

  17. The effect of income and occupation on body mass index among women in the Cebu Longitudinal Health and Nutrition Surveys (1983-2002).

    PubMed

    Colchero, M Arantxa; Caballero, Benjamin; Bishai, David

    2008-05-01

We assessed the effects of changes in income and occupational activities on changes in body weight among 2952 non-pregnant women enrolled in the Cebu Longitudinal Health and Nutrition Surveys between 1983 and 2002. On average, body mass index (BMI) among women engaged in low-activity occupations was 0.29 kg/m² (standard error 0.11) larger than among women in heavy-activity occupations. BMI among women in medium-activity occupations was on average 0.12 kg/m² (standard error 0.05) larger than among women in heavy-activity occupations. A one-unit increase in log household income in the previous survey was associated with a small, positive change in BMI of 0.006 kg/m² (standard error 0.02), but the effect was not significant. The trend of increasing body mass was stronger in the late 1980s than during the 1990s. These period effects were stronger for women who were younger at baseline and for women with low or medium activity levels. Our analysis suggests a trend in the environment over the last 20 years that has increased the susceptibility of Filipino women to larger body mass.

  18. An Analysis of the Tracking Performances of Two Straight-wing and Two Swept-wing Fighter Airplanes with Fixed Sights in a Standardized Test Maneuver

    NASA Technical Reports Server (NTRS)

    Ziff, Howard L; Rathert, George A; Gadeberg, Burnett L

    1953-01-01

    Standard air-to-air-gunnery tracking runs were conducted with F-51H, F8F-1, F-86A, and F-86E airplanes equipped with fixed gunsights. The tracking performances were documented over the normal operating range of altitude, Mach number, and normal acceleration factor for each airplane. The sources of error were studied by statistical analyses of the aim wander.

  19. Performance monitoring and error significance in patients with obsessive-compulsive disorder.

    PubMed

    Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert

    2010-05-01

Performance monitoring has consistently been found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. Errors in a flanker task were therefore followed by neutral feedback (standard condition) or punishment feedback (punishment condition). In the standard condition, patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. The results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.

  20. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

This paper introduces the standard diffuse-reflection white plate method and the integrating-sphere standard luminance source method for calibrating the luminance parameter, and compares the calibration results of the two methods through principle analysis and experimental verification. After calibrating the same radiation luminance meter with both methods, the data obtained verify that the test results of both methods are reliable. The results show that the standard white plate method yields display values with smaller errors and better reproducibility, whereas the standard luminance source method is more convenient and suitable for on-site calibration; it also has a wider range and can test the linearity of the instruments.

  1. Random errors in interferometry with the least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Qi

    2011-01-20

This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one for the standard deviation when only intensity noise is present, and the other for the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and the standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships between random error and the wavelength of the light source, and between random error and the amplitude of the interference fringe, have also been discussed.
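The paper's own formulas are not reproduced in the abstract, but its validation strategy of comparing empirical standard deviations from simulated noisy data against theory can be illustrated with a textbook four-step least-squares phase-shifting model. All parameter values below are illustrative assumptions:

```python
import numpy as np

# Sketch: Monte Carlo estimate of the random phase error in least-squares
# phase-shifting interferometry with intensity noise only.
# Model: I_k = A + B*cos(phi + delta_k) + noise, solved linearly for
# (A, B*cos(phi), -B*sin(phi)).
rng = np.random.default_rng(0)
deltas = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])   # 4-step algorithm
A, B, phi_true = 1.0, 0.8, 0.7
sigma_I = 0.01                                             # intensity noise std

M = np.column_stack([np.ones_like(deltas), np.cos(deltas), np.sin(deltas)])

def estimate_phase():
    I = A + B * np.cos(phi_true + deltas) + rng.normal(0, sigma_I, deltas.size)
    a0, a1, a2 = np.linalg.lstsq(M, I, rcond=None)[0]
    return np.arctan2(-a2, a1)         # a1 = B*cos(phi), a2 = -B*sin(phi)

phases = np.array([estimate_phase() for _ in range(2000)])
phase_std = phases.std()               # empirical std of the phase estimate
```

For this model the theoretical standard deviation is roughly sigma_I * sqrt(2/K) / B with K = 4 frames, and the empirical `phase_std` can be checked against it, mirroring the simulation-versus-theory comparison described in the record.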

  2. Sensitivity analysis and optimization method for the fabrication of one-dimensional beam-splitting phase gratings

    PubMed Central

    Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang

    2015-01-01

A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1x9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviations of the 9 outgoing beam energies in the optimized gratings were 2.3 and 3.4 times lower than in the optimal-efficiency grating. PMID:25969268
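The objective the abstract describes, the integrated variance of each output beam's energy over a range of fabrication errors, can be evaluated numerically from the grating's Fourier coefficients. The binary phase profile below is a hypothetical stand-in, not one of the paper's optimized designs, and the fabrication error is modeled simply as a scaling of the phase depth:

```python
import numpy as np

# Sketch: sensitivity of a 1D phase grating's output-beam energies to an
# etch-depth scaling error. Hypothetical binary grating profile.
N = 1024
x = np.linspace(0, 1, N, endpoint=False)
phase = np.pi * (np.sin(2 * np.pi * x) > 0)    # simple binary 0/pi grating

def order_energies(depth_scale, n_orders=4):
    """Fractional energy in diffraction orders -n..n for a scaled phase depth."""
    field = np.exp(1j * depth_scale * phase)
    c = np.fft.fft(field) / N                  # Fourier coefficients = order amplitudes
    return np.array([abs(c[k]) ** 2 for k in range(-n_orders, n_orders + 1)])

# Integrated variance of the 9 beam energies over a +/-5% depth-error range:
# the quantity the optimization described in the abstract minimizes.
scales = np.linspace(0.95, 1.05, 21)
energies = np.array([order_energies(s) for s in scales])
sensitivity = energies.var(axis=0).sum()
```

An optimizer would adjust the phase profile to reduce `sensitivity` while keeping the nine beam energies uniform, trading a little nominal efficiency for robustness to fabrication error.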

  3. A flexible wearable sensor for knee flexion assessment during gait.

    PubMed

    Papi, Enrica; Bo, Yen Nee; McGregor, Alison H

    2018-05-01

Gait analysis plays an important role in the diagnosis and management of patients with movement disorders, but it is usually performed within a laboratory. Interest has recently shifted towards conducting gait assessments in everyday environments, thus facilitating long-term monitoring; this is possible using wearable technologies rather than laboratory-based equipment. This study aims to validate a novel wearable sensor system’s ability to measure peak knee sagittal angles during gait. The proposed system comprises a flexible conductive polymer unit interfaced with a wireless acquisition node attached over the knee on a pair of leggings. Sixteen healthy volunteers participated in two gait assessments on separate occasions. Data were simultaneously collected from the novel sensor and a gold standard 10-camera motion capture system. The relationship between the sensor signal and the reference knee flexion angle was defined for each subject to allow the transformation of sensor voltage outputs to angular measures (degrees). The peak knee flexion angles from the sensor and reference system were compared by means of root mean square error (RMSE), absolute error, Bland-Altman plots, and intra-class correlation coefficients (ICCs) to assess test-retest reliability. Comparison of the peak knee flexion angles calculated from the sensor and gold standard yielded an absolute error of 0.35° (±2.9°) and an RMSE of 1.2° (±0.4°). Good agreement was found between the two systems, with the majority of data lying within the limits of agreement. The sensor demonstrated high test-retest reliability (ICCs > 0.8). These results show the ability of the sensor to monitor peak knee sagittal angles with small margins of error and in agreement with the gold standard system. The sensor has potential to be used in clinical settings as a discreet, unobtrusive wearable device allowing for long-term gait analysis. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
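The agreement statistics used in this validation (RMSE, mean absolute error, and Bland-Altman limits of agreement against the motion-capture reference) are a few lines of code. The angle data below are synthetic, with a small bias of the order reported in the abstract:

```python
import numpy as np

# Sketch: agreement statistics between a wearable sensor and a gold
# standard reference. Synthetic peak knee flexion angles (degrees).
rng = np.random.default_rng(6)
reference = rng.normal(60, 5, 16)                  # 16 subjects
sensor = reference + rng.normal(0.35, 1.2, 16)     # small bias plus noise

error = sensor - reference
rmse = np.sqrt(np.mean(error ** 2))
mean_abs_error = np.mean(np.abs(error))
bias = error.mean()                                # Bland-Altman mean difference
loa = (bias - 1.96 * error.std(ddof=1),            # lower limit of agreement
       bias + 1.96 * error.std(ddof=1))            # upper limit of agreement
```

If nearly all paired differences fall between `loa[0]` and `loa[1]`, the two systems agree in the Bland-Altman sense, which is the criterion behind the abstract's "majority of data lying within the limits of agreement".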

  4. Robust logistic regression to narrow down the winner's curse for rare and recessive susceptibility variants.

    PubMed

    Kesselmeier, Miriam; Lorenzo Bermejo, Justo

    2017-11-01

    Logistic regression is the most common technique used for genetic case-control association studies. A disadvantage of standard maximum likelihood estimators of the genotype relative risk (GRR) is their strong dependence on outlier subjects, for example, patients diagnosed at unusually young ages. Robust methods are available to constrain outlier influence, but they are scarcely used in genetic studies. This article provides a non-intimidating introduction to robust logistic regression and investigates its benefits and limitations in genetic association studies. We applied the bounded Huber function and extended the R package 'robustbase' with the re-descending Hampel function to down-weight outlier influence. Computer simulations were carried out to assess the type I error rate, mean squared error (MSE) and statistical power according to major characteristics of the genetic study and investigated markers. Simulations were complemented with the analysis of real data. Both standard and robust estimation controlled type I error rates. Standard logistic regression showed the highest power, but standard GRR estimates also showed the largest bias and MSE, in particular for associated rare and recessive variants. For illustration, a recessive variant with a true GRR=6.32 and a minor allele frequency=0.05 investigated in a 1000-case/1000-control study by standard logistic regression resulted in power=0.60 and MSE=16.5. The corresponding figures for Huber-based estimation were power=0.51 and MSE=0.53. Overall, Hampel- and Huber-based GRR estimates did not differ much. Robust logistic regression may represent a valuable alternative to standard maximum likelihood estimation when the focus lies on risk prediction rather than identification of susceptibility variants. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
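
A minimal numpy-only sketch of Huber-downweighted logistic regression: fit by IRLS, compute Pearson residuals, then refit once with bounded Huber weights. This is a crude one-step approximation of the idea, not the authors' 'robustbase'-based implementation; the tuning constant k=1.345 is the conventional default from linear regression and is an assumption here.

```python
import numpy as np

def huber_weight(r, k=1.345):
    """Bounded Huber weight: 1 inside [-k, k], k/|r| outside."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / a)

def logistic_irls(X, y, obs_w=None, iters=50):
    """Maximum-likelihood logistic fit via IRLS, with optional per-observation weights."""
    n, p = X.shape
    w = np.ones(n) if obs_w is None else obs_w
    beta = np.zeros(p)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        v = np.clip(mu * (1.0 - mu), 1e-9, None)
        W = w * v
        z = X @ beta + (y - mu) / v  # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

def robust_logistic(X, y):
    """One reweighting pass: down-weight observations with large Pearson residuals."""
    beta = logistic_irls(X, y)
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    r = (y - mu) / np.sqrt(np.clip(mu * (1.0 - mu), 1e-9, None))  # Pearson residuals
    return logistic_irls(X, y, obs_w=huber_weight(r))
```

The down-weighting buys reduced sensitivity to outliers at the cost of some power, matching the trade-off reported in the abstract.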

  5. Tropospheric Delay Raytracing Applied in VLBI Analysis

    NASA Astrophysics Data System (ADS)

    MacMillan, D. S.; Eriksson, D.; Gipson, J. M.

    2013-12-01

    Tropospheric delay modeling error continues to be one of the largest sources of error in VLBI analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from ECMWF data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption does not reflect reality, we have determined the raytrace delay along the signal path through the troposphere for each VLBI quasar observation. We determined the troposphere refractivity fields from the pressure, temperature, specific humidity and geopotential height fields of the NASA GSFC GEOS-5 numerical weather model. We discuss results from analysis of the CONT11 R&D and the weekly operational R1+R4 experiment sessions. When applied in VLBI analysis, baseline length repeatabilities were better for 66-72% of baselines with raytraced delays than with VMF1 mapping functions. Vertical repeatabilities were better for 65% of sites.

  6. Evaluation of centroiding algorithm error for Nano-JASMINE

    NASA Astrophysics Data System (ADS)

    Hara, Takuji; Gouda, Naoteru; Yano, Taihei; Yamada, Yoshiyuki

    2014-08-01

    The Nano-JASMINE mission has been designed to perform absolute astrometric measurements with unprecedented accuracy; the end-of-mission parallax standard error is required to be of the order of 3 milliarcseconds for stars brighter than 7.5 mag in the zw-band (0.6 μm-1.0 μm). These requirements set a stringent constraint on the accuracy of the estimation of the location of the stellar image on the CCD for each observation. However, each stellar image has an individual shape that depends on the spectral energy distribution of the star, the CCD properties, and the optics and its associated wavefront errors. The centroiding algorithm must therefore achieve high accuracy for any observed image. Following the approach taken for Gaia, we use an LSF fitting method as the centroiding algorithm and investigate its systematic error for Nano-JASMINE. Furthermore, we found that the algorithm can be improved by restricting the sample LSFs used in a principal component analysis. We show that the centroiding algorithm error decreases after adopting this method.
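
A toy version of LSF-fitting centroiding: match a model LSF template to pixel counts over a grid of sub-pixel shifts, with the amplitude fitted in closed form at each shift, and refine the minimum parabolically. The Gaussian LSF and all numbers are illustrative assumptions, not the Nano-JASMINE pipeline.

```python
import numpy as np

def gaussian_lsf(x, sigma=1.0):
    """Assumed model LSF (real missions derive the LSF from calibration data)."""
    return np.exp(-0.5 * (x / sigma) ** 2)

def centroid_lsf_fit(pixels, counts, sigma=1.0, grid=None):
    """Estimate a sub-pixel centroid by least-squares template matching."""
    pixels = np.asarray(pixels, float)
    counts = np.asarray(counts, float)
    if grid is None:
        grid = np.linspace(-0.5, 0.5, 101)
    # Coarse start: nearest pixel to the moment centroid.
    base = np.round(np.sum(pixels * counts) / np.sum(counts))
    sse = np.empty(len(grid))
    for j, s in enumerate(grid):
        t = gaussian_lsf(pixels - (base + s), sigma)
        amp = np.dot(t, counts) / np.dot(t, t)       # closed-form amplitude
        sse[j] = np.sum((counts - amp * t) ** 2)
    i = int(np.argmin(sse))
    if 0 < i < len(grid) - 1:                        # parabolic refinement
        a, b, c = sse[i - 1], sse[i], sse[i + 1]
        denom = a - 2 * b + c
        ds = 0.5 * (a - c) / denom * (grid[1] - grid[0]) if denom > 0 else 0.0
        return float(base + grid[i] + ds)
    return float(base + grid[i])
```

With noiseless data generated from the same LSF, the estimator recovers the injected centroid to well under a hundredth of a pixel.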

  7. Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.

    PubMed

    Masalski, Marcin; Kręcicki, Tomasz

    2013-04-12

    Potential methods of application of self-administered Web-based pure-tone audiometry conducted at home on a PC with a sound card and ordinary headphones depend on the value of measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on the group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54dB with standard deviation of 7.88dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35dB±10.66dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64dB), calibration error (6.19dB), and additionally at the frequency of 250Hz by frequency nonlinearity error (7.28dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to the decrease in measurement error can broaden the scope of Web-based pure-tone audiometry application.

  8. Geometric errors in 3D optical metrology systems

    NASA Astrophysics Data System (ADS)

    Harding, Kevin; Nafis, Chris

    2008-08-01

    The field of 3D optical metrology has seen significant growth in the commercial market in recent years. The methods of using structured light to obtain 3D range data are well documented in the literature and continue to be an area of development in universities. However, the step between getting 3D data and getting geometrically correct 3D data that can be used for metrology is not nearly as well developed. Mechanical metrology systems such as CMMs have long-established standard means of verifying the geometric accuracies of their systems. Both local and volumetric measurements are characterized on such systems using tooling balls, grid plates, and ball bars. This paper will explore the tools needed to characterize and calibrate an optical metrology system, discuss the nature of the geometric errors often found in such systems, and suggest what may be a viable standard method of characterizing 3D optical systems. Finally, we will present a tradeoff analysis of ways to correct geometric errors in an optical system, considering what can be gained by hardware methods versus software corrections.

  9. Total ozone trend significance from space time variability of daily Dobson data

    NASA Technical Reports Server (NTRS)

    Wilcox, R. W.

    1981-01-01

    Estimates of standard errors of total ozone time and area means, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes determined from daily Dobson data, are presented. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
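
The dependence of a time average's standard error on temporal correlation can be sketched with the standard AR(1) effective-sample-size correction. This is a generic illustration, assuming an AR(1) process, not the paper's specific ozone statistics.

```python
import numpy as np

def mean_standard_error_ar1(x):
    """Standard error of the time mean, inflated for lag-1 autocorrelation.

    Assumes an AR(1) process, for which the effective sample size is
    n_eff = n * (1 - rho) / (1 + rho).
    """
    x = np.asarray(x, float)
    n = len(x)
    d = x - x.mean()
    rho = np.dot(d[:-1], d[1:]) / np.dot(d, d)  # lag-1 autocorrelation estimate
    rho = min(max(rho, 0.0), 0.99)              # guard: ignore negative rho here
    n_eff = n * (1 - rho) / (1 + rho)
    return x.std(ddof=1) / np.sqrt(n_eff)
```

For uncorrelated data the result reduces to the familiar s/√n; positive autocorrelation shrinks n_eff and widens the error bar on any apparent trend.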

  10. Flow cytometry and conventional enumeration of microorganisms in ships' ballast water and marine samples.

    PubMed

    Joachimsthal, Eva L; Ivanov, Volodymyr; Tay, Joo-Hwa; Tay, Stephen T-L

    2003-03-01

    Conventional methods for bacteriological testing of water quality take long periods of time to complete. This makes them inappropriate for a shipping industry that is attempting to comply with the International Maritime Organization's anticipated regulations for ballast water discharge. Flow cytometry for the analysis of marine and ships' ballast water is a comparatively fast and accurate method. Compared to a 5% standard error for flow cytometry analysis, the standard methods of culturing and epifluorescence analysis have errors of 2-58% and 10-30%, respectively. Also, unlike culturing methods, flow cytometry is capable of detecting both non-viable and viable but non-culturable microorganisms which can still pose health risks. The great variability in both cell concentrations and microbial content for the samples tested is an indication of the difficulties facing microbial monitoring programmes. The concentration of microorganisms in the ballast tank was generally lower than in local seawater. The proportion of aerobic, microaerophilic, and facultative anaerobic microorganisms present appeared to be influenced by conditions in the ballast tank. The gradual creation of anaerobic conditions in a ballast tank could lead to the accumulation of facultative anaerobic microorganisms, which might represent a potential source of pathogenic species.

  11. Monte Carlo simulation of edge placement error

    NASA Astrophysics Data System (ADS)

    Kobayashi, Shinji; Okada, Soichiro; Shimura, Satoru; Nafus, Kathleen; Fonseca, Carlos; Estrella, Joel; Enomoto, Masashi

    2018-03-01

    In the discussion of edge placement error (EPE), we proposed interactive pattern fidelity error (IPFE) as an indicator to judge pass/fail of integrated patterns. IPFE consists of the lower- and upper-layer EPEs (CD and center of gravity: COG) and the overlay, combined from the maximum variation of each. We succeeded in obtaining the IPFE density function by Monte Carlo simulation. From the results, we also found that the standard deviation (σ) of each indicator should be controlled to within 4.0σ at the semiconductor grade, such as 100 billion patterns per die. Moreover, CD, COG and overlay were analyzed by analysis of variance (ANOVA), so that all variations from wafer to wafer (WTW), pattern to pattern (PTP), line width roughness (LWR) and stochastic pattern noise (SPN) can be discussed on an equal footing. From the analysis results, we can determine which process and tools each variation belongs to. Furthermore, the measurement length of LWR is also discussed in the ANOVA. We propose that the measurement length for IPFE analysis should not be fixed at the micrometer order, such as >2 μm, but chosen according to what the device actually requires.
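
The Monte Carlo idea can be sketched generically: draw the contributors, combine them into an edge error, and estimate the tail fraction beyond 4σ. The 1σ values and the linear combination rule below are illustrative assumptions, not the paper's IPFE definition.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Assumed 1-sigma values (nm) for each contributor -- illustrative only.
sigma_cd, sigma_cog, sigma_ovl = 1.0, 0.8, 1.2

cd  = rng.normal(0, sigma_cd,  N)   # CD variation of one layer
cog = rng.normal(0, sigma_cog, N)   # center-of-gravity (placement) variation
ovl = rng.normal(0, sigma_ovl, N)   # overlay between the two layers

# One assumed combination rule: edge error = overlay + COG shift + half the CD change.
epe = ovl + cog + 0.5 * cd

sigma_epe = epe.std()
fail = np.mean(np.abs(epe) > 4.0 * sigma_epe)  # fraction of patterns beyond 4 sigma
```

For a normal distribution the fraction beyond ±4σ is about 6×10⁻⁵, which gives a feel for why a 4.0σ control limit matters when a die contains on the order of 100 billion patterns.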

  12. IgRepertoireConstructor: a novel algorithm for antibody repertoire construction and immunoproteogenomics analysis

    PubMed Central

    Safonova, Yana; Bonissone, Stefano; Kurpilyansky, Eugene; Starostina, Ekaterina; Lapidus, Alla; Stinson, Jeremy; DePalatis, Laura; Sandoval, Wendy; Lill, Jennie; Pevzner, Pavel A.

    2015-01-01

    The analysis of concentrations of circulating antibodies in serum (antibody repertoire) is a fundamental, yet poorly studied, problem in immunoinformatics. The two current approaches to the analysis of antibody repertoires [next generation sequencing (NGS) and mass spectrometry (MS)] present difficult computational challenges since antibodies are not directly encoded in the germline but are extensively diversified by somatic recombination and hypermutations. Therefore, the protein database required for the interpretation of spectra from circulating antibodies is custom for each individual. Although such a database can be constructed via NGS, the reads generated by NGS are error-prone and even a single nucleotide error precludes identification of a peptide by the standard proteomics tools. Here, we present the IgRepertoireConstructor algorithm that performs error-correction of immunosequencing reads and uses mass spectra to validate the constructed antibody repertoires. Availability and implementation: IgRepertoireConstructor is open source and freely available as a C++ and Python program running on all Unix-compatible platforms. The source code is available from http://bioinf.spbau.ru/igtools. Contact: ppevzner@ucsd.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072509

  13. Investigation of several aspects of LANDSAT-4 data quality

    NASA Technical Reports Server (NTRS)

    Wrigley, R. C. (Principal Investigator)

    1983-01-01

    No insurmountable problems in change detection analysis were found when portions of scenes collected simultaneously by the LANDSAT 4 MSS and either LANDSAT 2 or 3 were compared. The cause of the periodic noise in LANDSAT 4 MSS images, which had an RMS value of approximately 2 DN, should be corrected in the LANDSAT D instrument before its launch. Analysis of the P-tape of the Arkansas scene shows bands within the same focal plane very well registered, except for the thermal band, which was misregistered by approximately three 28.5-meter pixels in both directions. It is possible to derive tight confidence bounds for the registration errors. Preliminary analyses of the Sacramento and Arkansas scenes reveal a very high degree of consistency with earlier results for bands 3 vs 1, 3 vs 4, and 3 vs 5. Results are presented in table form. It is suggested that attention be given to the standard deviations of registration errors to judge whether or not they will be within specification once any known mean registration errors are corrected. Techniques used for MTF analysis of a Washington scene produced noisy results.

  14. Improving the Thermal, Radial and Temporal Accuracy of the Analytical Ultracentrifuge through External References

    PubMed Central

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying

    2013-01-01

    Sedimentation velocity (SV) is a method based on first-principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011) and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼ 0.7 %. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724

  15. Improving the thermal, radial, and temporal accuracy of the analytical ultracentrifuge through external references.

    PubMed

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying

    2013-09-01

    Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.

  16. Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation

    NASA Technical Reports Server (NTRS)

    Kalton, G.

    1983-01-01

    A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, the optimum allocation of the sample across the stages of the sample design is also determined for the estimation of a regression coefficient.
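
The basic effect of clustering on precision is often summarized by the Kish design effect, deff = 1 + (b − 1)·roh, where b is the average cluster size and roh the intraclass correlation. This sketch shows that simple inflation; the report's formulae for regression coefficients are more involved.

```python
import numpy as np

def clustered_se(se_srs, cluster_size, roh):
    """Inflate a simple-random-sampling standard error by the design effect.

    deff = 1 + (b - 1) * roh, with b the average cluster size and roh the
    intraclass correlation (rate of homogeneity) of the variable.
    """
    deff = 1.0 + (cluster_size - 1) * roh
    return se_srs * np.sqrt(deff)
```

For example, with clusters of 10 and roh = 0.05, a simple-random-sampling standard error of 2.0 inflates to about 2.41.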

  17. Intravenous Chemotherapy Compounding Errors in a Follow-Up Pan-Canadian Observational Study.

    PubMed

    Gilbert, Rachel E; Kozak, Melissa C; Dobish, Roxanne B; Bourrier, Venetia C; Koke, Paul M; Kukreti, Vishal; Logan, Heather A; Easty, Anthony C; Trbovich, Patricia L

    2018-05-01

    Intravenous (IV) compounding safety has garnered recent attention as a result of high-profile incidents, awareness efforts from the safety community, and increasingly stringent practice standards. New research with more-sensitive error detection techniques continues to reinforce that error rates with manual IV compounding are unacceptably high. In 2014, our team published an observational study that described three types of previously unrecognized and potentially catastrophic latent chemotherapy preparation errors in Canadian oncology pharmacies that would otherwise be undetectable. We expand on this research and explore whether additional potential human failures are yet to be addressed by practice standards. Field observations were conducted in four cancer center pharmacies in four Canadian provinces from January 2013 to February 2015. Human factors specialists observed and interviewed pharmacy managers, oncology pharmacists, pharmacy technicians, and pharmacy assistants as they carried out their work. Emphasis was on latent errors (potential human failures) that could lead to outcomes such as wrong drug, dose, or diluent. Given the relatively short observational period, no active failures or actual errors were observed. However, 11 latent errors in chemotherapy compounding were identified. In terms of severity, all 11 errors create the potential for a patient to receive the wrong drug or dose, which in the context of cancer care, could lead to death or permanent loss of function. Three of the 11 practices were observed in our previous study, but eight were new. Applicable Canadian and international standards and guidelines do not explicitly address many of the potentially error-prone practices observed. We observed a significant degree of risk for error in manual mixing practice. These latent errors may exist in other regions where manual compounding of IV chemotherapy takes place. 
Continued efforts to advance standards, guidelines, technological innovation, and chemical quality testing are needed.

  18. Breast Tissue Characterization with Photon-counting Spectral CT Imaging: A Postmortem Breast Study

    PubMed Central

    Ding, Huanjun; Klopfer, Michael J.; Ducote, Justin L.; Masaki, Fumitaro

    2014-01-01

    Purpose To investigate the feasibility of breast tissue characterization in terms of water, lipid, and protein contents with a spectral computed tomographic (CT) system based on a cadmium zinc telluride (CZT) photon-counting detector by using postmortem breasts. Materials and Methods Nineteen pairs of postmortem breasts were imaged with a CZT-based photon-counting spectral CT system with a beam energy of 100 kVp. The mean glandular dose was estimated to be in the range of 1.8–2.2 mGy. The images were corrected for pulse pile-up and other artifacts by using spectral distortion corrections. Dual-energy decomposition was then applied to characterize each breast into water, lipid, and protein contents. The precision of the three-compartment characterization was evaluated by comparing the composition of right and left breasts, for which the standard error of the estimate was determined. The results of dual-energy decomposition were compared, in terms of the averaged root mean square error, with chemical analysis, which was used as the reference standard. Results The standard errors of the estimate for the right-left correlations obtained from spectral CT were 7.4%, 6.7%, and 3.2% for water, lipid, and protein contents, respectively. Compared with the reference standard, the average root mean square error in breast tissue composition was 2.8%. Conclusion Spectral CT can be used to accurately quantify the water, lipid, and protein contents in breast tissue in a laboratory study by using postmortem specimens. © RSNA, 2014 PMID:24814180
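
The core of a three-material dual-energy decomposition is a small linear system: two attenuation measurements plus the constraint that the fractions sum to one. The attenuation coefficients below are hypothetical placeholders, not measured values from the study.

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of water, lipid, protein
# at a low- and a high-energy bin -- illustrative numbers only.
mu_low  = np.array([0.25, 0.20, 0.30])
mu_high = np.array([0.18, 0.16, 0.21])

def decompose(a_low, a_high):
    """Solve for (water, lipid, protein) volume fractions from two
    attenuation measurements plus the constraint that fractions sum to 1."""
    A = np.vstack([mu_low, mu_high, np.ones(3)])
    b = np.array([a_low, a_high, 1.0])
    return np.linalg.solve(A, b)
```

A round trip (synthesize attenuations from known fractions, then decompose) recovers the fractions exactly in noiseless arithmetic; with real, noisy data the conditioning of the 3×3 system drives the standard errors reported in the abstract.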

  19. Intimate Partner Violence, 1993-2010

    MedlinePlus

    ... appendix table 2 for standard errors. *Due to methodological changes, use caution when comparing 2006 NCVS criminal ...

  20. Simulation of aspheric tolerance with polynomial fitting

    NASA Astrophysics Data System (ADS)

    Li, Jing; Cen, Zhaofeng; Li, Xiaotong

    2018-01-01

    The shape of an aspheric lens changes because of machining errors, resulting in a change in the optical transfer function, which affects the image quality. At present, there is no universally recognized tolerance criterion standard for aspheric surfaces. To study the influence of aspheric tolerances on the optical transfer function, polynomial-fitted tolerances are allocated on the aspheric surface, and the imaging simulation is carried out in optical imaging software. The analysis is based on a set of aspheric imaging systems. An error is generated within the range of a certain PV value and expressed in the form of a Zernike polynomial, which is added to the aspheric surface as a tolerance term. Through optical software analysis, the MTF of the optical system can be obtained and used as the main evaluation index. We evaluate whether the effect of the added error on the MTF of the system meets the requirements at the current PV value, then change the PV value and repeat the operation until the maximum acceptable PV value is obtained. According to the actual processing technology, errors of various shapes are considered, such as M-type, W-type, and random errors. The new method provides a useful reference for actual freeform surface processing technology.

  1. Internal standards in fluorescent X-ray spectroscopy (publication authorized by the Director, U.S. Geological Survey)

    USGS Publications Warehouse

    Adler, I.; Axelrod, J.M.

    1955-01-01

    The use of internal standards in the analysis of ores and minerals of widely-varying matrix by means of fluorescent X-ray spectroscopy is frequently the most practical approach. Internal standards correct for absorption and enhancement effects except when an absorption edge falls between the comparison lines or a very strong emission line falls between the absorption edges responsible for the comparison lines. Particle size variations may introduce substantial errors. One method of coping with the particle size problem is grinding the sample with an added abrasive. © 1955.
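
The internal-standard method boils down to a working curve of intensity ratio (analyte line over internal-standard line) against concentration ratio, so matrix absorption and enhancement largely cancel. A minimal linear-calibration sketch, with invented example numbers:

```python
import numpy as np

def internal_standard_fit(conc_ratio, intensity_ratio):
    """Fit the working curve: intensity ratio (analyte line / internal-standard
    line) versus concentration ratio. Returns slope and intercept."""
    slope, intercept = np.polyfit(conc_ratio, intensity_ratio, 1)
    return slope, intercept

def predict_conc_ratio(intensity_ratio, slope, intercept):
    """Invert the working curve for an unknown sample."""
    return (intensity_ratio - intercept) / slope
```

Because both lines pass through the same matrix, the ratio is far less sensitive to composition than either raw intensity, which is the reason the abstract calls this the most practical approach for widely-varying matrices.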

  2. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (standard deviate). Various KE-values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one-degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE=1.0 (range 0.5-1.5) and an error in projected high air temperature dTa=2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs=0.8 °C.
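
The abstract's error budget is a linear (worst-case) sum of the two sensitivities, which reproduces the quoted ~0.8 °C:

```python
# Error budget for the projected extreme stream temperature Ts = mean + KE * sigma,
# using the sensitivities quoted in the abstract (a linear sum, not quadrature).
sens_KE = 0.5    # deg C of stream temperature per unit error in KE
sens_Ta = 0.16   # deg C of stream temperature per deg C error in air temperature

dKE = 1.0        # projected error in the enveloping standard deviate
dTa = 2.0        # projected error in high air temperature (deg C)

dTs = sens_KE * dKE + sens_Ta * dTa  # 0.82, reported as ~0.8 deg C
```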

  3. Methods for estimating flood frequency in Montana based on data through water year 1998

    USGS Publications Warehouse

    Parrett, Charles; Johnson, Dave R.

    2004-01-01

    Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. 
The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
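
The weighting of independent estimates can be sketched with plain inverse-variance weights. This is a simplification: the report's weights also account for cross-correlation of residuals between methods, which is omitted here.

```python
import numpy as np

def combine_estimates(estimates, se_percent):
    """Combine flood-frequency estimates with weights inversely proportional
    to their error variances (independence assumed, unlike the report)."""
    est = np.asarray(estimates, float)
    var = (np.asarray(se_percent, float) / 100.0) ** 2
    w = (1.0 / var) / np.sum(1.0 / var)
    combined = np.sum(w * est)
    se_combined = 100.0 * np.sqrt(1.0 / np.sum(1.0 / var))
    return combined, se_combined
```

Combining a 40%-error and a 60%-error estimate this way yields a combined standard error of about 33%, illustrating why weighting reduced the prediction errors in most regions.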

  4. Stratospheric Assimilation of Chemical Tracer Observations Using a Kalman Filter. Pt. 2; Chi-Square Validated Results and Analysis of Variance and Correlation Dynamics

    NASA Technical Reports Server (NTRS)

    Menard, Richard; Chang, Lang-Ping

    1998-01-01

    A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder (CLAES) and the Halogen Occultation Experiment (HALOE) instruments on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousand kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode, except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. 
Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the error variance.
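
    The chi-square diagnostic used above can be sketched in a few lines. For a linear observation y = Hx + ε, the innovation d = y − Hx_f has covariance S = HPHᵀ + R, and the statistic dᵀS⁻¹d averages to the number of observations when the forecast and observation error covariances are consistent. A minimal illustration (all matrices and dimensions below are invented for demonstration, not taken from the paper):

```python
import numpy as np

def chi2_innovation(y, x_forecast, H, P, R):
    """Normalized innovation squared: d^T S^-1 d, where
    d = y - H x_forecast and S = H P H^T + R."""
    d = y - H @ x_forecast
    S = H @ P @ H.T + R
    return float(d @ np.linalg.solve(S, d))

# Toy setup: 3 observations of a 4-element state.
rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4))
P = np.eye(4) * 0.5           # assumed forecast error covariance
R = np.eye(3) * 0.1           # assumed observation error covariance
x_true = rng.standard_normal(4)
Lp, Lr = np.linalg.cholesky(P), np.linalg.cholesky(R)

# If P and R are correctly specified, the statistic averages to len(y) = 3.
vals = []
for _ in range(5000):
    x_fc = x_true + Lp @ rng.standard_normal(4)   # forecast with error ~ N(0, P)
    y = H @ x_true + Lr @ rng.standard_normal(3)  # observation with error ~ N(0, R)
    vals.append(chi2_innovation(y, x_fc, H, P, R))
mean_chi2 = float(np.mean(vals))
```

    In an assimilation system, a sustained departure of this average from the observation count signals misspecified covariances, which is how the tunable variance parameters can be estimated.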

  5. Towards reporting standards for neuropsychological study results: A proposal to minimize communication errors with standardized qualitative descriptors for normalized test scores.

    PubMed

    Schoenberg, Mike R; Rum, Ruba S

    2017-11-01

    Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements on the qualitative terms used to report neuropsychological test scores. The review found confusing and overlapping terms in use to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system in communicating neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system aims to improve and standardize the communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes risk for communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
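
    A descriptor set like Q-Simple lends itself to a simple lookup. The sketch below maps a standardized (z) score to one of the seven Q-Simple terms; note that the cut points used here are illustrative assumptions, since the abstract does not reproduce the paper's actual score ranges:

```python
def q_simple(z):
    """Map a standardized (z) score to a Q-Simple descriptor.
    NOTE: the cut points below are illustrative assumptions;
    the paper defines its own score ranges."""
    bands = [
        (2.0, 'very superior'),
        (1.3, 'superior'),
        (0.7, 'high average'),
        (-0.7, 'average'),
        (-1.3, 'low average'),
        (-2.0, 'borderline'),
    ]
    for cut, label in bands:
        if z >= cut:
            return label
    return 'abnormal/impaired'

print(q_simple(0.0))   # -> 'average'
print(q_simple(-2.5))  # -> 'abnormal/impaired'
```

    Encoding the ranges once, in code or in a report template, is exactly the kind of standardization the authors argue reduces communication errors.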

  6. A Standard for Command, Control, Communications and Computers (C4) Test Data Representation to Integrate with High-Performance Data Reduction

    DTIC Science & Technology

    2015-06-01

    events was ad hoc and problematic due to time constraints and changing requirements. Determining errors in context and heuristics required expertise… Data reduction for analysis of Command, Control, Communications, and Computer (C4) network tests…

  7. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    PubMed

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.

  8. Should the Standard Count Be Excluded from Neutron Probe Calibration?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Z. Fred

    About six decades after its introduction, the neutron probe remains one of the most accurate methods for indirect measurement of soil moisture content. Traditionally, the calibration of a neutron probe involves the ratio of the neutron count in the soil to a standard count, which is the neutron count in a fixed environment such as the probe shield or a specially designed calibration tank. The drawback of this count-ratio-based calibration is that the error in the standard count is carried through to all the measurements. An alternative calibration is to use the neutron counts only, not the ratio, with proper correction for radioactive decay and counting time. To evaluate both approaches, the shield counts of a neutron probe used for three decades were analyzed. The results show that the surrounding conditions have a substantial effect on the standard count. The error in the standard count also impacts the calculation of water storage and could indicate false consistency among replicates. The analysis of the shield counts indicates a negligible aging effect of the instrument over a period of 26 years. It is concluded that, by excluding the standard count, the count-based calibration is appropriate and sometimes even better than the ratio-based calibration. The count-based calibration is especially useful for historical data when the standard count was questionable or absent.
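
    The count-based calibration described above hinges on normalizing raw counts for counting time and for radioactive decay of the source. A minimal sketch, assuming an Am-241/Be source with a 432.2-year half-life (a common choice for neutron probes; the actual source and half-life are not stated in the abstract):

```python
import math

def decay_corrected_count(raw_count, count_time_s, years_since_ref,
                          half_life_y=432.2):
    """Count rate normalized to a reference date: divide by counting
    time, then multiply by exp(+lambda * t) to undo radioactive decay
    of the source. (Am-241/Be half-life is an assumption, not from
    the report.)"""
    lam = math.log(2.0) / half_life_y
    rate = raw_count / count_time_s
    return rate * math.exp(lam * years_since_ref)

# After one half-life the raw rate halves; the correction restores it.
r0 = decay_corrected_count(1000, 10, 0.0)    # reference date
r1 = decay_corrected_count(500, 10, 432.2)   # one half-life later
```

    With this normalization in place, counts from different decades become directly comparable without dividing by a (possibly erroneous) standard count.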

  9. Quantitative Analysis of Ca, Mg, and K in the Roots of Angelica pubescens f. biserrata by Laser-Induced Breakdown Spectroscopy Combined with Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Wang, J.; Shi, M.; Zheng, P.; Xue, Sh.; Peng, R.

    2018-03-01

    Laser-induced breakdown spectroscopy has been applied for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens Maxim. f. biserrata Shan et Yuan used in traditional Chinese medicine. The Ca II 317.993 nm, Mg I 517.268 nm, and K I 769.896 nm spectral lines were chosen to set up calibration models for the analysis using the external standard and artificial neural network methods. The linear correlation coefficients of the predicted concentrations versus the standard concentrations of six samples determined by the artificial neural network method are 0.9896, 0.9945, and 0.9911 for Ca, Mg, and K, respectively, which are better than for the external standard method. The artificial neural network method also performs better than the external standard method in terms of the average and maximum relative errors, the average relative standard deviations, and most of the maximum relative standard deviations of the predicted concentrations of Ca, Mg, and K in the six samples. Finally, it is shown that the artificial neural network method outperforms the external standard method for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens.

  10. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    PubMed

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-04

    A simple method for simultaneously measuring the 6DOF geometric motion errors of a linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring laser beam drift was also proposed and used to compensate the errors that beam drift produces in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments against standard measurement instruments showed that the system has a standard deviation of 0.5 µm over a range of ±100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" over a range of ±100" for pitch, yaw, and roll measurements, respectively.

  11. FMEA: a model for reducing medical errors.

    PubMed

    Chiozza, Maria Laura; Ponzetti, Clemente

    2009-06-01

    Patient safety is a management issue, given that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, first introduced in the aerospace industry in the 1960s. Early applications in the health care industry, dating back to the 1990s, included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO) issued a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review the data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, and to point-of-care testing (POCT).
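
    The risk priority number mentioned above is conventionally the product of severity, occurrence, and detectability scores, each rated on a 1-10 scale. A minimal sketch of that convention (the scores below are invented for illustration):

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number = S x O x D, each conventionally
    scored on a 1-10 scale in FMEA."""
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("scores are conventionally on a 1-10 scale")
    return severity * occurrence * detection

# A mitigation that improves detectability from 8 to 3 cuts the RPN:
before = rpn(7, 5, 8)   # 280
after = rpn(7, 5, 3)    # 105
```

    Tracking the RPN before and after an intervention, as in the blood cross-matching and POCT applications cited, is how the reduction in risk is quantified.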

  12. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
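
    The generic nonparametric bootstrap behind such standard error estimates can be sketched as follows. This is the textbook resampling recipe, not the authors' specific procedure (which resamples the item calibration data):

```python
import numpy as np

def bootstrap_se(data, estimator, n_boot=2000, seed=0):
    """Nonparametric bootstrap standard error: resample the data with
    replacement, re-estimate, and take the standard deviation of the
    replicates. (A generic textbook sketch.)"""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = [estimator(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return float(np.std(reps, ddof=1))

# Sanity check: the bootstrap SE of a sample mean should approach
# sigma / sqrt(n) = 1 / sqrt(100) = 0.1 here.
x = np.random.default_rng(1).normal(0.0, 1.0, 100)
se = bootstrap_se(x, np.mean)
```

    The appeal over the corrected asymptotic SE discussed in the abstract is that resampling needs no long-test approximation, at the cost of repeated estimation.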

  13. Impact of time-of-flight PET on quantification errors in MR imaging-based attenuation correction.

    PubMed

    Mehranian, Abolfazl; Zaidi, Habib

    2015-04-01

    Time-of-flight (TOF) PET/MR imaging is an emerging imaging technology with great capabilities offered by TOF to improve image quality and lesion detectability. We assessed, for the first time, the impact of TOF image reconstruction on PET quantification errors induced by MR imaging-based attenuation correction (MRAC) using simulation and clinical PET/CT studies. Standard 4-class attenuation maps were derived by segmentation of CT images of 27 patients undergoing PET/CT examinations into background air, lung, soft-tissue, and fat tissue classes, followed by the assignment of predefined attenuation coefficients to each class. For each patient, 4 PET images were reconstructed: non-TOF and TOF both corrected for attenuation using reference CT-based attenuation correction and the resulting 4-class MRAC maps. The relative errors between non-TOF and TOF MRAC reconstructions were compared with their reference CT-based attenuation correction reconstructions. The bias was locally and globally evaluated using volumes of interest (VOIs) defined on lesions and normal tissues and CT-derived tissue classes containing all voxels in a given tissue, respectively. The impact of TOF on reducing the errors induced by metal-susceptibility and respiratory-phase mismatch artifacts was also evaluated using clinical and simulation studies. Our results show that TOF PET can remarkably reduce attenuation correction artifacts and quantification errors in the lungs and bone tissues. Using classwise analysis, it was found that the non-TOF MRAC method results in an error of -3.4% ± 11.5% in the lungs and -21.8% ± 2.9% in bones, whereas its TOF counterpart reduced the errors to -2.9% ± 7.1% and -15.3% ± 2.3%, respectively. The VOI-based analysis revealed that the non-TOF and TOF methods resulted in an average overestimation of 7.5% and 3.9% in or near lung lesions (n = 23) and underestimation of less than 5% for soft tissue and in or near bone lesions (n = 91). 
Simulation results showed that as TOF resolution improves, artifacts and quantification errors are substantially reduced. TOF PET substantially reduces artifacts and significantly improves the quantitative accuracy of standard MRAC methods. Therefore, MRAC should be less of a concern on future TOF PET/MR scanners with improved timing resolution. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  14. Analysis of a first order phase locked loop in the presence of Gaussian noise

    NASA Technical Reports Server (NTRS)

    Blasche, P. R.

    1977-01-01

    A first-order digital phase-locked loop is analyzed by applying a Markov chain model. Steady-state loop error probabilities, phase standard deviation, and mean loop transient times are determined for various input signal-to-noise ratios. Results from a direct loop simulation are presented for comparison.
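
    The steady-state error probabilities of a Markov chain model are the stationary distribution of the chain's transition matrix. A minimal sketch by power iteration, with an invented three-state transition matrix standing in for the loop's quantized phase error:

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=100000):
    """Stationary distribution pi of a row-stochastic matrix P,
    found by power iteration on pi = pi @ P."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

# Toy 3-state phase-error chain (transition probabilities invented):
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
pi = stationary_distribution(P)  # sums to 1 and satisfies pi @ P = pi
```

    For the loop analysis, the phase standard deviation then follows from this distribution over the discretized phase-error states.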

  15. Using Multilevel Factor Analysis with Clustered Data: Investigating the Factor Structure of the Positive Values Scale

    ERIC Educational Resources Information Center

    Huang, Francis L.; Cornell, Dewey G.

    2016-01-01

    Advances in multilevel modeling techniques now make it possible to investigate the psychometric properties of instruments using clustered data. Factor models that overlook the clustering effect can lead to underestimated standard errors, incorrect parameter estimates, and misleading model fit indices. In addition, factor structures may differ depending on…

  16. Spatial variability in sensitivity of reference crop ET to accuracy of climate data in the Texas High Plains

    USDA-ARS?s Scientific Manuscript database

    A detailed sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1995 to 2008, fro...

  17. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis

    PubMed Central

    van de Schoot, Rens; Hox, Joop

    2014-01-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827

  18. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations.

    PubMed

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2015-06-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.

  19. Portable visible and near-infrared spectrophotometer for triglyceride measurements.

    PubMed

    Kobayashi, Takanori; Kato, Yukiko Hakariya; Tsukamoto, Megumi; Ikuta, Kazuyoshi; Sakudo, Akikazu

    2009-01-01

    An affordable and portable machine is required for the practical use of visible and near-infrared (Vis-NIR) spectroscopy. A portable fruit tester comprising a Vis-NIR spectrophotometer was modified for use in the transmittance mode and employed to quantify triglyceride levels in serum in combination with a chemometric analysis. Transmittance spectra collected in the 600- to 1100-nm region were subjected to a partial least-squares regression analysis and leave-out cross-validation to develop a chemometrics model for predicting triglyceride concentrations in serum. The model yielded a coefficient of determination in cross-validation (R2VAL) of 0.7831 with a standard error of cross-validation (SECV) of 43.68 mg/dl. The detection limit of the model was 148.79 mg/dl. Furthermore, masked samples predicted by the model yielded a coefficient of determination in prediction (R2PRED) of 0.6856 with a standard error of prediction (SEP) and detection limit of 61.54 and 159.38 mg/dl, respectively. The portable Vis-NIR spectrophotometer may prove convenient for the measurement of triglyceride concentrations in serum, although before practical use there remain obstacles, which are discussed.

  20. Cost-effectiveness of the stream-gaging program in Nebraska

    USGS Publications Warehouse

    Engel, G.B.; Wahl, K.L.; Boohar, J.A.

    1984-01-01

    This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)

  1. Bayesian operational modal analysis of Jiangyin Yangtze River Bridge

    NASA Astrophysics Data System (ADS)

    Brownjohn, James Mark William; Au, Siu-Kui; Zhu, Yichen; Sun, Zhen; Li, Binbin; Bassitt, James; Hudson, Emma; Sun, Hongbin

    2018-09-01

    Vibration testing of long span bridges is becoming a commissioning requirement, yet such exercises represent the extreme of experimental capability, with challenges for instrumentation (due to frequency range, resolution and km-order separation of sensors) and system identification (because of the extremely low frequencies). The challenge with instrumentation for modal analysis is managing synchronous data acquisition from sensors distributed widely apart inside and outside the structure. The ideal solution is precisely synchronised autonomous recorders that do not need cables, GPS or wireless communication. The challenge with system identification is to maximise the reliability of modal parameters through experimental design and subsequently to identify the parameters in terms of mean values and standard errors. The challenge is particularly severe for modes with low frequency and damping typical of long span bridges. One solution is to apply 'third generation' operational modal analysis procedures using Bayesian approaches in both the planning and analysis stages. The paper presents an exercise on the Jiangyin Yangtze River Bridge, a suspension bridge with a 1385 m main span. The exercise comprised planning of a test campaign to optimise the reliability of operational modal analysis, the deployment of a set of independent data acquisition units synchronised using precision oven controlled crystal oscillators and the subsequent identification of a set of modal parameters in terms of mean and variance errors. Although the bridge has had structural health monitoring technology installed since it was completed, this was the first full modal survey, aimed at identifying important features of the modal behaviour rather than providing fine resolution of mode shapes through the whole structure. Therefore, measurements were made in only the (south) tower, while torsional behaviour was identified by a single measurement using a pair of recorders across the carriageway.
The modal survey revealed a first lateral symmetric mode with natural frequency 0.0536 Hz with standard error ±3.6% and damping ratio 4.4% with standard error ±88%. First vertical mode is antisymmetric with frequency 0.11 Hz ± 1.2% and damping ratio 4.9% ± 41%. A significant and novel element of the exercise was planning of the measurement setups and their necessary duration linked to prior estimation of the precision of the frequency and damping estimates. The second novelty is the use of the multi-sensor precision synchronised acquisition without external time reference on a structure of this scale. The challenges of ambient vibration testing and modal identification in a complex environment are addressed leveraging on advances in practical implementation and scientific understanding of the problem.

  2. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    PubMed

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.

  3. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
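
    The two statistics advocated in this abstract follow directly from the empirical cumulative distribution of absolute errors. A minimal sketch (the error sample below is synthetic, chosen to be skewed and non-zero-centered as the abstract describes):

```python
import numpy as np

def ecdf_stats(errors, threshold, confidence=0.95):
    """Two ECDF-based benchmarking statistics:
    (1) the probability that a new absolute error falls below
        `threshold` (the ECDF evaluated at the threshold);
    (2) the absolute-error amplitude not exceeded at the chosen
        confidence level (an empirical quantile)."""
    abs_err = np.abs(np.asarray(errors, dtype=float))
    p_below = float(np.mean(abs_err < threshold))
    q_conf = float(np.quantile(abs_err, confidence))
    return p_below, q_conf

# Skewed, non-zero-centered synthetic error sample:
e = np.random.default_rng(2).gamma(2.0, 0.5, 10000) - 0.3
p, q95 = ecdf_stats(e, threshold=1.0)
```

    Unlike the mean signed or unsigned error, both quantities answer an end-user question directly: how likely is my next prediction to be within tolerance, and how wrong could it plausibly be.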

  4. Robust Mediation Analysis Based on Median Regression

    PubMed Central

    Yuan, Ying; MacKinnon, David P.

    2014-01-01

    Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
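
    Median (least-absolute-deviation) regression and the resulting indirect effect can be sketched with a simple iteratively reweighted least squares fit. This is an illustrative implementation, not the authors' code, and the simulated effect sizes are invented:

```python
import numpy as np

def median_regression(X, y, n_iter=50, eps=1e-6):
    """Least-absolute-deviation (median) regression fit by
    iteratively reweighted least squares. X must include an
    intercept column. (An illustrative sketch.)"""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)  # L1 reweighting
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta

# Simulated mediation model X -> M -> Y with heavy-tailed errors
# (effect sizes invented): a = 0.5, b = 0.4, indirect effect = 0.2.
rng = np.random.default_rng(3)
n = 500
x = rng.standard_normal(n)
m = 0.5 * x + rng.laplace(0.0, 1.0, n)
y = 0.4 * m + 0.2 * x + rng.laplace(0.0, 1.0, n)

a = median_regression(np.column_stack([np.ones(n), x]), m)[1]
b = median_regression(np.column_stack([np.ones(n), x, m]), y)[2]
indirect = a * b  # estimate of the indirect (mediated) effect
```

    The indirect effect is still the product of the two path coefficients, as in standard mediation analysis; only the fitting criterion changes from least squares to least absolute deviations, which is what confers robustness to heavy-tailed and skewed error distributions.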

  5. A stitch in time saves nine: external quality assessment rounds demonstrate improved quality of biomarker analysis in lung cancer

    PubMed Central

    Keppens, Cleo; Tack, Véronique; Hart, Nils ‘t; Tembuyser, Lien; Ryska, Ales; Pauwels, Patrick; Zwaenepoel, Karen; Schuuring, Ed; Cabillic, Florian; Tornillo, Luigi; Warth, Arne; Weichert, Wilko; Dequeker, Elisabeth

    2018-01-01

    Biomarker analysis has become routine practice in the treatment of non-small cell lung cancer (NSCLC). To ensure high-quality testing, participation in external quality assessment (EQA) schemes is essential. This article provides a longitudinal overview of the EQA performance for EGFR, ALK, and ROS1 analyses in NSCLC between 2012 and 2015. The four scheme years were organized by the European Society of Pathology according to the ISO 17043 standard. Participants were asked to analyze the provided tissue using their routine procedures. Analysis scores improved for individual laboratories upon participation in more EQA schemes, except for ROS1 immunohistochemistry (IHC). For EGFR analysis, scheme error rates were 18.8%, 14.1% and 7.5% in 2013, 2014 and 2015, respectively. For ALK testing, error rates decreased between 2012 and 2015 by 5.2%, 3.2% and 11.8% for the fluorescence in situ hybridization (FISH), FISH digital, and IHC subschemes, respectively. In contrast, for ROS1 the error rates increased between 2014 and 2015 for FISH and IHC by 3.2% and 9.3%. Technical failures decreased over the years for all three markers. Results show that EQA contributes to improved performance for most predictive biomarkers in NSCLC. Room for improvement is still present, especially for ROS1 analysis. PMID:29755669

  6. [Proximate analysis of straw by near infrared spectroscopy (NIRS)].

    PubMed

    Huang, Cai-jin; Han, Lu-jia; Liu, Xian; Yang, Zeng-ling

    2009-04-01

    Proximate analysis is one of the routine analysis procedures in the utilization of straw for biomass energy. The present paper studied the applicability of rapid proximate analysis of straw by near infrared spectroscopy (NIRS) technology, constructing the first NIRS models to predict the volatile matter and fixed carbon contents of straw. NIRS models were developed on a Foss 6500 spectrometer with spectra in the range of 1,108-2,492 nm to predict the contents of moisture, ash, volatile matter and fixed carbon in directly cut straw samples, and to predict ash, volatile matter and fixed carbon in dried milled straw samples. For the models based on directly cut straw samples, the determination coefficients of independent validation (R2v) and standard errors of prediction (SEP) were 0.92 and 0.76% for moisture, 0.94 and 0.84% for ash, 0.88 and 0.82% for volatile matter, and 0.75 and 0.65% for fixed carbon, respectively. For the models based on dried milled straw samples, they were 0.98 and 0.54% for ash, 0.95 and 0.57% for volatile matter, and 0.78 and 0.61% for fixed carbon, respectively. It was concluded that the NIRS models predict accurately enough to serve as an alternative analysis method; rapid and simultaneous analysis of multiple components can therefore be achieved by NIRS technology, decreasing the cost of proximate analysis for straw.
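
    Validation figures like the R2v and SEP reported here (and the SECV/SEP values in records 10 and 19) come from comparing predicted against reference values on a held-out set. A minimal sketch of the usual chemometric definitions (conventions vary; the SEP below is bias-corrected, which some authors do and others do not):

```python
import numpy as np

def validation_stats(y_ref, y_pred):
    """Coefficient of determination (R^2) and bias-corrected standard
    error of prediction (SEP) as commonly defined in chemometrics."""
    y_ref = np.asarray(y_ref, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_ref - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_ref - y_ref.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    bias = resid.mean()
    sep = np.sqrt(np.sum((resid - bias) ** 2) / (len(y_ref) - 1))
    return float(r2), float(sep)

# Invented reference/predicted values for illustration:
y_ref = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
y_pred = np.array([10.5, 11.8, 14.2, 15.6, 18.1])
r2, sep = validation_stats(y_ref, y_pred)
```

    Note that R^2 is dimensionless while SEP carries the units of the measured property, which is why a value like "0.92 and 0.76%" pairs a unitless coefficient with a percentage-point error.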

  7. Impact of the HERA I+II combined data on the CT14 QCD global analysis

    NASA Astrophysics Data System (ADS)

    Dulat, S.; Hou, T.-J.; Gao, J.; Guzzi, M.; Huston, J.; Nadolsky, P.; Pumplin, J.; Schmidt, C.; Stump, D.; Yuan, C.-P.

    2016-11-01

    A brief description of the impact of the recent HERA run I+II combination of inclusive deep inelastic scattering cross-section data on the CT14 global analysis of PDFs is given. The new CT14HERA2 PDFs at NLO and NNLO are illustrated. They employ the same parametrization used in the CT14 analysis, but with an additional shape parameter for describing the strange quark PDF. The HERA I+II data are reasonably well described by both the CT14 and CT14HERA2 PDFs, and the differences are smaller than the PDF uncertainties of the standard CT14 analysis. Both sets are acceptable when the error estimates are calculated in the CTEQ-TEA (CT) methodology, and the standard CT14 PDFs are recommended for continued use in the analysis of LHC measurements.

  8. Peripheral Quantitative Computed Tomography: Measurement Sensitivity in Persons With and Without Spinal Cord Injury

    PubMed Central

    Shields, Richard K.; Dudley-Javoroski, Shauna; Boaldin, Kathryn M.; Corey, Trent A.; Fog, Daniel B.; Ruen, Jacquelyn M.

    2012-01-01

    Objectives To determine (1) the error attributable to external tibia-length measurements by using peripheral quantitative computed tomography (pQCT) and (2) the effect these errors have on scan location and tibia trabecular bone mineral density (BMD) after spinal cord injury (SCI). Design Blinded comparison and criterion standard in matched cohorts. Setting Primary care university hospital. Participants Eight able-bodied subjects underwent tibia length measurement. A separate cohort of 7 men with SCI and 7 able-bodied age-matched male controls underwent pQCT analysis. Interventions Not applicable. Main Outcome Measures The projected worst-case tibia-length–measurement error translated into a pQCT slice placement error of ±3mm. We collected pQCT slices at the distal 4% tibia site, 3mm proximal and 3mm distal to that site, and then quantified BMD error attributable to slice placement. Results Absolute BMD error was greater for able-bodied than for SCI subjects (5.87mg/cm3 vs 4.5mg/cm3). However, the percentage error in BMD was larger for SCI than able-bodied subjects (4.56% vs 2.23%). Conclusions During cross-sectional studies of various populations, BMD differences up to 5% may be attributable to variation in limb-length–measurement error. PMID:17023249

  9. Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields

    PubMed Central

    Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne

    2015-01-01

    Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789

  10. Prediction of texture and colour of dry-cured ham by visible and near infrared spectroscopy using a fiber optic probe.

    PubMed

    García-Rey, R M; García-Olmo, J; De Pedro, E; Quiles-Zafra, R; Luque de Castro, M D

    2005-06-01

    The potential of visible and near infrared spectroscopy to predict the texture and colour of dry-cured ham samples was investigated. Sensory evaluation was performed on 117 boned and cross-sectioned dry-cured ham samples. Slices of approximately 4 cm thickness were cut, vacuum-packaged and kept under frozen storage until spectral analysis. Then, Biceps femoris muscle from the thawed slices was taken and scanned (400-2200 nm) using a fiber optic probe. Exploratory analysis using principal component analysis showed two ham groups according to the presence or absence of defects. A K-nearest-neighbours classifier was then used to classify dry-cured hams into defective or non-defective classes. The overall accuracy of the classification based on pastiness was 88.5%, while that based on colour was 79.7%. Partial least squares regression was used to formulate prediction equations for pastiness and colour. The correlation coefficients of calibration and cross-validation were 0.97 and 0.86 for the optimal equation predicting pastiness, and 0.82 and 0.69 for the optimal equation predicting colour. The standard error of cross-validation for predicting pastiness and colour was between 1 and 2 times the standard deviation of the reference method (the error involved in the sensory evaluation by the experts). The magnitude of this error demonstrates the good precision of the methods for predicting pastiness and colour. Furthermore, the samples were classified into defective or non-defective classes, with correct classification of 94.2% according to pasty texture evaluation and 75.7% according to colour evaluation.

  11. Error Estimation of Pathfinder Version 5.3 SST Level 3C Using Three-way Error Analysis

    NASA Astrophysics Data System (ADS)

    Saha, K.; Dash, P.; Zhao, X.; Zhang, H. M.

    2017-12-01

    Sea Surface Temperature (SST) is one of the essential climate variables for monitoring, detecting, and attributing climate change. A long-term record of global SSTs is available, from early ship observations to modern measurements based on in-situ as well as space-based sensors (satellite/aircraft). Satellite-derived SSTs carry inaccuracies attributable to errors in spacecraft navigation, sensor calibration, sensor noise, retrieval algorithms, and leakage due to residual clouds. It is therefore important to estimate accurate errors in satellite-derived SST products to obtain reliable results in their applications. Generally, for validation purposes, satellite-derived SST products are compared against in-situ SSTs, which themselves have inaccuracies due to spatio-temporal inhomogeneity between in-situ and satellite measurements. The standard deviation of their difference fields usually has contributions from both the satellite and the in-situ measurements. A true validation of any geophysical variable would require knowledge of the "true" value of that variable; a one-to-one comparison of satellite-based SST with in-situ data therefore does not provide the real error in the satellite SST, leaving ambiguity due to errors in the in-situ measurements and their collocation differences. Triple collocation (TC), or three-way error analysis, uses three mutually independent error-prone measurements to estimate the root-mean-square error (RMSE) associated with each measurement with a high level of accuracy, without treating any one system as a perfectly observed "truth". In this study we estimate the absolute random errors associated with the Pathfinder Version 5.3 Level-3C SST Climate Data Record. Along with the in-situ SST data, the third dataset used for this analysis is the AATSR Reprocessing for Climate (ARC) dataset for the corresponding period. All three SST observations are collocated, and the statistics of the differences between each pair are estimated. Instead of a traditional TC analysis, we implement the Extended Triple Collocation (ETC) approach to estimate the correlation coefficient of each measurement system with respect to the unknown target variable, along with the RMSE.
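    The TC/ETC estimators described in this abstract can be written directly in terms of the pairwise covariances of the three collocated datasets. The sketch below is a minimal illustration under the standard TC assumptions (linear systems with mutually independent, zero-mean errors); the function name and synthetic data are hypothetical and not taken from the Pathfinder analysis itself.

```python
import numpy as np

def etc_errors(x, y, z):
    """Triple-collocation RMSE of each system plus the ETC estimate of its
    correlation with the unobserved truth. Minimal sketch; assumes linear
    systems with mutually independent, zero-mean errors."""
    c = np.cov(np.vstack([x, y, z]))
    rmse, rho = [], []
    for i, j, k in [(0, 1, 2), (1, 0, 2), (2, 0, 1)]:
        # Error variance = own variance minus the shared (true) signal variance,
        # recovered from the pairwise covariances.
        rmse.append(np.sqrt(c[i, i] - c[i, j] * c[i, k] / c[j, k]))
        # ETC: correlation of system i with the unknown target variable.
        rho.append(np.sqrt(c[i, j] * c[i, k] / (c[i, i] * c[j, k])))
    return rmse, rho

# Synthetic check: a common truth plus independent noise of known size.
rng = np.random.default_rng(0)
truth = rng.normal(20.0, 2.0, 200_000)
sst_a = truth + rng.normal(0.0, 0.3, truth.size)   # e.g. satellite product
sst_b = truth + rng.normal(0.0, 0.5, truth.size)   # e.g. in-situ record
sst_c = truth + rng.normal(0.0, 0.4, truth.size)   # e.g. third dataset
rmse, rho = etc_errors(sst_a, sst_b, sst_c)
```

    The recovered RMSE values approach the injected noise levels (0.3, 0.5, 0.4) as the sample grows, without any dataset being designated the reference.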

  12. Near-infrared spectroscopic analysis of the breaking force of extended-release matrix tablets prepared by roller-compaction: influence of plasticizer levels and sintering temperature.

    PubMed

    Dave, Vivek S; Fahmy, Raafat M; Hoag, Stephen W

    2015-06-01

    The aim of this study was to investigate the feasibility of near-infrared (NIR) spectroscopy for the determination of the influence of sintering temperature and plasticizer levels on the breaking force of extended-release matrix tablets prepared via roller-compaction. Six formulations using theophylline as a model drug, Eudragit® RL PO or Eudragit® RS PO as a matrix former and three levels of TEC (triethyl citrate) as a plasticizer were prepared. The powder blend was roller compacted using a fixed roll-gap of 1.5 mm, a feed screw speed to roller speed ratio of 5:1 and a roll pressure of 4 MPa. The granules, after removing fines, were compacted into tablets on a Stokes B2 rotary tablet press at a compression force of 7 kN. The tablets were thermally treated at different temperatures (room temperature, 50, 75 and 100 °C) for 5 h. These tablets were scanned in reflectance mode in the wavelength range of 400-2500 nm and were evaluated for breaking force. Tablet breaking force significantly increased with increasing plasticizer levels and with increases in the sintering temperature. An increase in tablet hardness produced an upward shift (increase in absorbance) in the NIR spectra. Principal component analysis (PCA) of the spectra was able to distinguish samples with different plasticizer levels and sintering temperatures. In addition, a 9-factor partial least squares (PLS) regression model for tablets containing Eudragit® RL PO had an r(2) of 0.9797, a standard error of calibration of 0.6255 and a standard error of cross validation (SECV) of 0.7594. Similar analysis of tablets containing Eudragit® RS PO showed an r(2) of 0.9831, a standard error of calibration of 0.9711 and an SECV of 1.192.

  13. Validity and reliability of dental age estimation of teeth root translucency based on digital luminance determination.

    PubMed

    Ramsthaler, Frank; Kettner, Mattias; Verhoff, Marcel A

    2014-01-01

    In forensic anthropological casework, estimating age-at-death is key to profiling unknown skeletal remains. The aim of this study was to examine the reliability of a new, simple, fast, and inexpensive digital odontological method for age-at-death estimation. The method is based on the original Lamendin method, which is a widely used technique in the repertoire of odontological aging methods in forensic anthropology. We examined 129 single-root teeth employing a digital camera and imaging software for the measurement of the luminance of the teeth's translucent root zone. Variability in luminance detection was evaluated using statistical technical error of measurement analysis. The method revealed stable values largely unrelated to observer experience, whereas the requisite formulas proved to be camera-specific and should therefore be generated for an individual recording setting based on samples of known chronological age. Multiple regression analysis showed a highly significant influence of the coefficients of the variables "arithmetic mean" and "standard deviation" of luminance for the regression formula. For the use of this primary multivariate equation for age-at-death estimation in casework, a standard error of the estimate of 6.51 years was calculated. Step-by-step reduction of the number of embedded variables to linear regression analysis employing the best contributor "arithmetic mean" of luminance yielded a regression equation with a standard error of 6.72 years (p < 0.001). The results of this study not only support the premise of root translucency as an age-related phenomenon, but also demonstrate that translucency reflects a number of other influencing factors in addition to age. This new digital measuring technique of the zone of dental root luminance can broaden the array of methods available for estimating chronological age, and furthermore facilitate measurement and age classification due to its low dependence on observer experience.
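    The aging formulas above are regression equations reported together with a standard error of the estimate (SEE). As a minimal sketch of how that quantity falls out of an ordinary least-squares fit, the helper below computes slope, intercept, and SEE (root of residual sum of squares over n − 2, since two parameters are fitted); the function name and data are hypothetical, not the study's.

```python
import math

def fit_with_see(x, y):
    """Simple linear regression plus the standard error of the estimate
    (sqrt of residual SS over n - 2). Illustrative sketch only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                # slope
    a = my - b * mx              # intercept
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    see = math.sqrt(rss / (n - 2))
    return a, b, see
```

    An SEE of 6.51 years, as reported, means roughly two-thirds of casework estimates from such an equation would fall within ±6.51 years of the true age, under the usual normal-residual assumption.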

  14. Chair rise transfer detection and analysis using a pendant sensor: an algorithm for fall risk assessment in older people.

    PubMed

    Zhang, Wei; Regterschot, G Ruben H; Wahle, Fabian; Geraedts, Hilde; Baldus, Heribert; Zijlstra, Wiebren

    2014-01-01

    Falls result in substantial disability, morbidity, and mortality among older people. Early detection of fall risks and timely intervention can prevent falls and injuries due to falls. Simple field tests, such as repeated chair rise, are used in clinical assessment of fall risks in older people. Development of on-body sensors introduces potential beneficial alternatives for traditional clinical methods. In this article, we present a pendant sensor based chair rise detection and analysis algorithm for fall risk assessment in older people. The recall and the precision of the transfer detection were 85% and 87% in standard protocol, and 61% and 89% in daily life activities. Estimation errors of chair rise performance indicators: duration, maximum acceleration, peak power and maximum jerk were tested in over 800 transfers. Median estimation error in transfer peak power ranged from 1.9% to 4.6% in various tests. Among all the performance indicators, maximum acceleration had the lowest median estimation error of 0% and duration had the highest median estimation error of 24% over all tests. The developed algorithm might be feasible for continuous fall risk assessment in older people.

  15. A multisite validation of whole slide imaging for primary diagnosis using standardized data collection and analysis.

    PubMed

    Wack, Katy; Drogowski, Laura; Treloar, Murray; Evans, Andrew; Ho, Jonhan; Parwani, Anil; Montalto, Michael C

    2016-01-01

    Text-based reporting and manual arbitration for whole slide imaging (WSI) validation studies are labor intensive and do not allow for consistent, scalable, and repeatable data collection or analysis. The objective of this study was to establish a method of data capture and analysis using standardized codified checklists and predetermined synoptic discordance tables and to use these methods in a pilot multisite validation study. Fifteen case report form checklists were generated from the College of American Pathology cancer protocols. Prior to data collection, all hypothetical pairwise comparisons were generated, and a level of harm was determined for each possible discordance. Four sites with four pathologists each generated 264 independent reads of 33 cases. Preestablished discordance tables were applied to determine site-by-site and pooled accuracy, intra-reader/intra-modality, and inter-reader/intra-modality error rates. Over 10,000 hypothetical pairwise comparisons were evaluated and assigned harm in discordance tables. The average difference in error rates between WSI and glass, as compared to ground truth, was 0.75% with an upper bound of 3.23% (95% confidence interval). Major discordances occurred on challenging cases, regardless of modality. The average inter-reader agreement across sites for glass was 76.5% (weighted kappa of 0.68) and for digital it was 79.1% (weighted kappa of 0.72). These results demonstrate the feasibility and utility of employing standardized synoptic checklists and predetermined discordance tables to gather consistent, comprehensive diagnostic data for WSI validation studies. This method of data capture and analysis can be applied in large-scale multisite WSI validations.
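    The weighted kappa values reported for inter-reader agreement can be computed from a confusion matrix and a table of agreement weights. The sketch below is a minimal illustration with hypothetical inputs (identity weights reduce it to Cohen's unweighted kappa; `linear_weights` builds the standard linear agreement weights).

```python
def weighted_kappa(conf, weights):
    """Weighted kappa from a k-by-k confusion matrix and agreement
    weights (1.0 = full agreement). Minimal sketch; inputs are
    hypothetical, not the study's data."""
    k = len(conf)
    n = sum(map(sum, conf))
    row = [sum(conf[i]) for i in range(k)]
    col = [sum(conf[i][j] for i in range(k)) for j in range(k)]
    # Observed and chance-expected weighted agreement.
    po = sum(weights[i][j] * conf[i][j] for i in range(k) for j in range(k)) / n
    pe = sum(weights[i][j] * row[i] * col[j] for i in range(k) for j in range(k)) / n ** 2
    return (po - pe) / (1 - pe)

def linear_weights(k):
    """Standard linear agreement weights: w_ij = 1 - |i - j| / (k - 1)."""
    return [[1 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
```

    With identity weights, perfect diagonal agreement yields kappa = 1 and chance-level agreement yields kappa = 0, matching the usual interpretation of the 0.68 and 0.72 values above as substantial agreement.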

  16. The proposed coding standard at GSFC

    NASA Technical Reports Server (NTRS)

    Morakis, J. C.; Helgert, H. J.

    1977-01-01

    As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.

  17. Repeatable source, site, and path effects on the standard deviation for empirical ground-motion prediction models

    USGS Publications Warehouse

    Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.

    2011-01-01

    In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
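    Once the variance components have been estimated, the percent reduction in the standard deviation follows from dropping the repeatable components out of the total variance before taking the square root. A minimal sketch, with hypothetical component names and values (not the paper's estimates):

```python
import math

def sigma_reduction(components, removed):
    """Percent reduction in total standard deviation when repeatable
    variance components are treated as known (removed from the aleatory
    variance). Component names/values here are hypothetical."""
    total = math.sqrt(sum(components.values()))
    kept = math.sqrt(sum(v for k, v in components.items() if k not in removed))
    return 100.0 * (1.0 - kept / total)

# Illustrative variance components (not the study's estimates):
comps = {"event": 0.10, "site": 0.05, "path": 0.20, "residual": 0.15}
single_site = sigma_reduction(comps, {"site"})          # drop site-to-site term
single_path = sigma_reduction(comps, {"site", "path"})  # drop site and path terms
```

    Because variances add but standard deviations do not, removing the large path term shrinks sigma far more than removing the site term, mirroring the 39%-47% versus 9%-14% contrast reported above.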

  18. System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.

    2011-01-01

    Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time-history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of the unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.

  19. Accounting for dropout bias using mixed-effects models.

    PubMed

    Mallinckrodt, C H; Clark, W S; David, S R

    2001-01-01

    Treatment effects are often evaluated by comparing change over time in outcome measures. However, valid analyses of longitudinal data can be problematic when subjects discontinue (dropout) prior to completing the study. This study assessed the merits of likelihood-based repeated measures analyses (MMRM) compared with fixed-effects analysis of variance where missing values were imputed using the last observation carried forward approach (LOCF) in accounting for dropout bias. Comparisons were made in simulated data and in data from a randomized clinical trial. Subject dropout was introduced in the simulated data to generate ignorable and nonignorable missingness. Estimates of treatment group differences in mean change from baseline to endpoint from MMRM were, on average, markedly closer to the true value than estimates from LOCF in every scenario simulated. Standard errors and confidence intervals from MMRM accurately reflected the uncertainty of the estimates, whereas standard errors and confidence intervals from LOCF underestimated uncertainty.
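    The LOCF imputation that this study compares against likelihood-based repeated measures (MMRM) is simple enough to state in a few lines. The sketch below is illustrative only; a real MMRM analysis would instead fit a mixed-effects model to all observed data rather than impute.

```python
def locf(series):
    """Last observation carried forward: each missing value (None) is
    replaced by the most recent observed value. Minimal sketch of the
    single-imputation scheme contrasted with MMRM in the abstract."""
    out, last = [], None
    for v in series:
        if v is not None:
            last = v
        out.append(last)
    return out

# A subject observed at baseline and week 2, then dropping out:
completed = locf([12.0, None, 9.0, None, None])  # -> [12.0, 12.0, 9.0, 9.0, 9.0]
```

    Carrying the week-2 value to the endpoint assumes the subject's outcome froze at dropout, which is exactly the bias, and the understated uncertainty, that the simulations above found MMRM to avoid.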

  20. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

    Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects has a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
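    The Monte Carlo side of this comparison can be sketched briefly: repeatedly draw a panel of subject attenuations, compute the rating statistic for each panel, and take the spread of the ratings as the rating error. The rating used below (mean minus two standard deviations) is a simplified, hypothetical stand-in for the full NRR formula, which also involves octave-band noise spectra.

```python
import random
import statistics

def rating(attenuations):
    """Simplified label-rating statistic: mean attenuation minus two
    standard deviations (hypothetical stand-in for the full NRR)."""
    return statistics.mean(attenuations) - 2 * statistics.stdev(attenuations)

def rating_error_mc(mu, sigma, n_subjects, n_trials=5000, seed=1):
    """Monte Carlo estimate of the standard deviation of the rating
    across repeated panels of n_subjects drawn from N(mu, sigma)."""
    rng = random.Random(seed)
    ratings = [
        rating([rng.gauss(mu, sigma) for _ in range(n_subjects)])
        for _ in range(n_trials)
    ]
    return statistics.stdev(ratings)
```

    As expected from propagation of errors, the simulated rating error shrinks roughly like the inverse square root of the panel size.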

  1. 78 FR 17155 - Standards for the Growing, Harvesting, Packing, and Holding of Produce for Human Consumption...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-20

    ...The Food and Drug Administration (FDA or we) is correcting the preamble to a proposed rule that published in the Federal Register of January 16, 2013. That proposed rule would establish science-based minimum standards for the safe growing, harvesting, packing, and holding of produce, meaning fruits and vegetables grown for human consumption. FDA proposed these standards as part of our implementation of the FDA Food Safety Modernization Act. The document published with several technical errors, including some errors in cross references, as well as several errors in reference numbers cited throughout the document. This document corrects those errors. We are also placing a corrected copy of the proposed rule in the docket.

  2. Characterization and Uncertainty Analysis of a Reference Pressure Measurement System for Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Amer, Tahani; Tripp, John; Tcheng, Ping; Burkett, Cecil; Sealey, Bradley

    2004-01-01

    This paper presents the calibration results and uncertainty analysis of a high-precision reference pressure measurement system currently used in wind tunnels at the NASA Langley Research Center (LaRC). Sensors, calibration standards, and measurement instruments are subject to errors due to aging, drift with time, environmental effects, transportation, the mathematical model, the calibration experimental design, and other factors. Errors occur at every link in the chain of measurements and data reduction from the sensor to the final computed results. At each link of the chain, bias and precision uncertainties must be separately estimated for facility use, and are combined to produce overall calibration and prediction confidence intervals for the instrument, typically at a 95% confidence level. The uncertainty analysis and calibration experimental designs used herein, based on techniques developed at LaRC, employ replicated experimental designs for efficiency, separate estimation of bias and precision uncertainties, and detection of significant parameter drift with time. Final results are presented, including calibration confidence intervals and prediction intervals given as functions of the applied inputs rather than as a fixed percentage of the full-scale value. System uncertainties are propagated beginning with the initial reference pressure standard, to the calibrated instrument as a working standard in the facility. Among the several parameters that can affect the overall results are operating temperature, atmospheric pressure, humidity, and facility vibration. Effects of factors such as initial zeroing and temperature are investigated. The effects of the identified parameters on system performance and accuracy are discussed.

  3. A comparison of registration errors with imageless computer navigation during MIS total knee arthroplasty versus standard incision total knee arthroplasty: a cadaveric study.

    PubMed

    Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H

    2015-01-01

    Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment, although it is limited by the error during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and an MIS surgical approach. We hypothesized that performing the registration process via an MIS approach would increase the registration process error. Five fresh frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration process through the MIS approach was not associated with higher error compared to the standard approach in the alignment parameters of interest. This rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry a higher risk of component malalignment due to registration process error. Navigation can be used during MIS TKR to improve alignment without reduced accuracy due to the approach.

  4. Implementation of a standardized electronic tool improves compliance, accuracy, and efficiency of trainee-to-trainee patient care handoffs after complex general surgical oncology procedures.

    PubMed

    Clarke, Callisia N; Patel, Sameer H; Day, Ryan W; George, Sobha; Sweeney, Colin; Monetes De Oca, Georgina Avaloa; Aiss, Mohamed Ait; Grubbs, Elizabeth G; Bednarski, Brian K; Lee, Jeffery E; Bodurka, Diane C; Skibber, John M; Aloia, Thomas A

    2017-03-01

    Duty-hour regulations have increased the frequency of trainee-trainee patient handoffs. Each handoff creates a potential source for communication errors that can lead to near-miss and patient-harm events. We investigated the utility, efficacy, and trainee experience associated with implementation of a novel, standardized, electronic handoff system. We conducted a prospective intervention study of trainee-trainee handoffs of inpatients undergoing complex general surgical oncology procedures at a large tertiary institution. Preimplementation data were measured using trainee surveys and direct observation and by tracking delinquencies in charting. A standardized electronic handoff tool was created in a research electronic data capture (REDCap) database using the previously validated I-PASS methodology (illness severity, patient summary, action list, situational awareness and contingency planning, and synthesis). Electronic handoff was augmented by direct communication via phone or face-to-face interaction for inpatients deemed "watcher" or "unstable." Postimplementation handoff compliance, communication errors, and trainee work flow were measured and compared to preimplementation values using standard statistical analysis. A total of 474 handoffs (203 preintervention and 271 postintervention) were observed over the study period; 86 handoffs involved patients admitted to the surgical intensive care unit, 344 patients admitted to the surgical stepdown unit, and 44 patients on the surgery ward. Implementation of the structured electronic tool resulted in an increase in trainee handoff compliance from 73% to 96% (P < .001) and decreased errors in communication by 50% (P = .044) while improving trainee efficiency and workflow. A standardized electronic tool augmented by direct communication for higher acuity patients can improve compliance, accuracy, and efficiency of handoff communication between surgery trainees. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Evaluation of Hand Written and Computerized Out-Patient Prescriptions in Urban Part of Central Gujarat.

    PubMed

    Joshi, Anuradha; Buch, Jatin; Kothari, Nitin; Shah, Nishal

    2016-06-01

    Prescription order is an important therapeutic transaction between physician and patient. A good-quality prescription is an extremely important factor for minimizing errors in dispensing medication, and it should adhere to guidelines for prescription writing for the benefit of the patient. The aim was to evaluate the frequency and type of prescription errors in outpatient prescriptions and to determine whether prescription writing abides by WHO standards. A cross-sectional observational study was conducted at Anand city. Allopathic private practitioners of different specialities practising at Anand city were included in the study. Collection of prescriptions was started a month after consent was obtained, to minimize bias in prescription writing. The prescriptions were collected from local pharmacy stores of Anand city over a period of six months. Prescriptions were analysed for errors in standard information, according to the WHO Guide to Good Prescribing. Descriptive analysis was performed to estimate the frequency of errors; data were expressed as numbers and percentages. In total, 749 (549 handwritten and 200 computerized) prescriptions were collected. Abundant omission errors were identified in handwritten prescriptions: the OPD number was mentioned in 6.19%, the patient's age in 25.50%, gender in 17.30%, address in 9.29% and weight in 11.29%, while among drug items only 2.97% of drugs were prescribed by generic name. Route and dosage form were mentioned in 77.35%-78.15%, dose in 47.25%, unit in 13.91%, regimens in 72.93% and signa (directions for drug use) in 62.35%. In total, 4384 errors in the 549 handwritten prescriptions and 501 errors in the 200 computerized prescriptions were found in clinician and patient details, while in drug item details the total numbers of errors identified were 5015 and 621 in handwritten and computerized prescriptions, respectively. As compared to handwritten prescriptions, computerized prescriptions appeared to be associated with relatively lower rates of error. Since outpatient prescription errors are abundant and often occur in handwritten prescriptions, prescribers need to adapt themselves to computerized prescription order entry in their daily practice.

  6. Role-modeling and medical error disclosure: a national survey of trainees.

    PubMed

    Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani

    2014-03-01

    To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.

  7. SU-E-T-88: Comprehensive Automated Daily QA for Hypo- Fractionated Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGuinness, C; Morin, O

    2014-06-01

    Purpose: The trend towards more SBRT treatments with fewer high dose fractions places increased importance on daily QA. Patient plan specific QA with 3%/3mm gamma analysis and daily output constancy checks may not be enough to guarantee the level of accuracy required for SBRT treatments. But increasing the already extensive amount of QA procedures that are required is a daunting proposition. We performed a feasibility study for more comprehensive automated daily QA that could improve the diagnostic capabilities of QA without increasing workload. Methods: We performed the study on a Siemens Artiste linear accelerator using the integrated flat panel EPID. We included square fields, a picket fence, overlap and representative IMRT fields to measure output, flatness, symmetry, beam center, and percent difference from the standard. We also imposed a set of machine errors: MLC leaf position, machine output, and beam steering to compare with the standard. Results: Daily output was consistent within +/− 1%. Change in steering current by 1.4% and 2.4% resulted in a 3.2% and 6.3% change in flatness. 1 and 2mm MLC leaf offset errors were visibly obvious in difference plots, but passed a 3%/3mm gamma analysis. A simple test of transmission in a picket fence can catch a leaf offset error of a single leaf by 1mm. The entire morning QA sequence is performed in less than 30 minutes and images are automatically analyzed. Conclusion: Automated QA procedures could be used to provide more comprehensive information about the machine with less time and human involvement. We have also shown that other simple tests are better able to catch MLC leaf position errors than a 3%/3mm gamma analysis commonly used for IMRT and modulated arc treatments. Finally, this information could be used to watch trends of the machine and predict problems before they lead to costly machine downtime.
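    The abstract's observation that small MLC offsets pass a 3%/3mm gamma analysis can be illustrated with a minimal 1-D discrete version of the gamma index. This sketch is hypothetical and greatly simplified: real implementations interpolate and search over continuous 2-D/3-D positions, and the tolerances and profiles below are illustrative only.

```python
import math

def gamma_1d(ref, meas, dx, dose_tol, dist_tol):
    """Minimal 1-D discrete gamma index: for each reference point, take
    the minimum over measured points of the combined dose/distance
    metric. Tolerances are in the same units as dose and position."""
    gammas = []
    for i, r in enumerate(ref):
        best = min(
            math.sqrt(((m - r) / dose_tol) ** 2 + ((j - i) * dx / dist_tol) ** 2)
            for j, m in enumerate(meas)
        )
        gammas.append(best)
    return gammas

# A 1 mm spatial shift with a 3 mm distance tolerance passes easily
# (max gamma = 1/3 < 1), consistent with gamma missing small MLC offsets.
shifted = gamma_1d([0, 0, 1, 0, 0], [0, 1, 0, 0, 0],
                   dx=1.0, dose_tol=0.03, dist_tol=3.0)
```

    The distance term forgives any shift well inside `dist_tol`, which is why a per-leaf transmission test (as in the picket fence check above) is more sensitive to 1 mm leaf offsets than a gamma pass rate.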

  8. Implementation errors in the GingerALE Software: Description and recommendations.

    PubMed

    Eickhoff, Simon B; Laird, Angela R; Fox, P Mickle; Lancaster, Jack L; Fox, Peter T

    2017-01-01

    Neuroscience imaging is a burgeoning, highly sophisticated field whose growth has been fostered by grant-funded, freely distributed software libraries that perform voxel-wise analyses in anatomically standardized three-dimensional space on multi-subject, whole-brain, primary datasets. Despite the ongoing advances made using these non-commercial computational tools, the replicability of individual studies is an acknowledged limitation. Coordinate-based meta-analysis offers a practical solution to this limitation and, consequently, plays an important role in filtering and consolidating the enormous corpus of functional and structural neuroimaging results reported in the peer-reviewed literature. In both primary data and meta-analytic neuroimaging analyses, correction for multiple comparisons is a complex but critical step for ensuring statistical rigor. Reports of errors in multiple-comparison corrections in primary-data analyses have recently appeared. Here, we report two such errors in GingerALE, a widely used, US National Institutes of Health (NIH)-funded, freely distributed software package for coordinate-based meta-analysis. These errors have given rise to published reports with more liberal statistical inferences than were specified by the authors. The intent of this technical report is threefold. First, we inform authors who used GingerALE of these errors so that they can take appropriate actions including re-analyses and corrective publications. Second, we seek to exemplify and promote an open approach to error management. Third, we discuss the implications of these and similar errors in a scientific environment dependent on third-party software. Hum Brain Mapp 38:7-11, 2017. © 2016 Wiley Periodicals, Inc.

  9. Extracting latent brain states--Towards true labels in cognitive neuroscience experiments.

    PubMed

    Porbadnigk, Anne K; Görnitz, Nico; Sannelli, Claudia; Binder, Alexander; Braun, Mikio; Kloft, Marius; Müller, Klaus-Robert

    2015-10-15

    Neuroscientific data is typically analyzed based on the behavioral response of the participant. However, the errors made may or may not be in line with the underlying neural processing. In particular, in experiments with time pressure, or in studies where the threshold of perception is measured, the error distribution deviates from uniformity due to the structure of the underlying experimental set-up. When we base our analysis on the behavioral labels, as is usually done, we ignore this problem of systematic and structured (non-uniform) label noise and are likely to arrive at wrong conclusions in our data analysis. This paper contributes a remedy for this important scenario: we present a novel approach for (a) measuring label noise and (b) removing structured label noise. We demonstrate its usefulness for EEG data analysis using a standard d2 test for visual attention (N=20 participants). Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Generating a Magellanic star cluster catalog with ASteCA

    NASA Astrophysics Data System (ADS)

    Perren, G. I.; Piatti, A. E.; Vázquez, R. A.

    2016-08-01

    An increasing number of software tools have been employed in recent years for the automated or semi-automated processing of astronomical data. The main advantages of using these tools over a standard by-eye analysis include speed (particularly for large databases), homogeneity, reproducibility, and precision. At the same time, they enable a statistically correct study of the uncertainties associated with the analysis, in contrast with manually set errors or the still widespread practice of simply not assigning errors. We present a catalog comprising 210 star clusters located in the Large and Small Magellanic Clouds, observed with Washington photometry. Their fundamental parameters were estimated through a homogeneous, automated, and completely unassisted process via the Automated Stellar Cluster Analysis package (ASteCA). Our results are compared with two types of studies of these clusters: one where the photometry is the same, and another where the photometric system is different from that employed by ASteCA.

  11. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.; Mohamed, Heba M.

    2016-01-01

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS), and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated, and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA), and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration, and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients.
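    The figures of merit named in this abstract (percentage recovery, root mean square error of calibration, standard error of prediction) have common textbook definitions that can be sketched as below; the concentration values are hypothetical, and the exact formulas used by the authors may differ in detail:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error (of calibration or prediction)."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((p - a) ** 2)))

def sep(actual, predicted):
    """Standard error of prediction: SD of residuals about their mean bias."""
    residuals = np.asarray(predicted, float) - np.asarray(actual, float)
    return float(np.std(residuals, ddof=1))

def recovery_percent(actual, predicted):
    """Mean percentage recovery, a common accuracy figure of merit."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(p / a) * 100.0)

# Hypothetical taken-vs-found concentrations for one analyte
taken = [10.0, 20.0, 30.0, 40.0]
found = [10.1, 19.8, 30.3, 39.9]
print(rmse(taken, found), sep(taken, found), recovery_percent(taken, found))
```

Recoveries near 100% with a small RMSE and SEP are what "good analytical performance" means quantitatively in such validation studies.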

  12. Clinical implications and economic impact of accuracy differences among commercially available blood glucose monitoring systems.

    PubMed

    Budiman, Erwin S; Samant, Navendu; Resch, Ansgar

    2013-03-01

    Despite accuracy standards, there are performance differences among commercially available blood glucose monitoring (BGM) systems. The objective of this analysis was to assess the potential clinical and economic impact of accuracy differences of various BGM systems using a modeling approach. We simulated additional risk of hypoglycemia due to blood glucose (BG) measurement errors of five different BGM systems based on results of a real-world accuracy study, while retaining other sources of glycemic variability. Using data from published literature, we estimated an annual additional number of required medical interventions as a result of hypoglycemia. We based our calculations on patients with type 1 diabetes mellitus (T1DM) and T2DM requiring multiple daily injections (MDIs) of insulin in a U.S. health care system. We estimated additional costs attributable to treatment of severe hypoglycemic episodes resulting from BG measurement errors. Results from our model predict an annual difference of approximately 296,000 severe hypoglycemic episodes from BG measurement errors for T1DM (105,000 for T2DM MDI) patients for the estimated U.S. population of 958,800 T1DM and 1,353,600 T2DM MDI patients, using the least accurate BGM system versus patients using the most accurate system in a U.S. health care system. This resulted in additional direct costs of approximately $339 million for T1DM and approximately $121 million for T2DM MDI patients per year. Our analysis shows that error patterns over the operating range of a BGM meter may lead to relevant clinical and economic outcome differences that may not be reflected in a common accuracy metric or standard. Further research is necessary to validate the findings of this model-based approach. © 2013 Diabetes Technology Society.

  14. A comparison of volume clamp method-based continuous noninvasive cardiac output (CNCO) measurement versus intermittent pulmonary artery thermodilution in postoperative cardiothoracic surgery patients.

    PubMed

    Wagner, Julia Y; Körner, Annmarie; Schulte-Uentrop, Leonie; Kubik, Mathias; Reichenspurner, Hermann; Kluge, Stefan; Reuter, Daniel A; Saugel, Bernd

    2018-04-01

    The CNAP technology (CNSystems Medizintechnik AG, Graz, Austria) allows continuous noninvasive arterial pressure waveform recording based on the volume clamp method and estimation of cardiac output (CO) by pulse contour analysis. We compared CNAP-derived CO measurements (CNCO) with intermittent invasive CO measurements (pulmonary artery catheter; PAC-CO) in postoperative cardiothoracic surgery patients. In 51 intensive care unit patients after cardiothoracic surgery, we measured PAC-CO (criterion standard) and CNCO at three different time points. We conducted two separate comparative analyses: (1) CNCO auto-calibrated to biometric patient data (CNCObio) versus PAC-CO and (2) CNCO calibrated to the first simultaneously measured PAC-CO value (CNCOcal) versus PAC-CO. The agreement between the two methods was statistically assessed by Bland-Altman analysis and the percentage error. In a subgroup of patients, a passive leg raising maneuver was performed for clinical indications, and we present the changes in PAC-CO and CNCO in four-quadrant plots (exclusion zone 0.5 L/min) in order to evaluate the trending ability of CNCO. The mean difference between CNCObio and PAC-CO was +0.5 L/min (standard deviation ± 1.3 L/min; 95% limits of agreement -1.9 to +3.0 L/min). The percentage error was 49%. The concordance rate was 100%. For CNCOcal, the mean difference was -0.3 L/min (±0.5 L/min; -1.2 to +0.7 L/min) with a percentage error of 19%. In this clinical study in cardiothoracic surgery patients, CNCOcal showed good agreement when compared with PAC-CO. For CNCObio, we observed a higher percentage error and good trending ability (concordance rate 100%).
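    The Bland-Altman statistics and the percentage error used in this comparison can be sketched as follows; the paired CO readings are hypothetical, and the percentage-error definition (1.96 × SD of the differences over the mean reference CO) follows the commonly used Critchley criterion rather than necessarily matching the authors' software:

```python
import numpy as np

def bland_altman(reference, test):
    """Bias (mean difference), SD of differences, and 95% limits of agreement."""
    diff = np.asarray(test, float) - np.asarray(reference, float)
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

def percentage_error(reference, test):
    """Percentage error: 1.96 x SD of the differences divided by the
    mean reference CO (the Critchley-Critchley criterion)."""
    ref = np.asarray(reference, float)
    sd = (np.asarray(test, float) - ref).std(ddof=1)
    return float(1.96 * sd / ref.mean() * 100.0)

# Hypothetical paired CO readings (L/min): reference PAC-CO vs CNCO
pac_co = [4.0, 5.0, 6.0, 5.5]
cnco = [4.5, 5.2, 6.4, 5.3]
bias, sd, loa = bland_altman(pac_co, cnco)
print(round(bias, 3), round(percentage_error(pac_co, cnco), 1))
```

A percentage error below about 30% is the conventional acceptance threshold for CO trending methods, which is why the abstract reports 19% as good agreement and 49% as poor.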

  15. Analysis of S-box in Image Encryption Using Root Mean Square Error Method

    NASA Astrophysics Data System (ADS)

    Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan

    2012-07-01

    The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving, and many variants appear in the literature, including the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence for distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to the existing algebraic and statistical analyses already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of root mean square error analysis in statistics has proven effective in determining the difference between original and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes.
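    The underlying metric is just the RMSE between the plaintext image and its encrypted counterpart: the larger the RMSE, the more thoroughly the S-box has decorrelated the image. A minimal sketch with random arrays standing in for real images:

```python
import numpy as np

def image_rmse(original, processed):
    """RMSE between two equally sized images; for encryption analysis,
    a larger value means the ciphertext differs more from the plaintext."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(processed, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(0)
plain = rng.integers(0, 256, size=(64, 64))    # stand-in 8-bit plaintext image
cipher = rng.integers(0, 256, size=(64, 64))   # stand-in ciphertext image
print(image_rmse(plain, plain))                # 0.0 for identical images
print(image_rmse(plain, cipher))               # large for decorrelated images
```

Comparing this value across candidate S-boxes (AES, APA, Gray, ...) applied to the same plaintext is the ranking procedure the paper proposes.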

  16. Functional MRI Preprocessing in Lesioned Brains: Manual Versus Automated Region of Interest Analysis

    PubMed Central

    Garrison, Kathleen A.; Rogalsky, Corianne; Sheng, Tong; Liu, Brent; Damasio, Hanna; Winstein, Carolee J.; Aziz-Zadeh, Lisa S.

    2015-01-01

    Functional magnetic resonance imaging (fMRI) has significant potential in the study and treatment of neurological disorders and stroke. Region of interest (ROI) analysis in such studies allows for testing of strong a priori clinical hypotheses with improved statistical power. A commonly used automated approach to ROI analysis is to spatially normalize each participant’s structural brain image to a template brain image and define ROIs using an atlas. However, in studies of individuals with structural brain lesions, such as stroke, the gold standard approach may be to manually hand-draw ROIs on each participant’s non-normalized structural brain image. Automated approaches to ROI analysis are faster and more standardized, yet are susceptible to preprocessing error (e.g., normalization error) that can be greater in lesioned brains. The manual approach to ROI analysis has high demand for time and expertise, but may provide a more accurate estimate of brain response. In this study, commonly used automated and manual approaches to ROI analysis were directly compared by reanalyzing data from a previously published hypothesis-driven cognitive fMRI study, involving individuals with stroke. The ROI evaluated is the pars opercularis of the inferior frontal gyrus. Significant differences were identified in task-related effect size and percent-activated voxels in this ROI between the automated and manual approaches to ROI analysis. Task interactions, however, were consistent across ROI analysis approaches. These findings support the use of automated approaches to ROI analysis in studies of lesioned brains, provided they employ a task interaction design. PMID:26441816

  17. Design and validation of realistic breast models for use in multiple alternative forced choice virtual clinical trials

    NASA Astrophysics Data System (ADS)

    Elangovan, Premkumar; Mackenzie, Alistair; Dance, David R.; Young, Kenneth C.; Cooke, Victoria; Wilkinson, Louise; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Wells, Kevin

    2017-04-01

    A novel method has been developed for generating quasi-realistic voxel phantoms which simulate the compressed breast in mammography and digital breast tomosynthesis (DBT). The models are suitable for use in virtual clinical trials requiring realistic anatomy which use the multiple alternative forced choice (AFC) paradigm and patches from the complete breast image. The breast models are produced by extracting features of breast tissue components from DBT clinical images, including skin, adipose and fibro-glandular tissue, blood vessels, and Cooper’s ligaments. A range of different breast models can then be generated by combining these components. Visual realism was validated using a receiver operating characteristic (ROC) study of patches from simulated images calculated using the breast models and from real patient images. Quantitative analysis was undertaken using fractal dimension and power spectrum analysis. The average areas under the ROC curves for 2D and DBT images were 0.51 ± 0.06 and 0.54 ± 0.09, respectively, demonstrating that simulated and real images were statistically indistinguishable by expert breast readers (7 observers); all errors are given as one standard error of the mean. The average fractal dimensions (2D, DBT) for real and simulated images were (2.72 ± 0.01, 2.75 ± 0.01) and (2.77 ± 0.03, 2.82 ± 0.04), respectively. Excellent agreement was found between the power spectrum curves of real and simulated images, with average β values (2D, DBT) of (3.10 ± 0.17, 3.21 ± 0.11) and (3.01 ± 0.32, 3.19 ± 0.07), respectively. These results demonstrate that radiological images of these breast models realistically represent the complexity of real breast structures and can be used to simulate patches from mammograms and DBT images that are indistinguishable from patches from the corresponding real breast images. The method can generate about 500 radiological patches (~30 mm × 30 mm) per day for AFC experiments on a single workstation. This is the first study to quantitatively validate the realism of simulated radiological breast images using direct blinded comparison with real data via the ROC paradigm with expert breast readers.

  18. Integrated modeling environment for systems-level performance analysis of the Next-Generation Space Telescope

    NASA Astrophysics Data System (ADS)

    Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry

    1998-08-01

    All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular, tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which link directly to the science requirements.
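    The 'root-sum-of-squares' error tree that the integrated modeling environment is contrasted with simply combines statistically independent error contributors in quadrature. A minimal illustration with hypothetical wavefront-error terms (the contributor names and values are invented for the example):

```python
import math

def rss_budget(terms):
    """Root-sum-of-squares combination of independent error contributors
    (here, RMS wavefront errors in nm): total = sqrt(sum of squares)."""
    return math.sqrt(sum(t * t for t in terms))

# Hypothetical wavefront error budget contributors (nm RMS)
contributors = {"mirror figure": 60.0, "thermal drift": 40.0, "pointing jitter": 30.0}
total = rss_budget(contributors.values())
print(round(total, 1))  # 78.1
```

The limitation the abstract points to is that this treats every term as independent; coupled effects (e.g. a thermal disturbance that simultaneously degrades figure and pointing) require the integrated simulation approach instead.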

  19. Using SEM to Analyze Complex Survey Data: A Comparison between Design-Based Single-Level and Model-Based Multilevel Approaches

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Kwok, Oi-man

    2012-01-01

    Both ad-hoc robust sandwich standard error estimators (design-based approach) and multilevel analysis (model-based approach) are commonly used for analyzing complex survey data with nonindependent observations. Although these 2 approaches perform equally well on analyzing complex survey data with equal between- and within-level model structures…

  20. Effect of inventory method on niche models: random versus systematic error

    Treesearch

    Heather E. Lintz; Andrew N. Gray; Bruce McCune

    2013-01-01

    Data from large-scale biological inventories are essential for understanding and managing Earth's ecosystems. The Forest Inventory and Analysis Program (FIA) of the U.S. Forest Service is the largest biological inventory in North America; however, the FIA inventory recently changed from an amalgam of different approaches to a nationally-standardized approach in...

  1. Dealing with Dependence (Part I): Understanding the Effects of Clustered Data

    ERIC Educational Resources Information Center

    McCoach, D. Betsy; Adelson, Jill L.

    2010-01-01

    This article provides a conceptual introduction to the issues surrounding the analysis of clustered (nested) data. We define the intraclass correlation coefficient (ICC) and the design effect, and we explain their effect on the standard error. When the ICC is greater than 0, then the design effect is greater than 1. In such a scenario, the…
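    The relationship stated above can be made concrete with the standard design-effect formula, DEFF = 1 + (m − 1) × ICC for average cluster size m; the classroom numbers below are hypothetical:

```python
def design_effect(icc, cluster_size):
    """DEFF = 1 + (m - 1) * ICC: the factor by which the sampling variance
    of a mean is inflated when observations are clustered."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(n, icc, cluster_size):
    """n clustered observations carry the information of n / DEFF independent
    ones, so naive standard errors are too small by a factor of sqrt(DEFF)."""
    return n / design_effect(icc, cluster_size)

# Hypothetical design: 100 classrooms of 25 students each, ICC = 0.10
print(round(design_effect(0.10, 25), 2))             # 3.4
print(round(effective_sample_size(2500, 0.10, 25)))  # 735
```

Even a modest ICC of 0.10 shrinks 2,500 students to the equivalent of about 735 independent observations, which is exactly why ignoring clustering understates standard errors.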

  2. Grade 11 Students' Interconnected Use of Conceptual Knowledge, Procedural Skills, and Strategic Competence in Algebra: A Mixed Method Study of Error Analysis

    ERIC Educational Resources Information Center

    Egodawatte, Gunawardena; Stoilescu, Dorian

    2015-01-01

    The purpose of this mixed-method study was to investigate grade 11 university/college stream mathematics students' difficulties in applying conceptual knowledge, procedural skills, strategic competence, and algebraic thinking in solving routine (instructional) algebraic problems. A standardized algebra test was administered to thirty randomly…

  3. NHEXAS PHASE I MARYLAND STUDY--STANDARD OPERATING PROCEDURE FOR EXPLORATORY DATA ANALYSIS AND SUMMARY STATISTICS (D05)

    EPA Science Inventory

    This SOP describes the methods and procedures for two types of QA procedures: spot checks of hand entered data, and QA procedures for co-located and split samples. The spot checks were used to determine whether the error rate goal for the input of hand entered data was being att...

  4. 49 CFR Appendix F to Part 240 - Medical Standards Guidelines

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... greater guidance on the procedures that should be employed in administering the vision and hearing... more errors on plates 1-15. MULTIFUNCTION VISION TESTER Keystone Orthoscope Any error. OPTEC 2000 Any error. Titmus Vision Tester Any error. Titmus II Vision Tester Any error. (3) In administering any of...

  5. 49 CFR Appendix F to Part 240 - Medical Standards Guidelines

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... greater guidance on the procedures that should be employed in administering the vision and hearing... more errors on plates 1-15. MULTIFUNCTION VISION TESTER Keystone Orthoscope Any error. OPTEC 2000 Any error. Titmus Vision Tester Any error. Titmus II Vision Tester Any error. (3) In administering any of...

  6. Comparison of Optimal Design Methods in Inverse Problems

    DTIC Science & Technology

    2011-05-11

    corresponding FIM can be estimated by F̂(τ) = F̂(τ, θ̂_OLS) = (Σ̂^N(θ̂_OLS))^(-1) (13). The asymptotic standard errors are given by SE_k(θ_0) = √((Σ^N_0)_kk), k = 1, ..., p (14). These standard errors are estimated in practice (when θ_0 and σ_0 are not known) by SE_k(θ̂_OLS) = √((Σ̂^N(θ̂_OLS))_kk), k = 1, ..., p, and the bootstrap standard errors by SE_k(θ̂_boot) = √(Cov(θ̂_boot)_kk). We will compare the optimal design methods using the standard errors resulting from the optimal time points each...
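    The computation in this excerpt, standard errors as square roots of the diagonal of the inverse Fisher information matrix, can be illustrated as follows; the FIM values are hypothetical:

```python
import numpy as np

def asymptotic_standard_errors(cov):
    """SE_k = sqrt of the k-th diagonal entry of the parameter covariance
    matrix Sigma (the inverse of the Fisher information matrix)."""
    return np.sqrt(np.diag(np.asarray(cov, float)))

def se_from_fim(fim):
    """Standard errors directly from a (nonsingular) estimated FIM:
    invert it to get the covariance, then take sqrt of the diagonal."""
    return asymptotic_standard_errors(np.linalg.inv(np.asarray(fim, float)))

# Hypothetical 2-parameter FIM estimate F = Sigma^{-1}
fim = np.array([[4.0, 0.0],
                [0.0, 25.0]])
print(se_from_fim(fim))  # [0.5 0.2]
```

Optimal-design criteria (D-, E-, SE-optimality, etc.) then choose the observation times τ that make these standard errors, or some scalar summary of the FIM, as favorable as possible.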

  7. On the Confounding Effect of Temperature on Chemical Shift-Encoded Fat Quantification

    PubMed Central

    Hernando, Diego; Sharma, Samir D.; Kramer, Harald; Reeder, Scott B.

    2014-01-01

    Purpose To characterize the confounding effect of temperature on chemical shift-encoded (CSE) fat quantification. Methods The proton resonance frequency of water, unlike triglycerides, depends on temperature. This leads to a temperature dependence of the spectral models of fat (relative to water) that are commonly used by CSE-MRI methods. Simulation analysis was performed for 1.5 Tesla CSE fat–water signals at various temperatures and echo time combinations. Oil–water phantoms were constructed and scanned at temperatures between 0 and 40°C using spectroscopy and CSE imaging at three echo time combinations. An explanted human liver, rejected for transplantation due to steatosis, was scanned using spectroscopy and CSE imaging. Fat–water reconstructions were performed using four different techniques: magnitude and complex fitting, with standard or temperature-corrected signal modeling. Results In all experiments, magnitude fitting with standard signal modeling resulted in large fat quantification errors. Errors were largest for echo time combinations near TEinit ≈ 1.3 ms, ΔTE ≈ 2.2 ms. Errors in fat quantification caused by temperature-related frequency shifts were smaller with complex fitting, and were avoided using a temperature-corrected signal model. Conclusion Temperature is a confounding factor for fat quantification. If not accounted for, it can result in large errors in fat quantifications in phantom and ex vivo acquisitions. PMID:24123362

  8. Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher

    2018-01-01

    Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377

  9. Real-time and accurate rail wear measurement method and experimental analysis.

    PubMed

    Liu, Zhen; Li, Fengjiao; Huang, Bangkui; Zhang, Guangjun

    2014-08-01

    When a train is running on uneven or curved rails, it generates violent vibrations on the rails. As a result, the light plane of a single-line structured light vision sensor is not vertical, causing errors in rail wear measurements (referred to as vibration errors in this paper). To avoid vibration errors, a novel rail wear measurement method is introduced in this paper, which involves the following main steps. First, a multi-line structured light vision sensor (which has at least two linear laser projectors) projects stripe-shaped light onto the inside of the rail. Second, the central points of the light stripes in the image are extracted quickly, and the three-dimensional profile of the rail is obtained based on the mathematical model of the structured light vision sensor. Then, the obtained rail profile is transformed from the measurement coordinate frame (MCF) to the standard rail coordinate frame (RCF) by taking the three-dimensional profile of the measured rail waist as the datum. Finally, rail wear constraint points are adopted to simplify the location of the rail wear points, and the profile composed of the rail wear points is compared with the standard rail profile in RCF to determine the rail wear. Both real-data experiments and simulation experiments show that the vibration errors can be eliminated when the proposed method is used.

  10. Unmanned aircraft systems image collection and computer vision image processing for surveying and mapping that meets professional needs

    NASA Astrophysics Data System (ADS)

    Peterson, James Preston, II

    Unmanned Aerial Systems (UAS) are rapidly blurring the lines between traditional and close range photogrammetry, and between surveying and photogrammetry. UAS are providing an economic platform for performing aerial surveying on small projects. The focus of this research was to describe traditional photogrammetric imagery and Light Detection and Ranging (LiDAR) geospatial products, describe close range photogrammetry (CRP), introduce UAS and computer vision (CV), and investigate whether industry mapping standards for accuracy can be met using UAS collection and CV processing. A 120-acre site was selected and 97 aerial targets were surveyed for evaluation purposes. Four UAS flights of varying heights above ground level (AGL) were executed, and three different target patterns of varying distances between targets were analyzed for compliance with American Society for Photogrammetry and Remote Sensing (ASPRS) and National Standard for Spatial Data Accuracy (NSSDA) mapping standards. This analysis resulted in twelve datasets. Error patterns were evaluated and reasons for these errors were determined. The relationship between the AGL, ground sample distance, target spacing and the root mean square error of the targets is exploited by this research to develop guidelines that use the ASPRS and NSSDA map standard as the template. These guidelines allow the user to select the desired mapping accuracy and determine what target spacing and AGL is required to produce the desired accuracy. These guidelines also address how UAS/CV phenomena affect map accuracy. General guidelines and recommendations are presented that give the user helpful information for planning a UAS flight using CV technology.
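    The NSSDA standard referenced above converts check-point RMSE into an accuracy statistic at the 95% confidence level using published multipliers. A minimal sketch with hypothetical check-point residuals (the residual values are invented for the example):

```python
import math

def rmse(errors):
    """RMSE of check-point residuals along one coordinate axis."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def nssda_horizontal_accuracy(rmse_x, rmse_y):
    """NSSDA horizontal accuracy at 95% confidence, using the standard's
    approximation 2.4477 * 0.5 * (RMSE_x + RMSE_y), which reduces to
    1.7308 * RMSE_r when RMSE_x == RMSE_y."""
    return 1.22385 * (rmse_x + rmse_y)

def nssda_vertical_accuracy(rmse_z):
    """NSSDA vertical accuracy at 95% confidence: 1.9600 * RMSE_z."""
    return 1.9600 * rmse_z

# Hypothetical check-point residuals (metres) from a UAS survey
dx = [0.03, -0.05, 0.04, -0.06, 0.05]
dz = [0.06, -0.04, 0.05, -0.07, 0.03]
print(round(nssda_horizontal_accuracy(rmse(dx), rmse(dx)), 3))
print(round(nssda_vertical_accuracy(rmse(dz)), 3))
```

Guidelines like those developed in this research work backwards through these formulas: pick the accuracy class you must report, and that bounds the RMSE, which in turn constrains the flying height and target spacing.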

  11. The addition of low-dose-rate brachytherapy and androgen-deprivation therapy decreases biochemical failure and prostate cancer death compared with dose-escalated external-beam radiation therapy for high-risk prostate cancer.

    PubMed

    Shilkrut, Mark; Merrick, Gregory S; McLaughlin, P William; Stenmark, Matthew H; Abu-Isa, Eyad; Vance, Sean M; Sandler, Howard M; Feng, Felix Y; Hamstra, Daniel A

    2013-02-01

    The objective of this study was to determine whether the addition of low-dose-rate brachytherapy or androgen-deprivation therapy (ADT) improves clinical outcome in patients with high-risk prostate cancer (HiRPCa) who received dose-escalated radiotherapy (RT). Between 1995 and 2010, 958 patients with HiRPCa were treated at Schiffler Cancer Center (n = 484) or at the University of Michigan (n = 474) by receiving either dose-escalated external-beam RT (EBRT) (n = 510; minimum prescription dose, 75 grays [Gy]; median dose, 78 Gy) or combined-modality RT (CMRT) consisting of (103) Pd implants (n = 369) or (125) I implants (n = 79), both with pelvic irradiation (median prescription dose, 45 Gy). The cumulative incidences of biochemical failure (BF) and prostate cancer-specific mortality (PCSM) were estimated by using the Kaplan-Meier method and Fine and Gray regression analysis. The median follow-up was 63.2 months (interquartile range, 35.4-99.0 months), and 250 patients were followed for >8 years. Compared with CMRT, patients who received EBRT had higher prostate-specific antigen levels, higher tumor classification, lower Gleason sum, and more frequent receipt of ADT for a longer duration. The 8-year incidences of BF and PCSM among patients who received EBRT were 40% (standard error, 38%-44%) and 13% (standard error, 11%-15%), compared with 14% (standard error, 12%-16%; P < .0001) and 7% (standard error, 6%-9%; P = .003) among patients who received CMRT. On multivariate analysis, the hazard ratios (HRs) for BF and PCSM were 0.35 (95% confidence interval [CI], 0.23-0.52; P < .0001) and 0.41 (95% CI, 0.23-0.75; P < .003), favoring CMRT. Increasing duration of ADT predicted decreased BF (P = .04) and PCSM (P = .001), and the effect was greatest with long-term ADT (BF: HR, 0.33; P < .0001; 95% CI, 0.21-0.52; PCSM: HR, 0.30; P = .001; 95% CI, 0.15-0.6), even in the subgroup that received CMRT.
In this retrospective comparison, both low-dose-rate brachytherapy boost and ADT were associated with decreased risks of BF and PCSM compared with EBRT. Copyright © 2012 American Cancer Society.
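    The Kaplan-Meier estimation step described above can be illustrated with a minimal product-limit sketch on a hypothetical toy cohort. This is only the basic survival estimator, not the study's Fine and Gray competing-risks analysis; the times and event indicators below are invented.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times:  follow-up time for each patient
    events: 1 if the endpoint (e.g. biochemical failure) occurred, 0 if censored
    Returns a list of (time, S(t)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        removed = 0
        # Everyone still in the data at time t is at risk, including
        # patients censored at t; they leave the risk set afterwards.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
    return curve

# Toy cohort: months to failure (event = 1) or censoring (event = 0)
times  = [6, 12, 12, 20, 30, 30, 48, 60]
events = [1,  1,  0,  1,  0,  1,  0,  0]
curve = kaplan_meier(times, events)
```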

  12. Nonlinear analysis and dynamic compensation of stylus scanning measurement with wide range

    NASA Astrophysics Data System (ADS)

    Hui, Heiyang; Liu, Xiaojun; Lu, Wenlong

    2011-12-01

    Surface topography is an important geometrical feature of a workpiece that influences its quality and functions such as friction, wear, lubrication and sealing. Precision measurement of surface topography is fundamental for characterizing and assuring product quality. The stylus scanning technique is a widely used method for surface topography measurement, and it is also regarded as the international standard method for 2-D surface characterization. Usually surface topography, including the primary profile, waviness and roughness, can be measured precisely and efficiently by this method. However, when the stylus scanning method is used to measure curved surface topography, nonlinear error is unavoidable for two reasons: the horizontal position of the actual measured point differs from that of the given sampling point, and the transformation from the vertical displacement of the stylus tip to the angular displacement of the stylus arm is nonlinear. This error increases with the measuring range. In this paper, a wide-range stylus scanning measurement system based on the cylindrical grating interference principle is constructed, the origins of the nonlinear error are analyzed, an error model is established, and a solution to decrease the nonlinear error is proposed, through which the error in the collected data is dynamically compensated.
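    The arc geometry behind this nonlinear error can be illustrated with a simple hypothetical model of a pivoting stylus arm: the true vertical displacement of the tip is L·sin(θ), the contact point also shifts horizontally by L·(1 − cos(θ)), and a linearised gauge reads approximately L·θ, so the discrepancy grows with the arm angle, i.e. with the measuring range. ARM_LENGTH and the angles below are illustrative, not values from the paper.

```python
import math

ARM_LENGTH = 60.0  # mm, hypothetical stylus arm length

def tip_position(theta):
    """Tip displacement for arm angle theta (radians).

    Returns (dx, dz): horizontal shift of the contact point and true
    vertical displacement, from the arc geometry of the pivoting arm.
    """
    dz = ARM_LENGTH * math.sin(theta)
    dx = ARM_LENGTH * (1.0 - math.cos(theta))
    return dx, dz

def linear_reading(theta):
    """Small-angle (uncompensated) reading: z ~= L * theta."""
    return ARM_LENGTH * theta

def nonlinear_error(theta):
    """Error of the linearised reading relative to the true height."""
    _, dz = tip_position(theta)
    return linear_reading(theta) - dz

# The error grows with measuring range (i.e. with larger arm angles)
small = abs(nonlinear_error(math.radians(1.0)))
large = abs(nonlinear_error(math.radians(10.0)))
```

Dynamic compensation amounts to inverting this geometry: given the sensed angle, recover both dz and dx and resample the profile onto the intended horizontal grid.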

  13. Computer-aided field editing in DHS: the Turkey experiment.

    PubMed

    1995-01-01

    A study conducted during the 1993 Demographic and Health Survey (DHS) in Turkey compared computer-aided field editing (CAFE), performed on notebook computers, with the standard manual editing procedure and demonstrated that teams using CAFE had less missing data and a lower mean number of errors. Six of the 13 teams used CAFE in the Turkey experiment; the computers were equipped with Integrated System for Survey Analysis (ISSA) software for editing the DHS questionnaires. The CAFE teams completed 2466 out of 8619 household questionnaires and 1886 out of 6649 individual questionnaires. The CAFE team editor entered data into the computer and marked any detected errors on the questionnaire; the errors were then corrected in the field by the editor, based on other responses in the questionnaire, or by the interviewer to whom the questionnaire was returned. Errors in questionnaires edited manually are not identified until they are sent to the survey office for data processing, when it is too late to ask for clarification from respondents. There was one area where the error rate was higher for CAFE teams: the CAFE editors paid less attention to errors presented as warnings only.

  14. Accuracy of CT-based attenuation correction in PET/CT bone imaging

    NASA Astrophysics Data System (ADS)

    Abella, Monica; Alessio, Adam M.; Mankoff, David A.; MacDonald, Lawrence R.; Vaquero, Juan Jose; Desco, Manuel; Kinahan, Paul E.

    2012-05-01

    We evaluate the accuracy of scaling CT images for attenuation correction of PET data measured for bone. While the standard tri-linear approach has been well tested for soft tissues, the impact of CT-based attenuation correction on the accuracy of tracer uptake in bone has not been reported in detail. We measured the accuracy of attenuation coefficients of bovine femur segments and patient data using a tri-linear method applied to CT images obtained at different kVp settings. Attenuation values at 511 keV obtained with a 68Ga/68Ge transmission scan were used as a reference standard. The impact of inaccurate attenuation images on PET standardized uptake values (SUVs) was then evaluated using simulated emission images and emission images from five patients with elevated levels of FDG uptake in bone at disease sites. The CT-based linear attenuation images of the bovine femur segments underestimated the true values by 2.9 ± 0.3% for cancellous bone regardless of kVp. For compact bone the underestimation ranged from 1.3% at 140 kVp to 14.1% at 80 kVp. In the patient scans at 140 kVp the underestimation was approximately 2% averaged over all bony regions. The sensitivity analysis indicated that errors in PET SUVs in bone are approximately proportional to errors in the estimated attenuation coefficients for the same regions. The variability in SUV bias also increased approximately linearly with the error in linear attenuation coefficients. These results suggest that bias in bone uptake SUVs of PET tracers ranges from 2.4% to 5.9% when using CT scans at 140 and 120 kVp for attenuation correction. Lower kVp scans have the potential for considerably more error in dense bone. This bias is present in any PET tracer with bone uptake but may be clinically insignificant for many imaging tasks. However, errors from CT-based attenuation correction methods should be carefully evaluated if quantitation of tracer uptake in bone is important.
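    The CT-based scaling being evaluated can be sketched as a piecewise-linear (bilinear) HU-to-μ(511 keV) conversion. The break point at 0 HU and the bone slope below are illustrative placeholders, not the study's fitted coefficients; real implementations (including the tri-linear method tested here) fit kVp-dependent coefficients, which is exactly why low-kVp scans mis-scale dense bone.

```python
MU_WATER_511 = 0.096  # cm^-1, linear attenuation coefficient of water at 511 keV

def hu_to_mu_511(hu, bone_slope=5.0e-5):
    """Piecewise-linear (bilinear) CT-to-511 keV attenuation scaling.

    Below 0 HU the voxel is treated as an air/water mixture; above the
    soft-tissue/bone break a shallower slope is used, because the
    photoelectric enhancement bone shows at CT energies largely
    disappears at 511 keV. bone_slope is an illustrative value only.
    """
    if hu <= 0:
        return max(0.0, MU_WATER_511 * (1.0 + hu / 1000.0))
    return MU_WATER_511 + bone_slope * hu

mu_air = hu_to_mu_511(-1000)   # air maps to ~0 attenuation
mu_water = hu_to_mu_511(0)     # water maps to the reference value
mu_bone = hu_to_mu_511(1000)   # dense bone, scaled with the bone slope
```

Because SUV errors are roughly proportional to attenuation-coefficient errors (per the sensitivity analysis above), a few percent of bias in `mu_bone` propagates almost directly into bone uptake values.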

  15. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate the pertinent test result values has several advantages, such as: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards, 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results, and 4. providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10 - Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program, 2. ASTM F2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program, and 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in a test standard. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  16. Assessing dental caries prevalence in African-American youth and adults.

    PubMed

    Seibert, Wilda; Farmer-Dixon, Cherae; Bolden, Theodore E; Stewart, James H

    2004-01-01

    It has been well documented that dental caries affect millions of children in the USA, with the majority experiencing decay by the late teens. This is especially true for low-income minorities. The objective of this descriptive study was to determine dental caries prevalence in a sample of low-income African-American youth and adults. A total of 1034 individuals were examined. They were divided into two age groups: youth, 9-19 years and adults, 20-39 years. Females comprised approximately 65 percent (64.5 percent) of the study group. The DMFT Index was used to determine caries prevalence in this study population. The DMFT findings showed that approximately 73 percent (72.9 percent) of the youth had either decayed, missing or filled teeth. Male youth had slightly higher DMFT mean scores than female youth: male mean = 7.93, standard error = 0.77, female mean = 7.52, standard error = 0.36; however, as females reached adulthood their DMFT scores increased substantially, mean = 15.18, standard error = 0.36. Caries prevalence was much lower in male adults: DMFT mean = 7.22, standard error = 0.33. The decayed component for female adults, mean score = 6.81, showed a slight increase over adult males, mean = 6.58. Although there were few filled teeth in both age groups, female adults had slightly more filled teeth than male adults (female mean = 2.91); however, adult males experienced slightly more missing teeth, mean = 5.62, as compared with adult females, mean = 5.46. Both female and male adults had an increase in missing teeth. As age increased there was a significant correlation among decayed, missing and filled teeth as tested by Analysis of Variance (ANOVA), p < 0.01. A significant correlation was found between filled teeth and sex, p < .005. We conclude that caries prevalence was higher in female and male youth, but dental caries increased more rapidly in females as they reached adulthood.
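    The means and standard errors reported above are the standard descriptive quantities: the standard error of a mean is the sample standard deviation divided by √n. A minimal sketch with hypothetical DMFT scores (not the study's data):

```python
import math

def mean_and_se(scores):
    """Sample mean and its standard error (sd / sqrt(n))."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((x - mean) ** 2 for x in scores) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

# Hypothetical DMFT scores for a small group of individuals
dmft = [7, 9, 6, 8, 10, 7, 9, 8]
m, se = mean_and_se(dmft)
```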

  17. Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science.

    PubMed

    Veldkamp, Coosje L S; Nuijten, Michèle B; Dominguez-Alvarez, Linda; van Assen, Marcel A L M; Wicherts, Jelte M

    2014-01-01

    Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
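    The automated consistency check can be sketched for its simplest case: recompute a two-tailed p-value from a reported z statistic with the standard library and compare it with the reported p at the reported rounding precision. This is an illustration of the idea only; full checkers also handle t, F, and χ² statistics, whose distributions are not in the Python standard library.

```python
import math

def two_tailed_p_from_z(z):
    """Two-tailed p-value for a standard-normal test statistic."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def p_is_consistent(z, reported_p, decimals=3):
    """Flag whether a reported p matches the one recomputed from z.

    Recomputes p from the statistic and compares at the rounding
    precision used in the report, allowing one unit in the last digit.
    """
    recomputed = round(two_tailed_p_from_z(z), decimals)
    return abs(recomputed - round(reported_p, decimals)) <= 10 ** -decimals

ok = p_is_consistent(1.96, 0.050)    # consistent report
bad = p_is_consistent(1.96, 0.012)   # inconsistent report
```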

  18. Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science

    PubMed Central

    Veldkamp, Coosje L. S.; Nuijten, Michèle B.; Dominguez-Alvarez, Linda; van Assen, Marcel A. L. M.; Wicherts, Jelte M.

    2014-01-01

    Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this ‘co-piloting’ currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors. PMID:25493918

  19. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms are modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
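    The mechanism behind the inflation can be sketched via the conditional type 1 error of a fixed-sample one-sided z-test given the interim statistic: a favourable interim result makes the conditional rejection probability far exceed the nominal level, which an adversarial redesign can then exploit. The sample sizes and alpha below are hypothetical; this illustrates the conditional-error idea, not the paper's maximization itself.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_type1_error(z1, n1, n2, alpha=0.025):
    """Conditional rejection probability of a fixed-sample z-test under H0,
    given the interim statistic z1 from the first n1 observations.

    The final statistic is z = (sqrt(n1)*z1 + sqrt(n2)*z2) / sqrt(n1 + n2),
    so under H0 rejection requires
    z2 > (z_alpha*sqrt(n1 + n2) - z1*sqrt(n1)) / sqrt(n2).
    """
    # Find z_alpha by bisection on phi (stdlib has no inverse normal CDF)
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if phi(mid) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    z_alpha = (lo + hi) / 2.0
    n = n1 + n2
    threshold = (z_alpha * math.sqrt(n) - z1 * math.sqrt(n1)) / math.sqrt(n2)
    return 1.0 - phi(threshold)

# A promising interim result raises the conditional error well above alpha
ce_null = conditional_type1_error(0.0, 50, 50)
ce_promising = conditional_type1_error(2.0, 50, 50)
```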

  20. Re-assessing acalculia: Distinguishing spatial and purely arithmetical deficits in right-hemisphere damaged patients.

    PubMed

    Benavides-Varela, S; Piva, D; Burgio, F; Passarini, L; Rolma, G; Meneghello, F; Semenza, C

    2017-03-01

    Arithmetical deficits in right-hemisphere damaged patients have been traditionally considered secondary to visuo-spatial impairments, although the exact relationship between the two deficits has rarely been assessed. The present study implemented a voxelwise lesion analysis among 30 right-hemisphere damaged patients and a controlled, matched-sample, cross-sectional analysis with 35 cognitively normal controls, regressing three composite cognitive measures on standardized numerical measures. The results showed that patients and controls significantly differ in Number comprehension, Transcoding, and Written operations, particularly subtractions and multiplications. The percentage of patients performing below the cutoffs ranged between 27% and 47% across these tasks. Spatial errors were associated with extensive lesions in fronto-temporo-parietal regions (which frequently lead to neglect), whereas pure arithmetical errors appeared related to more confined lesions in the right angular gyrus and its proximity. Stepwise regression models consistently revealed that spatial errors were primarily predicted by composite measures of visuo-spatial attention/neglect and representational abilities. Conversely, specific errors of arithmetic nature linked to representational abilities only. Crucially, the proportion of arithmetical errors (ranging from 65% to 100% across tasks) was higher than that of spatial ones. These findings thus suggest that unilateral right hemisphere lesions can directly affect core numerical/arithmetical processes, and that right-hemisphere acalculia is not only ascribable to visuo-spatial deficits as traditionally thought. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Comparison of Efficiency of Jackknife and Variance Component Estimators of Standard Errors. Program Statistics Research. Technical Report.

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
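    The delete-one jackknife referred to above has a compact generic form. A minimal sketch follows; as a sanity check, for the sample mean the jackknife standard error reduces exactly to the familiar sd/√n (the data values are hypothetical).

```python
import math

def jackknife_se(data, statistic):
    """Delete-one jackknife standard error of an arbitrary statistic."""
    n = len(data)
    # Recompute the statistic n times, each time leaving one value out
    replicates = [statistic(data[:i] + data[i + 1:]) for i in range(n)]
    mean_rep = sum(replicates) / n
    var = (n - 1) / n * sum((r - mean_rep) ** 2 for r in replicates)
    return math.sqrt(var)

def mean(xs):
    return sum(xs) / len(xs)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
se_jack = jackknife_se(data, mean)
```

The same `jackknife_se` call works unchanged for more complex statistics (e.g. a regression coefficient), which is the appeal of resampling in complex survey designs.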

  2. IgRepertoireConstructor: a novel algorithm for antibody repertoire construction and immunoproteogenomics analysis.

    PubMed

    Safonova, Yana; Bonissone, Stefano; Kurpilyansky, Eugene; Starostina, Ekaterina; Lapidus, Alla; Stinson, Jeremy; DePalatis, Laura; Sandoval, Wendy; Lill, Jennie; Pevzner, Pavel A

    2015-06-15

    The analysis of concentrations of circulating antibodies in serum (antibody repertoire) is a fundamental, yet poorly studied, problem in immunoinformatics. The two current approaches to the analysis of antibody repertoires [next generation sequencing (NGS) and mass spectrometry (MS)] present difficult computational challenges since antibodies are not directly encoded in the germline but are extensively diversified by somatic recombination and hypermutations. Therefore, the protein database required for the interpretation of spectra from circulating antibodies is custom for each individual. Although such a database can be constructed via NGS, the reads generated by NGS are error-prone and even a single nucleotide error precludes identification of a peptide by the standard proteomics tools. Here, we present the IgRepertoireConstructor algorithm that performs error-correction of immunosequencing reads and uses mass spectra to validate the constructed antibody repertoires. IgRepertoireConstructor is open source and freely available as a C++ and Python program running on all Unix-compatible platforms. The source code is available from http://bioinf.spbau.ru/igtools. ppevzner@ucsd.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  3. Epidermis and Enamel

    PubMed Central

    Barsley, Robert E.; Bernstein, Mark L.; Brumit, Paula C.; Dorion, Robert B.J.; Golden, Gregory S.; Lewis, James M.; McDowell, John D.; Metcalf, Roger D.; Senn, David R.; Sweet, David; Weems, Richard A.

    2018-01-01

    Abstract Critics describe forensic dentists' management of bitemark evidence as junk science with poor sensitivity and specificity and state that linkages to a biter are unfounded. Those vocal critics, supported by certain media, characterize odontologists' previous errors as egregious and petition government agencies to render bitemark evidence inadmissible. Odontologists acknowledge that some practitioners have made past mistakes. However, it does not logically follow that the errors of a few identify a systemic failure of bitemark analysis. Scrutiny of the contentious cases shows that most occurred 20 to 40 years ago. Since then, research has been ongoing and more conservative guidelines, standards, and terminology have been adopted so that past errors are no longer reflective of current safeguards. The authors recommend a comprehensive root analysis of problem cases to be used to determine all the factors that contributed to those previous problems. The legal community also shares responsibility for some of the past erroneous convictions. Currently, most proffered bitemark cases referred to odontologists do not reach courts because those forensic dentists dismiss them as unacceptable or insufficient for analysis. Most bitemark evidence cases have been properly managed by odontologists. Bitemark evidence and testimony remain relevant and have made significant contributions in the justice system. PMID:29557817

  4. Epidermis and Enamel: Insights Into Gnawing Criticisms of Human Bitemark Evidence.

    PubMed

    Barsley, Robert E; Bernstein, Mark L; Brumit, Paula C; Dorion, Robert B J; Golden, Gregory S; Lewis, James M; McDowell, John D; Metcalf, Roger D; Senn, David R; Sweet, David; Weems, Richard A

    2018-06-01

    Critics describe forensic dentists' management of bitemark evidence as junk science with poor sensitivity and specificity and state that linkages to a biter are unfounded. Those vocal critics, supported by certain media, characterize odontologists' previous errors as egregious and petition government agencies to render bitemark evidence inadmissible. Odontologists acknowledge that some practitioners have made past mistakes. However, it does not logically follow that the errors of a few identify a systemic failure of bitemark analysis. Scrutiny of the contentious cases shows that most occurred 20 to 40 years ago. Since then, research has been ongoing and more conservative guidelines, standards, and terminology have been adopted so that past errors are no longer reflective of current safeguards. The authors recommend a comprehensive root analysis of problem cases to be used to determine all the factors that contributed to those previous problems. The legal community also shares responsibility for some of the past erroneous convictions. Currently, most proffered bitemark cases referred to odontologists do not reach courts because those forensic dentists dismiss them as unacceptable or insufficient for analysis. Most bitemark evidence cases have been properly managed by odontologists. Bitemark evidence and testimony remain relevant and have made significant contributions in the justice system.

  5. Standardized Protocol for Virtual Surgical Plan and 3-Dimensional Surgical Template-Assisted Single-Stage Mandible Contour Surgery.

    PubMed

    Fu, Xi; Qiao, Jia; Girod, Sabine; Niu, Feng; Liu, Jian Feng; Lee, Gordon K; Gui, Lai

    2017-09-01

    Mandible contour surgery, including reduction gonioplasty and genioplasty, has become increasingly popular in East Asia. However, it is technically challenging and, hence, involves a long learning curve and high complication rates and often needs secondary revisions. The increasing use of 3-dimensional (3D) technology makes accurate single-stage mandible contour surgery with minimum complication rates possible with a virtual surgical plan (VSP) and 3-D surgical templates. The aim of this study is to establish a standardized protocol for VSP and 3-D surgical template-assisted mandible contour surgery and to evaluate the accuracy of the protocol. In this study, we enrolled 20 patients for mandible contour surgery. Our protocol is to perform VSP based on 3-D computed tomography data; surgical templates are then designed and 3-D printed based on the preoperative VSP. The accuracy of the method was analyzed by 3-D comparison of the VSP and postoperative results using detailed computer analysis. All patients had symmetric, natural osteotomy lines and satisfactory facial ratios in a single-stage operation. The average relative error of the VSP and postoperative result on the entire skull was 0.41 ± 0.13 mm. The average new left gonial error was 0.43 ± 0.77 mm. The average new right gonial error was 0.45 ± 0.69 mm. The average pogonion error was 0.79 ± 1.21 mm. Patients were very satisfied with the aesthetic results. Surgeons were very satisfied with the performance of surgical templates to facilitate the operation. Our standardized protocol of VSP and 3-D printed surgical template-assisted single-stage mandible contour surgery results in accurate, safe, and predictable outcomes in a single stage.

  6. Conditional Standard Errors of Measurement for Scale Scores.

    ERIC Educational Resources Information Center

    Kolen, Michael J.; And Others

    1992-01-01

    A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)
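    A conditional standard error of measurement varies with the score level rather than being a single test-wide value. As a simpler, related illustration (not the scale-score procedure of the report), Lord's binomial-error model gives a raw-score CSEM of √(x(n−x)/(n−1)) for score x on an n-item test:

```python
import math

def lord_csem(raw_score, n_items):
    """Lord's binomial-error conditional SEM for raw score x on an
    n-item test: sqrt(x * (n - x) / (n - 1)).
    """
    x = raw_score
    return math.sqrt(x * (n_items - x) / (n_items - 1))

# CSEM is largest mid-scale and shrinks toward the score extremes
mid = lord_csem(20, 40)
extreme = lord_csem(38, 40)
```

The report's contribution is to carry such conditional error estimates through the discrete raw-to-scale-score transformation, where the simple closed form above no longer applies.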

  7. Bayesian Analysis of Silica Exposure and Lung Cancer Using Human and Animal Studies.

    PubMed

    Bartell, Scott M; Hamra, Ghassan Badri; Steenland, Kyle

    2017-03-01

    Bayesian methods can be used to incorporate external information into epidemiologic exposure-response analyses of silica and lung cancer. We used data from a pooled mortality analysis of silica and lung cancer (n = 65,980), using untransformed and log-transformed cumulative exposure. Animal data came from chronic silica inhalation studies using rats. We conducted Bayesian analyses with informative priors based on the animal data and different cross-species extrapolation factors. We also conducted analyses with exposure measurement error corrections in the absence of a gold standard, assuming Berkson-type error that increased with increasing exposure. The pooled animal data exposure-response coefficient was markedly higher (log exposure) or lower (untransformed exposure) than the coefficient for the pooled human data. With 10-fold uncertainty, the animal prior had little effect on results for pooled analyses and only modest effects in some individual studies. One-fold uncertainty produced markedly different results for both pooled and individual studies. Measurement error correction had little effect in pooled analyses using log exposure. Using untransformed exposure, measurement error correction caused a 5% decrease in the exposure-response coefficient for the pooled analysis and marked changes in some individual studies. The animal prior had more impact for smaller human studies and for one-fold versus three- or 10-fold uncertainty. Adjustment for Berkson error using Bayesian methods had little effect on the exposure-response coefficient when exposure was log transformed or when the sample size was large. See video abstract at, http://links.lww.com/EDE/B160.
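    The prior-combination step has a simple closed form in the conjugate normal-normal case: the posterior mean is a precision-weighted average of the prior (animal-based) estimate and the data likelihood, so widening the prior by a larger cross-species uncertainty factor shifts weight toward the human data. The numbers below are hypothetical, not the study's coefficients.

```python
def posterior_normal(prior_mean, prior_sd, like_mean, like_sd):
    """Conjugate normal-normal update for a scalar coefficient.

    Combines an informative prior (e.g. from animal data, widened by a
    cross-species uncertainty factor) with the data likelihood; the
    posterior mean is the precision-weighted average of the two.
    """
    w_prior = 1.0 / prior_sd ** 2
    w_like = 1.0 / like_sd ** 2
    post_var = 1.0 / (w_prior + w_like)
    post_mean = post_var * (w_prior * prior_mean + w_like * like_mean)
    return post_mean, post_var ** 0.5

# Hypothetical slope estimates: a tight prior pulls the posterior more
tight = posterior_normal(0.5, 0.1, 0.2, 0.1)   # one-fold-style uncertainty
vague = posterior_normal(0.5, 1.0, 0.2, 0.1)   # ten-fold-style uncertainty
```

This mirrors the finding above that a 10-fold-uncertainty prior had little effect on the pooled estimate while a 1-fold prior changed results markedly.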

  8. Accuracy and Spatial Variability in GPS Surveying for Landslide Mapping on Road Inventories at a Semi-Detailed Scale: the Case in Colombia

    NASA Astrophysics Data System (ADS)

    Murillo Feo, C. A.; Martínez Martínez, L. J.; Correa Muñoz, N. A.

    2016-06-01

    The accuracy of locating attributes on topographic surfaces when using GPS in mountainous areas is affected by obstacles to wave propagation. As part of this research on the semi-automatic detection of landslides, we evaluate the accuracy and spatial distribution of the horizontal error in GPS positioning in the tertiary road network of six municipalities located in mountainous areas in the department of Cauca, Colombia, using geo-referencing with GPS mapping equipment and static-fast and pseudo-kinematic methods. We obtained quality parameters for the GPS surveys with differential correction, using a post-processing method. The consolidated database underwent exploratory analyses to determine the statistical distribution, a multivariate analysis to establish relationships and partnerships between the variables, and an analysis of the spatial variability and calculation of accuracy, considering the effect of non-Gaussian error distributions. The evaluation of the internal validity of the data provided metrics with a confidence level of 95% between 1.24 and 2.45 m in the static-fast mode and between 0.86 and 4.2 m in the pseudo-kinematic mode. The external validity had an absolute error of 4.69 m, indicating that this descriptor is more critical than precision. Based on the ASPRS standard, the scale obtained with the evaluated equipment was on the order of 1:20000, a level of detail expected in the landslide-mapping project. Modelling the spatial variability of the horizontal errors from the empirical semi-variogram analysis showed prediction errors close to the external validity of the devices.
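    Because the horizontal errors were non-Gaussian, a percentile-based accuracy statistic is a natural complement to formula-based ones such as 1.96 × RMSE. A minimal sketch with hypothetical (GPS, reference) coordinate pairs, reporting the empirical 95th-percentile 2-D error:

```python
import math

def horizontal_errors(pairs):
    """2-D distances between GPS positions and reference coordinates."""
    return [math.hypot(gx - rx, gy - ry) for (gx, gy), (rx, ry) in pairs]

def accuracy_95(errors):
    """Empirical 95th-percentile horizontal error.

    Uses the sorted-sample percentile rather than a Gaussian multiplier,
    which is the safer choice when the error distribution is non-Gaussian.
    """
    s = sorted(errors)
    k = max(0, math.ceil(0.95 * len(s)) - 1)
    return s[k]

# Hypothetical (GPS, reference) coordinate pairs in metres
pairs = [((0.5, 0.2), (0.0, 0.0)),
         ((1.1, -0.3), (0.0, 0.0)),
         ((-0.4, 0.9), (0.0, 0.0)),
         ((2.5, 1.0), (0.0, 0.0))]
errs = horizontal_errors(pairs)
a95 = accuracy_95(errs)
```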

  9. High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis

    USGS Publications Warehouse

    Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher

    2015-01-01

    Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ~87 m² exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (RMSE) with as few as six control points. RMSE decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.

  10. BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.

    2013-03-20

    Blind-source separation techniques are used to extract the transmission spectrum of the hot-Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than those of parametric methods can be obtained for 11 spectral bins with bin sizes of ≈0.09 μm. This represents a reasonable trade-off between the higher degree of objectivity of the non-parametric methods and the smaller standard errors of the parametric de-trending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.
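    The core idea, separating statistically independent non-Gaussian components from mixed observations, can be sketched with a toy deflationary FastICA in plain NumPy. The two synthetic sources, the mixing matrix, and all tolerances below are invented stand-ins for illustration, not the NICMOS pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 2000)

    # Two independent, non-Gaussian sources (stand-ins for a light curve and a
    # systematic): a sinusoid and a square wave.
    s = np.vstack([np.sin(2 * np.pi * t),
                   np.sign(np.sin(3 * np.pi * t + 1))])
    x = np.array([[0.7, 0.3], [0.4, 0.6]]) @ s        # mixed observations

    # Whitening: zero mean, unit covariance.
    x = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(np.cov(x))
    z = np.diag(1 / np.sqrt(d)) @ e.T @ x

    # Deflationary FastICA with the tanh contrast function.
    w_all = []
    for _ in range(2):
        w = rng.standard_normal(2)
        w /= np.linalg.norm(w)
        for _ in range(300):
            g = np.tanh(w @ z)
            w_new = (z * g).mean(axis=1) - (1 - g ** 2).mean() * w
            for v in w_all:                            # stay orthogonal to found rows
                w_new -= (w_new @ v) * v
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < 1e-10
            w = w_new
            if converged:
                break
        w_all.append(w)

    recovered = np.vstack(w_all) @ z                   # sources up to sign and scale
    ```

    Each recovered row should correlate strongly with one of the original sources, which is how a systematic-noise component can be isolated from the astrophysical signal without a parametric instrument model.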

  11. [Hypertrophic cardiomyopathy: an intraoperative death case analysis and substantiation of the standards of perioperative anesthetic management in a non-cardiosurgery clinic].

    PubMed

    Fedosiuk, Roman N; Shchupachynska, Liliia O

    2018-01-01

    The article is based on a case analysis of the sudden and unexpected intraoperative death of a 51-year-old female patient with hypertrophic cardiomyopathy, who was undergoing a non-cardiac operation in a non-cardiosurgery clinic, from acute precipitation of left ventricular outflow tract obstruction provoked by surgery and anesthesia. It emphasizes the importance of raising non-cardiac anesthesiologists' awareness of the issue and of having clear standards of pre-operative evaluation and perioperative management of patients with hypertrophic cardiomyopathy in order to avoid fatal medical errors. A literature review of the disease with an accent on anesthesia-related issues is also given, and four standards of perioperative anesthetic management of patients with hypertrophic cardiomyopathy presenting for non-cardiac surgery in general hospital settings are developed and offered.

  12. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit

    2008-12-01

    Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANET. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  13. Dynamic load-sharing characteristic analysis of face gear power-split gear system based on tooth contact characteristics

    NASA Astrophysics Data System (ADS)

    Dong, Hao; Hu, Yahui

    2018-04-01

    A bend-torsion coupling dynamic load-sharing model of the helicopter face-gear split-torque transmission system is established using the lumped-mass method in order to analyze its dynamic load-sharing characteristics. The mathematical model includes nonlinear support stiffness, time-varying meshing stiffness, damping, and gear backlash. The results show that the errors collectively influence the load-sharing characteristics: reducing only a single error never fully achieves perfect load sharing. The system's load-sharing performance can be improved through floating shaft support. This method provides a theoretical basis and data support for the optimization of the system's dynamic performance.

  14. The NBS scale of radiance temperature

    NASA Technical Reports Server (NTRS)

    Waters, William R.; Walker, James H.; Hattenburg, Albert T.

    1988-01-01

    The measurement methods and instrumentation used in the realization and transfer of the International Practical Temperature Scale (IPTS-68) above the temperature of freezing gold are described. The determination of the ratios of spectral radiance of tungsten-strip lamps to a gold-point blackbody at a wavelength of 654.6 nm is detailed. The response linearity, spectral responsivity, scattering error, and polarization properties of the instrumentation are described. The analysis of the sources of error and estimates of uncertainty are presented. The assigned uncertainties (three standard deviations) in radiance temperature range from ±2 K at 2573 K to ±0.5 K at 1073 K.

  15. SU-E-CAMPUS-J-05: Quantitative Investigation of Random and Systematic Uncertainties From Hardware and Software Components in the Frameless 6DBrainLAB ExacTrac System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keeling, V; Jin, H; Hossain, S

    2014-06-15

    Purpose: To evaluate setup accuracy and quantify individual systematic and random errors for the various hardware and software components of the frameless 6D-BrainLAB ExacTrac system. Methods: 35 patients with cranial lesions, some with multiple isocenters (50 total lesions treated in 1, 3, or 5 fractions), were investigated. All patients were simulated with a rigid head-and-neck mask and the BrainLAB localizer. CT images were transferred to the IPLAN treatment planning system, where optimized plans were generated using a stereotactic reference frame based on the localizer. The patients were set up initially with the infrared (IR) positioning ExacTrac system. Stereoscopic X-ray images (XC: X-ray Correction) were registered to their corresponding digitally reconstructed radiographs, based on bony-anatomy matching, to calculate 6D translational and rotational (lateral, longitudinal, vertical, pitch, roll, yaw) shifts. XC combines the systematic errors of the mask, localizer, image registration, frame, and IR. If shifts were below tolerance (0.7 mm translational and 1 degree rotational), treatment was initiated; otherwise corrections were applied and additional X-rays were acquired to verify patient position (XV: X-ray Verification). Statistical analysis was used to extract systematic and random errors of the different components of the 6D ExacTrac system and evaluate the cumulative setup accuracy. Results: Mask systematic errors (translational; rotational) were the largest and varied from one patient to another in the range (−15 to 4 mm; −2.5 to 2.5 degrees), obtained from the mean of XC for each patient. Setup uncertainty in IR positioning (0.97, 2.47, 1.62 mm; 0.65, 0.84, 0.96 degrees) was extracted from the standard deviation of XC. Combined systematic errors of the frame and localizer (0.32, −0.42, −1.21 mm; −0.27, 0.34, 0.26 degrees) were extracted from the mean of means of the XC distributions. Final patient setup uncertainty was obtained from the standard deviations of XV (0.57, 0.77, 0.67 mm; 0.39, 0.35, 0.30 degrees). Conclusion: Statistical analysis was used to calculate cumulative and individual systematic errors from the different hardware and software components of the 6D ExacTrac system. Patients were treated with cumulative errors (<1 mm, <1 degree) under XV image guidance.

  16. A refined method for multivariate meta-analysis and meta-regression.

    PubMed

    Jackson, Daniel; Riley, Richard D

    2014-02-20

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
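    The univariate scaling-factor refinement the authors build on (the Hartung-Knapp/Sidik-Jonkman adjustment) can be sketched as follows; the effect sizes and within-study variances are invented for illustration, and the paper's multivariate extension is not shown:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical study effects and within-study variances.
    y = np.array([0.50, 0.05, 0.45, -0.10, 0.30])
    v = np.array([0.04, 0.02, 0.06, 0.03, 0.05])
    k = len(y)

    # DerSimonian-Laird estimate of the between-study variance tau^2.
    w = 1 / v
    mu_fe = np.sum(w * y) / w.sum()
    q = np.sum(w * (y - mu_fe) ** 2)
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))

    # Random-effects pooled estimate and its conventional standard error.
    wr = 1 / (v + tau2)
    mu = np.sum(wr * y) / wr.sum()
    se_conventional = np.sqrt(1 / wr.sum())

    # HKSJ refinement: a rescaled standard error combined with a t quantile,
    # which gives better-calibrated intervals when k is small.
    se_hk = np.sqrt(np.sum(wr * (y - mu) ** 2) / ((k - 1) * wr.sum()))
    ci = mu + np.array([-1, 1]) * stats.t.ppf(0.975, k - 1) * se_hk
    ```

    With few studies, `se_conventional` combined with a normal quantile tends to undercover; the rescaled `se_hk` with a t quantile on k − 1 degrees of freedom is the refinement the abstract refers to.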

  17. What to use to express the variability of data: Standard deviation or standard error of mean?

    PubMed

    Barde, Mohini P; Barde, Prajakt J

    2012-07-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. The SEM quantifies uncertainty in the estimate of the mean, whereas the SD indicates the dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be summarized with the SD. Use of the SEM should be limited to computing confidence intervals, which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
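    The distinction is easy to make concrete. A minimal sketch with an invented sample, showing the SD (dispersion of observations), the SEM (precision of the mean), and the confidence interval that the SEM feeds into:

    ```python
    import numpy as np
    from scipy import stats

    x = np.array([4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.4, 5.0])  # hypothetical sample
    n = len(x)

    sd = x.std(ddof=1)             # dispersion of individual observations
    sem = sd / np.sqrt(n)          # uncertainty in the estimate of the mean

    # A 95% CI for the population mean uses the SEM, not the SD.
    ci = x.mean() + np.array([-1, 1]) * stats.t.ppf(0.975, n - 1) * sem
    ```

    Because the SEM shrinks with 1/√n, reporting it as if it described the spread of the data makes variability look misleadingly small; the SD does not shrink with sample size.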

  18. Conditional standard errors of measurement for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.

    PubMed

    Price, Larry R; Raju, Nambury; Lurie, Anna; Wilkins, Charles; Zhu, Jianjun

    2006-02-01

    A specific recommendation of the 1999 Standards for Educational and Psychological Testing by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education is that test publishers report estimates of the conditional standard error of measurement (SEM). Procedures for calculating the conditional (score-level) SEM based on raw scores are well documented; however, few procedures have been developed for estimating the conditional SEM of subtest or composite scale scores resulting from a nonlinear transformation. Item response theory provided the psychometric foundation to derive the conditional standard errors of measurement and confidence intervals for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.
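    One common IRT route to a conditional SEM is via the test information function, where SEM(θ) = 1/√I(θ): the error of measurement varies with ability rather than being a single test-wide constant. The 2PL item parameters below are invented for illustration and are not the WPPSI-III calibration:

    ```python
    import numpy as np

    # Hypothetical 2PL item parameters: discrimination a, difficulty b.
    a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
    b = np.array([-1.0, -0.5, 0.0, 0.5, 1.2])

    def conditional_sem(theta):
        """Conditional SEM at ability theta: 1 / sqrt(test information)."""
        p = 1 / (1 + np.exp(-a * (theta - b)))     # 2PL response probabilities
        info = np.sum(a ** 2 * p * (1 - p))        # test information function
        return 1 / np.sqrt(info)
    ```

    Measurement is most precise (smallest SEM) where the items are most informative, near the item difficulties, and degrades toward the extremes of the ability scale.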

  19. Cost-effectiveness of the stream-gaging program in Kentucky

    USGS Publications Warehouse

    Ruhl, K.J.

    1989-01-01

    This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by 40%. (USGS)

  20. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    NASA Astrophysics Data System (ADS)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  1. Methods for determining and processing 3D errors and uncertainties for AFM data analysis

    NASA Astrophysics Data System (ADS)

    Klapetek, P.; Nečas, D.; Campbellová, A.; Yacoot, A.; Koenders, L.

    2011-02-01

    This paper describes the processing of three-dimensional (3D) scanning probe microscopy (SPM) data. It is shown that 3D volumetric calibration error and uncertainty data can be acquired for both metrological atomic force microscope systems and commercial SPMs. These data can be used within nearly all the standard SPM data processing algorithms to determine local values of uncertainty of the scanning system. If the error function of the scanning system is determined for the whole measurement volume of an SPM, it can be converted to yield local dimensional uncertainty values that can in turn be used for evaluation of uncertainties related to the acquired data and for further data processing applications (e.g. area, ACF, roughness) within direct or statistical measurements. These have been implemented in the software package Gwyddion.

  2. Ultraspectral sounding retrieval error budget and estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larrabee L.; Yang, Ping

    2011-11-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  3. Application of Rapid Visco Analyser (RVA) viscograms and chemometrics for maize hardness characterisation.

    PubMed

    Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena

    2015-04-15

    It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Ultraspectral Sounding Retrieval Error Budget and Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2011-01-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  5. Analysis of tractable distortion metrics for EEG compression applications.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-07-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
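    The two criteria compared in the abstract are simple to compute side by side. A minimal sketch on a synthetic EEG-like trace (the signal, noise levels, and units are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "EEG" (microvolts) and a reconstruction with added coding noise.
    eeg = 50 * np.sin(np.linspace(0, 20 * np.pi, 1000)) + 10 * rng.standard_normal(1000)
    recon = eeg + 2.0 * rng.standard_normal(1000)

    # Percentage root-mean-square difference: global, relative, unitless.
    prd = 100 * np.linalg.norm(eeg - recon) / np.linalg.norm(eeg)

    # Root-mean-square error: absolute distortion in the signal's own units (uV).
    rmse = np.sqrt(np.mean((eeg - recon) ** 2))
    ```

    The PRD depends on the amplitude of the signal itself, so the same coding noise yields different PRD values on high- and low-amplitude recordings; the RMSE reports distortion directly in microvolts, which is what clinical noise-limit guidelines are written in.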

  6. Error in Radar-Derived Soil Moisture due to Roughness Parameterization: An Analysis Based on Synthetical Surface Profiles

    PubMed Central

    Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.

    2009-01-01

    In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration. PMID:22399956

  7. Comparing biomarker measurements to a normal range: when to use standard error of the mean (SEM) or standard deviation (SD) confidence intervals tests.

    PubMed

    Pleil, Joachim D

    2016-01-01

    This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.

  8. Is Comprehension Necessary for Error Detection? A Conflict-Based Account of Monitoring in Speech Production

    ERIC Educational Resources Information Center

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…

  9. Cosmographic analysis with Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-05-01

    The limits of standard cosmography are here revised, addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To figure this out, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which convergence of Chebyshev rational functions is better than that of standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.
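    The underlying claim, that Chebyshev-based approximants control error over a whole interval far better than a Taylor series truncated at one point, can be illustrated with a toy function. This sketch uses a plain Chebyshev least-squares fit rather than the paper's rational Chebyshev construction, and the test function and degrees are invented:

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    f = lambda x: 1 / (1 + 5 * x ** 2)   # Runge-like function, hard for Taylor
    x = np.linspace(-1, 1, 2001)

    # Degree-4 Taylor series at 0: sum of (-5 x^2)^k, convergent only for |x| < 1/sqrt(5).
    taylor = sum((-5.0 * x ** 2) ** k for k in range(3))

    # Degree-4 Chebyshev least-squares fit over the whole interval [-1, 1].
    cheb = C.chebval(x, C.chebfit(x, f(x), 4))

    taylor_err = np.max(np.abs(taylor - f(x)))   # blows up near the endpoints
    cheb_err = np.max(np.abs(cheb - f(x)))       # stays small across the interval
    ```

    The Taylor truncation diverges outside its convergence radius, whereas the Chebyshev fit distributes its error across the interval; this is the same stability argument the authors make for high-redshift data lying beyond the Taylor convergence radius.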

  10. TEMORA 1: A new zircon standard for Phanerozoic U-Pb geochronology

    USGS Publications Warehouse

    Black, L.P.; Kamo, S.L.; Allen, C.M.; Aleinikoff, J.N.; Davis, D.W.; Korsch, R.J.; Foudoulis, C.

    2003-01-01

    The role of the standard is critical to the derivation of reliable U-Pb zircon ages by micro-beam analysis. For maximum reliability, it is critically important that the utilised standard be homogeneous at all scales of analysis. It is equally important that the standard has been precisely and accurately dated by an independent technique. This study reports the emergence of a new zircon standard that meets those criteria, as demonstrated by Sensitive High Resolution Ion MicroProbe (SHRIMP), isotope dilution thermal ionisation mass-spectrometry (IDTIMS) and excimer laser ablation-inductively coupled plasma-mass-spectrometry (ELA-ICP-MS) documentation. The TEMORA 1 zircon standard derives from the Middledale Gabbroic Diorite, a high-level mafic stock within the Palaeozoic Lachlan Orogen of eastern Australia. Its 206Pb/238U IDTIMS age has been determined to be 416.75 ± 0.24 Ma (95% confidence limits), based on measurement errors alone. Spike-calibration uncertainty limits the accuracy to 416.8 ± 1.1 Ma for U-Pb intercomparisons between different laboratories that do not use a common spike. © 2003 Published by Elsevier Science B.V. All rights reserved.

  11. General Aviation Avionics Statistics.

    DTIC Science & Technology

    1980-12-01

    designed to produce standard errors on these variables at levels specified by the FAA. No controls were placed on the standard errors of the non-design...Transponder Encoding Requirement and Mode C Automatic Altitude Reporting Capability (has been deleted); Two-way Radio; VOR or TACAN Receiver. Remaining 42

  12. Prevention of medication errors: detection and audit.

    PubMed

    Montesi, Germana; Lechi, Alessandro

    2009-06-01

    1. Medication errors have important implications for patient safety, and their identification is a main target in improving clinical practice, in order to prevent adverse events. 2. Error detection is the first crucial step. Approaches to this are likely to differ between research and routine care, and the most suitable must be chosen according to the setting. 3. The major methods for detecting medication errors and associated adverse drug-related events are chart review, computerized monitoring, administrative databases, and claims data, together with direct observation, incident reporting, and patient monitoring. All of these methods have both advantages and limitations. 4. Reporting discloses medication errors, can trigger warnings, and encourages the diffusion of a culture of safe practice. Combining and comparing data from various sources increases the reliability of the system. 5. Error prevention can be planned by means of retroactive and proactive tools, such as audit and Failure Mode, Effect, and Criticality Analysis (FMECA). Audit is also an educational activity, which promotes high-quality care; it should be carried out regularly. In an audit cycle we can compare what is actually done against reference standards and put in place corrective actions to improve the performance of individuals and systems. 6. Patient safety must be the first aim in every setting, in order to build safer systems, learning from errors and reducing the human and fiscal costs.

  13. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression, and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
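    The phenomenon can be reproduced with a minimal null-model simulation: regress a trait with no true association on a rare genotype and count how often p falls below the threshold. All sample sizes, allele frequencies, and trait distributions below are invented for illustration, not the GAW 19 data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n, maf, reps, alpha = 1000, 0.01, 1000, 0.05

    hits_normal = hits_gamma = 0
    for _ in range(reps):
        g = rng.binomial(2, maf, n)            # rare-variant genotypes (0/1/2)
        # Traits simulated under the null: no true association with g.
        hits_normal += stats.linregress(g, rng.standard_normal(n)).pvalue < alpha
        hits_gamma += stats.linregress(g, rng.gamma(1.0, size=n)).pvalue < alpha

    rate_normal = hits_normal / reps           # stays near the nominal alpha
    rate_gamma = hits_gamma / reps             # per the abstract, tends to exceed
                                               # alpha, more so at stricter thresholds
    ```

    With a rare variant, only a handful of carriers drive the test statistic, so a skewed trait distribution breaks the normal approximation for the regression slope; this is the mechanism behind the inflation the abstract reports.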

  14. Composite Reliability and Standard Errors of Measurement for a Seven-Subtest Short Form of the Wechsler Adult Intelligence Scale-Revised.

    ERIC Educational Resources Information Center

    Schretlen, David; And Others

    1994-01-01

    Composite reliability and standard errors of measurement were computed for prorated Verbal, Performance, and Full-Scale intelligence quotient (IQ) scores from a seven-subtest short form of the Wechsler Adult Intelligence Scale-Revised. Results with 1,880 adults (standardization sample) indicate that this form is as reliable as the complete test.…

  15. A Brief Look at: Test Scores and the Standard Error of Measurement. E&R Report No. 10.13

    ERIC Educational Resources Information Center

    Holdzkom, David; Sumner, Brian; McMillen, Brad

    2010-01-01

    In the context of standardized testing, the standard error of measurement (SEM) is a measure of the factors other than the student's actual knowledge of the tested material that may affect the student's test score. Such factors may include distractions in the testing environment, fatigue, hunger, or even luck. This means that a student's observed…

  16. Attitudes of Mashhad Public Hospital's Nurses and Midwives toward the Causes and Rates of Medical Errors Reporting.

    PubMed

    Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh

    2017-03-01

    Patient safety is one of the main objectives of healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to an increase in patient mortality and to challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important, because these errors, besides being costly, threaten patient safety. The aim was to evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad public hospitals. Data collection was done through the Goldstone (2001) revised questionnaire. SPSS 11.5 software was used for data analysis, applying descriptive and inferential statistics. Standard deviation and relative frequency distribution were used for calculation of the mean, and the results were presented as tables and charts. The chi-square test was used for the inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years, and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average number of medical errors was among employees with three to four years of work experience, while the lowest was among those with one to two years of work experience. The highest average number of medical errors occurred during the evening shift, while the lowest occurred during the night shift. Three main causes of medical errors were identified: illegible physician prescription orders, similarity of names between different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoints of nurses and midwives were illegible physician's orders, drug-name similarity with other drugs, nurse fatigue, and damaged labels or packaging of the drug, respectively. Head-nurse feedback, peer feedback, and fear of punishment or job loss were considered reasons for under-reporting of medical errors. This research demonstrates the need for greater attention to be paid to the causes of medical errors.

  17. The Second International Standard for Penicillin*

    PubMed Central

    Humphrey, J. H.; Mussett, M. V.; Perry, W. L. M.

    1953-01-01

    In 1950 the Department of Biological Standards, National Institute for Medical Research, London, was authorized by the WHO Expert Committee on Biological Standardization to prepare the Second International Standard for Penicillin. A single batch of specially recrystallized sodium penicillin G was obtained and 11 laboratories in seven different countries were requested to take part in its collaborative assay. 112 assays were carried out, of which 101 were done by cup-plate methods using either Staphylococcus aureus or Bacillus subtilis. The results were subjected to standard methods of analysis, on the basis of which the authors define the Second International Standard for Penicillin as containing 1,670 International Units (IU) per mg, with limits of error (P = 0.05) of 1,666-1,674 IU/mg. The International Unit is therefore redefined as the activity contained in 0.0005988 mg of the Second International Standard for Penicillin. PMID:13082387
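The redefined unit is just the reciprocal of the assayed potency; a one-line check of the arithmetic reported above:

```python
# Check the arithmetic behind the redefined International Unit: at a potency
# of 1,670 IU per mg, one IU corresponds to 1/1670 mg of the standard.
potency_iu_per_mg = 1670
mg_per_iu = 1 / potency_iu_per_mg
print(round(mg_per_iu, 7))  # 0.0005988
```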

  18. Molecular radiotherapy: the NUKFIT software for calculating the time-integrated activity coefficient.

    PubMed

    Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G

    2013-10-01

    Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm that allows an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the data used. To estimate the values of the adjustable parameters, an objective function, depending on the data, the parameters of the error model, the fitting function, and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness, the starting values are determined automatically using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard errors of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented in MATLAB. To validate the implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit parameters and their standard errors estimated by SAAM numerical and NUKFIT differed by <1%, as did the time-integrated activity coefficients (standard errors between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool, with its underlying methodology, can be employed to objectively and reproducibly estimate the time-integrated activity coefficient and its standard error for most time-activity data in molecular radiotherapy.
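The core computation (analytic integration of a fitted sum of exponentials) can be sketched in a few lines. This is our illustration of the general method, not NUKFIT's actual code, and the fitted amplitudes and decay rates below are hypothetical:

```python
def time_integrated_activity_coefficient(amplitudes, decay_rates, a0):
    """Analytic 0-to-infinity integral of A(t) = sum_i a_i * exp(-lam_i * t),
    divided by the administered activity a0 (giving the residence time)."""
    integral = sum(a / lam for a, lam in zip(amplitudes, decay_rates))
    return integral / a0

# Hypothetical two-exponential fit: A(t) = 80*exp(-0.1 t) + 20*exp(-0.01 t) MBq,
# administered activity 100 MBq, decay rates in 1/h.
tiac = time_integrated_activity_coefficient([80.0, 20.0], [0.1, 0.01], 100.0)
print(tiac)  # 28.0 hours
```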

  19. The influence of different error estimates in the detection of postoperative cognitive dysfunction using reliable change indices with correction for practice effects.

    PubMed

    Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A

    2007-02-01

    The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that had no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the use of the within-subject standard deviation (WSD), expressing the effects of random error, as an error estimate is a theoretically appropriate denominator when a constant error correction, removing the effects of systematic error, is deducted from the numerator in a RCI.
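A minimal sketch of the practice-corrected RCI logic described above (generic formulas with hypothetical numbers, not the paper's data):

```python
import math

def within_subject_sd(retest_changes):
    """WSD from control test-retest change scores: the SD of the
    differences divided by sqrt(2) (the random-error component)."""
    n = len(retest_changes)
    mean = sum(retest_changes) / n
    var = sum((c - mean) ** 2 for c in retest_changes) / (n - 1)
    return math.sqrt(var / 2)

def rci_practice_corrected(pre, post, practice_effect, wsd):
    """Change score minus the control group's mean practice effect
    (the systematic error), divided by the WSD (the random error);
    a value <= -1.96 suggests reliable decline (POCD)."""
    return (post - pre - practice_effect) / wsd

# Hypothetical patient: 50 preoperatively, 47 postoperatively, with a
# control practice effect of +2 points and a WSD of 1.5.
rci = rci_practice_corrected(50.0, 47.0, 2.0, 1.5)
print(round(rci, 2))  # -3.33, below -1.96, so flagged as decline
```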

  20. X-ray fluorescence determination of Sn, Sb, Pb in lead-based bearing alloys using a solution technique

    NASA Astrophysics Data System (ADS)

    Tian, Lunfu; Wang, Lili; Gao, Wei; Weng, Xiaodong; Liu, Jianhui; Zou, Deshuang; Dai, Yichun; Huang, Shuke

    2018-03-01

    For the quantitative analysis of the principal elements in lead-antimony-tin alloys, direct X-ray fluorescence (XRF) measurement of solid metal disks introduces considerable errors due to microstructural inhomogeneity. To solve this problem, an aqueous-solution XRF method is proposed for determining major amounts of Sb, Sn, and Pb in lead-based bearing alloys. The alloy samples were dissolved in a mixture of nitric acid and tartaric acid to eliminate the effects of the alloys' microstructure on the XRF analysis. Rh Compton scattering was used as the internal standard for Sb and Sn, and Bi was added as the internal standard for Pb, to correct for matrix effects and instrumental and operational variations. High-purity lead, antimony, and tin were used to prepare synthetic standards, from which calibration curves were constructed for the three elements after optimizing the spectrometer parameters. The method has been successfully applied to the analysis of lead-based bearing alloys and is more rapid than the classical titration methods normally used. The results are consistent with certified values or those obtained by titration.

  1. Sample sizes needed for specified margins of relative error in the estimates of the repeatability and reproducibility standard deviations.

    PubMed

    McClure, Foster D; Lee, Jung K

    2005-01-01

    Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (Sr and SR) such that the actual errors in Sr and SR, relative to their respective true values σr and σR, are at predefined levels. The statistical consequences associated with the sample size required by AOAC INTERNATIONAL to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of Sr and SR were derived and are provided as supporting documentation, including a formula for the number of replicates required for a specified margin of relative error in the estimate of the repeatability standard deviation.
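A back-of-envelope version of the idea (our large-sample approximation, not the paper's exact formulas): the standard error of a sample SD is roughly σ/√(2(n−1)), so the relative margin of error at confidence z is z/√(2(n−1)), which can be solved for n:

```python
import math

def replicates_for_relative_error(margin, z=1.96):
    """Smallest n such that z / sqrt(2*(n-1)) <= margin, i.e. the sample
    SD is within the given relative margin of sigma at ~95% confidence."""
    return math.ceil(1 + z ** 2 / (2 * margin ** 2))

print(replicates_for_relative_error(0.20))  # 50 replicates for a 20% margin
```

Note how quickly the requirement grows: halving the margin roughly quadruples the number of replicates needed.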

  2. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol.

    PubMed

    Yehia, Ali M; Mohamed, Heba M

    2016-01-05

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS), and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated, and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA), and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration, and standard error of prediction. The four multivariate calibration methods could be used directly, without any preliminary separation step, and were successfully applied to pharmaceutical formulation analysis, showing no excipient interference. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Bootstrap Estimates of Standard Errors in Generalizability Theory

    ERIC Educational Resources Information Center

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…

  4. Some computational techniques for estimating human operator describing functions

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1986-01-01

    Computational procedures for improving the reliability of human operator describing functions are described. Special attention is given to the estimation of standard errors associated with mean operator gain and phase shift as computed from an ensemble of experimental trials. This analysis pertains to experiments using sum-of-sines forcing functions. Both open-loop and closed-loop measurement environments are considered.

  5. Latent Profiles of Reading and Language and Their Association with Standardized Reading Outcomes in Kindergarten through 10th Grade

    ERIC Educational Resources Information Center

    Foorman, Barbara R.; Petscher, Yaacov; Stanley, Christopher

    2016-01-01

    The idea of targeting reading instruction to profiles of students' strengths and weaknesses in component skills is central to teaching. However, these profiles are often based on unreliable descriptions of students' oral reading errors, text reading levels, or learning profiles. This research utilized latent profile analysis (LPA) to examine…

  6. Snake River Plain Geothermal Play Fairway Analysis - Phase 1 Raster Files

    DOE Data Explorer

    John Shervais

    2015-10-09

    Snake River Plain Play Fairway Analysis - Phase 1 CRS Raster Files. This dataset contains raster files created in ArcGIS. These raster images depict Common Risk Segment (CRS) maps for HEAT, PERMEABILITY, and SEAL, as well as selected maps of evidence layers. These evidence layers consist of either Bayesian krige functions or kernel density functions, and include: (1) HEAT: heat flow (Bayesian krige map), heat flow standard error on the krige function (data confidence), volcanic vent distribution as a function of age and size, groundwater temperature (equal-interval and natural-breaks bins), and groundwater temperature standard error. (2) PERMEABILITY: fault and lineament maps, both as mapped and as kernel density functions, processed for both dilational tendency (TD) and slip tendency (ST), along with data confidence maps for each data type. Data types include mapped surface faults from USGS and Idaho Geological Survey databases, as well as unpublished mapping; lineations were derived from maximum gradients in magnetic, deep gravity, and intermediate-depth gravity anomalies. (3) SEAL: seal maps based on the presence and thickness of lacustrine sediments and the base of the SRP aquifer. Raster cell size is 2 km. All files were generated in ArcGIS.

  7. A Field-Portable Cell Analyzer without a Microscope and Reagents.

    PubMed

    Seo, Dongmin; Oh, Sangwoo; Lee, Moonjin; Hwang, Yongha; Seo, Sungkyu

    2017-12-29

    This paper demonstrates a commercial-level field-portable lens-free cell analyzer called the NaviCell (No-stain and Automated Versatile Innovative cell analyzer), capable of automatically analyzing cell count and viability without an optical microscope or reagents. Based on the lens-free shadow imaging technique, the NaviCell (162 × 135 × 138 mm³, 1.02 kg) has the advantage of providing analysis results with a lower standard deviation between measurements, owing to its large field of view. Importantly, cell counting and viability can be analyzed without the use of any reagent, thereby simplifying the measurement procedure and reducing potential errors during sample preparation. In this study, the performance of the NaviCell for cell counting and viability testing was demonstrated using 13 and six cell lines, respectively. Based on the results of the hemocytometer (the de facto standard), the error rate (ER) and coefficient of variation (CV) of the NaviCell are approximately 3.27 and 2.16 times better, respectively, than those of a commercial cell counter. In cell viability testing, the NaviCell also showed ER and CV improvements of 5.09 and 1.8 times, respectively, demonstrating sufficient potential in the field of cell analysis.

  8. [A basic research to share Fourier transform near-infrared spectrum information resource].

    PubMed

    Zhang, Lu-Da; Li, Jun-Hui; Zhao, Long-Lian; Zhao, Li-Li; Qin, Fang-Li; Yan, Yan-Lu

    2004-08-01

    A method to share the information resource in a database of Fourier transform near-infrared (FTNIR) spectra of agricultural products, and to utilize the spectral information fully, is explored in this paper. Mapping spectral information from one instrument to another is studied so that the information can be expressed accurately across instruments. The mapped spectra are then used to establish a quantitative analysis model without including standard samples. For the protein content of twenty-two wheat samples, the correlation coefficient r between the model estimates and the Kjeldahl values is 0.941 with a relative error of 3.28%, while r is 0.963 with a relative error of 2.4% for the other model, which was established using standard samples. It is shown that spectral information can be shared by using mapped spectra. It can therefore be concluded that the spectral information in one FTNIR database can be transformed into another instrument's mapped spectra, making full use of the information resource in the database and realizing resource sharing between different instruments.

  9. An analysis of Landsat-4 Thematic Mapper geometric properties

    NASA Technical Reports Server (NTRS)

    Walker, R. E.; Zobrist, A. L.; Bryant, N. A.; Gohkman, B.; Friedman, S. Z.; Logan, T. L.

    1984-01-01

    Landsat-4 Thematic Mapper data of Washington, DC, Harrisburg, PA, and Salton Sea, CA were analyzed to determine geometric integrity and conformity of the data to known earth surface geometry. Several tests were performed. Intraband correlation and interband registration were investigated. No problems were observed in the intraband analysis, and aside from indications of slight misregistration between bands of the primary versus bands of the secondary focal planes, interband registration was well within the specified tolerances. A substantial number of ground control points were found and used to check the images' conformity to the Space Oblique Mercator (SOM) projection of their respective areas. The means of the residual offsets, which included nonprocessing-related measurement errors, were close to the one-pixel level in the two scenes examined. The Harrisburg scene residual mean was 28.38 m (0.95 pixels) with a standard deviation of 19.82 m (0.66 pixels), while the mean and standard deviation for the Salton Sea scene were 40.46 m (1.35 pixels) and 30.57 m (1.02 pixels), respectively. Overall, the data were judged to be of high geometric quality, with errors close to those targeted by the TM sensor design specifications.

  10. Comparison of various error functions in predicting the optimum isotherm by linear and non-linear regression analysis for the sorption of basic red 9 by activated carbon.

    PubMed

    Kumar, K Vasanth; Porkodi, K; Rocha, F

    2008-01-15

    A comparison of linear and non-linear regression methods for selecting the optimum isotherm was made using the experimental equilibrium data for basic red 9 sorption by activated carbon. The r(2) was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r(2)), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ), and sum of the absolute errors (EABS), were used to estimate the parameters of the two- and three-parameter isotherms and to select the optimum isotherm. Non-linear regression was found to be the better way to obtain both the isotherm parameters and the optimum isotherm. For the two-parameter isotherms, MPSD was the best error function for minimizing the error distribution between the experimental equilibrium data and the predicted isotherms; for the three-parameter isotherms, r(2) served this purpose best. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm; in addition, the theory behind the predicted isotherm should be verified against the experimental data. A coefficient of non-determination, K(2), is introduced and was found to be very useful in identifying the best error function when selecting the optimum isotherm.
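The error functions named above have standard closed forms; these are plain-Python versions of the usual definitions (our implementations, where `p` is the number of isotherm parameters):

```python
import math

def errsq(obs, pred):   # sum of the errors squared
    return sum((o - q) ** 2 for o, q in zip(obs, pred))

def eabs(obs, pred):    # sum of the absolute errors
    return sum(abs(o - q) for o, q in zip(obs, pred))

def are(obs, pred):     # average relative error, in percent
    return 100 / len(obs) * sum(abs((o - q) / o) for o, q in zip(obs, pred))

def hybrid(obs, pred, p):   # hybrid fractional error function
    return 100 / (len(obs) - p) * sum((o - q) ** 2 / o
                                      for o, q in zip(obs, pred))

def mpsd(obs, pred, p):     # Marquardt's percent standard deviation
    return 100 * math.sqrt(sum(((o - q) / o) ** 2
                               for o, q in zip(obs, pred)) / (len(obs) - p))
```

Minimizing different functions weights the residuals differently (HYBRID and MPSD divide by the observed value, down-weighting high-concentration points), which is why they can select different "optimum" isotherms from the same data.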

  11. Unit of Measurement Used and Parent Medication Dosing Errors

    PubMed Central

    Dreyer, Benard P.; Ugboaja, Donna C.; Sanchez, Dayana C.; Paul, Ian M.; Moreira, Hannah A.; Rodriguez, Luis; Mendelsohn, Alan L.

    2014-01-01

    BACKGROUND AND OBJECTIVES: Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. METHODS: Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. RESULTS: Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2–4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03–3.5) dose; associations greater for parents with low health literacy and non–English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon–associated measurement errors. CONCLUSIONS: Findings support a milliliter-only standard to reduce medication errors. PMID:25022742

  12. Unit of measurement used and parent medication dosing errors.

    PubMed

    Yin, H Shonna; Dreyer, Benard P; Ugboaja, Donna C; Sanchez, Dayana C; Paul, Ian M; Moreira, Hannah A; Rodriguez, Luis; Mendelsohn, Alan L

    2014-08-01

    Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2-4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03-3.5) dose; associations greater for parents with low health literacy and non-English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon-associated measurement errors. Findings support a milliliter-only standard to reduce medication errors. Copyright © 2014 by the American Academy of Pediatrics.

  13. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks.

    PubMed

    Eppenhof, Koen A J; Pluim, Josien P W

    2018-04-01

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.

  14. Time-order errors and standard-position effects in duration discrimination: An experimental study and an analysis by the sensation-weighting model.

    PubMed

    Hellström, Åke; Rammsayer, Thomas H

    2015-10-01

    Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström's sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St-Co, Co-St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St-Co than for Co-St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.

  15. Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada

    USGS Publications Warehouse

    Hess, G.W.; Bohman, L.R.

    1996-01-01

    Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.

  16. Intra-rater reliability of hallux flexor strength measures using the Nintendo Wii Balance Board.

    PubMed

    Quek, June; Treleaven, Julia; Brauer, Sandra G; O'Leary, Shaun; Clark, Ross A

    2015-01-01

    The purpose of this study was to investigate the intra-rater reliability of a new method, used in combination with the Nintendo Wii Balance Board (NWBB), to measure hallux flexor muscle strength. Thirty healthy individuals (age: 34.9 ± 12.9 years, height: 170.4 ± 10.5 cm, weight: 69.3 ± 15.3 kg, female = 15) participated. Repeated testing was completed within 7 days. Participants performed strength testing in sitting using a wooden platform in combination with the NWBB. This new method was set up to selectively recruit an intrinsic muscle of the foot, specifically the flexor hallucis brevis. Statistical analysis was performed using intraclass correlation coefficients (ICC) and ordinary least product analysis. To estimate measurement error, the standard error of measurement (SEM), minimal detectable change (MDC), and percentage error were calculated. Results indicate excellent intra-rater reliability (ICC = 0.982, CI = 0.96-0.99) with an absence of systematic bias. The SEM, MDC, and percentage error values were 0.5, 1.4, and 12%, respectively. This study demonstrates that the new method in combination with the NWBB is reliable for measuring hallux flexor strength and has potential for future research and clinical application.
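The measurement-error statistics reported here follow standard formulas. The sketch below uses an assumed pooled SD of 3.7 (our illustrative value, not taken from the study) together with the reported ICC:

```python
import math

def sem(sd, icc):
    """Standard error of measurement: SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1 - icc)

def mdc95(sem_value):
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2) * sem_value

s = sem(3.7, 0.982)  # assumed pooled SD of 3.7 with the reported ICC of 0.982
print(round(s, 1), round(mdc95(s), 1))  # 0.5 1.4
```

With that assumed SD, the formulas reproduce the SEM ≈ 0.5 and MDC ≈ 1.4 reported in the abstract.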

  17. LOGISTIC FUNCTION PROFILE FIT: A least-squares program for fitting interface profiles to an extended logistic function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirchhoff, William H.

    2012-09-15

    The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516:2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
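To make the three-parameter idea concrete, here is one way to write a logistic interface profile with position, width, and asymmetry. This is our illustrative parameterization, not necessarily the exact extended form standardized in ASTM E1636-10; the asymmetry term lets the effective width vary smoothly across the interface:

```python
import math

def extended_logistic(x, y_pre, y_post, x0, w, asym=0.0):
    """Profile stepping from y_pre to y_post around position x0 with
    width w; a nonzero asym skews the transition by making the effective
    width differ on either side of x0 (illustrative form)."""
    w_eff = w * math.exp(asym * (x - x0) / w)  # asymmetric effective width
    return y_pre + (y_post - y_pre) / (1 + math.exp(-(x - x0) / w_eff))

print(extended_logistic(0.0, 0.0, 1.0, x0=0.0, w=1.0))  # 0.5 at the interface position
```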

  18. A guide to evaluating linkage quality for the analysis of linked data.

    PubMed

    Harron, Katie L; Doidge, James C; Knight, Hannah E; Gilbert, Ruth E; Goldstein, Harvey; Cromwell, David A; van der Meulen, Jan H

    2017-10-01

    Linked datasets are an important resource for epidemiological and clinical studies, but linkage error can lead to biased results. For data security reasons, linkage of personal identifiers is often performed by a third party, making it difficult for researchers to assess the quality of the linked dataset in the context of specific research questions. This is compounded by a lack of guidance on how to determine the potential impact of linkage error. We describe how linkage quality can be evaluated and provide widely applicable guidance for both data providers and researchers. Using an illustrative example of a linked dataset of maternal and baby hospital records, we demonstrate three approaches for evaluating linkage quality: applying the linkage algorithm to a subset of gold standard data to quantify linkage error; comparing characteristics of linked and unlinked data to identify potential sources of bias; and evaluating the sensitivity of results to changes in the linkage procedure. These approaches can inform our understanding of the potential impact of linkage error and provide an opportunity to select the most appropriate linkage procedure for a specific analysis. Evaluating linkage quality in this way will improve the quality and transparency of epidemiological and clinical research using linked data. © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association.

  19. Comparison of Flow-Dependent and Static Error Correlation Models in the DAO Ozone Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Wargan, K.; Stajner, I.; Pawson, S.

    2003-01-01

    In a data assimilation system the forecast error covariance matrix governs the way in which the data information is spread throughout the model grid. Implementation of a correct method of assigning covariances is expected to have an impact on the analysis results. The simplest models assume that correlations are constant in time and isotropic or nearly isotropic. In such models the analysis depends on the dynamics only through assumed error standard deviations. In applications to atmospheric tracer data assimilation this may lead to inaccuracies, especially in regions with strong wind shears or high gradients of potential vorticity, as well as in areas where no data are available. In order to overcome this problem we have developed a flow-dependent covariance model that is based on short term evolution of error correlations. The presentation compares performance of a static and a flow-dependent model applied to a global three-dimensional ozone data assimilation system developed at NASA's Data Assimilation Office. We will present some results of validation against WMO balloon-borne sondes and the Polar Ozone and Aerosol Measurement (POAM) III instrument. Experiments show that allowing forecast error correlations to evolve with the flow results in positive impact on assimilated ozone within the regions where data were not assimilated, particularly at high latitudes in both hemispheres and in the troposphere. We will also discuss statistical characteristics of both models; in particular we will argue that including evolution of error correlations leads to stronger internal consistency of a data assimilation system.

  20. In vitro quantification of the performance of model-based mono-planar and bi-planar fluoroscopy for 3D joint kinematics estimation.

    PubMed

    Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita

    2013-03-01

    Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact joints kinematics with performance/cost trade-off. The aim of this study was to compare the performances of mono- and bi-planar setups to a marker-based gold-standard, during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups proved critical in quantifying the out-of-plane translation (error < 6.5 mm), and bi-planar setups in quantifying the rotation along the bone longitudinal axis (error < 1.3°). These errors propagated to joint angles and translations differently depending on the alignment of the anatomical axes and the fluoroscopic reference frames. Internal-external rotation was the least accurate angle both with mono- (error < 4.4°) and bi-planar (error < 1.7°) setups, due to bone longitudinal symmetries. Results highlighted that accuracy for mono-planar in-plane pose parameters is comparable to bi-planar, but with halved computational costs, halved segmentation time and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty that is differently propagated to relative kinematics depending on the setup. To realize its full benefits, the motion task to be investigated should be designed to maintain the joint inside the visible volume, introducing constraints with respect to mono-planar analysis.

  1. Prevalence of refractive error in Europe: the European Eye Epidemiology (E(3)) Consortium.

    PubMed

    Williams, Katie M; Verhoeven, Virginie J M; Cumberland, Phillippa; Bertelsen, Geir; Wolfram, Christian; Buitendijk, Gabriëlle H S; Hofman, Albert; van Duijn, Cornelia M; Vingerling, Johannes R; Kuijpers, Robert W A M; Höhn, René; Mirshahi, Alireza; Khawaja, Anthony P; Luben, Robert N; Erke, Maja Gran; von Hanno, Therese; Mahroo, Omar; Hogg, Ruth; Gieger, Christian; Cougnard-Grégoire, Audrey; Anastasopoulos, Eleftherios; Bron, Alain; Dartigues, Jean-François; Korobelnik, Jean-François; Creuzot-Garcher, Catherine; Topouzis, Fotis; Delcourt, Cécile; Rahi, Jugnoo; Meitinger, Thomas; Fletcher, Astrid; Foster, Paul J; Pfeiffer, Norbert; Klaver, Caroline C W; Hammond, Christopher J

    2015-04-01

    To estimate the prevalence of refractive error in adults across Europe. Refractive data (mean spherical equivalent) collected between 1990 and 2013 from fifteen population-based cohort and cross-sectional studies of the European Eye Epidemiology (E(3)) Consortium were combined in a random effects meta-analysis stratified by 5-year age intervals and gender. Participants were excluded if they were identified as having had cataract surgery, retinal detachment, refractive surgery or other factors that might influence refraction. Estimates of refractive error prevalence were obtained including the following classifications: myopia ≤-0.75 diopters (D), high myopia ≤-6D, hyperopia ≥1D and astigmatism ≥1D. Meta-analysis of refractive error was performed for 61,946 individuals from fifteen studies with median age ranging from 44 to 81 years and minimal ethnic variation (98 % European ancestry). The age-standardised prevalences (using the 2010 European Standard Population, limited to those ≥25 and <90 years old) were: myopia 30.6 % [95 % confidence interval (CI) 30.4-30.9], high myopia 2.7 % (95 % CI 2.69-2.73), hyperopia 25.2 % (95 % CI 25.0-25.4) and astigmatism 23.9 % (95 % CI 23.7-24.1). Age-specific estimates revealed a high prevalence of myopia in younger participants [47.2 % (CI 41.8-52.5) in 25-29 year-olds]. Refractive error affects just over half of European adults. The greatest burden of refractive error is due to myopia, with high prevalence rates in young adults. Using the 2010 European population estimates, we estimate there are 227.2 million people with myopia across Europe.
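    Random-effects pooling of study-level estimates, as used above, is commonly done with the DerSimonian-Laird estimator of between-study variance. A minimal sketch under that assumption (the consortium's exact stratified weighting is not reproduced here, and the numbers below are illustrative, not the E(3) data):

```python
import math

def dersimonian_laird(estimates, variances):
    """Random-effects pooling with the DerSimonian-Laird estimate of
    the between-study variance tau^2."""
    k = len(estimates)
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    mean_fe = sum(wi * x for wi, x in zip(w, estimates)) / sum(w)
    q = sum(wi * (x - mean_fe) ** 2 for wi, x in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * x for wi, x in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))                      # SE of pooled estimate
    return pooled, se, tau2
```

    When the studies agree exactly, tau^2 collapses to zero and the result reduces to the fixed-effect (inverse-variance) pooled estimate.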

  2. Evaluation of the 3dMDface system as a tool for soft tissue analysis.

    PubMed

    Hong, C; Choi, K; Kachroo, Y; Kwon, T; Nguyen, A; McComb, R; Moon, W

    2017-06-01

    To evaluate the accuracy of three-dimensional stereophotogrammetry by comparing values obtained from direct anthropometry and the 3dMDface system. To achieve a more comprehensive evaluation of the reliability of 3dMD, both linear and surface measurements were examined. UCLA Section of Orthodontics. Mannequin head as model for anthropometric measurements. Image acquisition and analysis were carried out on a mannequin head using 16 anthropometric landmarks and 21 measured parameters for linear and surface distances. 3D images using 3dMDface system were made at 0, 1 and 24 hours; 1, 2, 3 and 4 weeks. Error magnitude statistics used include mean absolute difference, standard deviation of error, relative error magnitude and root mean square error. Intra-observer agreement for all measurements was attained. Overall mean errors were lower than 1.00 mm for both linear and surface parameter measurements, except in 5 of the 21 measurements. The three longest parameter distances showed increased variation compared to shorter distances. No systematic errors were observed for all performed paired t tests (P<.05). Agreement values between two observers ranged from 0.91 to 0.99. Measurements on a mannequin confirmed the accuracy of all landmarks and parameters analysed in this study using the 3dMDface system. Results indicated that 3dMDface system is an accurate tool for linear and surface measurements, with potentially broad-reaching applications in orthodontics, surgical treatment planning and treatment evaluation. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. Medication errors: problems and recommendations from a consensus meeting

    PubMed Central

    Agrawal, Abha; Aronson, Jeffrey K; Britten, Nicky; Ferner, Robin E; de Smet, Peter A; Fialová, Daniela; Fitzgerald, Richard J; Likić, Robert; Maxwell, Simon R; Meyboom, Ronald H; Minuz, Pietro; Onder, Graziano; Schachter, Michael; Velo, Giampaolo

    2009-01-01

    Here we discuss 15 recommendations for reducing the risks of medication errors: Provision of sufficient undergraduate learning opportunities to make medical students safe prescribers. Provision of opportunities for students to practise skills that help to reduce errors. Education of students about common types of medication errors and how to avoid them. Education of prescribers in taking accurate drug histories. Assessment in medical schools of prescribing knowledge and skills and demonstration that newly qualified doctors are safe prescribers. European harmonization of prescribing and safety recommendations and regulatory measures, with regular feedback about rational drug use. Comprehensive assessment of elderly patients for declining function. Exploration of low-dose regimens for elderly patients and preparation of special formulations as required. Training for all health-care professionals in drug use, adverse effects, and medication errors in elderly people. More involvement of pharmacists in clinical practice. Introduction of integrated prescription forms and national implementation in individual countries. Development of better monitoring systems for detecting medication errors, based on classification and analysis of spontaneous reports of previous reactions, and for investigating the possible role of medication errors when patients die. Use of IT systems, when available, to provide methods of avoiding medication errors; standardization, proper evaluation, and certification of clinical information systems. Nonjudgmental communication with patients about their concerns and elicitation of symptoms that they perceive to be adverse drug reactions. Avoidance of defensive reactions if patients mention symptoms resulting from medication errors. PMID:19594525

  4. Hedonic price models with omitted variables and measurement errors: a constrained autoregression-structural equation modeling approach with application to urban Indonesia

    NASA Astrophysics Data System (ADS)

    Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.

    2014-01-01

    Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.

  5. Engine dynamic analysis with general nonlinear finite element codes

    NASA Technical Reports Server (NTRS)

    Adams, M. L.; Padovan, J.; Fertis, D. G.

    1991-01-01

    A general engine dynamic analysis as a standard design study computational tool is described for the prediction and understanding of complex engine dynamic behavior. Improved definition of engine dynamic response provides valuable information and insights leading to reduced maintenance and overhaul costs on existing engine configurations. Application of advanced engine dynamic simulation methods provides a considerable cost reduction in the development of new engine designs by eliminating some of the trial and error process done with engine hardware development.

  6. Reliable LC-MS quantitative glycomics using iGlycoMab stable isotope labeled glycans as internal standards.

    PubMed

    Zhou, Shiyue; Tello, Nadia; Harvey, Alex; Boyes, Barry; Orlando, Ron; Mechref, Yehia

    2016-06-01

    Glycans have numerous functions in various biological processes and participate in the progress of diseases. Reliable quantitative glycomic profiling techniques could contribute to the understanding of the biological functions of glycans, and lead to the discovery of potential glycan biomarkers for diseases. Although LC-MS is a powerful analytical tool for quantitative glycomics, variation in ionization efficiency and MS intensity bias affect quantitation reliability. Internal standards can be utilized for glycomic quantitation by MS-based methods to reduce variability. In this study, we used a stable isotope labeled IgG2b monoclonal antibody, iGlycoMab, as an internal standard to reduce potential for errors and to reduce variability due to sample digestion, derivatization, and fluctuation of nanoESI efficiency in the LC-MS analysis of permethylated N-glycans released from model glycoproteins, human blood serum, and a breast cancer cell line. We observed an unanticipated degradation of isotope labeled glycans, tracked a source of such degradation, and optimized a sample preparation protocol to minimize degradation of the internal standard glycans. All results indicated the effectiveness of using iGlycoMab to minimize errors originating from sample handling and instruments. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. An empirical examination of WISE/NEOWISE asteroid analysis and results

    NASA Astrophysics Data System (ADS)

    Myhrvold, Nathan

    2017-10-01

    Observations made by the WISE space telescope and subsequent analysis by the NEOWISE project represent the largest corpus of asteroid data to date, describing the diameter, albedo, and other properties of the ~164,000 asteroids in the collection. I present a critical reanalysis of the WISE observational data, and NEOWISE results published in numerous papers and in the JPL Planetary Data System (PDS). This analysis reveals shortcomings and a lack of clarity, both in the original analysis and in the presentation of results. The procedures used to generate NEOWISE results fall short of established thermal modelling standards. Rather than using a uniform protocol, 10 modelling methods were applied to 12 combinations of WISE band data. Over half the NEOWISE results are based on a single band of data. Most NEOWISE curve fits are poor quality, frequently missing many or all the data points. About 30% of the single-band results miss all the data; 43% of the results derived from the most common multiple-band combinations miss all the data in at least one band. The NEOWISE data processing procedures rely on inconsistent assumptions, and introduce bias by systematically discarding much of the original data. I show that error estimates for the WISE observational data have a true uncertainty factor of ~1.2 to 1.9 times larger than previously described, and that the error estimates do not fit a normal distribution. These issues call into question the validity of the NEOWISE Monte-Carlo error analysis. Comparing published NEOWISE diameters to published estimates using radar, occultation, or spacecraft measurements (ROS) reveals 150 objects for which the NEOWISE diameters were copied exactly from the ROS source. My findings show that the accuracy of diameter estimates for NEOWISE results depends heavily on the choice of data bands and model. Systematic errors in the diameter estimates are much larger than previously described. Systematic errors for diameters in the PDS range from -3% to +27%. Random errors range from -14% to +19% when using all four WISE bands, and from -45% to +74% in cases using only the W2 band. The results presented here show that much work remains to be done towards understanding asteroid data from WISE/NEOWISE.

  8. Quantitative gel electrophoresis: new records in precision by elaborated staining and detection protocols.

    PubMed

    Deng, Xi; Schröder, Simone; Redweik, Sabine; Wätzig, Hermann

    2011-06-01

    Gel electrophoresis (GE) is a very common analytical technique for proteome research and protein analysis. Although the technique was developed decades ago, there is still a considerable need to improve its precision. Using the fluorescence of Colloidal Coomassie Blue-stained proteins in near-infrared (NIR), the major error source caused by the unpredictable background staining is strongly reduced. This result was generalized for various types of detectors. Since GE is a multi-step procedure, standardization of every single step is required. After detailed analysis of all steps, the staining and destaining were identified as the major source of the remaining variation. By employing standardized protocols, pooled percent relative standard deviations of 1.2-3.1% for band intensities were achieved for one-dimensional separations in repetitive experiments. The analysis of variance suggests that the same batch of staining solution should be used for gels of one experimental series to minimize day-to-day variation and to obtain high precision. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. The computation of equating errors in international surveys in education.

    PubMed

    Monseur, Christian; Berezner, Alla

    2007-01-01

    Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally an alternative method based on replication techniques will be presented, based on a simulation study and then applied to the PISA 2000 data.
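    One common formulation of the linking error described above treats the common items as a random sample, so the error is the standard error of the mean shift in common-item difficulties between two cycles. This is a sketch of that idea only, under the independence assumption the paper itself questions; the difficulty values are illustrative:

```python
import math

def linking_error(b_cycle1, b_cycle2):
    """Standard error of the mean item-difficulty shift between two
    assessment cycles, computed over the common (link) items.
    Assumes independently sampled items, an assumption the paper
    discusses critically."""
    shifts = [b2 - b1 for b1, b2 in zip(b_cycle1, b_cycle2)]
    n = len(shifts)
    mean = sum(shifts) / n
    var = sum((s - mean) ** 2 for s in shifts) / (n - 1)  # sample variance
    return math.sqrt(var / n)
```

    A replication-based alternative, as the paper proposes, would instead resample the common items (e.g. jackknife over items) and take the variability of the resulting trend estimates.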

  10. Learning Through Experience: Influence of Formal and Informal Training on Medical Error Disclosure Skills in Residents.

    PubMed

    Wong, Brian M; Coffey, Maitreya; Nousiainen, Markku T; Brydges, Ryan; McDonald-Blumer, Heather; Atkinson, Adelle; Levinson, Wendy; Stroud, Lynfa

    2017-02-01

    Residents' attitudes toward error disclosure have improved over time. It is unclear whether this has been accompanied by improvements in disclosure skills. To measure the disclosure skills of internal medicine (IM), paediatrics, and orthopaedic surgery residents, and to explore resident perceptions of formal versus informal training in preparing them for disclosure in real-world practice. We assessed residents' error disclosure skills using a structured role play with a standardized patient in 2012-2013. We compared disclosure skills across programs using analysis of variance. We conducted a multiple linear regression, including data from a historical cohort of IM residents from 2005, to investigate the influence of predictor variables on performance: training program, cohort year, and prior disclosure training and experience. We conducted a qualitative descriptive analysis of data from semistructured interviews with residents to explore resident perceptions of formal versus informal disclosure training. In a comparison of disclosure skills for 49 residents, there was no difference in overall performance across specialties (4.1 to 4.4 of 5, P  = .19). In regression analysis, only the current cohort was significantly associated with skill: current residents performed better than a historical cohort of 42 IM residents ( P  < .001). Qualitative analysis identified the importance of both formal (workshops, morbidity and mortality rounds) and informal (role modeling, debriefing) activities in preparation for disclosure in real-world practice. Residents across specialties have similar skills in disclosure of errors. Residents identified role modeling and a strong local patient safety culture as key facilitators for disclosure.

  11. Stabilizing Conditional Standard Errors of Measurement in Scale Score Transformations

    ERIC Educational Resources Information Center

    Moses, Tim; Kim, YoungKoung

    2017-01-01

    The focus of this article is on scale score transformations that can be used to stabilize conditional standard errors of measurement (CSEMs). Three transformations for stabilizing the estimated CSEMs are reviewed, including the traditional arcsine transformation, a recently developed general variance stabilization transformation, and a new method…

  12. Multiplicative effects model with internal standard in mobile phase for quantitative liquid chromatography-mass spectrometry.

    PubMed

    Song, Mi; Chen, Zeng-Ping; Chen, Yao; Jin, Jing-Wen

    2014-07-01

    Liquid chromatography-mass spectrometry assays suffer from signal instability caused by the gradual fouling of the ion source, vacuum instability, aging of the ion multiplier, etc. To address this issue, in this contribution, an internal standard was added into the mobile phase. The internal standard was therefore ionized and detected together with the analytes of interest by the mass spectrometer to ensure that variations in measurement conditions and/or instrument have similar effects on the signal contributions of both the analytes of interest and the internal standard. Subsequently, based on the unique strategy of adding internal standard in mobile phase, a multiplicative effects model was developed for quantitative LC-MS assays and tested on a proof of concept model system: the determination of amino acids in water by LC-MS. The experimental results demonstrated that the proposed method could efficiently mitigate the detrimental effects of continuous signal variation, and achieved quantitative results with average relative predictive error values in the range of 8.0-15.0%, which were much more accurate than the corresponding results of conventional internal standard method based on the peak height ratio and partial least squares method (their average relative predictive error values were as high as 66.3% and 64.8%, respectively). Therefore, it is expected that the proposed method can be developed and extended in quantitative LC-MS analysis of more complex systems. Copyright © 2014 Elsevier B.V. All rights reserved.
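    The core idea of adding the internal standard to the mobile phase is that analyte and internal standard experience the same multiplicative signal drift, so the ratio cancels it. The paper's multiplicative effects model is a full calibration model; the sketch below shows only the ratio-cancellation principle, with made-up signal values:

```python
def drift_corrected(analyte_signal, is_signal, is_nominal):
    """Correct analyte signals for multiplicative drift using a
    co-detected internal standard (IS). Because both signals are
    scaled by the same drift factor at each time point, the ratio
    analyte/IS, rescaled by the nominal IS response, recovers the
    drift-free analyte signal."""
    return [a / s * is_nominal for a, s in zip(analyte_signal, is_signal)]
```

    For example, if the true analyte responses are scaled by per-run drift factors and the internal standard is scaled identically, the corrected values equal the true responses exactly in this idealized multiplicative-drift setting.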

  13. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    NASA Astrophysics Data System (ADS)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to look at signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on and translate prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous, atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the useful application of using an Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation will use the specific example of the Picarro G2401 CRDS Analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits of Signal Averaging in Atmospheric Trace-Gas Monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS)," Applied Physics, B57, pp 131-139, April 1993
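    The Allan deviation referenced above has a simple non-overlapping form: average the series in blocks of length m, then take the root of half the mean squared difference of successive block averages. A minimal stdlib-only sketch:

```python
import math

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at averaging factor m:
    sigma(tau) = sqrt( 0.5 * mean( (ybar[i+1] - ybar[i])^2 ) ),
    where ybar are successive block averages of length m."""
    usable = len(y) - len(y) % m            # drop a trailing partial block
    blocks = [sum(y[i:i + m]) / m for i in range(0, usable, m)]
    diffs = [b2 - b1 for b1, b2 in zip(blocks, blocks[1:])]
    return math.sqrt(0.5 * sum(d * d for d in diffs) / len(diffs))
```

    Plotting this against the averaging time tau = m * (sample interval) gives the familiar Allan plot: white noise falls as 1/sqrt(tau), while drift makes the curve turn upward, which is what makes it useful for choosing calibration intervals.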

  14. Errors in Bibliographic Citations: A Continuing Problem.

    ERIC Educational Resources Information Center

    Sweetland, James H.

    1989-01-01

    Summarizes studies examining citation errors and illustrates errors resulting from a lack of standardization, misunderstanding of foreign languages, failure to examine the document cited, and general lack of training in citation norms. It is argued that the failure to detect and correct citation errors is due to diffusion of responsibility in the…

  15. System statistical reliability model and analysis

    NASA Technical Reports Server (NTRS)

    Lekach, V. S.; Rood, H.

    1973-01-01

    A digital computer code was developed to simulate the time-dependent behavior of the 5-kWe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Then, error analysis was used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design goal of 5 years lifetime is 0.993. This value represents an estimate of the degradation reliability of the system.
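    The quoted reliability follows directly from the stated lifetime distribution, assuming it is normal: the probability of exceeding 5 years with mean 7.7 and standard deviation 1.1 is the standard normal tail at z = (5 - 7.7)/1.1. A stdlib-only check:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# P(lifetime >= 5 years) for a normal lifetime with mean 7.7 y, sigma 1.1 y
p = 1.0 - normal_cdf((5.0 - 7.7) / 1.1)  # ≈ 0.993, matching the abstract
```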

  16. Evaluating concentration estimation errors in ELISA microarray experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
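    First-order propagation of error, the technique at the center of the method above, combines input uncertainties through the partial derivatives of the prediction function. A generic sketch (not the paper's ELISA-specific implementation) using central-difference derivatives and assuming independent input errors:

```python
def propagate_error(f, x, sigmas, h=1e-6):
    """First-order propagation of error:
    sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2,
    with partial derivatives estimated by central differences.
    Assumes the input errors are independent."""
    var = 0.0
    for i, s in enumerate(sigmas):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        deriv = (f(*xp) - f(*xm)) / (2.0 * h)  # numerical df/dx_i
        var += (deriv * s) ** 2
    return var ** 0.5
```

    For a fitted standard curve, f would be the inverse of the curve (concentration as a function of response), and the sigmas would come from the curve-fit and measurement uncertainties.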

  17. The State and Trends of Barcode, RFID, Biometric and Pharmacy Automation Technologies in US Hospitals.

    PubMed

    Uy, Raymonde Charles Y; Kury, Fabricio P; Fontelo, Paul A

    2015-01-01

    The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometrics and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing the human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating an optimistic growth in the adoption of these patient safety solutions.

  18. TLE uncertainty estimation using robust weighted differencing

    NASA Astrophysics Data System (ADS)

    Geul, Jacco; Mooij, Erwin; Noomen, Ron

    2017-05-01

    Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).

  19. A crowdsourcing workflow for extracting chemical-induced disease relations from free text

    PubMed Central

    Li, Tong Shu; Bravo, Àlex; Furlong, Laura I.; Good, Benjamin M.; Su, Andrew I.

    2016-01-01

    Relations between chemicals and diseases are one of the most queried biomedical interactions. Although expert manual curation is the standard method for extracting these relations from the literature, it is expensive and impractical to apply to large numbers of documents, and therefore alternative methods are required. We describe here a crowdsourcing workflow for extracting chemical-induced disease relations from free text as part of the BioCreative V Chemical Disease Relation challenge. Five non-expert workers on the CrowdFlower platform were shown each potential chemical-induced disease relation highlighted in the original source text and asked to make binary judgments about whether the text supported the relation. Worker responses were aggregated through voting, and relations receiving four or more votes were predicted as true. On the official evaluation dataset of 500 PubMed abstracts, the crowd attained a 0.505 F-score (0.475 precision, 0.540 recall), with a maximum theoretical recall of 0.751 due to errors with named entity recognition. The total crowdsourcing cost was $1290.67 ($2.58 per abstract) and took a total of 7 h. A qualitative error analysis revealed that 46.66% of sampled errors were due to task limitations and gold standard errors, indicating that performance can still be improved. All code and results are publicly available at https://github.com/SuLab/crowd_cid_relex Database URL: https://github.com/SuLab/crowd_cid_relex PMID:27087308
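The aggregation rule described (a relation is predicted true when at least four of the five workers vote yes) is simple to sketch; the data shapes and names below are illustrative assumptions, not the authors' code:

```python
from collections import Counter

def aggregate_votes(judgments, threshold=4):
    """Aggregate binary worker judgments per candidate relation.
    `judgments` is an iterable of (relation_id, is_supported) pairs;
    a relation is predicted true when it receives at least
    `threshold` positive votes (4 of 5 in the workflow described)."""
    votes = Counter()
    for relation_id, is_supported in judgments:
        if is_supported:
            votes[relation_id] += 1
    return {rid for rid, n in votes.items() if n >= threshold}
```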

  20. Sea-Level Trend Uncertainty With Pacific Climatic Variability and Temporally-Correlated Noise

    NASA Astrophysics Data System (ADS)

    Royston, Sam; Watson, Christopher S.; Legrésy, Benoît; King, Matt A.; Church, John A.; Bos, Machiel S.

    2018-03-01

    Recent studies have identified climatic drivers of the east-west see-saw of Pacific Ocean satellite altimetry era sea level trends and a number of sea-level trend and acceleration assessments attempt to account for this. We investigate the effect of Pacific climate variability, together with temporally-correlated noise, on linear trend error estimates and determine new time-of-emergence (ToE) estimates across the Indian and Pacific Oceans. Sea-level trend studies often advocate the use of auto-regressive (AR) noise models to adequately assess formal uncertainties, yet sea level often exhibits colored but non-AR(1) noise. Standard error estimates are over- or under-estimated by an AR(1) model for much of the Indo-Pacific sea level. Allowing for PDO and ENSO variability in the trend estimate only reduces standard errors across the tropics and we find noise characteristics are largely unaffected. Of importance for trend and acceleration detection studies, formal error estimates remain on average up to 1.6 times those from an AR(1) model for long-duration tide gauge data. There is an even chance that the observed trend from the satellite altimetry era exceeds the noise in patches of the tropical Pacific and Indian Oceans and the south-west and north-east Pacific gyres. By including climate indices in the trend analysis, the time it takes for the observed linear sea-level trend to emerge from the noise reduces by up to 2 decades.
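As a rough illustration of why the noise model matters for trend uncertainty: under an AR(1) model with lag-1 autocorrelation phi, the naive white-noise standard error of a linear trend is inflated by roughly sqrt((1+phi)/(1-phi)) via the effective-sample-size approximation n_eff = n(1-phi)/(1+phi). This is a textbook approximation, not the paper's full noise analysis:

```python
import math

def ar1_inflation(phi):
    """Factor by which AR(1) noise with lag-1 autocorrelation phi
    inflates the white-noise standard error of a linear trend,
    via the effective-sample-size approximation."""
    return math.sqrt((1 + phi) / (1 - phi))

def trend_se_white(sigma, n, dt=1.0):
    """White-noise standard error of an OLS slope for n evenly spaced
    samples with residual std sigma: sigma / sqrt(sum (t_i - tbar)^2)."""
    t_var = (n * n - 1) * dt * dt / 12.0  # variance of t = 0..n-1 times dt
    return sigma / math.sqrt(n * t_var)
```

For phi = 0.5 the inflation factor is about 1.73, the same order as the "up to 1.6 times" mismatch the abstract reports between AR(1) and better-fitting noise models.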

  1. Prevalence and associated risk factors of undercorrected refractive errors among people with diabetes in Shanghai.

    PubMed

    Zhu, Mengjun; Tong, Xiaowei; Zhao, Rong; He, Xiangui; Zhao, Huijuan; Zhu, Jianfeng

    2017-11-28

    To investigate the prevalence and risk factors of undercorrected refractive error (URE) among people with diabetes in the Baoshan District of Shanghai, where data for undercorrected refractive error are limited. The study was a population-based survey of 649 persons (aged 60 years or older) with diabetes in Baoshan, Shanghai in 2009. One copy of the questionnaire was completed for each subject. Examinations included a standardized refraction and measurement of presenting and best-corrected visual acuity (BCVA), tonometry, slit lamp biomicroscopy, and fundus photography. The calculated age-standardized prevalence rate of URE was 16.63% (95% confidence interval [CI] 13.76-19.49). For visually impaired subjects (presenting vision worse than 20/40 in the better eye), the prevalence of URE was up to 61.11%, and 75.93% of subjects could achieve visual acuity improvement of at least one line using appropriate spectacles. In multiple logistic regression analysis, older age, female gender, non-farmer occupation, increasing degree of myopia, lens opacity status, diabetic retinopathy (DR), body mass index (BMI) lower than normal, and poor glycaemic control were associated with higher URE levels. Wearing distance eyeglasses was a protective factor for URE. The prevalence of undercorrected refractive error among diabetic adults in Shanghai was high. Health education and regular refractive assessment are needed for diabetic adults. Persons with diabetes should be more aware that poor vision is often correctable, especially those with risk factors.

  2. Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.

    PubMed

    Chung, SungWon; Lu, Ying; Henry, Roland G

    2006-11-01

    Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, wild bootstrap was proposed that can be applied without multiple acquisitions. In this paper, two new approaches are introduced, called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to a single-acquisition scheme; both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that the non-constant variance of measured diffusion-attenuated signals can be modeled, which is in fact the assumption behind the widely used weighted least squares solution of the diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of the bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us choose the optimal approach for estimating uncertainties, which can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
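A minimal sketch of the residual-bootstrap idea applied to a plain linear regression — ordinary least squares stands in for the weighted tensor fit actually used in DTI, and the function name is illustrative:

```python
import numpy as np

def residual_bootstrap_slope_se(x, y, n_boot=2000, seed=0):
    """Estimate the standard error of an OLS slope by resampling
    regression residuals (model-based resampling): fit once, then
    repeatedly rebuild y* = fitted + resampled residuals and refit."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    resid = y - fitted
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        y_star = fitted + rng.choice(resid, size=resid.size, replace=True)
        slopes[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]
    return slopes.std(ddof=1)
```

For well-behaved data the bootstrap estimate lands close to the analytic OLS standard error, which is the sanity check one would run before trusting it on tensor fits.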

  3. Radiology's Achilles' heel: error and variation in the interpretation of the Röntgen image.

    PubMed

    Robinson, P J

    1997-11-01

    The performance of the human eye and brain has failed to keep pace with the enormous technical progress in the first full century of radiology. Errors and variations in interpretation now represent the weakest aspect of clinical imaging. Those interpretations which differ from the consensus view of a panel of "experts" may be regarded as errors; where experts fail to achieve consensus, differing reports are regarded as "observer variation". Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. Observer variation is substantial and should be taken into account when different diagnostic methods are compared; in many cases the difference between observers outweighs the difference between techniques. Strategies for reducing error include attention to viewing conditions, training of the observers, availability of previous films and relevant clinical data, dual or multiple reporting, standardization of terminology and report format, and assistance from computers. Digital acquisition and display will probably not affect observer variation but the performance of radiologists, as measured by receiver operating characteristic (ROC) analysis, may be improved by computer-directed search for specific image features. Other current developments show that where image features can be comprehensively described, computer analysis can replace the perception function of the observer, whilst the function of interpretation can in some cases be performed better by artificial neural networks. However, computer-assisted diagnosis is still in its infancy and complete replacement of the human observer is as yet a remote possibility.

  4. Group-type hydrocarbon standards for high-performance liquid chromatographic analysis of middistillate fuels

    NASA Technical Reports Server (NTRS)

    Otterson, D. A.; Seng, G. T.

    1984-01-01

    A new high-performance liquid chromatographic (HPLC) method for group-type analysis of middistillate fuels is described. It uses a refractive index detector and standards that are prepared by reacting a portion of the fuel sample with sulfuric acid. A complete analysis of a middistillate fuel for saturates and aromatics (including the preparation of the standard) requires about 15 min if standards for several fuels are prepared simultaneously. From model fuel studies, the method was found to be accurate to within 0.4 vol% saturates or aromatics, and provides a precision of ±0.4 vol%. Olefin determinations require an additional 15 min of analysis time. However, this determination is needed only for those fuels displaying a significant olefin response at 200 nm (obtained routinely during the saturates/aromatics analysis procedure). The olefin determination uses the responses of the olefins and the corresponding saturates, as well as the average value of their refractive index sensitivity ratios (1.1). Studies indicated that, although the relative error in the olefin results could reach 10 percent by using this average sensitivity ratio, it was 5 percent for the fuels used in this study. Olefin concentrations as low as 0.1 vol% have been determined using this method.

  5. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    PubMed

    West, Brady T; Sakshaug, Joseph W; Aurelien, Guy Alain S

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.
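One concrete instance of the problem described: treating a weighted survey sample as if it were a simple random sample understates standard errors. Kish's approximate design effect from unequal weighting gives a quick sense of the variance inflation; this is a standard approximation used for illustration, not the authors' analysis:

```python
import math

def weighted_mean(values, weights):
    """Survey-weighted point estimate of a mean."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def kish_deff(weights):
    """Kish's approximate design effect from unequal weighting:
    deff = n * sum(w^2) / (sum w)^2, the factor by which sampling
    variance is inflated relative to an equal-probability sample
    of the same size. A naive SE should be scaled by sqrt(deff)."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2
```

With equal weights deff is exactly 1; the more variable the weights, the larger the factor by which a design-ignorant standard error is too small.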

  6. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    PubMed Central

    West, Brady T.; Sakshaug, Joseph W.; Aurelien, Guy Alain S.

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data. PMID:27355817

  7. Influence of modulation frequency in rubidium cell frequency standards

    NASA Technical Reports Server (NTRS)

    Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.

    1983-01-01

    The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.

  8. Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks

    DTIC Science & Technology

    2016-04-01

    … Allan deviation. Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a ... the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by ... measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard
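For context, the (non-overlapping) Allan deviation of fractional-frequency data at averaging factor m is σ_y = sqrt(½⟨(ȳ_{k+1} − ȳ_k)²⟩) over adjacent m-sample averages. A minimal numpy sketch of that definition follows; the report's own estimators and degrees-of-freedom machinery are not reproduced:

```python
import numpy as np

def allan_deviation(y, tau_m=1):
    """Non-overlapping Allan deviation of fractional-frequency data y
    at averaging factor tau_m (in samples): average y in blocks of
    tau_m, then take sqrt of half the mean squared first difference."""
    y = np.asarray(y, dtype=float)
    n = y.size // tau_m
    means = y[: n * tau_m].reshape(n, tau_m).mean(axis=1)
    diffs = np.diff(means)
    return np.sqrt(0.5 * np.mean(diffs ** 2))
```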

  9. Impact of Standardized Communication Techniques on Errors during Simulated Neonatal Resuscitation.

    PubMed

    Yamada, Nicole K; Fuerch, Janene H; Halamek, Louis P

    2016-03-01

    Current patterns of communication in high-risk clinical situations, such as resuscitation, are imprecise and prone to error. We hypothesized that the use of standardized communication techniques would decrease the errors committed by resuscitation teams during neonatal resuscitation. In a prospective, single-blinded, matched pairs design with block randomization, 13 subjects performed as a lead resuscitator in two simulated complex neonatal resuscitations. Two nurses assisted each subject during the simulated resuscitation scenarios. In one scenario, the nurses used nonstandard communication; in the other, they used standardized communication techniques. The performance of the subjects was scored to determine errors committed (defined relative to the Neonatal Resuscitation Program algorithm), time to initiation of positive pressure ventilation (PPV), and time to initiation of chest compressions (CC). In scenarios in which subjects were exposed to standardized communication techniques, there was a trend toward decreased error rate, time to initiation of PPV, and time to initiation of CC. While not statistically significant, there was a 1.7-second improvement in time to initiation of PPV and a 7.9-second improvement in time to initiation of CC. Should these improvements in human performance be replicated in the care of real newborn infants, they could improve patient outcomes and enhance patient safety.

  10. Using a normalization 3D model for automatic clinical brain quantitative analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping

    2003-05-01

    Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain, and has been broadly used in diagnosing brain disorders through clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then, the standard 3D brain model, which shows well-defined brain regions, was used in place of the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score to within 3% error on average. To sum up, the method obtains precise VOI information automatically from a well-defined standard 3D brain model, sparing the traditional procedure of manually drawing ROIs slice by slice on structural medical images. That is, the method not only provides precise analysis results, but also improves the processing rate for large volumes of medical images in clinical practice.
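The registration step maximizes mutual information between the image pairs. A compact histogram-based estimate of that quantity is sketched below; the binning, shapes, and function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized images, estimated
    from their joint intensity histogram: MI = sum p(x,y) log(p(x,y) /
    (p(x) p(y))). This is the objective maximized over candidate
    alignments in mutual-information registration."""
    hist, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over rows
    py = pxy.sum(axis=0, keepdims=True)   # marginal over columns
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An image shares maximal mutual information with itself and none with a constant image, which is why the registration search moves toward alignment.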

  11. A rocket ozonesonde for geophysical research and satellite intercomparison

    NASA Technical Reports Server (NTRS)

    Hilsenrath, E.; Coley, R. L.; Kirschner, P. T.; Gammill, B.

    1979-01-01

    The in-situ rocketsonde for ozone profile measurements developed and flown for geophysical research and satellite comparison is reviewed. The measurement principle involves the chemiluminescence caused by ambient ozone striking a detector and passive pumping as a means of sampling the atmosphere as the sonde descends through the atmosphere on a parachute. The sonde is flown on a meteorological sounding rocket, and flight data are telemetered via the standard meteorological GMD ground receiving system. The payload operation, sensor performance, and calibration procedures simulating flight conditions are described. An error analysis indicated an absolute accuracy of about 12 percent and a precision of about 8 percent. These are combined to give a measurement error of 14 percent.
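The quoted measurement error combines the 12 percent absolute accuracy and 8 percent precision in quadrature (sqrt(12² + 8²) ≈ 14.4, rounded to 14 percent in the abstract), assuming independent error components:

```python
import math

def combine_in_quadrature(*errors):
    """Combine independent error components in quadrature:
    total = sqrt(e1^2 + e2^2 + ...)."""
    return math.sqrt(sum(e * e for e in errors))
```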

  12. Determination of thorium by fluorescent x-ray spectrometry

    USGS Publications Warehouse

    Adler, I.; Axelrod, J.M.

    1955-01-01

    A fluorescent x-ray spectrographic method for the determination of thoria in rock samples uses thallium as an internal standard. Measurements are made with a two-channel spectrometer equipped with quartz (d = 1.817 Å) analyzing crystals. Particle-size effects are minimized by grinding the sample components with a mixture of silicon carbide and aluminum and then briquetting. Analyses of 17 samples showed that for the 16 samples containing over 0.7% thoria the average error, based on chemical results, is 4.7% and the maximum error, 9.5%. Because of limitations of instrumentation, 0.2% thoria is considered the lower limit of detection. An analysis can be made in about an hour.

  13. Nature of nursing errors and their contributing factors in intensive care units.

    PubMed

    Eltaybani, Sameh; Mohamed, Nadia; Abdelwareth, Mona

    2018-04-27

    Errors tend to be multifactorial and so learning from nurses' experiences with them would be a powerful tool toward promoting patient safety. To identify the nature of nursing errors and their contributing factors in intensive care units (ICUs). A semi-structured interview with 112 critical care nurses to elicit the reports about their encountered errors followed by a content analysis. A total of 300 errors were reported. Most of them (94·3%) were classified in more than one error category, e.g. 'lack of intervention', 'lack of attentiveness' and 'documentation errors': these were the most frequently involved error categories. Approximately 40% of reported errors contributed to significant harm or death of the involved patients, with system-related factors being involved in 84·3% of them. More errors occur during the evening shift than the night and morning shifts (42·7% versus 28·7% and 16·7%, respectively). There is a statistically significant relation (p ≤ 0·001) between error disclosure to a nursing supervisor and its impact on the patient. Nurses are more likely to report their errors when they feel safe and when the reporting system is not burdensome, although an internationally standardized language to define and analyse nursing errors is needed. Improving the health care system, particularly the managerial and environmental aspects, might reduce nursing errors in ICUs in terms of their incidence and seriousness. Targeting error-liable times in the ICU, such as mid-evening and mid-night shifts, along with improved supervision and adequate staff reallocation, might tackle the incidence and seriousness of nursing errors. Development of individualized nursing interventions for patients with low health literacy and patients in isolation might create more meaningful dialogue for ICU health care safety. © 2018 British Association of Critical Care Nurses.

  14. Land Surface Temperature Measurements from EOS MODIS Data

    NASA Technical Reports Server (NTRS)

    Wan, Zhengming

    1996-01-01

    We have developed a physics-based land-surface temperature (LST) algorithm for simultaneously retrieving surface band-averaged emissivities and temperatures from day/night pairs of MODIS (Moderate Resolution Imaging Spectroradiometer) data in seven thermal infrared bands. The set of 14 nonlinear equations in the algorithm is solved with the statistical regression method and the least-squares fit method. This new LST algorithm was tested with simulated MODIS data for 80 sets of band-averaged emissivities calculated from published spectral data of terrestrial materials in wide ranges of atmospheric and surface temperature conditions. Comprehensive sensitivity and error analysis has been made to evaluate the performance of the new LST algorithm and its dependence on variations in surface emissivity and temperature, on atmospheric conditions, as well as on the noise-equivalent temperature difference (NE(Delta)T) and calibration accuracy specifications of the MODIS instrument. In cases with a systematic calibration error of 0.5%, the standard deviations of errors in retrieved surface daytime and nighttime temperatures fall between 0.4-0.5 K over a wide range of surface temperatures for mid-latitude summer conditions. The standard deviations of errors in retrieved emissivities in bands 31 and 32 (in the 10-12.5 micrometer IR spectral window region) are 0.009, and the maximum error in retrieved LST values falls between 2-3 K. Several issues related to the day/night LST algorithm (uncertainties in the day/night registration and in surface emissivity changes caused by dew occurrence, and the cloud cover) have been investigated. The LST algorithms have been validated with MODIS Airborne Simulator (MAS) data and ground-based measurement data in two field campaigns conducted in Railroad Valley playa, NV in 1995 and 1996. The MODIS LST version 1 software has been delivered.

  15. MISSE 2 PEACE Polymers Experiment Atomic Oxygen Erosion Yield Error Analysis

    NASA Technical Reports Server (NTRS)

    McCarthy, Catherine E.; Banks, Bruce A.; deGroh, Kim, K.

    2010-01-01

    Atomic oxygen erosion of polymers in low Earth orbit (LEO) poses a serious threat to spacecraft performance and durability. To address this, 40 different polymer samples and a sample of pyrolytic graphite, collectively called the PEACE (Polymer Erosion and Contamination Experiment) Polymers, were exposed to the LEO space environment on the exterior of the International Space Station (ISS) for nearly 4 years as part of the Materials International Space Station Experiment 1 & 2 (MISSE 1 & 2). The purpose of the PEACE Polymers experiment was to obtain accurate mass loss measurements in space to combine with ground measurements in order to accurately calculate the atomic oxygen erosion yields of a wide variety of polymeric materials exposed to the LEO space environment for a long period of time. Error calculations were performed in order to determine the accuracy of the mass measurements and therefore of the erosion yield values. The standard deviation, or error, of each factor was incorporated into the fractional uncertainty of the erosion yield for each of three different situations, depending on the post-flight weighing procedure. The resulting error calculations showed the erosion yield values to be very accurate, with an average error of 3.30 percent.
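The erosion yield itself is conventionally computed as Ey = ΔM / (A · F · ρ), and the fractional uncertainty of a product or quotient of independent factors combines the components' fractional uncertainties in quadrature. The sketch below uses the conventional symbols and units, not values or terms taken from the paper:

```python
import math

def erosion_yield(mass_loss, area, fluence, density):
    """Atomic oxygen erosion yield Ey = dM / (A * F * rho) in
    cm^3/atom, from mass loss (g), exposed area (cm^2), atomic
    oxygen fluence (atoms/cm^2), and polymer density (g/cm^3)."""
    return mass_loss / (area * fluence * density)

def fractional_uncertainty(*fracs):
    """Fractional uncertainty of a product/quotient of independent
    factors: combine the components' fractional uncertainties in
    quadrature."""
    return math.sqrt(sum(f * f for f in fracs))
```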

  16. MASTER: a model to improve and standardize clinical breakpoints for antimicrobial susceptibility testing using forecast probabilities.

    PubMed

    Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael

    2017-09-01

    The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability.
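The probability of a methodological categorization error for a given true zone diameter can be sketched with a simple normal model of measurement variation. This is a simplified stand-in for the paper's normal mixture machinery, and the names are illustrative:

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative distribution function of a Normal(mu, sigma)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def miscategorization_prob(true_diam, cbp_s, sigma):
    """Probability that an isolate whose true inhibition-zone diameter
    lies on the susceptible side of the breakpoint (true_diam >= cbp_s)
    is nonetheless observed below it, given normally distributed
    methodological variation with standard deviation sigma (mm)."""
    return norm_cdf(cbp_s, true_diam, sigma)
```

True diameters near the breakpoint give error probabilities approaching 0.5, which is exactly the region a zone of methodological uncertainty is meant to flag as unclassifiable.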

  17. Cost-effectiveness of the Federal stream-gaging program in Virginia

    USGS Publications Warehouse

    Carpenter, D.H.

    1985-01-01

    Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation. Operation of these stations should be considered for discontinuation. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  18. Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations

    ERIC Educational Resources Information Center

    Moses, Tim; Zhang, Wenmin

    2011-01-01

    The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…
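    The delta-method logic behind a SEED can be sketched numerically: the sampling variance of a difference between two equating functions combines their individual variances and their covariance. This is an illustrative sketch, not the article's derivation; all numbers are made up.

```python
import math

# Hedged sketch: the standard error of an equated-score difference (SEED)
# at a single score point, combining the standard errors of two equating
# functions and their sampling covariance. Numbers are illustrative.
se_a = 0.42      # SE of equating function A at this score point
se_b = 0.35      # SE of equating function B at this score point
cov_ab = 0.10    # sampling covariance between the two functions

seed = math.sqrt(se_a**2 + se_b**2 - 2 * cov_ab)
# A difference larger than roughly 2 * SEED suggests the two equatings
# disagree beyond sampling error at this score point.
```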

  19. Progress in the improved lattice calculation of direct CP-violation in the Standard Model

    NASA Astrophysics Data System (ADS)

    Kelly, Christopher

    2018-03-01

    We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.

  20. The Development of MST Test Information for the Prediction of Test Performances

    ERIC Educational Resources Information Center

    Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.

    2017-01-01

    The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…

  1. Supplemental Figures, Tables, and Standard Error Tables for Student Financing of Undergraduate Education: 2007-08. Sticker, Net, and Out-of-Pocket Prices

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2010

    2010-01-01

    This paper presents the supplemental figures, tables, and standard error tables for the report "Student Financing of Undergraduate Education: 2007-08. Web Tables. NCES 2010-162." (Contains 6 figures and 10 tables.) [For the main report, see ED511828.]

  2. Group iterative methods for the solution of two-dimensional time-fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Balasim, Alla Tareq; Ali, Norhashidah Hj. Mohd.

    2016-06-01

    A variety of problems in science and engineering may be described by fractional partial differential equations (FPDEs) involving space and/or time fractional derivatives. The difference between time-fractional diffusion equations and standard diffusion equations lies primarily in the time derivative. Over the last few years, iterative schemes derived from the rotated finite-difference approximation have been proven to work well in solving standard diffusion equations. However, their application to the time-fractional counterpart has yet to be investigated. In this paper, we present a preliminary study on the formulation and analysis of new explicit group iterative methods for solving a two-dimensional time-fractional diffusion equation. These methods were derived from the standard and rotated Crank-Nicolson difference approximation formulas. Several numerical experiments were conducted to show the efficiency of the developed schemes in terms of CPU time and iteration number. At the request of all authors of the paper, an updated version of this article was published on 7 July 2016. The original version supplied to AIP Publishing contained an error in Table 1, and References 15 and 16 were incomplete. These errors have been corrected in the updated and republished article.

  3. Thoughtflow: Standards and Tools for Provenance Capture and Workflow Definition to Support Model-Informed Drug Discovery and Development.

    PubMed

    Wilkins, J J; Chan, Pls; Chard, J; Smith, G; Smith, M K; Beer, M; Dunn, A; Flandorfer, C; Franklin, C; Gomeni, R; Harnisch, L; Kaye, R; Moodie, S; Sardu, M L; Wang, E; Watson, E; Wolstencroft, K; Cheung, Sya

    2017-05-01

    Pharmacometric analyses are complex and multifactorial. It is essential to check, track, and document the vast amounts of data and metadata generated during these analyses (and the relationships between them) in order to comply with regulations and to support quality control, auditing, and reporting. Doing so manually is, however, challenging, tedious, error-prone, and time-consuming, and it diverts pharmacometricians from the more useful business of doing science. Automating this process would save time, reduce transcriptional errors, support the retention and transfer of knowledge, encourage good practice, and help ensure that pharmacometric analyses appropriately impact decisions. The ability to document, communicate, and reconstruct a complete pharmacometric analysis using an open standard would have considerable benefits. In this article, the Innovative Medicines Initiative (IMI) Drug Disease Model Resources (DDMoRe) consortium proposes a set of standards, "Thoughtflow," to facilitate the capture, storage, and reporting of knowledge (including assumptions and decisions) in the context of model-informed drug discovery and development (MID3), as well as to support reproducibility. A prototype software implementation is provided. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  4. How accurate is accident data in road safety research? An application of vehicle black box data regarding pedestrian-to-taxi accidents in Korea.

    PubMed

    Chung, Younshik; Chang, IlJoon

    2015-11-01

    Recently introduced vehicle black box systems, or in-vehicle video event data recorders, enable the collection of more accurate crash information, such as the location, time, and situation at the pre-crash and crash moments, which can be analyzed to identify crash causal factors more accurately. This study briefly presents the vehicle black box system and its application status in Korea. Based on crash data obtained from vehicle black box systems, this study analyzes the accuracy of crash data collected by the existing road crash data recording method, in which police officers record crashes based on the accident parties' statements or eyewitness accounts. The analysis shows that crash data recorded by the existing method have an average spatial error of 84.48 m (standard deviation 157.75 m) and an average temporal error of 29.05 min (standard deviation 19.24 min). Additionally, the average and standard deviation of crash speed errors were found to be 9.03 km/h and 7.21 km/h, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Error model for the SAO 1969 standard earth.

    NASA Technical Reports Server (NTRS)

    Martin, C. F.; Roy, N. A.

    1972-01-01

    A method is developed for estimating an error model for geopotential coefficients using satellite tracking data. A single station's apparent timing error for each pass is attributed to geopotential errors. The root sum of the residuals for each station also depends on the geopotential errors, and these are used to select an error model. The model chosen is 1/4 of the difference between the SAO M1 and the APL 3.5 geopotential.
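    The chosen error model can be sketched directly: one quarter of the coefficient-by-coefficient difference between two geopotential solutions. The coefficient values below are invented for illustration; they are not the actual SAO M1 or APL 3.5 values.

```python
# Hedged sketch of the abstract's error model: 1/4 of the difference
# between two geopotential solutions, per spherical-harmonic coefficient.
# Keys are (degree, order); values are illustrative, not real coefficients.
sao_m1 = {(2, 0): -484.1654e-6, (3, 0): 0.9570e-6}
apl_35 = {(2, 0): -484.1690e-6, (3, 0): 0.9520e-6}

error_model = {key: 0.25 * abs(sao_m1[key] - apl_35[key]) for key in sao_m1}
```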

  6. Applied Computational Electromagnetics Society Journal and Newsletter, Volume 14 No. 1

    DTIC Science & Technology

    1999-03-01

    code validation, performance analysis, and input/output standardization; code or technique optimization and error minimization; innovations in...

  7. Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses

    NASA Astrophysics Data System (ADS)

    Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong

    2017-04-01

    Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, and other sources. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial postprocessing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Because only a single regression model needs to be fitted for the whole domain, the SAMOS framework provides a computationally inexpensive way to create operationally calibrated probabilistic forecasts for any arbitrary location or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high-resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km horizontal and 1 h temporal resolution. The precipitation forecast used in this study is obtained from ALADIN-LAEF, a limited-area-model ensemble prediction system also operated by ZAMG, which provides, through a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The SAMOS approach thus statistically combines the in-house high-resolution analysis and ensemble prediction system.
The station-based validation of 6 hour precipitation sums shows a mean improvement of more than 40% in CRPS when compared to bilinearly interpolated uncalibrated ensemble forecasts. The validation on randomly selected grid points, representing the true height distribution over Austria, still indicates a mean improvement of 35%. The applied statistical model is currently set up for 6-hourly and daily accumulation periods, but will be extended to a temporal resolution of 1-3 hours within a new probabilistic nowcasting system operated by ZAMG.
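    The standardized-anomaly transform at the heart of SAMOS can be sketched in a few lines; the climatology and forecast values below are illustrative, not from the INCA analyses.

```python
# Hedged sketch of the SAMOS transform (following Stauffer et al., 2016):
# subtract a site-specific climatological mean and divide by the
# climatological standard deviation. All values are illustrative.
clim_mean = 2.4   # climatological 6 h precipitation mean at this point (mm)
clim_sd = 1.8     # climatological standard deviation (mm)

forecasts_mm = [0.0, 1.2, 3.1, 6.5]
anomalies = [(x - clim_mean) / clim_sd for x in forecasts_mm]

# A single regression model can then be fitted to anomalies from the whole
# domain; calibrated forecasts are back-transformed the same way.
back = [a * clim_sd + clim_mean for a in anomalies]
```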

  8. Radiologic-Pathologic Analysis of Contrast-enhanced and Diffusion-weighted MR Imaging in Patients with HCC after TACE: Diagnostic Accuracy of 3D Quantitative Image Analysis

    PubMed Central

    Chapiro, Julius; Wood, Laura D.; Lin, MingDe; Duran, Rafael; Cornish, Toby; Lesage, David; Charu, Vivek; Schernthaner, Rüdiger; Wang, Zhijun; Tacher, Vania; Savic, Lynn Jeanette; Kamel, Ihab R.

    2014-01-01

    Purpose To evaluate the diagnostic performance of three-dimensional (3D) quantitative enhancement-based and diffusion-weighted volumetric magnetic resonance (MR) imaging assessment of hepatocellular carcinoma (HCC) lesions in determining the extent of pathologic tumor necrosis after transarterial chemoembolization (TACE). Materials and Methods This institutional review board–approved retrospective study included 17 patients with HCC who underwent TACE before surgery. Semiautomatic 3D volumetric segmentation of target lesions was performed at the last MR examination before orthotopic liver transplantation or surgical resection. The amount of necrotic tumor tissue on contrast material–enhanced arterial phase MR images and the amount of diffusion-restricted tumor tissue on apparent diffusion coefficient (ADC) maps were expressed as a percentage of the total tumor volume. Visual assessment of the extent of tumor necrosis and tumor response according to European Association for the Study of the Liver (EASL) criteria was performed. Pathologic tumor necrosis was quantified by using slide-by-slide segmentation. Correlation analysis was performed to evaluate the predictive values of the radiologic techniques. Results At histopathologic examination, the mean percentage of tumor necrosis was 70% (range, 10%–100%). Both 3D quantitative techniques demonstrated a strong correlation with tumor necrosis at pathologic examination (R² = 0.9657 and R² = 0.9662 for quantitative EASL and quantitative ADC, respectively) and strong intermethod agreement (R² = 0.9585). Both methods showed a significantly lower discrepancy with pathologically measured necrosis (residual standard error [RSE] = 6.38 and 6.33 for quantitative EASL and quantitative ADC, respectively) when compared with non-3D techniques (RSE = 12.18 for visual assessment). Conclusion This radiologic-pathologic correlation study demonstrates the diagnostic accuracy of 3D quantitative MR imaging techniques in identifying pathologically measured tumor necrosis in HCC lesions treated with TACE. © RSNA, 2014. Online supplemental material is available for this article. PMID:25028783
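    The volumetric quantification described above reduces to a voxel-counting step once the tumor and its non-enhancing portion are segmented. A minimal sketch with a toy 3D mask (not the study's segmentation software):

```python
import numpy as np

# Hedged sketch: expressing non-enhancing (necrotic) tumor volume as a
# percentage of total tumor volume from segmented 3D masks. The arrays
# below stand in for a real semiautomatic segmentation.
tumor_mask = np.zeros((4, 4, 4), dtype=bool)
tumor_mask[1:3, 1:3, 1:3] = True          # 8 tumor voxels
necrosis_mask = np.zeros_like(tumor_mask)
necrosis_mask[1:3, 1:3, 1] = True         # 4 of them non-enhancing

percent_necrosis = 100.0 * (necrosis_mask & tumor_mask).sum() / tumor_mask.sum()
```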

  9. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Eas M.

    2003-01-01

    The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.

  10. Cost effectiveness of the stream-gaging program in South Carolina

    USGS Publications Warehouse

    Barker, A.C.; Wright, B.C.; Bennett, C.S.

    1985-01-01

    The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water year. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternate, less costly methods, and that gage should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations, including the crest-stage and stage-only stations, would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included; it would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be achieved at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)

  11. Temporal analysis of the frequency and duration of low and high streamflow: Years of record needed to characterize streamflow variability

    USGS Publications Warehouse

    Huh, S.; Dickey, D.A.; Meador, M.R.; Ruhl, K.E.

    2005-01-01

    A temporal analysis of the number and duration of exceedences of high- and low-flow thresholds was conducted to determine the number of years required to detect a level shift using data from Virginia, North Carolina, and South Carolina. Two methods were used - ordinary least squares assuming a known error variance and generalized least squares without a known error variance. Using ordinary least squares, the mean number of years required to detect a one standard deviation level shift in measures of low-flow variability was 57.2 (28.6 on either side of the break), compared to 40.0 years for measures of high-flow variability. These means become 57.6 and 41.6 when generalized least squares is used. No significant relations between years and elevation or drainage area were detected (P>0.05). Cluster analysis did not suggest geographic patterns in years related to physiography or major hydrologic regions. Referring to the number of observations required to detect a one standard deviation shift as 'characterizing' the variability, it appears that at least 20 years of record on either side of a shift may be necessary to adequately characterize high-flow variability. A longer streamflow record (about 30 years on either side) may be required to characterize low-flow variability. ?? 2005 Elsevier B.V. All rights reserved.
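    For intuition, the classic two-sample power calculation for detecting a one-standard-deviation level shift can be sketched under an independence assumption. It gives fewer years than the study reports, which is consistent with the study's point that serially correlated streamflow records require longer records.

```python
import math

# Hedged sketch: required record length to detect a one-standard-deviation
# level shift, assuming independent annual observations (a simplification;
# the study's OLS/GLS analysis of correlated flows needs ~40-57 years).
z_alpha = 1.96    # two-sided 5% significance level
z_beta = 0.8416   # 80% power
delta_over_sigma = 1.0

n_per_side = 2 * (z_alpha + z_beta) ** 2 / delta_over_sigma ** 2
total_years = 2 * math.ceil(n_per_side)   # years on either side of the break
```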

  12. Weights and measures: a new look at bisection behaviour in neglect.

    PubMed

    McIntosh, Robert D; Schindler, Igor; Birchall, Daniel; Milner, A David

    2005-12-01

    Horizontal line bisection is a ubiquitous task in the investigation of visual neglect. Patients with left neglect typically make rightward errors that increase with line length and for lines at more leftward positions. For short lines, or for lines presented in right space, these errors may 'cross over' to become leftward. We have taken a new approach to these phenomena by employing a different set of dependent and independent variables for their description. Rather than recording bisection error, we record the lateral position of the response within the workspace. We have studied how this varies when the locations of the left and right endpoints are manipulated independently. Across 30 patients with left neglect, we have observed a characteristic asymmetry between the 'weightings' accorded to the two endpoints, such that responses are less affected by changes in the location of the left endpoint than by changes in the location of the right. We show that a simple endpoint weightings analysis accounts readily for the effects of line length and spatial position, including cross-over effects, and leads to an index of neglect that is more sensitive than the standard measure. We argue that this novel approach is more parsimonious than the standard model and yields fresh insights into the nature of neglect impairment.
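    An endpoint-weightings analysis of the kind described can be sketched as a least-squares regression of response position on the two endpoint positions. The endpoint locations and responses below are fabricated to mimic the reported asymmetry, not taken from the patient data.

```python
import numpy as np

# Hedged sketch: model the bisection response position r as a weighted
# combination of the left and right endpoint positions, r = wL*L + wR*R + c,
# fitted by least squares. Synthetic data mimic left-neglect behaviour
# (right endpoint weighted more heavily than the left).
L = np.array([-10.0, -8.0, -6.0, -10.0, -8.0, -6.0])   # left endpoint (cm)
R = np.array([6.0, 6.0, 6.0, 10.0, 10.0, 10.0])        # right endpoint (cm)
r = 0.2 * L + 0.6 * R + 0.5                            # synthetic responses

X = np.column_stack([L, R, np.ones_like(L)])
(wL, wR, c), *_ = np.linalg.lstsq(X, r, rcond=None)
# wR > wL reproduces the asymmetry reported across the 30 patients.
```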

  13. Minimizing Interpolation Bias and Precision Error in In Vivo μCT-based Measurements of Bone Structure and Dynamics

    PubMed Central

    de Bakker, Chantal M. J.; Altman, Allison R.; Li, Connie; Tribble, Mary Beth; Lott, Carina; Tseng, Wei-Ju; Liu, X. Sherry

    2016-01-01

    In vivo μCT imaging allows for high-resolution, longitudinal evaluation of bone properties. Based on this technology, several recent studies have developed in vivo dynamic bone histomorphometry techniques that utilize registered μCT images to identify regions of bone formation and resorption, allowing for longitudinal assessment of bone remodeling. However, this analysis requires a direct voxel-by-voxel subtraction between image pairs, necessitating rotation of the images into the same coordinate system, which introduces interpolation errors. We developed a novel image transformation scheme, matched-angle transformation (MAT), whereby the interpolation errors are minimized by equally rotating both the follow-up and baseline images instead of the standard of rotating one image while the other remains fixed. This new method greatly reduced interpolation biases caused by the standard transformation. Additionally, our study evaluated the reproducibility and precision of bone remodeling measurements made via in vivo dynamic bone histomorphometry. Although bone remodeling measurements showed moderate baseline noise, precision was adequate to measure physiologically relevant changes in bone remodeling, and measurements had relatively good reproducibility, with intra-class correlation coefficients of 0.75-0.95. This indicates that, when used in conjunction with MAT, in vivo dynamic histomorphometry provides a reliable assessment of bone remodeling. PMID:26786342
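    The matched-angle idea can be illustrated with a 1D analogue: aligning two signals that differ by a rigid transform. The standard scheme resamples only one signal by the full transform; the matched scheme moves both toward each other by half the transform, so each carries comparable interpolation error and the subtraction is not biased toward one time point. This is a hedged sketch of the principle, not the authors' 3D implementation; all values are synthetic.

```python
import numpy as np

# Hedged 1D analogue of the matched-angle transformation (MAT) principle,
# using a sub-sample shift in place of a 3D rotation.
x = np.arange(64, dtype=float)
signal = np.sin(0.3 * x)
shift = 0.8  # misalignment between "baseline" and "follow-up" (samples)

def resample(y, s):
    # linear interpolation: the source of the error being balanced
    return np.interp(x - s, x, y)

aligned_standard = resample(signal, shift)    # standard: one image, full shift
baseline_half = resample(signal, -shift / 2)  # matched: both images, half shift
followup_half = resample(signal, shift / 2)
```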

  14. Minimizing Interpolation Bias and Precision Error in In Vivo µCT-Based Measurements of Bone Structure and Dynamics.

    PubMed

    de Bakker, Chantal M J; Altman, Allison R; Li, Connie; Tribble, Mary Beth; Lott, Carina; Tseng, Wei-Ju; Liu, X Sherry

    2016-08-01

    In vivo µCT imaging allows for high-resolution, longitudinal evaluation of bone properties. Based on this technology, several recent studies have developed in vivo dynamic bone histomorphometry techniques that utilize registered µCT images to identify regions of bone formation and resorption, allowing for longitudinal assessment of bone remodeling. However, this analysis requires a direct voxel-by-voxel subtraction between image pairs, necessitating rotation of the images into the same coordinate system, which introduces interpolation errors. We developed a novel image transformation scheme, matched-angle transformation (MAT), whereby the interpolation errors are minimized by equally rotating both the follow-up and baseline images instead of the standard of rotating one image while the other remains fixed. This new method greatly reduced interpolation biases caused by the standard transformation. Additionally, our study evaluated the reproducibility and precision of bone remodeling measurements made via in vivo dynamic bone histomorphometry. Although bone remodeling measurements showed moderate baseline noise, precision was adequate to measure physiologically relevant changes in bone remodeling, and measurements had relatively good reproducibility, with intra-class correlation coefficients of 0.75-0.95. This indicates that, when used in conjunction with MAT, in vivo dynamic histomorphometry provides a reliable assessment of bone remodeling.

  15. A staggered-grid finite-difference scheme optimized in the time–space domain for modeling scalar-wave propagation in geophysical problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov

    For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial, not temporal, dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling that controls dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than optimized schemes using the standard stencil at similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent at similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
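    As a baseline for what the optimized stencils improve on, the textbook second-order scheme for the 1D scalar wave equation can be sketched. This is the standard scheme, not the authors' optimized one; grid sizes and source are illustrative.

```python
import numpy as np

# Hedged sketch: standard second-order finite differences for the 1D scalar
# wave equation u_tt = c^2 u_xx, the kind of baseline scheme whose spatial
# *and* temporal dispersion errors the optimized stencils target.
nx, nt = 200, 300
dx, c = 1.0, 1.0
dt = 0.5 * dx / c                  # satisfies the CFL stability limit
r2 = (c * dt / dx) ** 2

u_prev = np.zeros(nx)
u = np.zeros(nx)
u[nx // 2] = 1.0                   # point-source initial condition

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]   # standard 3-point stencil
    u_next = 2 * u - u_prev + r2 * lap         # leapfrog time update
    u_prev, u = u, u_next
# Optimized schemes replace the 3-point stencil with a wider, weighted one
# whose coefficients minimize phase-velocity error over a wavenumber range.
```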

  16. Spatial uncertainty of a geoid undulation model in Guayaquil, Ecuador

    NASA Astrophysics Data System (ADS)

    Chicaiza, E. G.; Leiva, C. A.; Arranz, J. J.; Buenaño, X. E.

    2017-06-01

    Geostatistics is a discipline that deals with the statistical analysis of regionalized variables. In this case study, geostatistics is used to estimate geoid undulation in the rural area of Guayaquil, Ecuador. The geostatistical approach was chosen because it provides the estimation error of the prediction map. The open-source statistical software R, mainly the geoR, gstat, and RGeostats libraries, was used. Exploratory data analysis (EDA) and trend and structural analysis were carried out. Automatic model fitting by iterative least squares, among other fitting procedures, was employed to fit the variogram. Finally, kriging using the Bouguer gravity anomaly as an external drift, as well as universal kriging, was used to obtain a detailed map of geoid undulation. The estimation uncertainty fell within the interval [-0.5, +0.5] m for errors, with a maximum estimation standard deviation of 2 mm, depending on the interpolation method applied. According to a comparison with independent validation points, the error distribution of the geoid undulation map obtained in this study is better than that of Earth gravitational models publicly available for the study area. The main goal of this paper is to confirm the feasibility of combining geoid undulations from Global Navigation Satellite System and leveling field measurements with geostatistical techniques for use in high-accuracy engineering projects.
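    The structural-analysis step can be sketched outside R as an empirical semivariogram computation, the quantity to which a variogram model is then fitted before kriging. The coordinates and undulation values below are synthetic; the study itself used geoR, gstat, and RGeostats.

```python
import numpy as np

# Hedged sketch: empirical semivariogram of scattered observations,
# binned by pairwise distance. Synthetic data, not the study's stations.
rng = np.random.default_rng(1)
xy = rng.random((80, 2)) * 1000.0                 # station coordinates (m)
z = 0.002 * xy[:, 0] + rng.normal(0, 0.05, 80)    # undulation with a drift (m)

def empirical_semivariogram(xy, z, bins):
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    g = 0.5 * (z[:, None] - z[None, :]) ** 2      # semivariance per pair
    iu = np.triu_indices(len(z), k=1)             # unique pairs only
    d, g = d[iu], g[iu]
    idx = np.digitize(d, bins)
    return np.array([g[idx == i].mean() for i in range(1, len(bins))])

gamma = empirical_semivariogram(xy, z, bins=np.linspace(0.0, 500.0, 6))
# A model (e.g. exponential) fitted to gamma supplies the weights kriging
# uses; adding the Bouguer anomaly as external drift handles the trend.
```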

  17. Quality improvement through implementation of discharge order reconciliation.

    PubMed

    Lu, Yun; Clifford, Pamela; Bjorneby, Andreas; Thompson, Bruce; VanNorman, Samuel; Won, Katie; Larsen, Kevin

    2013-05-01

    A coordinated multidisciplinary process to reduce medication errors related to patient discharges to skilled-nursing facilities (SNFs) is described. After determining that medication errors were a frequent cause of readmission among patients discharged to SNFs, a medical center launched a two-phase quality-improvement project focused on cardiac and medical patients. Phase 1 of the project entailed a three-month failure modes and effects analysis of existing discharge procedures, followed by the development and pilot testing of a multidisciplinary, closed-loop workflow process involving staff and resident physicians, clinical nurse coordinators, and clinical pharmacists. During pilot testing of the new workflow process, the rate of discharge medication errors involving SNF patients was tracked, and data on medication-related readmissions were collected for a designated intervention group (n = 87) and a control group of patients (n = 1893) discharged to SNFs via standard procedures during a nine-month period, with the data stratified by severity of illness (SOI) classification. Analysis of the collected data indicated a cumulative 30-day medication-related readmission rate for study group patients in the minor, moderate, and major SOI categories of 5.4% (4 of 74 patients), compared with a rate of 9.5% (169 of 1780 patients) in the control group. In phase 2 of the project, the revised SNF discharge medication reconciliation procedure was implemented throughout the hospital; since hospitalwide implementation of the new workflow, the readmission rate for SNF patients has been maintained at about 6.7%. Implementing a standardized discharge order reconciliation process that includes pharmacists led to decreased readmission rates and improved care for patients discharged to SNFs.

  18. Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.

    PubMed

    Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P

    2016-04-15

    We propose four methods for finding local subspaces in large spectral libraries: (a) cosine-angle spectral matching; (b) hit-quality-index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon, and clay soil properties were developed for the whole library and for the subspaces. Root mean square error of prediction (RMSEP), computed on a one-third holdout validation set, was used to evaluate the predictive performance of subspace and global models. The effect of pretreating spectra was tested for first- and second-derivative Savitzky-Golay algorithms, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that global models outperformed the subspace models; we therefore conclude that global models are more accurate than local models except in a few cases. For instance, sand and clay RMSEP values from local models based on the archetypal analysis method were 50% poorer than those of the global models, except for subspace models obtained using multiplicative-scatter-corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries.
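    Two of the named ingredients, the standard normal variate (SNV) pretreatment and the RMSEP criterion, can be sketched directly. The spectra and predictions below are synthetic, not from the AfSIS library.

```python
import numpy as np

# Hedged sketch: SNV centers and scales each spectrum individually;
# RMSEP is the root mean square error on a hold-out set. Synthetic data.
rng = np.random.default_rng(2)
spectra = rng.random((10, 50))                 # 10 spectra, 50 wavenumbers

snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(
    axis=1, keepdims=True)                     # per-spectrum standardization

observed = np.array([5.1, 6.2, 4.8, 5.9])      # hold-out soil property values
predicted = np.array([5.0, 6.5, 4.6, 6.1])     # model predictions
rmsep = np.sqrt(np.mean((observed - predicted) ** 2))
```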

  19. Data-quality measures for stakeholder-implemented watershed-monitoring programs

    USGS Publications Warehouse

    Greve, Adrienne I.

    2002-01-01

    Community-based watershed groups, many of which collect environmental data, have steadily increased in number over the last decade. The data generated by these programs are often underutilized due to uncertainty in the quality of data produced. The incorporation of data-quality measures into stakeholder monitoring programs lends statistical validity to data. Data-quality measures are divided into three steps: quality assurance, quality control, and quality assessment. The quality-assurance step attempts to control sources of error that cannot be directly quantified. This step is part of the design phase of a monitoring program and includes clearly defined, quantifiable objectives, sampling sites that meet the objectives, standardized protocols for sample collection, and standardized laboratory methods. Quality control (QC) is the collection of samples to assess the magnitude of error in a data set due to sampling, processing, transport, and analysis. In order to design a QC sampling program, a series of issues needs to be considered: (1) potential sources of error, (2) the type of QC samples, (3) inference space, (4) the number of QC samples, and (5) the distribution of the QC samples. Quality assessment is the process of evaluating quality-assurance measures and analyzing the QC data in order to interpret the environmental data. Quality assessment has two parts: one that is conducted on an ongoing basis as the monitoring program is running, and one that is conducted during the analysis of environmental data. The discussion of the data-quality measures is followed by an example of their application to a monitoring program in the Big Thompson River watershed of northern Colorado.

  20. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2014-01-01

    Most businesses recognize that people will make mistakes and assume that errors are just part of the cost of doing business, but do they need to be? Companies facing high risk or major consequences should consider the effects of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, errors in the administration of medication, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is Human Reliability Analysis (HRA). Various methodologies are available for performing human reliability assessments, ranging from identifying the most likely areas of concern to detailed assessments in which human error failure probabilities are calculated. The choice of methodology depends on a variety of factors, including: (1) how people react and act in different industries, and differing expectations based on industry standards; (2) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training, and procedures; (3) the type and availability of data; and (4) how the industry views risk and reliability influences (types of emergencies, contingencies, and routine tasks versus cost-based concerns). A human reliability assessment should be the first step toward reducing, mitigating, or eliminating costly mistakes or catastrophic failures. Using human reliability techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.

Top