Sample records for accounting for measurement errors

  1. Measurement Error and Environmental Epidemiology: A Policy Perspective

    PubMed Central

    Edwards, Jessie K.; Keil, Alexander P.

    2017-01-01

    Purpose of review: Measurement error threatens public health by producing bias in estimates of the population impact of environmental exposures. Quantitative methods to account for measurement bias can improve public health decision making. Recent findings: We summarize traditional and emerging methods to improve inference under a standard perspective, in which the investigator estimates an exposure-response function, and a policy perspective, in which the investigator directly estimates the population impact of a proposed intervention. Summary: Under a policy perspective, the analysis must be sensitive to errors in measurement of factors that modify the effect of exposure on outcome, must consider whether policies operate on the true or measured exposures, and may increasingly need to account for potentially dependent measurement error of two or more exposures affected by the same policy or intervention. Incorporating approaches to account for measurement error into such a policy perspective will increase the impact of environmental epidemiology. PMID:28138941

  2. Accounting for measurement error: a critical but often overlooked process.

    PubMed

    Harris, Edward F; Smith, Richard N

    2009-12-01

    Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
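
    A minimal sketch of the double-determination form of TEM described above (Dahlberg's formula), assuming each specimen is measured at two sessions; the specimen count, dimensions, and error magnitudes are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical study: 30 specimens, one dimension (mm) measured at two sessions.
    true_dim = rng.normal(25.0, 2.0, size=30)
    session1 = true_dim + rng.normal(0.0, 0.3, size=30)  # 0.3 mm technical error per session
    session2 = true_dim + rng.normal(0.0, 0.3, size=30)

    d = session1 - session2
    tem = np.sqrt(np.sum(d**2) / (2 * len(d)))           # Dahlberg's technical error of measurement
    rel_tem = 100 * tem / np.mean(np.concatenate([session1, session2]))

    print(f"TEM = {tem:.3f} mm, relative TEM = {rel_tem:.1f}%")
    ```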

  3. Accounting for response misclassification and covariate measurement error improves power and reduces bias in epidemiologic studies.

    PubMed

    Cheng, Dunlei; Branscum, Adam J; Stamey, James D

    2010-07-01

    To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error and those that do not, in regression models for cross-sectional and cohort data. We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, over 25% in some cases. The proposed method substantially reduced bias, by up to a ten-fold margin, compared to naive estimates obtained by ignoring misclassification and mismeasurement. We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies. 2010 Elsevier Inc. All rights reserved.
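
    A toy Monte Carlo sketch in the spirit of the simulation-based procedure summarized above (not the authors' software): it shows how classical error in a covariate combined with nondifferential misclassification of a binary response attenuates a naive logistic-regression estimate. The effect size and error rates below are assumed.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n, beta = 5000, 0.7

    x = rng.normal(size=n)                        # true exposure
    p = 1 / (1 + np.exp(-(-1.0 + beta * x)))      # true logistic disease model
    y = rng.binomial(1, p)

    x_err = x + rng.normal(0.0, 1.0, size=n)      # classical covariate measurement error
    flip = rng.random(n) < 0.10                   # 10% nondifferential response misclassification
    y_err = np.where(flip, 1 - y, y)

    truth = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    naive = sm.Logit(y_err, sm.add_constant(x_err)).fit(disp=0)
    print(f"true-data beta: {truth.params[1]:.2f}   naive beta: {naive.params[1]:.2f}")
    ```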

  4. Estimating Aboveground Biomass in Tropical Forests: Field Methods and Error Analysis for the Calibration of Remote Sensing Observations

    DOE PAGES

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...

    2017-01-07

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would normally be expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would normally be expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.

  6. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
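
    A compact illustration of the attenuation described above, using the reduced ARMA(1,1) form of an AR(1)-plus-white-noise process; this frequentist statsmodels sketch is not the authors' Bayesian implementation, and all parameter values are assumed.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(2)
    T, phi = 500, 0.6

    latent = np.zeros(T)                          # true AR(1) process
    for t in range(1, T):
        latent[t] = phi * latent[t - 1] + rng.normal(0.0, 1.0)
    observed = latent + rng.normal(0.0, 1.0, T)   # white-noise measurement error added

    naive = ARIMA(observed, order=(1, 0, 0)).fit()   # ignores the measurement error
    ar_wn = ARIMA(observed, order=(1, 0, 1)).fit()   # ARMA(1,1): reduced form of AR(1) + white noise

    print(f"true phi: {phi} | naive AR(1): {np.asarray(naive.params)[1]:.2f}"
          f" | ARMA(1,1) AR coefficient: {np.asarray(ar_wn.params)[1]:.2f}")
    ```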

  7. Multiple imputation to account for measurement error in marginal structural models

    PubMed Central

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background: Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods: We illustrate the method by estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results: In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy [0.4 (95% CI: 0.2, 0.7)] was similar to the HR for no smoking and therapy [0.4 (95% CI: 0.2, 0.6)]. Conclusions: Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  8. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    PubMed

    Helle, Samuli

    2018-03-01

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of an experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related to inaccurate measurement technique or to the validity of measurements, seem not to be well known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated to reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
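
    The sketch below is not the SEM fit from this record; it only illustrates the underlying idea that several error-prone indicators of one latent construct allow the reliability to be estimated and regression attenuation to be undone. All variables and effect sizes are synthetic, and a full analysis would fit a latent-variable model in SEM software instead.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, beta = 2000, 0.5

    effort = rng.normal(size=n)                       # latent "lifetime reproductive effort" (synthetic)
    survival = beta * effort + rng.normal(0.0, 1.0, n)

    # Two error-prone indicators of the same latent construct (e.g. parity, LRS proxies).
    ind1 = effort + rng.normal(0.0, 1.0, n)
    ind2 = effort + rng.normal(0.0, 1.0, n)

    naive_slope = np.polyfit(ind1, survival, 1)[0]    # single best proxy, errors ignored
    reliability = np.corrcoef(ind1, ind2)[0, 1]       # estimates var(latent) / var(indicator)
    corrected_slope = naive_slope / reliability       # disattenuated estimate

    print(f"true {beta}, naive {naive_slope:.2f}, corrected {corrected_slope:.2f}")
    ```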

  9. Incorporating measurement error in n = 1 psychological autoregressive modeling

    PubMed Central

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988

  10. Accounting for Berkson and Classical Measurement Error in Radon Exposure Using a Bayesian Structural Approach in the Analysis of Lung Cancer Mortality in the French Cohort of Uranium Miners.

    PubMed

    Hoffmann, Sabine; Rage, Estelle; Laurier, Dominique; Laroche, Pierre; Guihenneuc, Chantal; Ancelet, Sophie

    2017-02-01

    Many occupational cohort studies on underground miners have demonstrated that radon exposure is associated with an increased risk of lung cancer mortality. However, despite the deleterious consequences of exposure measurement error on statistical inference, these analyses traditionally do not account for exposure uncertainty. This might be due to the challenging nature of measurement error resulting from imperfect surrogate measures of radon exposure. Indeed, we are typically faced with exposure uncertainty in a time-varying exposure variable where both the type and the magnitude of error may depend on period of exposure. To address the challenge of accounting for multiplicative and heteroscedastic measurement error that may be of Berkson or classical nature, depending on the year of exposure, we opted for a Bayesian structural approach, which is arguably the most flexible method to account for uncertainty in exposure assessment. We assessed the association between occupational radon exposure and lung cancer mortality in the French cohort of uranium miners and found the impact of uncorrelated multiplicative measurement error to be of marginal importance. However, our findings indicate that the retrospective nature of exposure assessment that occurred in the earliest years of mining of this cohort as well as many other cohorts of underground miners might lead to an attenuation of the exposure-risk relationship. More research is needed to address further uncertainties in the calculation of lung dose, since this step will likely introduce important sources of shared uncertainty.
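
    A small numerical sketch of why the Berkson/classical distinction matters, using additive errors in a linear exposure-response for simplicity; the cohort analysis itself involves multiplicative, heteroscedastic errors and survival models, and all values below are assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, beta = 20000, 2.0

    # Classical error: the recorded exposure scatters around the true exposure.
    x_true = rng.normal(10.0, 2.0, n)
    x_recorded = x_true + rng.normal(0.0, 2.0, n)
    y_classical = beta * x_true + rng.normal(0.0, 1.0, n)

    # Berkson error: the true exposure scatters around the assigned (e.g. group-level) value.
    x_assigned = rng.normal(10.0, 2.0, n)
    x_true_berkson = x_assigned + rng.normal(0.0, 2.0, n)
    y_berkson = beta * x_true_berkson + rng.normal(0.0, 1.0, n)

    slope = lambda x, y: np.polyfit(x, y, 1)[0]
    print("classical-error slope:", round(slope(x_recorded, y_classical), 2))  # attenuated toward 0
    print("Berkson-error slope:  ", round(slope(x_assigned, y_berkson), 2))    # approximately unbiased
    ```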

  11. Measurement error and timing of predictor values for multivariable risk prediction models are poorly reported.

    PubMed

    Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D

    2018-05-18

    Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error and the intended moment of model use was extracted. Susceptibility to measurement error for each predictor was classified as low or high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as high risk of error; however, this was not accounted for in the model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and the intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions. Copyright © 2018. Published by Elsevier Inc.

  12. Hedonic price models with omitted variables and measurement errors: a constrained autoregression-structural equation modeling approach with application to urban Indonesia

    NASA Astrophysics Data System (ADS)

    Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.

    2014-01-01

    Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.

  13. Accounting for the measurement error of spectroscopically inferred soil carbon data for improved precision of spatial predictions.

    PubMed

    Somarathna, P D S N; Minasny, Budiman; Malone, Brendan P; Stockmann, Uta; McBratney, Alex B

    2018-08-01

    Spatial modelling of environmental data commonly only considers spatial variability as the single source of uncertainty. In reality, however, the measurement errors should also be accounted for. In recent years, infrared spectroscopy has been shown to offer low cost, yet invaluable information needed for digital soil mapping at meaningful spatial scales for land management. However, spectrally inferred soil carbon data are known to be less accurate compared to laboratory analysed measurements. This study establishes a methodology to filter out the measurement error variability by incorporating the measurement error variance in the spatial covariance structure of the model. The study was carried out in the Lower Hunter Valley, New South Wales, Australia, where a combination of laboratory-measured and vis-NIR- and MIR-inferred topsoil and subsoil soil carbon data is available. We investigated the applicability of residual maximum likelihood (REML) and Markov Chain Monte Carlo (MCMC) simulation methods to generate parameters of the Matérn covariance function directly from the data in the presence of measurement error. The results revealed that the measurement error can be effectively filtered out through the proposed technique. When the measurement error was filtered from the data, the prediction variance almost halved, which ultimately yielded a greater certainty in spatial predictions of soil carbon. Further, the MCMC technique was successfully used to define the posterior distribution of measurement error. This is an important outcome, as the MCMC technique can be used to estimate the measurement error if it is not explicitly quantified. Although this study dealt with soil carbon data, this method is amenable for filtering the measurement error of any kind of continuous spatial environmental data. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. #2 - An Empirical Assessment of Exposure Measurement Error and Effect Attenuation in Bi-Pollutant Epidemiologic Models

    EPA Science Inventory

    Background • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation...

  15. The impact of response measurement error on the analysis of designed experiments

    DOE PAGES

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2016-11-01

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
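
    A simplified simulation in the same spirit as the study above (standard t-test analysis, additive response error, repeat measurements averaged per run), not the authors' Bayesian model; group sizes, effect size, and error variances are assumed.

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(5)

    def rejection_rate(noise_sd, repeats, sims=2000, n=10, effect=1.0):
        hits = 0
        for _ in range(sims):
            a = rng.normal(0.0, 1.0, n)       # true responses at the control setting
            b = rng.normal(effect, 1.0, n)    # true responses at the treatment setting
            # Average `repeats` noisy measurements of each true response.
            a_obs = a + rng.normal(0.0, noise_sd, (repeats, n)).mean(axis=0)
            b_obs = b + rng.normal(0.0, noise_sd, (repeats, n)).mean(axis=0)
            hits += ttest_ind(a_obs, b_obs).pvalue < 0.05
        return hits / sims

    print("no response error:            ", rejection_rate(0.0, 1))
    print("response error, 1 measurement:", rejection_rate(1.5, 1))
    print("response error, 4 repeats:    ", rejection_rate(1.5, 4))
    ```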

  16. The impact of response measurement error on the analysis of designed experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Accounting for sampling variability, injury under-reporting, and sensor error in concussion injury risk curves.

    PubMed

    Elliott, Michael R; Margulies, Susan S; Maltese, Matthew R; Arbogast, Kristy B

    2015-09-18

    There has been a recent dramatic increase in the use of sensors affixed to the heads or helmets of athletes to measure the biomechanics of head impacts that lead to concussion. The relationship between injury and linear or rotational head acceleration measured by such sensors can be quantified with an injury risk curve. The utility of the injury risk curve relies on the accuracy of both the clinical diagnosis and the biomechanical measure. The focus of our analysis was to demonstrate the influence of three sources of error on the shape and interpretation of concussion injury risk curves: sampling variability associated with a rare event, concussion under-reporting, and sensor measurement error. We utilized Bayesian statistical methods to generate synthetic data from previously published concussion injury risk curves developed using data from helmet-based sensors on collegiate football players and assessed the effect of the three sources of error on the risk relationship. Accounting for sampling variability adds uncertainty or width to the injury risk curve. Assuming a variety of rates of unreported concussions in the non-concussed group, we found that accounting for under-reporting lowers the rotational acceleration required for a given concussion risk. Lastly, after accounting for sensor error, we find strengthened relationships between rotational acceleration and injury risk, further lowering the magnitude of rotational acceleration needed for a given risk of concussion. As more accurate sensors are designed and more sensitive and specific clinical diagnostic tools are introduced, our analysis provides guidance for the future development of comprehensive concussion risk curves. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Accounting for measurement error in log regression models with applications to accelerated testing.

    PubMed

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  19. Accounting for uncertainty in DNA sequencing data.

    PubMed

    O'Rawe, Jason A; Ferson, Scott; Lyon, Gholson J

    2015-02-01

    Science is defined in part by an honest exposition of the uncertainties that arise in measurements and propagate through calculations and inferences, so that the reliabilities of its conclusions are made apparent. The recent rapid development of high-throughput DNA sequencing technologies has dramatically increased the number of measurements made at the biochemical and molecular level. These data come from many different DNA-sequencing technologies, each with their own platform-specific errors and biases, which vary widely. Several statistical studies have tried to measure error rates for basic determinations, but there are no general schemes to project these uncertainties so as to assess the surety of the conclusions drawn about genetic, epigenetic, and more general biological questions. We review here the state of uncertainty quantification in DNA sequencing applications, describe sources of error, and propose methods that can be used for accounting and propagating these errors and their uncertainties through subsequent calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Accounting for independent nondifferential misclassification does not increase certainty that an observed association is in the correct direction.

    PubMed

    Greenland, Sander; Gustafson, Paul

    2006-07-01

    Researchers sometimes argue that their exposure-measurement errors are independent of other errors and are nondifferential with respect to disease, resulting in estimation bias toward the null. Among well-known problems with such arguments are that independence and nondifferentiality are harder to satisfy than ordinarily appreciated (e.g., because of correlation of errors in questionnaire items, and because of uncontrolled covariate effects on error rates); small violations of independence or nondifferentiality may lead to bias away from the null; and, if exposure is polytomous, the bias produced by independent nondifferential error is not always toward the null. The authors add to this list by showing that, in a 2 x 2 table (for which independent nondifferential error produces bias toward the null), accounting for independent nondifferential error does not reduce the p value even though it increases the point estimate. Thus, such accounting should not increase certainty that an association is present.
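
    A worked example of the standard matrix correction for independent nondifferential exposure misclassification in a 2 x 2 table; the counts, sensitivity, and specificity are hypothetical. Consistent with the record above, the corrected point estimate moves away from the null, but this alone does not increase statistical certainty that an association exists.

    ```python
    import numpy as np

    # Hypothetical observed counts: [exposed, unexposed].
    cases_obs = np.array([240.0, 260.0])
    controls_obs = np.array([250.0, 750.0])

    se, sp = 0.85, 0.90                        # assumed nondifferential sensitivity / specificity
    M = np.array([[se, 1 - sp],                # P(classified exposed | true exposed/unexposed)
                  [1 - se, sp]])

    cases_true = np.linalg.solve(M, cases_obs)        # back-correct the observed counts
    controls_true = np.linalg.solve(M, controls_obs)

    odds_ratio = lambda c, k: (c[0] * k[1]) / (c[1] * k[0])
    print("observed OR :", round(odds_ratio(cases_obs, controls_obs), 2))
    print("corrected OR:", round(odds_ratio(cases_true, controls_true), 2))
    ```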

  1. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable, due to larger variances in the counts than would be expected from sampling variance alone. Naturally, since OCR accuracy is based on the ratio of the number of OCR errors to the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
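
    A minimal edit-distance error count of the kind the paper compares; this is a generic Levenshtein implementation, not the paper's own accounting method, and the reference and OCR strings are invented.

    ```python
    def levenshtein(ref: str, ocr: str) -> int:
        """Minimum number of insertions, deletions and substitutions turning `ref` into `ocr`."""
        prev = list(range(len(ocr) + 1))
        for i, rc in enumerate(ref, 1):
            curr = [i]
            for j, oc in enumerate(ocr, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (rc != oc)))   # substitution (0 if characters match)
            prev = curr
        return prev[-1]

    reference = "Measurement error threatens public health."
    ocr_output = "Measurnent error tbreatens public heallh."
    errors = levenshtein(reference, ocr_output)
    print(f"{errors} OCR errors, character accuracy {1 - errors / len(reference):.1%}")
    ```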

  2. Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error

    PubMed Central

    Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee

    2017-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146

  3. Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    In order to provide a complete description of a material's thermoelectric power factor, in addition to the measured nominal value, an uncertainty interval is required. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters, in addition to including uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
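
    A generic first-order error-propagation sketch for the thermoelectric power factor PF = S^2 / rho implied above; the measured values and uncertainty magnitudes are placeholders, not ZEM-3 specifications.

    ```python
    import numpy as np

    # Hypothetical measured values and combined (systematic + statistical) uncertainties.
    S, dS = 180e-6, 6e-6          # Seebeck coefficient [V/K] and its uncertainty
    rho, drho = 2.0e-5, 0.8e-6    # electrical resistivity [ohm m] and its uncertainty

    pf = S**2 / rho                                        # power factor [W m^-1 K^-2]
    rel_dpf = np.sqrt((2 * dS / S)**2 + (drho / rho)**2)   # first-order propagation of relative errors
    print(f"PF = {pf:.3e} +/- {rel_dpf * pf:.1e} W m^-1 K^-2 ({100 * rel_dpf:.1f}%)")
    ```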

  4. Using Marginal Structural Measurement-Error Models to Estimate the Long-term Effect of Antiretroviral Therapy on Incident AIDS or Death

    PubMed Central

    Cole, Stephen R.; Jacobson, Lisa P.; Tien, Phyllis C.; Kingsley, Lawrence; Chmiel, Joan S.; Anastos, Kathryn

    2010-01-01

    To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus–positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding. PMID:19934191

  5. 10 CFR 74.59 - Quality assurance and accounting requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... occurs which has the potential to affect a measurement result or when program data, generated by tests.../receiver differences, inventory differences, and process differences. (4) Utilize the data generated during... difference (SEID) and the standard error of the process differences. Calibration and measurement error data...

  6. 10 CFR 74.59 - Quality assurance and accounting requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... occurs which has the potential to affect a measurement result or when program data, generated by tests.../receiver differences, inventory differences, and process differences. (4) Utilize the data generated during... difference (SEID) and the standard error of the process differences. Calibration and measurement error data...

  7. Mathematical Models for Doppler Measurements

    NASA Technical Reports Server (NTRS)

    Lear, William M.

    1987-01-01

    Error analysis increases the precision of navigation. This report presents improved mathematical models for the analysis of Doppler measurements and measurement errors in spacecraft navigation. To take advantage of the potential navigational accuracy of Doppler measurements, precise equations relate the measured cycle count to position and velocity. Drifts and random variations in transmitter and receiver oscillator frequencies are taken into account. The mathematical models are also adapted to aircraft navigation, radar, sonar, lidar, and interferometry.

  8. Animal movement constraints improve resource selection inference in the presence of telemetry error

    USGS Publications Warehouse

    Brost, Brian M.; Hooten, Mevin B.; Hanks, Ephraim M.; Small, Robert J.

    2016-01-01

    Multiple factors complicate the analysis of animal telemetry location data. Recent advancements address issues such as temporal autocorrelation and telemetry measurement error, but additional challenges remain. Difficulties introduced by complicated error structures or barriers to animal movement can weaken inference. We propose an approach for obtaining resource selection inference from animal location data that accounts for complicated error structures, movement constraints, and temporally autocorrelated observations. We specify a model for telemetry data observed with error conditional on unobserved true locations that reflects prior knowledge about constraints in the animal movement process. The observed telemetry data are modeled using a flexible distribution that accommodates extreme errors and complicated error structures. Although constraints to movement are often viewed as a nuisance, we use constraints to simultaneously estimate and account for telemetry error. We apply the model to simulated data, showing that it outperforms common ad hoc approaches used when confronted with measurement error and movement constraints. We then apply our framework to an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that is constrained to move within the marine environment and adjacent coastlines.

  9. Poisoning Safety Fact Sheet (2015)

    MedlinePlus

    ... in emergency departments after getting into a medication, accounting for 68% of medication-related visits for young ... and under (31% of dosing errors), followed by measurement errors (30%). 2 • For every 10 poison exposures ...

  10. The challenges in defining and measuring diagnostic error.

    PubMed

    Zwaan, Laura; Singh, Hardeep

    2015-06-01

    Diagnostic errors have emerged as a serious patient safety problem but they are hard to detect and complex to define. At the research summit of the 2013 Diagnostic Error in Medicine 6th International Conference, we convened a multidisciplinary expert panel to discuss challenges in defining and measuring diagnostic errors in real-world settings. In this paper, we synthesize these discussions and outline key research challenges in operationalizing the definition and measurement of diagnostic error. Some of these challenges include 1) difficulties in determining error when the disease or diagnosis is evolving over time and in different care settings, 2) accounting for a balance between underdiagnosis and overaggressive diagnostic pursuits, and 3) determining disease diagnosis likelihood and severity in hindsight. We also build on these discussions to describe how some of these challenges can be addressed while conducting research on measuring diagnostic error.

  11. Error Modeling and Experimental Study of a Flexible Joint 6-UPUR Parallel Six-Axis Force Sensor.

    PubMed

    Zhao, Yanzhi; Cao, Yachao; Zhang, Caifeng; Zhang, Dan; Zhang, Jie

    2017-09-29

    By combining a parallel mechanism with integrated flexible joints, a sensor with a large measurement range and high accuracy is realized. However, the main errors of the sensor involve not only assembly errors but also deformation errors of its flexible leg. Based on a flexible joint 6-UPUR (a mechanism configuration where U denotes a universal joint, P a prismatic joint, and R a revolute joint) parallel six-axis force sensor developed in earlier work, assembly and deformation error modeling and analysis of the resulting large-range, high-accuracy sensor are presented in this paper. First, an assembly error model is established based on the imaginary kinematic joint method and the Denavit-Hartenberg (D-H) method. Next, a stiffness model is built to solve the stiffness matrix, and the deformation error model of the sensor is obtained. Then, the first-order kinematic influence coefficient matrix with the synthetic error taken into account is solved. Finally, measurement and calibration experiments are performed on the sensor, which is composed of the hardware and software system. Forced deformation of the force-measuring platform is detected by laser interferometry and analyzed to verify the correctness of the synthetic error model. In addition, the first-order kinematic influence coefficient matrix under actual conditions is calculated. By comparing the condition numbers and square norms of the coefficient matrices, it is concluded that accounting for the synthetic error is important at the design stage of the sensor and helps improve its performance to meet the needs of actual working environments.

  12. Error Modeling and Experimental Study of a Flexible Joint 6-UPUR Parallel Six-Axis Force Sensor

    PubMed Central

    Zhao, Yanzhi; Cao, Yachao; Zhang, Caifeng; Zhang, Dan; Zhang, Jie

    2017-01-01

    By combining a parallel mechanism with integrated flexible joints, a sensor with a large measurement range and high accuracy is realized. However, the main errors of the sensor involve not only assembly errors but also deformation errors of its flexible leg. Based on a flexible joint 6-UPUR (a mechanism configuration where U denotes a universal joint, P a prismatic joint, and R a revolute joint) parallel six-axis force sensor developed in earlier work, assembly and deformation error modeling and analysis of the resulting large-range, high-accuracy sensor are presented in this paper. First, an assembly error model is established based on the imaginary kinematic joint method and the Denavit-Hartenberg (D-H) method. Next, a stiffness model is built to solve the stiffness matrix, and the deformation error model of the sensor is obtained. Then, the first-order kinematic influence coefficient matrix with the synthetic error taken into account is solved. Finally, measurement and calibration experiments are performed on the sensor, which is composed of the hardware and software system. Forced deformation of the force-measuring platform is detected by laser interferometry and analyzed to verify the correctness of the synthetic error model. In addition, the first-order kinematic influence coefficient matrix under actual conditions is calculated. By comparing the condition numbers and square norms of the coefficient matrices, it is concluded that accounting for the synthetic error is important at the design stage of the sensor and helps improve its performance to meet the needs of actual working environments. PMID:28961209

  13. Error analysis in inverse scatterometry. I. Modeling.

    PubMed

    Al-Assaad, Rayan M; Byrne, Dale M

    2007-02-01

    Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.

  14. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    PubMed Central

    Besada, Juan A.

    2017-01-01

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination dispositive. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation. PMID:28934157

  15. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. Then we use the fast midpoint-method algorithm to deduce the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we return to the propagation of the primitive input errors in the stereo system, covering the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.

  16. Random Measurement Error as a Source of Discrepancies between the Reports of Wives and Husbands Concerning Marital Power and Task Allocation.

    ERIC Educational Resources Information Center

    Quarm, Daisy

    1981-01-01

    Findings for couples (N=119) show that low between-spouse correlations concerning the wife's work, money, and spare time are due in part to random measurement error. Suggests that increasing the reliability of measures by creating multi-item indices can also increase correlations. Car purchase, vacation, and child discipline were not accounted for by random measurement…

  17. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    PubMed

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan

    A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
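
    The sketch below simply evaluates the quoted error model σ²(q) = [I(q) + const.]/(kq) for an assumed intensity profile; the values of k, const., and I(q) are made up for illustration.

    ```python
    import numpy as np

    k, const = 5.0e3, 10.0                          # setup-specific fit parameters (assumed)
    q = np.linspace(0.01, 0.5, 50)                  # momentum transfer
    I = 1.0e3 * np.exp(-(q * 30.0)**2 / 3.0) + 5.0  # toy Guinier-like intensity plus background

    sigma = np.sqrt((I + const) / (k * q))          # predicted std. error of buffer-subtracted I(q)
    for qi, Ii, si in zip(q[::10], I[::10], sigma[::10]):
        print(f"q = {qi:.3f}   I = {Ii:8.2f}   sigma = {si:6.3f}")
    ```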

  19. Socioeconomic Position Across the Life Course and Cognitive Ability Later in Life: The Importance of Considering Early Cognitive Ability.

    PubMed

    Foverskov, Else; Mortensen, Erik Lykke; Holm, Anders; Pedersen, Jolene Lee Masters; Osler, Merete; Lund, Rikke

    2017-11-01

    Investigate direct and indirect associations between markers of socioeconomic position (SEP) across the life course and midlife cognitive ability while addressing methodological limitations in prior work. Longitudinal data from the Danish Metropolit cohort of men born in 1953 (N = 2,479) who completed ability tests at ages 12, 18, and 56-58, linked to register-based information on paternal occupational class, educational attainment, and occupational level. Associations were assessed using structural equation models, and different models were estimated to examine the importance of accounting for childhood ability and measurement error. Associations between adult SEP measures and midlife ability decreased significantly when adjusting for childhood ability and measurement error. The association between childhood and midlife ability was by far the strongest. The impact of adult SEP on later life ability may be exaggerated when not accounting for the stability of individual differences in cognitive ability and measurement error in test scores.

  20. Bayesian models for comparative analysis integrating phylogenetic uncertainty.

    PubMed

    de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P

    2012-06-28

    Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.

  1. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    PubMed Central

    2012-01-01

    Background: Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods: We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results: We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions: Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language. PMID:22741602

  2. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
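
    The propagation-of-error point is easy to reproduce with a toy calculation: the variance of a balance that is a signed sum of measured terms equals c' Cov c, and the per-term shares show which measurement dominates. All standard deviations below are hypothetical, chosen only so that the body-mass term dominates as in the abstract.

      import numpy as np

      # Balance B = c @ terms, with signs for intake, urine, evaporation and body-mass change.
      c = np.array([+1.0, -1.0, -1.0, -1.0])
      sd = np.array([0.05, 0.04, 0.06, 0.30])        # hypothetical SDs, kg/day
      corr = np.eye(4)
      corr[0, 1] = corr[1, 0] = 0.2                  # small hypothetical correlation between two terms
      cov = np.outer(sd, sd) * corr

      var_B = c @ cov @ c                            # full error propagation
      diag_share = (c * sd) ** 2 / var_B             # per-term (diagonal) contributions
      covar_share = 1.0 - diag_share.sum()           # contribution of covariance terms (can be negative)

      print("total balance SD (kg/day):", round(float(np.sqrt(var_B)), 3))
      print("per-term variance shares:", diag_share.round(3), "covariance share:", round(covar_share, 3))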

  3. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
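
    A stripped-down version of such a simulation fits in a few lines of Python (this is not the authors' program; the event density, event duration, and interval length are arbitrary): events of fixed duration are scattered over an observation period and then scored with momentary time sampling (MTS), partial-interval recording (PIR), and whole-interval recording (WIR).

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate(obs_s=3600, interval_s=30, n_events=40, event_s=10):
          # Random event onsets; overlaps are rare at this density and ignored.
          starts = np.sort(rng.uniform(0, obs_s - event_s, n_events))
          events = np.column_stack([starts, starts + event_s])

          def time_occupied(t0, t1):
              # Total event time falling inside the window [t0, t1).
              return np.clip(np.minimum(events[:, 1], t1) - np.maximum(events[:, 0], t0),
                             0, None).sum()

          edges = np.arange(0, obs_s, interval_s)
          true_prop = time_occupied(0, obs_s) / obs_s
          # MTS: is an event ongoing at the (approximate) instant the interval ends?
          mts = np.mean([time_occupied(e + interval_s - 1e-3, e + interval_s) > 0 for e in edges])
          # PIR: did an event occur at any time within the interval?
          pir = np.mean([time_occupied(e, e + interval_s) > 0 for e in edges])
          # WIR: was an event ongoing throughout the whole interval?
          wir = np.mean([time_occupied(e, e + interval_s) >= interval_s - 1e-9 for e in edges])
          return true_prop, mts, pir, wir

      results = np.array([simulate() for _ in range(100)])
      print("mean of true proportion, MTS, PIR, WIR estimates:", results.mean(axis=0).round(3))

    With settings like these, PIR typically overestimates and WIR underestimates the true proportion of time engaged, while MTS is roughly unbiased but noisier, which is the kind of pattern the error tables in the study quantify.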

  4. Thyroid cancer following scalp irradiation: a reanalysis accounting for uncertainty in dosimetry.

    PubMed

    Schafer, D W; Lubin, J H; Ron, E; Stovall, M; Carroll, R J

    2001-09-01

    In the 1940s and 1950s, over 20,000 children in Israel were treated for tinea capitis (scalp ringworm) by irradiation to induce epilation. Follow-up studies showed that the radiation exposure was associated with the development of malignant thyroid neoplasms. Despite this clear evidence of an effect, the magnitude of the dose-response relationship is much less clear because of probable errors in individual estimates of dose to the thyroid gland. Such errors have the potential to bias dose-response estimation, a potential that was not widely appreciated at the time of the original analyses. We revisit this issue, describing in detail how errors in dosimetry might occur, and we develop a new dose-response model that takes the uncertainties of the dosimetry into account. Our model for the uncertainty in dosimetry is a complex and new variant of the classical multiplicative Berkson error model, having components of classical multiplicative measurement error as well as missing data. Analysis of the tinea capitis data suggests that measurement error in the dosimetry has only a negligible effect on dose-response estimation and inference as well as on the modifying effect of age at exposure.
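
    The practical distinction the authors build on, classical versus Berkson multiplicative error, can be illustrated with a small simulation (not the paper's model; the linear dose-response and error magnitudes are arbitrary): classical error in the regressor attenuates the fitted slope, whereas mean-one multiplicative Berkson error around an assigned dose leaves the slope roughly unbiased.

      import numpy as np

      rng = np.random.default_rng(2)
      n, slope = 20000, 0.5
      sigma_log = 0.4                                   # spread of the multiplicative error

      # Classical: observed dose = true dose x error; regress outcome on the observed dose.
      true_dose = rng.lognormal(0.0, 0.6, n)
      obs_classical = true_dose * rng.lognormal(-sigma_log**2 / 2, sigma_log, n)
      y = slope * true_dose + rng.normal(0, 0.5, n)
      b_classical = np.polyfit(obs_classical, y, 1)[0]

      # Berkson: true dose = assigned dose x error; regress outcome on the assigned dose.
      assigned = rng.lognormal(0.0, 0.6, n)
      true_berkson = assigned * rng.lognormal(-sigma_log**2 / 2, sigma_log, n)
      y_b = slope * true_berkson + rng.normal(0, 0.5, n)
      b_berkson = np.polyfit(assigned, y_b, 1)[0]

      print(f"true slope {slope}, classical-error fit {b_classical:.3f}, Berkson fit {b_berkson:.3f}")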

  5. LANDSAT 4 band 6 data evaluation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Comparison of underflight data with satellite estimates of temperature revealed significant gain calibration errors. The source of the LANDSAT 5 band 6 error and its reproducibility are not yet adequately defined. The error can be accounted for using underflight or ground truth data. When underflight data are used to correct the satellite data, the residual error for the scene studied was 1.3 K when the predicted temperatures were compared to measured surface temperatures.

  6. Analysis and improvement of gas turbine blade temperature measurement error

    NASA Astrophysics Data System (ADS)

    Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

    2015-10-01

    Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed.

  7. Net Weight Issue LLNL DOE-STD-3013 Containers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilk, P

    2008-01-16

    The following position paper will describe DOE-STD-3013 container sets No.L000072 and No.L000076, and how they are compliant with DOE-STD-3013-2004. All masses of accountable nuclear materials are measured on LLNL certified balances maintained under an MC&A Program approved by DOE/NNSA LSO. All accountability balances are recalibrated annually and checked to be within calibration on each day that the balance is used for accountability purposes. A statistical analysis of the historical calibration checks from the last seven years indicates that the full-range Limit of Error (LoE, 95% confidence level) for the balance used to measure the mass of the contents of the above indicated 3013 containers is 0.185 g. If this error envelope, at the 95% confidence level, were to be used to generate an upper limit to the measured weight of the containers No.L000072 and No.L000076, the error envelope would extend beyond the 5.0 kg 3013-standard limit on the package contents by less than 0.3 g. However, this is still well within the intended safety bounds of DOE-STD-3013-2004.

  8. Importance of interpolation and coincidence errors in data fusion

    NASA Astrophysics Data System (ADS)

    Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  9. Method and apparatus for correcting eddy current signal voltage for temperature effects

    DOEpatents

    Kustra, Thomas A.; Caffarel, Alfred J.

    1990-01-01

    An apparatus and method for measuring physical characteristics of an electrically conductive material by the use of eddy-current techniques and compensating measurement errors caused by changes in temperature includes a switching arrangement connected between primary and reference coils of an eddy-current probe which allows the probe to be selectively connected between an eddy current output oscilloscope and a digital ohm-meter for measuring the resistances of the primary and reference coils substantially at the time of eddy current measurement. In this way, changes in resistance due to temperature effects can be completely taken into account in determining the true error in the eddy current measurement. The true error can consequently be converted into an equivalent eddy current measurement correction.

  10. Branch-Based Model for the Diameters of the Pulmonary Airways: Accounting for Departures From Self-Consistency and Registration Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.

    2012-04-24

    We examine a previously published branch-based approach to modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that account for it. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it may likely lead to an incorrect representation of the diameter geometry. Measurement error has an important impact on the estimated morphometry models and needs to be accounted for in the analysis.

  11. GPS measurement error gives rise to spurious 180 degree turning angles and strong directional biases in animal movement data.

    PubMed

    Hurford, Amy

    2009-05-20

    Movement data are frequently collected using Global Positioning System (GPS) receivers, but recorded GPS locations are subject to errors. While past studies have suggested methods to improve location accuracy, mechanistic movement models utilize distributions of turning angles and directional biases and these data present a new challenge in recognizing and reducing the effect of measurement error. I collected locations from a stationary GPS collar, analyzed a probabilistic model and used Monte Carlo simulations to understand how measurement error affects measured turning angles and directional biases. Results from each of the three methods were in complete agreement: measurement error gives rise to a systematic bias where a stationary animal is most likely to be measured as turning 180 degrees or moving towards a fixed point in space. These spurious effects occur in GPS data when the measured distance between locations is <20 meters. Measurement error must be considered as a possible cause of 180 degree turning angles in GPS data. Consequences of failing to account for measurement error are predicting overly tortuous movement, numerous returns to previously visited locations, inaccurately predicting species range, core areas, and the frequency of crossing linear features. By understanding the effect of GPS measurement error, ecologists are able to disregard false signals to more accurately design conservation plans for endangered wildlife.
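
    The Monte Carlo part of the argument is compact enough to reproduce: simulate GPS fixes for a stationary collar as pure location noise and look at the resulting turning-angle distribution. The 5 m error SD below is an assumption for illustration, not a value from the study.

      import numpy as np

      rng = np.random.default_rng(4)
      n_fixes = 100_000
      sigma_m = 5.0                                  # hypothetical GPS error SD in metres

      fixes = rng.normal(0.0, sigma_m, size=(n_fixes, 2))       # "locations" of a stationary collar
      steps = np.diff(fixes, axis=0)
      headings = np.arctan2(steps[:, 1], steps[:, 0])
      turns = np.degrees(np.angle(np.exp(1j * np.diff(headings))))   # wrapped to (-180, 180]

      hist, _ = np.histogram(np.abs(turns), bins=[0, 45, 90, 135, 180])
      print("share of |turning angles| per 45-degree bin:", (hist / hist.sum()).round(3))
      # The 135-180 degree bin dominates: apparent reversals are an artefact of
      # measurement error, not of behaviour.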

  12. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  13. Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.

    2012-08-01

    We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
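
    A sketch of the arithmetic behind the protocol, using synthetic decay data rather than the experiment's: fit the reference and interleaved sequence fidelities to A*p^m + B and convert the two depolarizing parameters into a gate-error estimate via the standard point estimate r = (d-1)(1 - p_int/p_ref)/d; the accompanying theoretical bounds are omitted here for brevity.

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(m, A, p, B):
          return A * p**m + B

      rng = np.random.default_rng(5)
      m = np.arange(1, 200, 10)
      d = 2                                            # single qubit
      y_ref = decay(m, 0.5, 0.995, 0.5) + rng.normal(0, 0.003, m.size)   # reference sequences
      y_int = decay(m, 0.5, 0.990, 0.5) + rng.normal(0, 0.003, m.size)   # interleaved sequences

      popt_ref, _ = curve_fit(decay, m, y_ref, p0=[0.5, 0.99, 0.5])
      popt_int, _ = curve_fit(decay, m, y_int, p0=[0.5, 0.99, 0.5])
      p_ref, p_int = popt_ref[1], popt_int[1]

      r_gate = (d - 1) * (1 - p_int / p_ref) / d       # interleaved RB point estimate
      print(f"estimated average gate error: {r_gate:.4f}")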

  14. Regression calibration for models with two predictor variables measured with error and their interaction, using instrumental variables and longitudinal data.

    PubMed

    Strand, Matthew; Sillau, Stefan; Grunwald, Gary K; Rabinovitch, Nathan

    2014-02-10

    Regression calibration provides a way to obtain unbiased estimators of fixed effects in regression models when one or more predictors are measured with error. Recent development of measurement error methods has focused on models that include interaction terms between measured-with-error predictors, and separately, methods for estimation in models that account for correlated data. In this work, we derive explicit and novel forms of regression calibration estimators and associated asymptotic variances for longitudinal models that include interaction terms, when data from instrumental and unbiased surrogate variables are available but not the actual predictors of interest. The longitudinal data are fit using linear mixed models that contain random intercepts and account for serial correlation and unequally spaced observations. The motivating application involves a longitudinal study of exposure to two pollutants (predictors) - outdoor fine particulate matter and cigarette smoke - and their association in interactive form with levels of a biomarker of inflammation, leukotriene E4 (LTE4, outcome) in asthmatic children. Because the exposure concentrations could not be directly observed, we used measurements from a fixed outdoor monitor and urinary cotinine concentrations as instrumental variables, and we used concentrations of fine ambient particulate matter and cigarette smoke measured with error by personal monitors as unbiased surrogate variables. We applied the derived regression calibration methods to estimate coefficients of the unobserved predictors and their interaction, allowing for direct comparison of toxicity of the different pollutants. We used simulations to verify accuracy of inferential methods based on asymptotic theory. Copyright © 2013 John Wiley & Sons, Ltd.
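
    A generic two-stage regression-calibration sketch, not the paper's exact longitudinal estimator: predict each error-prone exposure from its instrument, then fit the outcome model on the predictions and their product. Using the product of predictions for the interaction is only an approximation (and exact here only because the simulated exposures are independent); the paper derives explicit corrected forms and variances for the mixed-model case. All variable names and error sizes below are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 5000
      x1, x2 = rng.normal(size=n), rng.normal(size=n)                    # true exposures (unobserved)
      z1, z2 = x1 + rng.normal(0, 0.5, n), x2 + rng.normal(0, 0.5, n)    # instruments
      w1, w2 = x1 + rng.normal(0, 0.8, n), x2 + rng.normal(0, 0.8, n)    # unbiased surrogates
      y = 1.0 + 0.5 * x1 + 0.3 * x2 + 0.4 * x1 * x2 + rng.normal(0, 1, n)

      def ols(X, y):
          return np.linalg.lstsq(X, y, rcond=None)[0]

      # Stage 1: calibration model for each exposure given its instrument.
      Z1, Z2 = np.column_stack([np.ones(n), z1]), np.column_stack([np.ones(n), z2])
      x1_hat, x2_hat = Z1 @ ols(Z1, w1), Z2 @ ols(Z2, w2)

      # Stage 2: outcome model on calibrated exposures and their interaction.
      X_naive = np.column_stack([np.ones(n), w1, w2, w1 * w2])
      X_cal = np.column_stack([np.ones(n), x1_hat, x2_hat, x1_hat * x2_hat])
      print("naive coefficients     :", ols(X_naive, y).round(3))
      print("calibrated coefficients:", ols(X_cal, y).round(3))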

  15. Measurement error in earnings data: Using a mixture model approach to combine survey and register data.

    PubMed

    Meijer, Erik; Rohwedder, Susann; Wansbeek, Tom

    2012-01-01

    Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but they are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the methods to a Swedish data set. Our results show that register earnings data perform poorly if there is a (small) probability of a mismatch. Survey earnings data are more reliable, despite their measurement error. Predictors that combine both and take conditional class probabilities into account outperform all other predictors.

  16. A contribution to the calculation of measurement uncertainty and optimization of measuring strategies in coordinate measurement

    NASA Astrophysics Data System (ADS)

    Waeldele, F.

    1983-01-01

    The influence of sample shape deviations on the measurement uncertainties and the optimization of computer aided coordinate measurement were investigated for a circle and a cylinder. Using the complete error propagation law in matrix form the parameter uncertainties are calculated, taking the correlation between the measurement points into account. Theoretical investigations show that the measuring points have to be equidistantly distributed and that for a cylindrical body a measuring point distribution along a cross section is better than along a helical line. The theoretically obtained expressions to calculate the uncertainties prove to be a good estimation basis. The simple error theory is not satisfactory for estimation. The complete statistical data analysis theory helps to avoid aggravating measurement errors and to adjust the number of measuring points to the required measuring uncertainty.

  17. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.

  18. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Working with Error and Uncertainty to Increase Measurement Validity

    ERIC Educational Resources Information Center

    Amrein-Beardsley, Audrey; Barnett, Joshua H.

    2012-01-01

    Over the previous two decades, the era of accountability has amplified efforts to measure educational effectiveness more than Edward Thorndike, the father of educational measurement, likely would have imagined. Expressly, the measurement structure for evaluating educational effectiveness continues to rely increasingly on one sole…

  20. Performance-Based Measurement: Action for Organizations and HPT Accountability

    ERIC Educational Resources Information Center

    Larbi-Apau, Josephine A.; Moseley, James L.

    2010-01-01

    Basic measurements and applications of six selected general but critical operational performance-based indicators--effectiveness, efficiency, productivity, profitability, return on investment, and benefit-cost ratio--are presented. With each measurement, goals and potential impact are explored. Errors, risks, limitations to measurements, and a…

  1. MEASURING ECONOMIC GROWTH FROM OUTER SPACE.

    PubMed

    Henderson, J Vernon; Storeygard, Adam; Weil, David N

    2012-04-01

    GDP growth is often measured poorly for countries and rarely measured at all for cities or subnational regions. We propose a readily available proxy: satellite data on lights at night. We develop a statistical framework that uses lights growth to augment existing income growth measures, under the assumption that measurement error in using observed light as an indicator of income is uncorrelated with measurement error in national income accounts. For countries with good national income accounts data, information on growth of lights is of marginal value in estimating the true growth rate of income, while for countries with the worst national income accounts, the optimal estimate of true income growth is a composite with roughly equal weights. Among poor-data countries, our new estimate of average annual growth differs by as much as 3 percentage points from official data. Lights data also allow for measurement of income growth in sub- and supranational regions. As an application, we examine growth in Sub Saharan African regions over the last 17 years. We find that real incomes in non-coastal areas have grown faster by 1/3 of an annual percentage point than coastal areas; non-malarial areas have grown faster than malarial ones by 1/3 to 2/3 annual percent points; and primate city regions have grown no faster than hinterland areas. Such applications point toward a research program in which "empirical growth" need no longer be synonymous with "national income accounts."
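
    The signal-extraction logic translates into a few lines: two unbiased but noisy measures of the same growth rate are combined with weights inversely proportional to their error variances, so the lights proxy gets little weight when national accounts are good and roughly half the weight when the two error variances are comparable. The error variances below are hypothetical, not the paper's estimates.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 1000
      true_growth = rng.normal(0.03, 0.02, n)
      lights_err_sd = 0.03
      lights_meas = true_growth + rng.normal(0, lights_err_sd, n)    # noisy lights-based proxy

      def composite(a, b, var_a, var_b):
          # Minimum-variance combination of two unbiased, independent measurements.
          w = (1 / var_a) / (1 / var_a + 1 / var_b)
          return w * a + (1 - w) * b

      rmse = lambda x: np.sqrt(np.mean((x - true_growth) ** 2))
      for gdp_err_sd in (0.01, 0.03):                 # good vs poor national accounts data
          gdp_meas = true_growth + rng.normal(0, gdp_err_sd, n)
          est = composite(gdp_meas, lights_meas, gdp_err_sd**2, lights_err_sd**2)
          print(f"GDP error SD {gdp_err_sd:.2f}: RMSE GDP-only {rmse(gdp_meas):.4f}, "
                f"composite {rmse(est):.4f}")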

  2. Branch-Based Model for the Diameters of the Pulmonary Airways: Accounting for Departures From Self-Consistency and Registration Errors

    PubMed Central

    Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.; Glenny, Robb W.; Minard, Kevin R.; Carson, James P.; Jiao, Xiangmin; Jacob, Richard E.; Cox, Timothy C.; Postlethwait, Edward M.; Corley, Richard A.

    2017-01-01

    We examine a previously published branch-based approach for modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that take account of error. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it may likely lead to an incorrect representation of the diameter geometry. The new variance model can be used instead. Measurement error has an important impact on the estimated morphometry models and needs to be addressed in the analysis. PMID:22528468

  3. Is comprehension necessary for error detection? A conflict-based account of monitoring in speech production

    PubMed Central

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in the aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients’ error-detection ability and the model’s characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor, generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error-detection in linguistic, as well as non-linguistic domains points to a domain-general monitoring system. PMID:21652015

  4. [Analysis of intrusion errors in free recall].

    PubMed

    Diesfeldt, H F A

    2017-06-01

    Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces rather than a primary deficit in inhibition the preferred account for intrusion errors in free recall.

  5. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data

    PubMed Central

    Nevo, Daniel; Zucker, David M.; Tamimi, Rulla M.; Wang, Molin

    2017-01-01

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps–clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses’ Health Study to demonstrate the utility of our method. PMID:27558651

  6. Void fraction and velocity measurement of simulated bubble in a rotating disc using high frame rate neutron radiography.

    PubMed

    Saito, Y; Mishima, K; Matsubayashi, M

    2004-10-01

    To evaluate measurement error of local void fraction and velocity field in a gas-molten metal two-phase flow by high-frame-rate neutron radiography, experiments using a rotating stainless-steel disc, which has several holes of various diameters and depths simulating gas bubbles, were performed. Measured instantaneous void fraction and velocity field of the simulated bubbles were compared with the calculated values based on the rotating speed, the diameter and the depth of the holes as parameters and the measurement error was evaluated. The rotating speed was varied from 0 to 350 rpm (tangential velocity of the simulated bubbles from 0 to 1.5 m/s). The effect of shutter speed of the imaging system on the measurement error was also investigated. It was revealed from the Lagrangian time-averaged void fraction profile that the measurement error of the instantaneous void fraction depends mainly on the light-decay characteristics of the fluorescent converter. The measurement error of the instantaneous local void fraction of simulated bubbles is estimated to be 20%. In the present imaging system, the light-decay characteristics of the fluorescent converter affect the measurement remarkably, and so should be taken into account in estimating the measurement error of the local void fraction profile.

  7. Multiscale measurement error models for aggregated small area health data.

    PubMed

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via the shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.

  8. Estimating the settling velocity of bioclastic sediment using common grain-size analysis techniques

    USGS Publications Warehouse

    Cuttler, Michael V. W.; Lowe, Ryan J.; Falter, James L.; Buscombe, Daniel D.

    2017-01-01

    Most techniques for estimating settling velocities of natural particles have been developed for siliciclastic sediments. Therefore, to understand how these techniques apply to bioclastic environments, measured settling velocities of bioclastic sedimentary deposits sampled from a nearshore fringing reef in Western Australia were compared with settling velocities calculated using results from several common grain-size analysis techniques (sieve, laser diffraction and image analysis) and established models. The effects of sediment density and shape were also examined using a range of density values and three different models of settling velocity. Sediment density was found to have a significant effect on calculated settling velocity, causing a range in normalized root-mean-square error of up to 28%, depending upon settling velocity model and grain-size method. Accounting for particle shape reduced errors in predicted settling velocity by 3% to 6% and removed any velocity-dependent bias, which is particularly important for the fastest settling fractions. When shape was accounted for and measured density was used, normalized root-mean-square errors were 4%, 10% and 18% for laser diffraction, sieve and image analysis, respectively. The results of this study show that established models of settling velocity that account for particle shape can be used to estimate settling velocity of irregularly shaped, sand-sized bioclastic sediments from sieve, laser diffraction, or image analysis-derived measures of grain size with a limited amount of error. Collectively, these findings will allow for grain-size data measured with different methods to be accurately converted to settling velocity for comparison. This will facilitate greater understanding of the hydraulic properties of bioclastic sediment which can help to increase our general knowledge of sediment dynamics in these environments.
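
    One established way to turn a grain-size measurement into a settling velocity while accounting for shape is the explicit formula of Ferguson and Church (2004), w = R*g*D^2 / (C1*nu + sqrt(0.75*C2*R*g*D^3)), whose constants shift from smooth-sphere to natural-grain values; whether this is exactly one of the models used in the study is not asserted here, and the carbonate grain density below is an assumed example value.

      import math

      def settling_velocity(d_m, rho_s=2800.0, rho_f=1025.0, nu=1.05e-6,
                            C1=18.0, C2=1.0, g=9.81):
          """Settling velocity (m/s) for grain diameter d_m (m), Ferguson-Church form."""
          R = (rho_s - rho_f) / rho_f                     # submerged specific gravity
          return R * g * d_m**2 / (C1 * nu + math.sqrt(0.75 * C2 * R * g * d_m**3))

      for d_mm in (0.125, 0.25, 0.5, 1.0, 2.0):
          w_sphere = settling_velocity(d_mm / 1000, C1=18.0, C2=0.4)    # smooth-sphere constants
          w_natural = settling_velocity(d_mm / 1000, C1=18.0, C2=1.0)   # natural-grain constants
          print(f"D = {d_mm:5.3f} mm: sphere {w_sphere:.3f} m/s, natural grain {w_natural:.3f} m/s")

    The gap between the two columns illustrates why accounting for shape removes the velocity-dependent bias described in the abstract, especially for the coarsest, fastest-settling fractions.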

  9. Numerical investigations of potential systematic uncertainties in iron opacity measurements at solar interior temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagayama, T.; Bailey, J. E.; Loisel, G. P.

    Iron opacity calculations presently disagree with measurements at an electron temperature of ~180–195 eV and an electron density of (2–4)×10^22 cm^-3, conditions similar to those at the base of the solar convection zone. The measurements use x rays to volumetrically heat a thin iron sample that is tamped with low-Z materials. The opacity is inferred from spectrally resolved x-ray transmission measurements. Plasma self-emission, tamper attenuation, and temporal and spatial gradients can all potentially cause systematic errors in the measured opacity spectra. In this article we quantitatively evaluate these potential errors with numerical investigations. The analysis exploits computer simulations that were previously found to reproduce the experimentally measured plasma conditions. The simulations, combined with a spectral synthesis model, enable evaluations of individual and combined potential errors in order to estimate their potential effects on the opacity measurement. Lastly, the results show that the errors considered here do not account for the previously observed model-data discrepancies.

  10. A method for sensitivity analysis to assess the effects of measurement error in multiple exposure variables using external validation data.

    PubMed

    Agogo, George O; van der Voet, Hilko; van 't Veer, Pieter; Ferrari, Pietro; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek C

    2016-10-13

    Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there is no internal validation data. We proposed a method to adjust for the bias in the diet-disease association (hereafter, association), due to measurement error in dietary intake and a mismeasured confounder, when there is no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations and uncertainty in the literature-reported validation data. We applied the methods to fruits and vegetables (FV) intakes, cigarette smoking (confounder) and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Using the proposed method resulted in about four times increase in the strength of association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake. The effect was more pronounced for strong error correlations. The proposed method permits sensitivity analysis on measurement error structures and accounts for uncertainties in the reported validity coefficients. The method is useful in assessing the direction and quantifying the magnitude of bias in the association due to measurement errors in the confounders.
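
    The heart of the correction can be shown in a toy multivariable setting. This is not the authors' estimator, which works from literature-reported validity coefficients and adds sensitivity analysis; here the calibration matrix is computed from simulated truth purely to show the mechanism: the naive coefficients are approximately Lambda' beta, where Lambda is the matrix for regressing the true covariates on their mismeasured versions, so inverting Lambda de-attenuates both the exposure and the confounder effects.

      import numpy as np

      rng = np.random.default_rng(8)
      n = 50_000
      X = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 1.0]], n)   # true exposure, true confounder
      W = X + rng.normal(0, [0.7, 0.5], (n, 2))                          # mismeasured versions
      beta = np.array([0.3, 0.5])
      y = X @ beta + rng.normal(0, 1, n)

      def ols(A, b):
          return np.linalg.lstsq(A, b, rcond=None)[0]

      beta_naive = ols(W, y)
      # Calibration matrix: column j holds the coefficients of true X_j regressed on W.
      Lambda = np.column_stack([ols(W, X[:, 0]), ols(W, X[:, 1])])
      beta_corrected = np.linalg.solve(Lambda, beta_naive)

      print("true     :", beta)
      print("naive    :", beta_naive.round(3))
      print("corrected:", beta_corrected.round(3))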

  11. Transfer Alignment Error Compensator Design Based on Robust State Estimation

    NASA Astrophysics Data System (ADS)

    Lyou, Joon; Lim, You-Chol

    This paper examines the transfer alignment problem of the StrapDown Inertial Navigation System (SDINS), which is subject to the ship’s roll and pitch. Major error sources for velocity and attitude matching are the lever arm effect, measurement time delay, and ship-body flexure. To reduce these alignment errors, an error compensation method based on state augmentation and robust state estimation is devised. A linearized error model for the velocity and attitude matching transfer alignment system is derived first by linearizing the nonlinear measurement equation with respect to its time delay and dominant Y-axis flexure, and by augmenting the delay state and flexure state into conventional linear state equations. Then an H∞ filter is introduced to account for modeling uncertainties of time delay and the ship-body flexure. The simulation results show that this method considerably decreases azimuth alignment errors.

  12. Accounting for the decrease of photosystem photochemical efficiency with increasing irradiance to estimate quantum yield of leaf photosynthesis.

    PubMed

    Yin, Xinyou; Belay, Daniel W; van der Putten, Peter E L; Struik, Paul C

    2014-12-01

    Maximum quantum yield for leaf CO2 assimilation under limiting light conditions (Φ CO2LL) is commonly estimated as the slope of the linear regression of net photosynthetic rate against absorbed irradiance over a range of low-irradiance conditions. Methodological errors associated with this estimation have often been attributed either to light absorptance by non-photosynthetic pigments or to some data points being beyond the linear range of the irradiance response, both causing an underestimation of Φ CO2LL. We demonstrate here that a decrease in photosystem (PS) photochemical efficiency with increasing irradiance, even at very low levels, is another source of error that causes a systematic underestimation of Φ CO2LL. A model method accounting for this error was developed, and was used to estimate Φ CO2LL from simultaneous measurements of gas exchange and chlorophyll fluorescence on leaves using various combinations of species, CO2, O2, or leaf temperature levels. The conventional linear regression method under-estimated Φ CO2LL by ca. 10-15%. Differences in the estimated Φ CO2LL among measurement conditions were generally accounted for by different levels of photorespiration as described by the Farquhar-von Caemmerer-Berry model. However, our data revealed that the temperature dependence of PSII photochemical efficiency under low light was an additional factor that should be accounted for in the model.

  13. Imputing Risk Tolerance From Survey Responses

    PubMed Central

    Kimball, Miles S.; Sahm, Claudia R.; Shapiro, Matthew D.

    2010-01-01

    Economic theory assigns a central role to risk preferences. This article develops a measure of relative risk tolerance using responses to hypothetical income gambles in the Health and Retirement Study. In contrast to most survey measures that produce an ordinal metric, this article shows how to construct a cardinal proxy for the risk tolerance of each survey respondent. The article also shows how to account for measurement error in estimating this proxy and how to obtain consistent regression estimates despite the measurement error. The risk tolerance proxy is shown to explain differences in asset allocation across households. PMID:20407599

  14. MEASURING ECONOMIC GROWTH FROM OUTER SPACE

    PubMed Central

    Henderson, J. Vernon; Storeygard, Adam; Weil, David N.

    2013-01-01

    GDP growth is often measured poorly for countries and rarely measured at all for cities or subnational regions. We propose a readily available proxy: satellite data on lights at night. We develop a statistical framework that uses lights growth to augment existing income growth measures, under the assumption that measurement error in using observed light as an indicator of income is uncorrelated with measurement error in national income accounts. For countries with good national income accounts data, information on growth of lights is of marginal value in estimating the true growth rate of income, while for countries with the worst national income accounts, the optimal estimate of true income growth is a composite with roughly equal weights. Among poor-data countries, our new estimate of average annual growth differs by as much as 3 percentage points from official data. Lights data also allow for measurement of income growth in sub- and supranational regions. As an application, we examine growth in Sub Saharan African regions over the last 17 years. We find that real incomes in non-coastal areas have grown faster by 1/3 of an annual percentage point than coastal areas; non-malarial areas have grown faster than malarial ones by 1/3 to 2/3 annual percent points; and primate city regions have grown no faster than hinterland areas. Such applications point toward a research program in which “empirical growth” need no longer be synonymous with “national income accounts.” PMID:25067841

  15. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric errors and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is satisfactory overall. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results vary considerably depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Finally, perspectives for improving and validating the streamflow uncertainty estimates are discussed.
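
    A Monte Carlo sketch of the propagation step, using illustrative magnitudes and a simple power-law rating curve rather than the paper's full Bayesian rating-curve model: perturb the stage record with a per-replicate systematic gauge error plus independent per-reading noise, push each replicate through an uncertain rating curve Q = a*(h - b)^c, and summarize the spread of the resulting discharge series.

      import numpy as np

      rng = np.random.default_rng(9)
      n_rep, n_t = 2000, 200
      h = 1.0 + 0.5 * np.sin(np.linspace(0, 6 * np.pi, n_t)) + 0.8      # synthetic stage record (m)

      a = rng.normal(30.0, 1.5, n_rep)           # rating-curve parameter uncertainty
      b = rng.normal(0.20, 0.02, n_rep)
      c = rng.normal(1.60, 0.05, n_rep)
      sys_err = rng.normal(0.0, 0.010, n_rep)             # systematic stage error per replicate (m)
      rand_err = rng.normal(0.0, 0.005, (n_rep, n_t))     # non-systematic stage error (m)

      h_rep = h[None, :] + sys_err[:, None] + rand_err
      Q = a[:, None] * np.clip(h_rep - b[:, None], 0, None) ** c[:, None]

      q50 = np.percentile(Q, 50, axis=0)
      q05, q95 = np.percentile(Q, 5, axis=0), np.percentile(Q, 95, axis=0)
      print("median relative width of the 90% discharge interval:",
            np.median((q95 - q05) / q50).round(3))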

  16. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data.

    PubMed

    Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin

    2016-12-30

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps-clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Power/Sample Size Calculations for Assessing Correlates of Risk in Clinical Efficacy Trials

    PubMed Central

    Gilbert, Peter B.; Janes, Holly E.; Huang, Yunda

    2016-01-01

    In a randomized controlled clinical trial that assesses treatment efficacy, a common objective is to assess the association of a measured biomarker response endpoint with the primary study endpoint in the active treatment group, using a case-cohort, case-control, or two-phase sampling design. Methods for power and sample size calculations for such biomarker association analyses typically do not account for the level of treatment efficacy, precluding interpretation of the biomarker association results in terms of biomarker effect modification of treatment efficacy, with the detriment that the power calculations may tacitly and inadvertently assume that the treatment harms some study participants. We develop power and sample size methods accounting for this issue, and the methods also account for inter-individual variability of the biomarker that is not biologically relevant (e.g., due to technical measurement error). We focus on a binary study endpoint and on a biomarker subject to measurement error that is normally distributed or categorical with two or three levels. We illustrate the methods with preventive HIV vaccine efficacy trials, and include an R package implementing the methods. PMID:27037797

  18. Carbon dioxide emission tallies for 210 U.S. coal-fired power plants: a comparison of two accounting methods.

    PubMed

    Quick, Jeffrey C

    2014-01-01

    Annual CO2 emission tallies for 210 coal-fired power plants during 2009 were more accurately calculated from fuel consumption records reported by the U.S. Energy Information Administration (EIA) than measurements from Continuous Emissions Monitoring Systems (CEMS) reported by the U.S. Environmental Protection Agency. Results from these accounting methods for individual plants vary by +/- 10.8%. Although the differences systematically vary with the method used to certify flue-gas flow instruments in CEMS, additional sources of CEMS measurement error remain to be identified. Limitations of the EIA fuel consumption data are also discussed. Consideration of weighing, sample collection, laboratory analysis, emission factor, and stock adjustment errors showed that the minimum error for CO2 emissions calculated from the fuel consumption data ranged from +/- 1.3% to +/- 7.2% with a plant average of +/- 1.6%. This error might be reduced by 50% if the carbon content of coal delivered to U.S. power plants were reported. Potentially, this study might inform efforts to regulate CO2 emissions (such as CO2 performance standards or taxes) and more immediately, the U.S. Greenhouse Gas Reporting Rule where large coal-fired power plants currently use CEMS to measure CO2 emissions. Moreover, if, as suggested here, the flue-gas flow measurement limits the accuracy of CO2 emission tallies from CEMS, then the accuracy of other emission tallies from CEMS (such as SO2, NOx, and Hg) would be similarly affected. Consequently, improved flue gas flow measurements are needed to increase the reliability of emission measurements from CEMS.
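
    The fuel-based accounting itself is simple arithmetic, shown below with hypothetical inputs: the CO2 mass follows from coal consumption, carbon content and the 44/12 molar mass ratio, and the relative uncertainty combines weighing, sampling, laboratory and stock-adjustment errors in quadrature under an independence assumption.

      import math

      coal_tonnes = 3.0e6            # hypothetical annual coal consumption
      carbon_frac = 0.65             # hypothetical carbon mass fraction of the as-received coal
      co2_tonnes = coal_tonnes * carbon_frac * 44.0 / 12.0

      rel_errors = {"weighing": 0.005, "sampling": 0.010, "lab analysis": 0.005,
                    "stock adjustment": 0.005}
      rel_total = math.sqrt(sum(e**2 for e in rel_errors.values()))      # independence assumed

      print(f"CO2 emitted: {co2_tonnes:,.0f} t  (+/- {rel_total:.1%})")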

  19. Errors in quantitative backscattered electron analysis of bone standardized by energy-dispersive x-ray spectrometry.

    PubMed

    Vajda, E G; Skedros, J G; Bloebaum, R D

    1998-10-01

    Backscattered electron (BSE) imaging has proven to be a useful method for analyzing the mineral distribution in microscopic regions of bone. However, an accepted method of standardization has not been developed, limiting the utility of BSE imaging for truly quantitative analysis. Previous work has suggested that BSE images can be standardized by energy-dispersive x-ray spectrometry (EDX). Unfortunately, EDX-standardized BSE images tend to underestimate the mineral content of bone when compared with traditional ash measurements. The goal of this study is to investigate the nature of the deficit between EDX-standardized BSE images and ash measurements. A series of analytical standards, ashed bone specimens, and unembedded bone specimens were investigated to determine the source of the deficit previously reported. The primary source of error was found to be inaccurate ZAF corrections to account for the organic phase of the bone matrix. Conductive coatings, methylmethacrylate embedding media, and minor elemental constituents in bone mineral introduced negligible errors. It is suggested that the errors would remain constant and an empirical correction could be used to account for the deficit. However, extensive preliminary testing of the analysis equipment is essential.

  20. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account. Within...

  1. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account. Within...

  2. Numerical investigations of potential systematic uncertainties in iron opacity measurements at solar interior temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagayama, T.; Bailey, J. E.; Loisel, G. P.

    Iron opacity calculations presently disagree with measurements at an electron temperature of ~180–195 eV and an electron density of (2–4)×10^22 cm^-3, conditions similar to those at the base of the solar convection zone. The measurements use x rays to volumetrically heat a thin iron sample that is tamped with low-Z materials. The opacity is inferred from spectrally resolved x-ray transmission measurements. Plasma self-emission, tamper attenuation, and temporal and spatial gradients can all potentially cause systematic errors in the measured opacity spectra. In this article we quantitatively evaluate these potential errors with numerical investigations. The analysis exploits computer simulations that were previously found to reproduce the experimentally measured plasma conditions. The simulations, combined with a spectral synthesis model, enable evaluations of individual and combined potential errors in order to estimate their potential effects on the opacity measurement. Lastly, the results show that the errors considered here do not account for the previously observed model-data discrepancies.

  3. Numerical investigations of potential systematic uncertainties in iron opacity measurements at solar interior temperatures

    DOE PAGES

    Nagayama, T.; Bailey, J. E.; Loisel, G. P.; ...

    2017-06-26

    Iron opacity calculations presently disagree with measurements at an electron temperature of ~180–195 eV and an electron density of (2–4)×10^22 cm^-3, conditions similar to those at the base of the solar convection zone. The measurements use x rays to volumetrically heat a thin iron sample that is tamped with low-Z materials. The opacity is inferred from spectrally resolved x-ray transmission measurements. Plasma self-emission, tamper attenuation, and temporal and spatial gradients can all potentially cause systematic errors in the measured opacity spectra. In this article we quantitatively evaluate these potential errors with numerical investigations. The analysis exploits computer simulations that were previously found to reproduce the experimentally measured plasma conditions. The simulations, combined with a spectral synthesis model, enable evaluations of individual and combined potential errors in order to estimate their potential effects on the opacity measurement. Lastly, the results show that the errors considered here do not account for the previously observed model-data discrepancies.

  4. Growth models and the expected distribution of fluctuating asymmetry

    USGS Publications Warehouse

    Graham, John H.; Shimizu, Kunio; Emlen, John M.; Freeman, D. Carl; Merkel, John

    2003-01-01

    Multiplicative error accounts for much of the size-scaling and leptokurtosis in fluctuating asymmetry. It arises when growth involves the addition of tissue to that which is already present. Such errors are lognormally distributed. The distribution of the difference between two lognormal variates is leptokurtic. If those two variates are correlated, then the asymmetry variance will scale with size. Inert tissues typically exhibit additive error and have a gamma distribution. Although their asymmetry variance does not exhibit size-scaling, the distribution of the difference between two gamma variates is nevertheless leptokurtic. Measurement error is also additive, but has a normal distribution. Thus, the measurement of fluctuating asymmetry may involve the mixing of additive and multiplicative error. When errors are multiplicative, we recommend computing log E(l) − log E(r), the difference between the logarithms of the expected values of left and right sides, even when size-scaling is not obvious. If l and r are lognormally distributed, and measurement error is nil, the resulting distribution will be normal, and multiplicative error will not confound size-related changes in asymmetry. When errors are additive, such a transformation to remove size-scaling is unnecessary. Nevertheless, the distribution of l − r may still be leptokurtic.
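
    The distributional claims are easy to check by simulation (arbitrary error size and left-right correlation): correlated lognormal multiplicative errors on the two sides make l - r leptokurtic and give it a variance that grows with trait size, whereas log(l) - log(r) is approximately normal and size-free.

      import numpy as np
      from scipy.stats import kurtosis

      rng = np.random.default_rng(10)
      n = 200_000
      size = rng.uniform(0.5, 2.0, n)                        # trait size across individuals
      sigma = 0.03
      z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], n) * sigma   # correlated side errors
      l, r = size * np.exp(z[:, 0]), size * np.exp(z[:, 1])

      d_raw, d_log = l - r, np.log(l) - np.log(r)
      print("excess kurtosis, raw diff:", kurtosis(d_raw).round(2),
            " log diff:", kurtosis(d_log).round(2))

      small, large = size < 1.0, size > 1.5
      print("var(l - r), small vs large individuals      :",
            d_raw[small].var().round(5), d_raw[large].var().round(5))
      print("var(log l - log r), small vs large individuals:",
            d_log[small].var().round(5), d_log[large].var().round(5))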

  5. Contribution of long-term accounting for raindrop size distribution variations on quantitative precipitation estimation by weather radar: Disdrometers vs parameter optimization

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Uijlenhoet, R.; Leijnse, H.

    2015-12-01

    Volumetric weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources, which can be subdivided into two main groups: 1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, vertical profile of reflectivity, attenuation, etc.), and 2) errors related to the conversion of the observed reflectivity (Z) values into rainfall intensity (R) and specific attenuation (k). Until the recent wide-scale implementation of dual-polarimetric radar, this second group of errors received relatively little attention, focusing predominantly on precipitation type-dependent Z-R and Z-k relations. The current work accounts for the impact of variations of the drop size distribution (DSD) on the radar QPE performance. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed within The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. However, overall precipitation intensities are still underestimated. This underestimation is expected to result from unaccounted errors (e.g. transmitter calibration, erroneous identification of precipitation as clutter, overshooting and small-scale variability). In case the DSD parameters are optimized, the performance of the radar is further improved, resulting in the best performance of the radar QPE product. However, the resulting optimal Z-R and Z-k relations are considerably different from those obtained from disdrometer observations. As such, the best microphysical parameter set results in a minimization of the overall bias, which besides accounting for DSD variations also corrects for the impact of additional error sources.
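
    As a rough illustration of the parameter-optimization route described above (not the paper's normalized-gamma DSD formulation), the sketch below inverts a power-law Z-R relation and tunes its coefficients by a Monte Carlo search that minimizes the overall bias against gauge totals. All coefficients and the synthetic data are assumptions made purely for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      def rain_rate(z_dbz, a, b):
          """Invert the power law Z = a * R**b (Z in linear units of mm^6 m^-3)."""
          z_lin = 10.0 ** (z_dbz / 10.0)
          return (z_lin / a) ** (1.0 / b)

      # Synthetic "truth": gauge rain rates and the reflectivity they would produce
      # under an unknown Z-R relation (a=300, b=1.5 here, purely illustrative).
      r_true = rng.gamma(shape=2.0, scale=2.0, size=500)                  # mm/h
      z_dbz = 10.0 * np.log10(300.0 * r_true ** 1.5) + rng.normal(0, 1.0, r_true.size)

      # Monte Carlo search over (a, b) minimizing the overall bias against the gauges,
      # mimicking the optimization route sketched in the abstract.
      best = None
      for _ in range(5000):
          a = rng.uniform(50.0, 600.0)
          b = rng.uniform(1.0, 2.5)
          score = abs(rain_rate(z_dbz, a, b).sum() - r_true.sum())        # bias criterion
          if best is None or score < best[0]:
              best = (score, a, b)

      print("optimized a, b:", round(best[1], 1), round(best[2], 2))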

  6. Influence of non-ideal performance of lasers on displacement precision in single-grating heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Guochao; Xie, Xuedong; Yan, Shuhua

    2010-10-01

    The principle of a dual-wavelength single-grating nanometer displacement measuring system with long range, high precision, and good stability is presented. Because the measurement targets nanometer-level precision, errors caused by a variety of adverse factors must be taken into account. In this paper, errors due to the non-ideal performance of the dual-frequency laser, including the linear error caused by wavelength instability and the nonlinear error caused by elliptic polarization of the laser, are discussed and analyzed. On the basis of theoretical modeling, the corresponding error formulas are derived as well. Through simulation, the limit value of the linear error caused by wavelength instability is 2 nm, and, assuming Tx = 0.85 and Ty = 1 for the polarizing beam splitter (PBS), the limit values of the nonlinear error caused by elliptic polarization are 1.49 nm, 2.99 nm, and 4.49 nm for non-orthogonality angles of 1°, 2°, and 3°, respectively. The law of the error variation is analyzed for different values of Tx and Ty.

  7. Determining the refractive index and thickness of thin films from prism coupler measurements

    NASA Technical Reports Server (NTRS)

    Kirsch, S. T.

    1981-01-01

    A simple method of determining thin-film parameters from mode indices measured using a prism coupler is described. The problem is reduced to two least-squares straight-line fits through the measured mode indices versus effective mode number. The slope and y-intercept of the line are simply related to the thickness and refractive index of the film, respectively. The approach takes into account the correlation between, as well as the uncertainty in, the individual measurements from all sources of error to give precise error tolerances on the best-fit values. Because of the precision of these tolerances, anisotropic films can be identified and characterized.
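
    The paper's mapping from fitted slope and intercept to film thickness and refractive index is not reproduced here; the sketch below only illustrates the weighted straight-line fit through measured mode indices versus effective mode number, together with the parameter covariance such a fit provides. All numbers are made up for illustration.

      import numpy as np

      # Measured mode indices vs. effective mode number for one polarization
      # (illustrative values; converting slope/intercept to thickness and index
      # follows the relations given in the paper and is not reproduced here).
      m = np.array([0, 1, 2, 3, 4], dtype=float)      # effective mode number
      n_eff = np.array([1.620, 1.605, 1.581, 1.548, 1.506])
      sigma = np.full_like(n_eff, 5e-4)               # per-point measurement uncertainty

      # Weighted least-squares line fit with covariance of (slope, intercept).
      w = 1.0 / sigma**2
      A = np.vstack([m, np.ones_like(m)]).T
      cov = np.linalg.inv(A.T @ (w[:, None] * A))
      slope, intercept = cov @ (A.T @ (w * n_eff))
      slope_err, intercept_err = np.sqrt(np.diag(cov))

      print(f"slope     = {slope:.5f} +/- {slope_err:.5f}")
      print(f"intercept = {intercept:.5f} +/- {intercept_err:.5f}")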

  8. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200

  9. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits.

    PubMed

    Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M

    2015-04-29

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.

  10. Stochastic Surface Mesh Reconstruction

    NASA Astrophysics Data System (ADS)

    Ozendi, M.; Akca, D.; Topan, H.

    2018-05-01

    A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step develops an anisotropic point error model, which is capable of computing the theoretical precision of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step focuses on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure, which takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the least errors are used in the surface triangulation; the remaining ones are automatically discarded.

  11. Psychophysical measurements in children: challenges, pitfalls, and considerations.

    PubMed

    Witton, Caroline; Talcott, Joel B; Henning, G Bruce

    2017-01-01

    Measuring sensory sensitivity is important in studying development and developmental disorders. However, with children, there is a need to balance reliable but lengthy sensory tasks with the child's ability to maintain motivation and vigilance. We used simulations to explore the problems associated with shortening adaptive psychophysical procedures, and suggest how these problems might be addressed. We quantify how adaptive procedures with too few reversals can over-estimate thresholds, introduce substantial measurement error, and make estimates of individual thresholds less reliable. The associated measurement error also obscures group differences. Adaptive procedures with children should therefore use as many reversals as possible, to reduce the effects of both Type 1 and Type 2 errors. Differences in response consistency, resulting from lapses in attention, further increase the over-estimation of threshold. Comparisons between data from individuals who may differ in lapse rate are therefore problematic, but measures to estimate and account for lapse rates in analyses may mitigate this problem.
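
    A small simulation in the spirit of the one described above (the 2-down/1-up rule, logistic observer, and all parameters are assumptions for illustration, not the authors' exact procedure) shows how averaging over only a few reversals yields higher and much more variable threshold estimates than averaging over many reversals.

      import numpy as np

      rng = np.random.default_rng(2)

      def p_correct(level, threshold=0.0, slope=1.0, lapse=0.0):
          """Simulated observer: logistic psychometric function with optional lapses."""
          p = 1.0 / (1.0 + np.exp(-(level - threshold) * slope))
          return (1 - lapse) * p + lapse * 0.5

      def run_staircase(n_reversals, start=5.0, step=0.5, lapse=0.0):
          """Simple 2-down/1-up staircase; threshold estimate = mean of reversal levels."""
          level, correct_run, direction = start, 0, 0
          reversals = []
          while len(reversals) < n_reversals:
              if rng.random() < p_correct(level, lapse=lapse):
                  correct_run += 1
                  if correct_run == 2:          # two correct in a row -> go down
                      correct_run = 0
                      if direction == +1:
                          reversals.append(level)
                      direction = -1
                      level -= step
              else:                             # one error -> go up
                  correct_run = 0
                  if direction == -1:
                      reversals.append(level)
                  direction = +1
                  level += step
          return np.mean(reversals)

      for n_rev in (4, 16):
          est = [run_staircase(n_rev) for _ in range(500)]
          print(f"{n_rev:2d} reversals: mean estimate {np.mean(est):+.2f}, sd {np.std(est):.2f}")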

  12. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.

  13. Accounting for baseline differences and measurement error in the analysis of change over time.

    PubMed

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question if a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. Copyright © 2013 John Wiley & Sons, Ltd.

  14. Estimating the acute health effects of coarse particulate matter accounting for exposure measurement error.

    PubMed

    Chang, Howard H; Peng, Roger D; Dominici, Francesca

    2011-10-01

    In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.

  15. The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model

    PubMed Central

    Fritz, Matthew S.; Kenny, David A.; MacKinnon, David P.

    2016-01-01

    Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator to outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. In order to explore the combined effect of measurement error and omitted confounders in the same model, the impact of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect. PMID:27739903
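
    A quick simulation of the single-mediator model (coefficients and error variances are illustrative assumptions) reproduces the pattern described above: mediator measurement error attenuates the estimated mediated effect, an omitted confounder of the mediator-outcome path inflates it, and the two violations together can nearly cancel.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 200_000

      # Single-mediator model with X randomized; true a = b = 0.5, mediated effect a*b = 0.25.
      x = rng.normal(size=n)
      u = rng.normal(size=n)                         # potential confounder of the M -> Y path
      m = 0.5 * x + 0.6 * u + rng.normal(size=n)
      m_noisy = m + rng.normal(size=n)               # mediator measured with classical error
      y_clean = 0.5 * m + rng.normal(size=n)                   # no confounding of M -> Y
      y_conf = 0.5 * m + 0.6 * u + rng.normal(size=n)          # u omitted from the analysis

      def mediated_effect(x, m, y):
          """a*b from the usual two regressions: m ~ x and y ~ m + x."""
          a = np.polyfit(x, m, 1)[0]
          X = np.column_stack([np.ones_like(x), m, x])
          b = np.linalg.lstsq(X, y, rcond=None)[0][1]
          return a * b

      print("true mediated effect     : 0.250")
      print("measurement error only   :", round(mediated_effect(x, m_noisy, y_clean), 3))  # attenuated
      print("omitted confounder only  :", round(mediated_effect(x, m, y_conf), 3))         # inflated
      print("both violations together :", round(mediated_effect(x, m_noisy, y_conf), 3))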

  16. [Measuring the effect of eyeglasses on determination of squint angle with Purkinje reflexes and the prism cover test].

    PubMed

    Barry, J C; Backes, A

    1998-04-01

    The alternating prism and cover test is the conventional test for the measurement of the angle of strabismus. The error induced by the prismatic effect of glasses is typically about 27-30%/10 D. Alternatively, the angle of strabismus can be measured with methods based on Purkinje reflex positions. This study examines the differences between three such options, taking into account the influence of glasses. The studied system comprised the eyes with or without glasses, a fixation object and a device for recording the eye position: in the case of the alternate prism and cover test, a prism bar was required; in the case of a Purkinje reflex based device, light sources for generation of reflexes and a camera for the documentation of the reflex positions were used. Measurements performed on model eyes and computer ray traces were used to analyze and compare the options. When a single corneal reflex is used, the misalignment of the corneal axis can be measured; the error in this measurement due to the prismatic effect of glasses was 7.6%/10 D, the smallest found in this study. The individual Hirschberg ratio can be determined by monocular measurements in three gaze directions. The angle of strabismus can be measured with Purkinje reflex based methods, provided that the fundamental differences between these methods and the alternate prism and cover test, as well as the influence of glasses and other sources of error, are accounted for.

  17. Measurement of solar radius changes

    NASA Technical Reports Server (NTRS)

    Labonte, B. J.; Howard, R.

    1981-01-01

    Results of daily photometric measurements of the solar radius from Mt. Wilson over the past seven years are reported. Reduction of the full-disk magnetograms yields a formal error of 0.1 arcsec in the boustrophedonic scans in the 5250.2 A FeI line. Each observation comprises 150 scan lines; 1,412 observations were made from 1974 to 1981. Measurement procedures, determination of the scattered light of the optics and the atmosphere, and error calculations are described, noting that days of poor atmospheric visibility are omitted from the data. The horizontal diameter of the sun remains visually fixed while the vertical component changes due to atmospheric refraction; errors due to thermal effects, telescope aberrations, and instrument calibration are discussed, and the results, within instrument accuracy, indicate no change in the solar radius over the last seven years.

  18. Nonparametric Signal Extraction and Measurement Error in the Analysis of Electroencephalographic Activity During Sleep

    PubMed Central

    Crainiceanu, Ciprian M.; Caffo, Brian S.; Di, Chong-Zhi; Punjabi, Naresh M.

    2009-01-01

    We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS. PMID:20057925

  19. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  20. Uncertainty evaluation in normalization of isotope delta measurement results against international reference materials.

    PubMed

    Meija, Juris; Chartrand, Michelle M G

    2018-01-01

    Isotope delta measurements are normalized against international reference standards. Although multi-point normalization is becoming a standard practice, the existing uncertainty evaluation practices are either undocumented or incomplete. For multi-point normalization, we present errors-in-variables regression models that explicitly account for the measurement uncertainty of the international standards along with the uncertainty attributed to their assigned values. This manuscript presents a framework to account for the uncertainty that arises from a small number of replicate measurements and discusses multi-laboratory data reduction while accounting for the inevitable correlations between laboratories due to the use of identical reference materials for calibration. Both frequentist and Bayesian methods of uncertainty analysis are discussed.
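
    Errors-in-variables straight-line fitting with uncertainties on both axes is available in scipy as orthogonal distance regression. The sketch below is a minimal two-anchor-style normalization under assumed, illustrative delta values and uncertainties; it is not the authors' full multi-laboratory framework.

      import numpy as np
      from scipy import odr

      # Reference materials: assigned delta values (x, with uncertainties) and
      # measured values (y, with measurement uncertainties). Numbers are illustrative.
      assigned = np.array([-55.5, -10.4, 2.0])
      u_assigned = np.array([0.02, 0.02, 0.02])
      measured = np.array([-54.1, -9.2, 2.9])
      u_measured = np.array([0.05, 0.04, 0.05])

      # Errors-in-variables fit: measured = beta[0] * assigned + beta[1].
      linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
      data = odr.RealData(assigned, measured, sx=u_assigned, sy=u_measured)
      fit = odr.ODR(data, linear, beta0=[1.0, 0.0]).run()
      slope, intercept = fit.beta
      print("slope, intercept    :", slope, intercept)
      print("parameter std errors:", fit.sd_beta)

      # Normalize an unknown sample's measured delta back onto the reference scale.
      delta_sample_measured = -25.0
      delta_sample = (delta_sample_measured - intercept) / slope
      print("normalized sample delta:", round(delta_sample, 2))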

  1. Statistical design and analysis for plant cover studies with multiple sources of observation errors

    USGS Publications Warehouse

    Wright, Wilson; Irvine, Kathryn M.; Warren, Jeffrey M.; Barnett, Jenny K.

    2017-01-01

    Effective wildlife habitat management and conservation requires understanding the factors influencing distribution and abundance of plant species. Field studies, however, have documented observation errors in visually estimated plant cover including measurements which differ from the true value (measurement error) and not observing a species that is present within a plot (detection error). Unlike the rapid expansion of occupancy and N-mixture models for analysing wildlife surveys, development of statistical models accounting for observation error in plants has not progressed quickly. Our work informs development of a monitoring protocol for managed wetlands within the National Wildlife Refuge System. Zero-augmented beta (ZAB) regression is the most suitable method for analysing areal plant cover recorded as a continuous proportion but assumes no observation errors. We present a model extension that explicitly includes the observation process thereby accounting for both measurement and detection errors. Using simulations, we compare our approach to a ZAB regression that ignores observation errors (naïve model) and an “ad hoc” approach using a composite of multiple observations per plot within the naïve model. We explore how sample size and within-season revisit design affect the ability to detect a change in mean plant cover between 2 years using our model. Explicitly modelling the observation process within our framework produced unbiased estimates and nominal coverage of model parameters. The naïve and “ad hoc” approaches resulted in underestimation of occurrence and overestimation of mean cover. The degree of bias was primarily driven by imperfect detection and its relationship with cover within a plot. Conversely, measurement error had minimal impacts on inferences. We found >30 plots with at least three within-season revisits achieved reasonable posterior probabilities for assessing change in mean plant cover. For rapid adoption and application, code for Bayesian estimation of our single-species ZAB with errors model is included. Practitioners utilizing our R-based simulation code can explore trade-offs among different survey efforts and parameter values, as we did, but tuned to their own investigation. Less abundant plant species of high ecological interest may warrant the additional cost of gathering multiple independent observations in order to guard against erroneous conclusions.
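
    The direction of the naive bias described above is easy to reproduce with a toy simulation (all parameters are assumptions for illustration, not values from the monitoring data): when detection probability rises with cover, plots that are occupied at low cover are missed, so naive occurrence is too low and naive mean cover among detections is too high.

      import numpy as np

      rng = np.random.default_rng(4)
      n_plots = 100_000

      # True state: occupancy and, where present, areal cover as a Beta proportion.
      psi = 0.6
      occupied = rng.random(n_plots) < psi
      cover = np.where(occupied, rng.beta(1.5, 6.0, n_plots), 0.0)

      # Detection probability increases with cover (sparse plants are easy to miss).
      p_detect = 1.0 - np.exp(-12.0 * cover)
      detected = occupied & (rng.random(n_plots) < p_detect)

      print("true occurrence             :", psi)
      print("naive occurrence            :", round(detected.mean(), 3))
      print("true mean cover | occupied  :", round(cover[occupied].mean(), 3))
      print("naive mean cover | detected :", round(cover[detected].mean(), 3))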

  2. Analyzing a stochastic time series obeying a second-order differential equation.

    PubMed

    Lehle, B; Peinke, J

    2015-06-01

    The stochastic properties of a Langevin-type Markov process can be extracted from a given time series by a Markov analysis. Also processes that obey a stochastically forced second-order differential equation can be analyzed this way by employing a particular embedding approach: To obtain a Markovian process in 2N dimensions from a non-Markovian signal in N dimensions, the system is described in a phase space that is extended by the temporal derivative of the signal. For a discrete time series, however, this derivative can only be calculated by a differencing scheme, which introduces an error. If the effects of this error are not accounted for, this leads to systematic errors in the estimation of the drift and diffusion functions of the process. In this paper we will analyze these errors and we will propose an approach that correctly accounts for them. This approach allows an accurate parameter estimation and, additionally, is able to cope with weak measurement noise, which may be superimposed to a given time series.
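
    The correction developed in the paper is not reproduced here, but the underlying Markov (Kramers-Moyal) analysis it builds on can be sketched for a first-order Langevin process: estimate the drift as the conditional mean increment per unit time, and note how even weak measurement noise, if left unaccounted for, badly distorts that estimate. All parameters below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(5)

      # Simulate an Ornstein-Uhlenbeck process dx = -gamma*x dt + sqrt(2D) dW.
      gamma, D, dt, n = 1.0, 0.5, 1e-3, 1_000_000
      x = np.empty(n)
      x[0] = 0.0
      kicks = rng.normal(scale=np.sqrt(2 * D * dt), size=n - 1)
      for i in range(n - 1):
          x[i + 1] = x[i] - gamma * x[i] * dt + kicks[i]

      def drift_estimate(series, dt, bins):
          """Kramers-Moyal D1(x): conditional mean of increments per unit time."""
          idx = np.digitize(series[:-1], bins)
          incr = np.diff(series) / dt
          return np.array([incr[idx == k].mean() if np.any(idx == k) else np.nan
                           for k in range(1, len(bins))])

      bins = np.linspace(-1.5, 1.5, 13)
      centers = 0.5 * (bins[:-1] + bins[1:])

      clean = drift_estimate(x, dt, bins)
      noisy = drift_estimate(x + rng.normal(scale=0.05, size=n), dt, bins)  # weak meas. noise

      # Weak measurement noise makes the naive drift estimate far too steep.
      for c, dc, dn in zip(centers, clean, noisy):
          print(f"x={c:+.2f}  true={-gamma * c:+.2f}  est={dc:+.2f}  noisy est={dn:+.2f}")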

  3. 10 CFR 75.23 - Operating records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Accounting and Control for Facilities § 75.23 Operating records. The operating records required by § 75.21... to control the quality of measurements, and the derived estimates of random and systematic error; (c...

  4. Seeing the conflict: an attentional account of reasoning errors.

    PubMed

    Mata, André; Ferreira, Mário B; Voss, Andreas; Kollei, Tanja

    2017-12-01

    In judgment and reasoning, intuition and deliberation can agree on the same responses, or they can be in conflict and suggest different responses. Incorrect responses to conflict problems have traditionally been interpreted as a sign of faulty problem solving: an inability to solve the conflict. However, such errors might emerge earlier, from insufficient attention to the conflict. To test this attentional hypothesis, we manipulated the conflict in reasoning problems and used eye-tracking to measure attention. Across several measures, correct responders paid more attention than incorrect responders to conflict problems, and they discriminated between conflict and no-conflict problems better than incorrect responders. These results are consistent with a two-stage account of reasoning, whereby sound problem solving in the second stage can only lead to accurate responses when sufficient attention is paid in the first stage.

  5. Performance analysis of Rogowski coils and the measurement of the total toroidal current in the ITER machine

    NASA Astrophysics Data System (ADS)

    Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.

    2017-12-01

    The paper carries out a comprehensive study of the performances of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for and the standard model which provides the classical expression to compute the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogue to the classical one, is presented. Models to determine the worst case and the statistical measurement accuracies are hence provided. The following sources of error are considered: effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance as needed for the operation of the ITER machine.

  6. Surprise beyond prediction error

    PubMed Central

    Chumbley, Justin R; Burke, Christopher J; Stephan, Klaas E; Friston, Karl J; Tobler, Philippe N; Fehr, Ernst

    2014-01-01

    Surprise drives learning. Various neural “prediction error” signals are believed to underpin surprise-based reinforcement learning. Here, we report a surprise signal that reflects reinforcement learning but is neither un/signed reward prediction error (RPE) nor un/signed state prediction error (SPE). To exclude these alternatives, we measured surprise responses in the absence of RPE and accounted for a host of potential SPE confounds. This new surprise signal was evident in ventral striatum, primary sensory cortex, frontal poles, and amygdala. We interpret these findings via a normative model of surprise. PMID:24700400

  7. Calibration system for radon EEC measurements.

    PubMed

    Mostafa, Y A M; Vasyanovich, M; Zhukovsky, M; Zaitceva, N

    2015-06-01

    The measurement of the radon equivalent equilibrium concentration (EECRn) is a very simple and quick technique for estimating the radon progeny level in dwellings or workplaces. The most typical methods of EECRn measurement are alpha radiometry or alpha spectrometry. In such techniques, the influence of alpha particle absorption in the filters and of filter effectiveness should be taken into account. In the authors' work, it is demonstrated that a more precise and less complicated calibration of EECRn-measuring equipment can be conducted by using a gamma spectrometer as a reference measuring device. It was demonstrated that for this calibration technique the systematic error does not exceed 3%. The random error of (214)Bi activity measurements is in the range 3-6%. In general, both these errors can be decreased. The measurements of EECRn by gamma spectrometry and improved alpha radiometry are in good agreement, but a systematic shift between average values can be observed. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Accounting for optical errors in microtensiometry.

    PubMed

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications for all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane to measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allows for correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup and increases experimental accuracy. In a broad sense, this work outlines the importance of optical errors in all DSA techniques. More specifically, these results have important implications for all microscale and microfluidic measurements of interface curvature. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Quotation accuracy in medical journal articles-a systematic review and meta-analysis.

    PubMed

    Jergas, Hannah; Baethge, Christopher

    2015-01-01

    Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose (quotation errors) may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened, we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9% (95% CI [8.4, 16.6]), 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4]. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress.

  10. Cross-Spectrum PM Noise Measurement, Thermal Energy, and Metamaterial Filters.

    PubMed

    Gruson, Yannick; Giordano, Vincent; Rohde, Ulrich L; Poddar, Ajay K; Rubiola, Enrico

    2017-03-01

    Virtually all commercial instruments for the measurement of the oscillator PM noise make use of the cross-spectrum method (arXiv:1004.5539 [physics.ins-det], 2010). High sensitivity is achieved by correlation and averaging on two equal channels, which measure the same input, and reject the background of the instrument. We show that a systematic error is always present if the thermal energy of the input power splitter is not accounted for. Such error can result in noise underestimation up to a few decibels in the lowest-noise quartz oscillators, and in an invalid measurement in the case of cryogenic oscillators. As another alarming fact, the presence of metamaterial components in the oscillator results in unpredictable behavior and large errors, even in well controlled experimental conditions. We observed a spread of 40 dB in the phase noise spectra of an oscillator, just replacing the output filter.

  11. A fully redundant double difference algorithm for obtaining minimum variance estimates from GPS observations

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.

    1986-01-01

    In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
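
    The weighting idea can be illustrated generically (this is not Melbourne's algorithm itself, and the toy numbers are assumptions): when the differenced measurement errors share a common component, the least-squares weight matrix should be the inverse of the full non-diagonal error covariance rather than the usual diagonal white-noise weight.

      import numpy as np

      rng = np.random.default_rng(6)

      # Toy regression y = H x + e, where e has a non-diagonal covariance R such as
      # arises when all double differences share a common reference measurement.
      n_obs, n_par = 40, 3
      H = rng.normal(size=(n_obs, n_par))
      x_true = np.array([1.0, -2.0, 0.5])

      sigma2 = 0.01
      R = sigma2 * (np.eye(n_obs) + np.ones((n_obs, n_obs)))   # common error -> off-diagonals
      e = rng.multivariate_normal(np.zeros(n_obs), R)
      y = H @ x_true + e

      # Ordinary (white-noise) weighting vs. full GLS weighting W = R^{-1}.
      x_ols = np.linalg.lstsq(H, y, rcond=None)[0]
      W = np.linalg.inv(R)
      x_gls = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

      print("true:", x_true)
      print("OLS :", np.round(x_ols, 3))
      print("GLS :", np.round(x_gls, 3))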

  12. Is Comprehension Necessary for Error Detection? A Conflict-Based Account of Monitoring in Speech Production

    ERIC Educational Resources Information Center

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…

  13. Comparing and Combining Data across Multiple Sources via Integration of Paired-sample Data to Correct for Measurement Error

    PubMed Central

    Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve

    2014-01-01

    Summary: In biomedical research such as the development of vaccines for infectious diseases or cancer, measures from the same assay are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across laboratories. We incorporate such adjustment in comparing and combining independent samples from different labs via integration of external data, collected on paired samples from the same two laboratories. We propose: 1) normalization of individual-level data from two laboratories to the same scale via the expectation of true measurements conditional on the observed; 2) comparison of mean assay values between two independent samples in the Main study accounting for inter-source measurement error; and 3) sample size calculations for the paired-sample study so that hypothesis testing error rates are appropriately controlled in the Main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements are known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real ELISpot assay data generated by two HIV vaccine laboratories. PMID:22764070
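
    A stripped-down version of the normalization step (illustrative only; the paper's error model, hypothesis-testing control, and sample-size machinery are not reproduced) uses external paired samples to regress lab A's values on lab B's, maps the main-study lab B data onto lab A's scale, and then compares group means.

      import numpy as np

      rng = np.random.default_rng(7)

      # External calibration data: the same specimens assayed by both laboratories.
      true_cal = rng.normal(5.0, 1.0, 300)
      labA_cal = true_cal + rng.normal(0, 0.20, 300)                  # lab A: small error
      labB_cal = 0.7 * true_cal + 2.0 + rng.normal(0, 0.35, 300)      # lab B: scale/shift + error

      # Estimate E[lab-A-scale value | lab-B measurement] by simple linear regression.
      slope, intercept = np.polyfit(labB_cal, labA_cal, 1)

      # Main study: two independent groups with the SAME true distribution, one assayed per lab.
      true_A = rng.normal(5.0, 1.0, 400)
      true_B = rng.normal(5.0, 1.0, 400)
      groupA = true_A + rng.normal(0, 0.20, 400)
      groupB_raw = 0.7 * true_B + 2.0 + rng.normal(0, 0.35, 400)
      groupB_cal = intercept + slope * groupB_raw                     # mapped onto lab A's scale

      print("naive mean difference     :", round(groupA.mean() - groupB_raw.mean(), 2))
      print("calibrated mean difference:", round(groupA.mean() - groupB_cal.mean(), 2))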

  14. Individual differences in error monitoring in healthy adults: psychological symptoms and antisocial personality characteristics.

    PubMed

    Chang, Wen-Pin; Davies, Patricia L; Gavin, William J

    2010-10-01

    Recent studies have investigated the relationship between psychological symptoms and personality traits and error monitoring measured by error-related negativity (ERN) and error positivity (Pe) event-related potential (ERP) components, yet there remains a paucity of studies examining the collective simultaneous effects of psychological symptoms and personality traits on error monitoring. This present study, therefore, examined whether measures of hyperactivity-impulsivity, depression, anxiety and antisocial personality characteristics could collectively account for significant interindividual variability of both ERN and Pe amplitudes, in 29 healthy adults with no known disorders, ages 18-30 years. The bivariate zero-order correlation analyses found that only the anxiety measure was significantly related to both ERN and Pe amplitudes. However, multiple regression analyses that included all four characteristic measures while controlling for number of segments in the ERP average revealed that both depression and antisocial personality characteristics were significant predictors for the ERN amplitudes whereas antisocial personality was the only significant predictor for the Pe amplitude. These findings suggest that psychological symptoms and personality traits are associated with individual variations in error monitoring in healthy adults, and future studies should consider these variables when comparing group difference in error monitoring between adults with and without disabilities. © 2010 The Authors. European Journal of Neuroscience © 2010 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  15. Who gets a mammogram amongst European women aged 50-69 years?

    PubMed Central

    2012-01-01

    On the basis of the Survey of Health, Ageing, and Retirement (SHARE), we analyse the determinants of who engages in mammography screening, focusing on European women aged 50-69 years. A special emphasis is put on the measurement error of subjective life expectancy and on the measurement and impact of physician quality. Our main findings are that physician quality, better education, having a partner, younger age and better health are associated with higher rates of receipt. The impact of subjective life expectancy on the screening decision substantially increases after taking measurement error into account. JEL Classification: C36, I11, I18. PMID:22828268

  16. Novel wave intensity analysis of arterial pulse wave propagation accounting for peripheral reflections

    PubMed Central

    Alastruey, Jordi; Hunt, Anthony A E; Weinberg, Peter D

    2014-01-01

    We present a novel analysis of arterial pulse wave propagation that combines traditional wave intensity analysis with identification of Windkessel pressures to account for the effect on the pressure waveform of peripheral wave reflections. Using haemodynamic data measured in vivo in the rabbit or generated numerically in models of human compliant vessels, we show that traditional wave intensity analysis identifies the timing, direction and magnitude of the predominant waves that shape aortic pressure and flow waveforms in systole, but fails to identify the effect of peripheral reflections. These reflections persist for several cardiac cycles and make up most of the pressure waveform, especially in diastole and early systole. Ignoring peripheral reflections leads to an erroneous indication of a reflection-free period in early systole and additional error in the estimates of (i) pulse wave velocity at the ascending aorta given by the PU–loop method (9.5% error) and (ii) transit time to a dominant reflection site calculated from the wave intensity profile (27% error). These errors decreased to 1.3% and 10%, respectively, when accounting for peripheral reflections. Using our new analysis, we investigate the effect of vessel compliance and peripheral resistance on wave intensity, peripheral reflections and reflections originating in previous cardiac cycles. PMID:24132888

  17. Previous Estimates of Mitochondrial DNA Mutation Level Variance Did Not Account for Sampling Error: Comparing the mtDNA Genetic Bottleneck in Mice and Humans

    PubMed Central

    Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.

    2010-01-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
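
    The paper derives its own standard-error analysis; for a quick feel of the magnitudes involved, the familiar normal-theory approximation SE(s²) ≈ s²·sqrt(2/(n−1)) (an assumption used here for illustration, not the authors' exact result) already shows how wide the error bars on a variance are below roughly 20 measurements.

      import numpy as np

      rng = np.random.default_rng(8)

      def variance_with_error_bar(sample):
          """Sample variance and its normal-theory standard error s^2 * sqrt(2/(n-1))."""
          n = len(sample)
          s2 = np.var(sample, ddof=1)
          return s2, s2 * np.sqrt(2.0 / (n - 1))

      # How wide are the error bars for typical heteroplasmy-style sample sizes?
      for n in (10, 20, 50, 200):
          sample = rng.normal(0.0, 1.0, n)          # true variance = 1
          s2, se = variance_with_error_bar(sample)
          print(f"n={n:3d}  variance={s2:5.2f}  +/- {se:4.2f}")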

  18. A numerical procedure for recovering true scattering coefficients from measurements with wide-beam antennas

    NASA Technical Reports Server (NTRS)

    Wang, Qinglin; Gogineni, S. P.

    1991-01-01

    A numerical procedure is presented for estimating the true scattering coefficient, σ⁰, from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of σ⁰ if the narrow-beam approximation is used in the retrieval process. To reduce this error, a correction procedure is proposed that estimates the error resulting from the narrow-beam approximation and uses it to obtain a more accurate estimate of σ⁰. An exponential model is assumed to take into account the variation of σ⁰ with incidence angle, and the model parameters are estimated from measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of σ⁰ obtained with wide-beam antennas. The proposed procedure is also shown to be insensitive to the assumed σ⁰ model.

  19. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    PubMed

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
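
    A minimal SIMEX sketch for a single error-prone covariate in ordinary linear regression (not the marginal-structural-model weighting used in the paper; the error variance, sample size, and quadratic extrapolant are assumptions) illustrates the add-noise-then-extrapolate idea.

      import numpy as np

      rng = np.random.default_rng(9)

      # True model y = 1 + 2*x, but we only observe w = x + u with known error variance.
      n, sigma_u2 = 5000, 0.5
      x = rng.normal(size=n)
      y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
      w = x + rng.normal(scale=np.sqrt(sigma_u2), size=n)

      def slope(xv, yv):
          return np.polyfit(xv, yv, 1)[0]

      # SIMEX: add extra error so the total is (1 + lambda) * sigma_u2, record the naive
      # slope at each lambda, then extrapolate the trend back to lambda = -1 (no error).
      lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
      mean_slopes = []
      for lam in lambdas:
          sims = [slope(w + rng.normal(scale=np.sqrt(lam * sigma_u2), size=n), y)
                  for _ in range(50)]
          mean_slopes.append(np.mean(sims))

      # Quadratic extrapolation reduces, but need not fully remove, the attenuation bias.
      quad = np.polyfit(lambdas, mean_slopes, 2)
      simex_slope = np.polyval(quad, -1.0)

      print("naive slope :", round(mean_slopes[0], 3))
      print("SIMEX slope :", round(simex_slope, 3))
      print("true slope  : 2.0")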

  20. Mixed-effects location and scale Tobit joint models for heterogeneous longitudinal data with skewness, detection limits, and measurement errors.

    PubMed

    Lu, Tao

    2017-01-01

    The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for heteroscedasticity commonly observed in between- and within-subject variation. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limits of detection, and measurement errors in covariates, which are typically observed in the collection of longitudinal data from many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method. Alternative models under different conditions are compared.

  1. Accounting for dropout bias using mixed-effects models.

    PubMed

    Mallinckrodt, C H; Clark, W S; David, S R

    2001-01-01

    Treatment effects are often evaluated by comparing change over time in outcome measures. However, valid analyses of longitudinal data can be problematic when subjects discontinue (dropout) prior to completing the study. This study assessed the merits of likelihood-based repeated measures analyses (MMRM) compared with fixed-effects analysis of variance where missing values were imputed using the last observation carried forward approach (LOCF) in accounting for dropout bias. Comparisons were made in simulated data and in data from a randomized clinical trial. Subject dropout was introduced in the simulated data to generate ignorable and nonignorable missingness. Estimates of treatment group differences in mean change from baseline to endpoint from MMRM were, on average, markedly closer to the true value than estimates from LOCF in every scenario simulated. Standard errors and confidence intervals from MMRM accurately reflected the uncertainty of the estimates, whereas standard errors and confidence intervals from LOCF underestimated uncertainty.
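
    A toy simulation of the phenomenon described above (the parameters and dropout rule are assumptions, and observed-data means stand in for a full likelihood-based MMRM fit) shows how outcome-dependent dropout combined with LOCF distorts the estimated treatment difference in change from baseline.

      import numpy as np

      rng = np.random.default_rng(10)
      n_per_arm, n_visits = 2000, 5

      def simulate_arm(slope):
          """Linear improvement plus subject effects and visit-level noise."""
          subj = rng.normal(0, 1.0, (n_per_arm, 1))
          t = np.arange(n_visits)
          return 10.0 + subj + slope * t + rng.normal(0, 1.0, (n_per_arm, n_visits))

      placebo, active = simulate_arm(-0.5), simulate_arm(-1.0)   # true endpoint difference = -2

      def locf_endpoint_change(y):
          """Drop out with higher probability when a visit score is high, then carry forward."""
          change = np.empty(len(y))
          for i, row in enumerate(y):
              last = row[0]
              for visit in range(1, n_visits):
                  if rng.random() < 0.25 * (row[visit] > 10.5):   # outcome-dependent dropout
                      break
                  last = row[visit]
              change[i] = last - row[0]
          return change.mean()

      locf_diff = locf_endpoint_change(active) - locf_endpoint_change(placebo)
      full_diff = (active[:, -1] - active[:, 0]).mean() - (placebo[:, -1] - placebo[:, 0]).mean()

      print("true endpoint difference       : -2.0")
      print("difference with full follow-up :", round(full_diff, 2))
      print("difference after LOCF dropout  :", round(locf_diff, 2))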

  2. A New Correction Technique for Strain-Gage Measurements Acquired in Transient-Temperature Environments

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance

    1996-01-01

    Significant strain-gage errors may exist in measurements acquired in transient-temperature environments if conventional correction methods are applied. As heating or cooling rates increase, temperature gradients between the strain-gage sensor and substrate surface increase proportionally. These temperature gradients introduce strain-measurement errors that are currently neglected in both conventional strain-correction theory and practice. Therefore, the conventional correction theory has been modified to account for these errors. A new experimental method has been developed to correct strain-gage measurements acquired in environments experiencing significant temperature transients. The new correction technique has been demonstrated through a series of tests in which strain measurements were acquired for temperature-rise rates ranging from 1 to greater than 100 degrees F/sec. Strain-gage data from these tests have been corrected with both the new and conventional methods and then compared with an analysis. Results show that, for temperature-rise rates greater than 10 degrees F/sec, the strain measurements corrected with the conventional technique produced strain errors that deviated from analysis by as much as 45 percent, whereas results corrected with the new technique were in good agreement with analytical results.

  3. Error analysis and corrections to pupil diameter measurements with Langley Research Center's oculometer

    NASA Technical Reports Server (NTRS)

    Fulton, C. L.; Harris, R. L., Jr.

    1980-01-01

    Factors that can affect oculometer measurements of pupil diameter are: horizontal (azimuth) and vertical (elevation) viewing angle of the pilot; refraction of the eye and cornea; changes in distance of eye to camera; illumination intensity of light on the eye; and counting sensitivity of scan lines used to measure diameter, and output voltage. To estimate the accuracy of the measurements, an artificial eye was designed and a series of runs performed with the oculometer system. When refraction effects are included, results show that pupil diameter is a parabolic function of the azimuth angle similar to the cosine function predicted by theory: this error can be accounted for by using a correction equation, reducing the error from 6% to 1.5% of the actual diameter. Elevation angle and illumination effects were found to be negligible. The effects of counting sensitivity and output voltage can be calculated directly from system documentation. The overall accuracy of the unmodified system is about 6%. After correcting for the azimuth angle errors, the overall accuracy is approximately 2%.

  4. Quantifying Adventitious Error in a Covariance Structure as a Random Effect

    PubMed Central

    Wu, Hao; Browne, Michael W.

    2017-01-01

    We present an approach to quantifying errors in covariance structures in which adventitious error, identified as the process underlying the discrepancy between the population and the structured model, is explicitly modeled as a random effect with a distribution, and the dispersion parameter of this distribution to be estimated gives a measure of misspecification. Analytical properties of the resultant procedure are investigated and the measure of misspecification is found to be related to the RMSEA. An algorithm is developed for numerical implementation of the procedure. The consistency and asymptotic sampling distributions of the estimators are established under a new asymptotic paradigm and an assumption weaker than the standard Pitman drift assumption. Simulations validate the asymptotic sampling distributions and demonstrate the importance of accounting for the variations in the parameter estimates due to adventitious error. Two examples are also given as illustrations. PMID:25813463

  5. #2 - An Empirical Assessment of Exposure Measurement Error ...

    EPA Pesticide Factsheets

    Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. The HEASD research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  6. A Comparison of Four Approaches to Account for Method Effects in Latent State-Trait Analyses

    ERIC Educational Resources Information Center

    Geiser, Christian; Lockhart, Ginger

    2012-01-01

    Latent state-trait (LST) analysis is frequently applied in psychological research to determine the degree to which observed scores reflect stable person-specific effects, effects of situations and/or person-situation interactions, and random measurement error. Most LST applications use multiple repeatedly measured observed variables as indicators…

  7. Using Student Test Scores to Measure Teacher Performance: Some Problems in the Design and Implementation of Evaluation Systems

    ERIC Educational Resources Information Center

    Ballou, Dale; Springer, Matthew G.

    2015-01-01

    Our aim in this article is to draw attention to some underappreciated problems in the design and implementation of evaluation systems that incorporate value-added measures. We focus on four: (1) taking into account measurement error in teacher assessments, (2) revising teachers' scores as more information becomes available about their students,…

  8. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses will look at the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
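
    As a rough, hedged illustration of the idea of letting the actual residuals set the covariance scale, the sketch below computes a residual-scaled covariance for a batch weighted least-squares fit. It is a simple stand-in in the same spirit, not the paper's exact formulation.

        import numpy as np

        def empirical_state_covariance(H, W, residuals):
            """Residual-scaled state error covariance for batch weighted least squares.
            H: (m, n) measurement Jacobian; W: (m, m) weight matrix;
            residuals: (m,) observed-minus-computed measurement residuals."""
            m, n = H.shape
            P_theory = np.linalg.inv(H.T @ W @ H)              # assumed-noise mapping into state space
            s2 = float(residuals @ W @ residuals) / (m - n)    # average weighted residual variance
            return s2 * P_theory                               # unmodeled errors inflate the covariance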

  9. Quotation accuracy in medical journal articles—a systematic review and meta-analysis

    PubMed Central

    Jergas, Hannah

    2015-01-01

    Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose—quotation errors—may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6], 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4], respectively. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress. PMID:26528420

  10. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series.

    PubMed

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-07-17

    Continuity, real-time, and accuracy are the key technical indexes of evaluating comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of periodic oscillation errors. The innovative method gains multiple sets of navigation solutions with different phase delays in virtue of the forecasted time series acquired through the measurement data of the inertial measurement unit (IMU). With the help of curve-fitting based on least square method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted on account of the principle of eliminating the periodic oscillation signal with a half-wave delay by mean value. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS.
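
    A minimal sketch of the half-wave-delay idea is shown below: averaging a signal with a copy of itself delayed by half the oscillation period cancels a sinusoid of that period. The paper constructs the delayed solution from a forecasted time series rather than from recorded data, so this is only an illustration of the cancellation principle.

        import numpy as np

        SCHULER_PERIOD_S = 84.4 * 60.0   # ~84.4 min; the Foucault and Earth periods differ

        def restrict_oscillation(series, dt_s, period_s=SCHULER_PERIOD_S):
            """Suppress a periodic error by averaging the series with a copy
            delayed by half the oscillation period (half-wave-delay mean)."""
            s = np.asarray(series, dtype=float)
            half = int(round(0.5 * period_s / dt_s))
            out = s.copy()
            out[half:] = 0.5 * (s[half:] + s[:-half])
            return out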

  11. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series

    PubMed Central

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-01-01

    Continuity, real-time, and accuracy are the key technical indexes of evaluating comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of periodic oscillation errors. The innovative method gains multiple sets of navigation solutions with different phase delays in virtue of the forecasted time series acquired through the measurement data of the inertial measurement unit (IMU). With the help of curve-fitting based on least square method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted on account of the principle of eliminating the periodic oscillation signal with a half-wave delay by mean value. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283

  12. Analytical N beam position monitor method

    NASA Astrophysics Data System (ADS)

    Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.

    2017-11-01

    Measurement and correction of focusing errors is of great importance for performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
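
    For context, a hedged sketch of the classic three-BPM relation that the N-BPM family of methods extends: the β-function at the first BPM follows from measured and model phase advances between three BPMs. It is shown only to illustrate the principle; the analytical N-BPM method combines many BPM combinations and propagates random and systematic errors analytically.

        import math

        def beta_from_phase_3bpm(phi12, phi13, phi12_model, phi13_model, beta1_model):
            """Three-BPM estimate of beta at BPM 1 from measured (phi12, phi13) and
            model (phi12_model, phi13_model) phase advances, all in radians."""
            cot = lambda x: 1.0 / math.tan(x)
            return beta1_model * (cot(phi12) - cot(phi13)) / (cot(phi12_model) - cot(phi13_model))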

  13. 40 CFR 60.4156 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Generating Units Hg Allowance Tracking System § 60.4156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Hg Allowance Tracking System...

  14. Everyday action in schizophrenia: performance patterns and underlying cognitive mechanisms.

    PubMed

    Kessler, Rachel K; Giovannetti, Tania; MacMullen, Laura R

    2007-07-01

    Everyday action is impaired among individuals with schizophrenia, yet few studies have characterized the nature of this deficit using performance-based measures. This study examined the performance of 20 individuals with schizophrenia or schizoaffective disorder on the Naturalistic Action Test (M. F. Schwartz, L. J. Buxbaum, M. Ferraro, T. Veramonti, & M. Segal, 2003). Performance was coded to examine overall impairment, task accomplishment, and error patterns and was compared with that of healthy controls (n = 28) and individuals with mild dementia (n = 23). Additionally, 2 competing accounts of everyday action deficits, the resource theory and an executive account, were evaluated. When compared with controls, the participants with schizophrenia demonstrated impaired performance. Relative to dementia patients, participants with schizophrenia obtained higher accomplishment scores but committed comparable rates of errors. Moreover, distributions of error types for the 2 groups differed, with the participants with schizophrenia demonstrating greater proportions of errors associated with executive dysfunction. This is the 1st study to show different Naturalistic Action Test performance patterns between 2 neurologically impaired populations. The distinct performance pattern demonstrated by individuals with schizophrenia reflects specific deficits in executive function.

  15. Coil motion effects in watt balances: a theoretical check

    NASA Astrophysics Data System (ADS)

    Li, Shisong; Schlamminger, Stephan; Haddad, Darine; Seifert, Frank; Chao, Leon; Pratt, Jon R.

    2016-04-01

    A watt balance is a precision apparatus for the measurement of the Planck constant that has been proposed as a primary method for realizing the unit of mass in a revised International System of Units. In contrast to an ampere balance, which was historically used to realize the unit of current in terms of the kilogram, the watt balance relates electrical and mechanical units through a virtual power measurement and has far greater precision. However, because the virtual power measurement requires the execution of a prescribed motion of a coil in a fixed magnetic field, systematic errors introduced by horizontal and rotational deviations of the coil from its prescribed path will compromise the accuracy. We model these potential errors using an analysis that accounts for the fringing field in the magnet, creating a framework for assessing the impact of this class of errors on the uncertainty of watt balance results.

  16. Dynamic gas temperature measurement system

    NASA Technical Reports Server (NTRS)

    Elmore, D. L.; Robinson, W. W.; Watkins, W. B.

    1983-01-01

    A gas temperature measurement system with a compensated frequency response of 1 kHz and the capability to operate in the exhaust of a gas turbine combustor was developed. Environmental guidelines for this measurement are presented, followed by a preliminary design of the selected measurement method. Transient thermal conduction effects were identified as important; a preliminary finite-element conduction model quantified the errors expected when conduction is neglected. A compensation method was developed to account for the effects of conduction and convection. This method was verified in analog electrical simulations and used to compensate dynamic temperature data from a laboratory combustor and a gas turbine engine. Detailed data compensations are presented. An analysis of error sources in the method was performed to derive confidence levels for the compensated data.
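
    A minimal sketch of the convective part of such a compensation is a first-order inverse filter, reconstructing the gas temperature as T_gas ≈ T_meas + τ·dT_meas/dt for a sensor time constant τ. The system described above additionally models transient conduction, which this sketch ignores.

        import numpy as np

        def compensate_first_order(t_meas_c, dt_s, tau_s):
            """First-order lag compensation for a temperature probe:
            T_gas ~= T_meas + tau * dT_meas/dt (convection lag only)."""
            t = np.asarray(t_meas_c, dtype=float)
            return t + tau_s * np.gradient(t, dt_s)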

  17. Prediction Errors but Not Sharpened Signals Simulate Multivoxel fMRI Patterns during Speech Perception

    PubMed Central

    Davis, Matthew H.

    2016-01-01

    Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains. PMID:27846209

  18. Characterizing error distributions for MISR and MODIS optical depth data

    NASA Astrophysics Data System (ADS)

    Paradise, S.; Braverman, A.; Kahn, R.; Wilson, B.

    2008-12-01

    The Multi-angle Imaging SpectroRadiometer (MISR) and Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's EOS satellites collect massive, long term data records on aerosol amounts and particle properties. MISR and MODIS have different but complementary sampling characteristics. In order to realize maximum scientific benefit from these data, the nature of their error distributions must be quantified and understood so that discrepancies between them can be rectified and their information combined in the most beneficial way. By 'error' we mean all sources of discrepancies between the true value of the quantity of interest and the measured value, including instrument measurement errors, artifacts of retrieval algorithms, and differential spatial and temporal sampling characteristics. Previously in [Paradise et al., Fall AGU 2007: A12A-05] we presented a unified, global analysis and comparison of MISR and MODIS measurement biases and variances over lives of the missions. We used AErosol RObotic NETwork (AERONET) data as ground truth and evaluated MISR and MODIS optical depth distributions relative to AERONET using simple linear regression. However, AERONET data are themselves instrumental measurements subject to sources of uncertainty. In this talk, we discuss results from an improved analysis of MISR and MODIS error distributions that uses errors-in-variables regression, accounting for uncertainties in both the dependent and independent variables. We demonstrate on optical depth data, but the method is generally applicable to other aerosol properties as well.
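
    A hedged sketch of the errors-in-variables idea is Deming regression, which allows for error in both the satellite and the ground-truth (AERONET) values. The error-variance ratio delta is an assumption of the sketch, not a value from the study, and the authors' actual analysis may differ.

        import numpy as np

        def deming_regression(x, y, delta=1.0):
            """Errors-in-variables (Deming) fit of y on x when both carry error.
            delta is the assumed ratio of y-error variance to x-error variance."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
            sxy = np.cov(x, y, ddof=1)[0, 1]
            slope = ((syy - delta * sxx) + np.sqrt((syy - delta * sxx) ** 2
                     + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
            return slope, y.mean() - slope * x.mean()   # (slope, intercept)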

  19. Monte Carlo simulations of the impact of troposphere, clock and measurement errors on the repeatability of VLBI positions

    NASA Astrophysics Data System (ADS)

    Pany, A.; Böhm, J.; MacMillan, D.; Schuh, H.; Nilsson, T.; Wresnik, J.

    2011-01-01

    Within the International VLBI Service for Geodesy and Astrometry (IVS) Monte Carlo simulations have been carried out to design the next generation VLBI system ("VLBI2010"). Simulated VLBI observables were generated taking into account the three most important stochastic error sources in VLBI, i.e. wet troposphere delay, station clock, and measurement error. Based on realistic physical properties of the troposphere and clocks we ran simulations to investigate the influence of the troposphere on VLBI analyses, and to gain information about the role of clock performance and measurement errors of the receiving system in the process of reaching VLBI2010's goal of mm position accuracy on a global scale. Our simulations confirm that the wet troposphere delay is the most important of these three error sources. We did not observe significant improvement of geodetic parameters if the clocks were simulated with an Allan standard deviation better than 1 × 10-14 at 50 min and found the impact of measurement errors to be relatively small compared with the impact of the troposphere. Along with simulations to test different network sizes, scheduling strategies, and antenna slew rates these studies were used as a basis for the definition and specification of VLBI2010 antennas and recording system and might also be an example for other space geodetic techniques.

  20. In-Bed Accountability Development for a Passively Cooled, Electrically Heated Hydride (PACE) Bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, J.E.

    A nominal 1500 STP-L PAssively Cooled, Electrically heated hydride (PACE) Bed has been developed for implementation into a new Savannah River Site tritium project. The 1.2 meter (four-foot) long process vessel contains an internal 'U-tube' for tritium In-Bed Accountability (IBA) measurements. IBA will be performed on six, 12.6 kg production metal hydride storage beds. IBA tests were done on a prototype bed using electric heaters to simulate the radiolytic decay of tritium. Tests had gas flows from 10 to 100 SLPM through the U-tube or 100 SLPM through the bed's vacuum jacket. IBA inventory measurement errors at the 95% confidence level were calculated using the correlation of IBA gas temperature rise, or (hydride) bed temperature rise above ambient temperature, versus simulated tritium inventory. Prototype bed IBA inventory errors at 100 SLPM were the largest for gas flows through the vacuum jacket: 15.2 grams for the bed temperature rise and 11.5 grams for the gas temperature rise. For a 100 SLPM U-tube flow, the inventory error was 2.5 grams using bed temperature rise and 1.6 grams using gas temperature rise. For 50 to 100 SLPM U-tube flows, the IBA gas temperature rise inventory errors were nominally one to two grams, increasing above four grams for flows less than 50 SLPM. For 50 to 100 SLPM U-tube flows, the IBA bed temperature rise inventory errors were greater than the gas temperature rise errors, but similar errors were found for both methods at gas flows of 20, 30, and 40 SLPM. Electric heater IBA tests were done for six production hydride beds using a 45 SLPM U-tube gas flow. Of the duplicate runs performed on these beds, five of the six beds produced IBA inventory errors of approximately three grams, consistent with results obtained in the laboratory prototype tests.

  1. In-Bed Accountability Development for a Passively Cooled, Electrically Heated Hydride (PACE) Bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    KLEIN, JAMES

    A nominal 1500 STP-L PAssively Cooled, Electrically heated hydride (PACE) Bed has been developed for implementation into a new Savannah River Site tritium project. The 1.2 meter (four-foot) long process vessel contains an internal "U-tube" for tritium In-Bed Accountability (IBA) measurements. IBA will be performed on six, 12.6 kg production metal hydride storage beds. IBA tests were done on a prototype bed using electric heaters to simulate the radiolytic decay of tritium. Tests had gas flows from 10 to 100 SLPM through the U-tube or 100 SLPM through the bed's vacuum jacket. IBA inventory measurement errors at the 95 percent confidence level were calculated using the correlation of IBA gas temperature rise, or (hydride) bed temperature rise above ambient temperature, versus simulated tritium inventory. Prototype bed IBA inventory errors at 100 SLPM were the largest for gas flows through the vacuum jacket: 15.2 grams for the bed temperature rise and 11.5 grams for the gas temperature rise. For a 100 SLPM U-tube flow, the inventory error was 2.5 grams using bed temperature rise and 1.6 grams using gas temperature rise. For 50 to 100 SLPM U-tube flows, the IBA gas temperature rise inventory errors were nominally one to two grams that increased above four grams for flows less than 50 SLPM. For 50 to 100 SLPM U-tube flows, the IBA bed temperature rise inventory errors were greater than the gas temperature rise errors, but similar errors were found for both methods at gas flows of 20, 30, and 40 SLPM. Electric heater IBA tests were done for six production hydride beds using a 45 SLPM U-tube gas flow. Of the duplicate runs performed on these beds, five of the six beds produced IBA inventory errors of approximately three grams: consistent with results obtained in the laboratory prototype tests.
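
    The inventory estimate itself rests on a simple calibration: fit the temperature rise above ambient against the simulated inventory, then invert the fit for unknown beds. A minimal sketch of that step is below; the reported 95% inventory errors come from the scatter of such a calibration, which is not reproduced here.

        import numpy as np

        def iba_calibration(inventory_g, temp_rise_c):
            """Fit temperature rise vs. simulated tritium inventory and return a
            function converting a measured rise (deg C) back into grams."""
            slope, intercept = np.polyfit(np.asarray(inventory_g, float),
                                          np.asarray(temp_rise_c, float), 1)
            return lambda rise_c: (rise_c - intercept) / slope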

  2. SMOS: a satellite mission to measure ocean surface salinity

    NASA Astrophysics Data System (ADS)

    Font, Jordi; Kerr, Yann H.; Srokosz, Meric A.; Etcheto, Jacqueline; Lagerloef, Gary S.; Camps, Adriano; Waldteufel, Philippe

    2001-01-01

    The ESA's SMOS (Soil Moisture and Ocean Salinity) Earth Explorer Opportunity Mission will be launched by 2005. Its baseline payload is a microwave L-band (21 cm, 1.4 GHz) 2D interferometric radiometer, Y-shaped, with three arms 4.5 m long. This frequency allows the measurement of brightness temperature (Tb) under the best conditions to retrieve soil moisture and sea surface salinity (SSS). Unlike other oceanographic variables, until now it has not been possible to measure salinity from space. However, large ocean areas lack significant salinity measurements. The 2D interferometer will measure Tb at large and different incidence angles, for two polarizations. It is possible to obtain SSS from L-band passive microwave measurements if the other factors influencing Tb (SST, surface roughness, foam, sun glint, rain, ionospheric effects and galactic/cosmic background radiation) can be accounted for. Since the radiometric sensitivity is low, SSS cannot be recovered to the required accuracy from a single measurement, as the error is about 1-2 psu. If the errors contributing to the uncertainty in Tb are random, averaging the independent data and views along the track and considering a 200 km square allows the error to be reduced to 0.1-0.2 psu, assuming all ancillary errors are budgeted.
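
    The averaging argument in the last sentence is the usual 1/sqrt(N) reduction for independent, unbiased errors; a toy calculation is sketched below. The number of independent looks is an illustrative assumption, since the abstract states the 0.1-0.2 psu budget rather than deriving it.

        import math

        def averaged_salinity_error(single_shot_error_psu, n_independent):
            """Standard error of the mean of n independent, unbiased SSS retrievals."""
            return single_shot_error_psu / math.sqrt(n_independent)

        # e.g. averaged_salinity_error(1.5, 150) -> ~0.12 psu within a 200 km square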

  3. Optics measurement algorithms and error analysis for the proton energy frontier

    NASA Astrophysics Data System (ADS)

    Langner, A.; Tomás, R.

    2015-03-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is shown to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed; due to the improved algorithms, the derived optical parameters have significantly higher precision, with average error bars reduced by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding the emittance evolution during the energy ramp.

  4. Infrared Retrievals of Ice Cloud Properties and Uncertainties with an Optimal Estimation Retrieval Method

    NASA Astrophysics Data System (ADS)

    Wang, C.; Platnick, S. E.; Meyer, K.; Zhang, Z.

    2014-12-01

    We developed an optimal estimation (OE)-based method using infrared (IR) observations to retrieve ice cloud optical thickness (COT), cloud effective radius (CER), and cloud top height (CTH) simultaneously. The OE-based retrieval is coupled with a fast IR radiative transfer model (RTM) that simulates observations of different sensors, and corresponding Jacobians in cloudy atmospheres. Ice cloud optical properties are calculated using the MODIS Collection 6 (C6) ice crystal habit (severely roughened hexagonal column aggregates). The OE-based method can be applied to various IR space-borne and airborne sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the enhanced MODIS Airborne Simulator (eMAS), by optimally selecting IR bands with high information content. Four major error sources (i.e., the measurement error, fast RTM error, model input error, and pre-assumed ice crystal habit error) are taken into account in our OE retrieval method. We show that measurement error and fast RTM error have little impact on cloud retrievals, whereas errors from the model input and pre-assumed ice crystal habit significantly increase retrieval uncertainties when the cloud is optically thin. Comparisons between the OE-retrieved ice cloud properties and other operational cloud products (e.g., the MODIS C6 and CALIOP cloud products) are shown.

  5. 76 FR 79122 - Magnuson-Stevens Act Provisions; Fisheries Off West Coast States; Pacific Coast Groundfish...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-21

    ... management measures for the remainder of the biennial period that would take into account new knowledge... precautionary, in response to the discovery of an error in the methods that were used to estimate landings of...

  6. Results of the first complete static calibration of the RSRA rotor-load-measurement system

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.

    1984-01-01

    The compound Rotor System Research Aircraft (RSRA) is designed to make high-accuracy, simultaneous measurements of all rotor forces and moments in flight. Physical calibration of the rotor force- and moment-measurement system when installed in the aircraft is required to account for known errors and to ensure that measurement-system accuracy is traceable to the National Bureau of Standards. The first static calibration and associated analysis have been completed with good results. Hysteresis was a potential cause of static calibration errors, but was found to be negligible in flight compared to full-scale loads, and analytical methods have been devised to eliminate hysteresis effects on calibration data. Flight tests confirmed that the calibrated rotor-load-measurement system performs as expected in flight and that it can dependably make direct measurements of fuselage vertical drag in hover.

  7. Identifying Bearing Rotodynamic Coefficients Using an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Miller, Brad A.; Howard, Samuel A.

    2008-01-01

    An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.

  8. Trans-dimensional inversion of microtremor array dispersion data with hierarchical autoregressive error models

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.

    2012-02-01

    This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.

  9. Error mapping of high-speed AFM systems

    NASA Astrophysics Data System (ADS)

    Klapetek, Petr; Picco, Loren; Payton, Oliver; Yacoot, Andrew; Miles, Mervyn

    2013-02-01

    In recent years, there have been several advances in the development of high-speed atomic force microscopes (HSAFMs) to obtain images with nanometre vertical and lateral resolution at frame rates in excess of 1 fps. To date, these instruments are lacking in metrology for their lateral scan axes; however, by imaging a series of two-dimensional lateral calibration standards, it has been possible to obtain information about the errors associated with these HSAFM scan axes. Results from initial measurements are presented in this paper and show that the scan speed needs to be taken into account when performing a calibration as it can lead to positioning errors of up to 3%.

  10. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    NASA Technical Reports Server (NTRS)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age are characterized by combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated [1]. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. Comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be scaled for in the retrieval algorithms to create a set of data which is closer to the TCCON measurements [1]. Using this process, the algorithms are being developed to reduce bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but through correcting small errors contained in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  11. The effect of the electronic transmission of prescriptions on dispensing errors and prescription enhancements made in English community pharmacies: a naturalistic stepped wedge study

    PubMed Central

    Franklin, Bryony Dean; Reynolds, Matthew; Sadler, Stacey; Hibberd, Ralph; Avery, Anthony J; Armstrong, Sarah J; Mehta, Rajnikant; Boyd, Matthew J; Barber, Nick

    2014-01-01

    Objectives To compare prevalence and types of dispensing errors and pharmacists’ labelling enhancements, for prescriptions transmitted electronically versus paper prescriptions. Design Naturalistic stepped wedge study. Setting 15 English community pharmacies. Intervention Electronic transmission of prescriptions between prescriber and pharmacy. Main outcome measures Prevalence of labelling errors, content errors and labelling enhancements (beneficial additions to the instructions), as identified by researchers visiting each pharmacy. Results Overall, we identified labelling errors in 5.4% of 16 357 dispensed items, and content errors in 1.4%; enhancements were made for 13.6%. Pharmacists also edited the label for a further 21.9% of electronically transmitted items. Electronically transmitted prescriptions had a higher prevalence of labelling errors (7.4% of 3733 items) than other prescriptions (4.8% of 12 624); OR 1.46 (95% CI 1.21 to 1.76). There was no difference for content errors or enhancements. The increase in labelling errors was mainly accounted for by errors (mainly at one pharmacy) involving omission of the indication, where specified by the prescriber, from the label. A sensitivity analysis in which these cases (n=158) were not considered errors revealed no remaining difference between prescription types. Conclusions We identified a higher prevalence of labelling errors for items transmitted electronically, but this was predominantly accounted for by local practice in a single pharmacy, independent of prescription type. Community pharmacists made labelling enhancements to about one in seven dispensed items, whether electronically transmitted or not. Community pharmacists, prescribers, professional bodies and software providers should work together to agree how items should be dispensed and labelled to best reap the benefits of electronically transmitted prescriptions. Community pharmacists need to ensure their computer systems are promptly updated to help reduce errors. PMID:24742778

  12. Improvement of VLBI EOP Accuracy and Precision

    NASA Technical Reports Server (NTRS)

    MacMillan, Daniel; Ma, Chopo

    2000-01-01

    In the CORE program, EOP measurements will be made with several different networks, each operating on a different day. It is essential that systematic differences between EOP derived by the different networks be minimized. Observed biases between the simultaneous CORE-A and NEOS-A sessions are about 60-130 µas for PM, UT1 and nutation parameters. After removing biases, the observed rms differences are consistent with an increase in the formal precision of the measurements by factors ranging from 1.05 to 1.4. We discuss the possible sources of unmodeled error that account for these factors and the biases, and the sensitivities of the network differences to modeling errors. We also discuss differences between VLBI and GPS PM measurements.

  13. A Bayesian framework for infrasound location

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.; Arrowsmith, Stephen J.; Anderson, Dale N.

    2010-04-01

    We develop a framework for location of infrasound events using backazimuth and infrasonic arrival times from multiple arrays. Bayesian infrasonic source location (BISL) developed here estimates event location and associated credibility regions. BISL accounts for unknown source-to-array path or phase by formulating infrasonic group velocity as random. Differences between observed and predicted source-to-array traveltimes are partitioned into two additive Gaussian sources, measurement error and model error, the second of which accounts for the unknown influence of wind and temperature on path. By applying the technique to both synthetic tests and ground-truth events, we highlight the complementary nature of back azimuths and arrival times for estimating well-constrained event locations. BISL is an extension to methods developed earlier by Arrowsmith et al. that provided simple bounds on location using a grid-search technique.
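
    A toy analogue of the approach, hedged and heavily simplified, is a grid search on a flat x/y plane with Gaussian misfits on traveltime and backazimuth, the traveltime variance being split into measurement and model parts as described above. The celerity, the sigma values, and the assumption of a known origin time t0 are choices of the sketch, not BISL's calibrated priors.

        import numpy as np

        def location_posterior(arrays_xy, t_obs, baz_obs, t0, grid_x, grid_y,
                               celerity=0.30, sig_meas=20.0, sig_model=40.0, sig_baz=5.0):
            """Normalized posterior over an x/y grid (km) from arrival times (s)
            and backazimuths (deg) at several arrays, with Gaussian errors."""
            X, Y = np.meshgrid(grid_x, grid_y)
            loglike = np.zeros_like(X, dtype=float)
            var_t = sig_meas ** 2 + sig_model ** 2          # measurement + model error variance
            for (ax, ay), t, baz in zip(arrays_xy, t_obs, baz_obs):
                dist = np.hypot(X - ax, Y - ay)
                t_pred = t0 + dist / celerity
                baz_pred = np.degrees(np.arctan2(X - ax, Y - ay)) % 360.0
                dbaz = (baz - baz_pred + 180.0) % 360.0 - 180.0
                loglike += -0.5 * (t - t_pred) ** 2 / var_t - 0.5 * (dbaz / sig_baz) ** 2
            post = np.exp(loglike - loglike.max())
            return post / post.sum()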

  14. Spatial-temporal features of thermal images for Carpal Tunnel Syndrome detection

    NASA Astrophysics Data System (ADS)

    Estupinan Roldan, Kevin; Ortega Piedrahita, Marco A.; Benitez, Hernan D.

    2014-02-01

    Disorders associated with repeated trauma account for about 60% of all occupational illnesses, Carpal Tunnel Syndrome (CTS) being the most consulted today. Infrared Thermography (IT) has come to play an important role in the field of medicine. IT is non-invasive and detects diseases based on measuring temperature variations. IT represents a possible alternative to prevalent methods for diagnosis of CTS (i.e. nerve conduction studies and electromiography). This work presents a set of spatial-temporal features extracted from thermal images taken in healthy and ill patients. Support Vector Machine (SVM) classifiers test this feature space with Leave One Out (LOO) validation error. The results of the proposed approach show linear separability and lower validation errors when compared to features used in previous works that do not account for temperature spatial variability.

  15. Elevation correction factor for absolute pressure measurements

    NASA Technical Reports Server (NTRS)

    Panek, Joseph W.; Sorrells, Mark R.

    1996-01-01

    With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a pressure gradient that is inversely proportional to height within the interface tube. The pressure at the bottom of the tube will be higher than the pressure at the top due to the weight of the tube's column of air. Tubes with higher pressures will exhibit larger absolute errors due to the higher air density. The above effect is well documented but has generally been taken into account with large elevations only. With error analysis techniques, the loss in accuracy from elevation can be easily quantified. Correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
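
    The correction itself is just the hydrostatic offset of the air column in the interface tube, dP = rho*g*h; a minimal sketch follows. The sea-level density is an assumption here, and in practice it should be evaluated at the tube's actual pressure and temperature.

        def elevation_correction_pa(height_m, air_density_kg_m3=1.225, g_m_s2=9.80665):
            """Hydrostatic pressure offset (Pa) of an air column of the given height.
            Subtract it from the reading when the sensor sits below the pressure tap."""
            return air_density_kg_m3 * g_m_s2 * height_m

        # e.g. a 3 m elevation difference at standard density is roughly 36 Pa (~0.005 psi)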

  16. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models.

    PubMed

    Hoffmann, Sabine; Laurier, Dominique; Rage, Estelle; Guihenneuc, Chantal; Ancelet, Sophie

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies.

  17. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models

    PubMed Central

    Laurier, Dominique; Rage, Estelle

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies. PMID:29408862
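
    To make the contrast concrete, the sketch below generates observed exposures with lognormal multiplicative errors that are either shared within each worker (one factor applied to all of that worker's years) or fully unshared (an independent factor per worker-year). The error magnitudes are illustrative assumptions, not the values used for the French cohort.

        import numpy as np

        def simulate_exposures(true_exposure, sigma_shared=0.3, sigma_unshared=0.3, seed=0):
            """true_exposure: (n_workers, n_years) array of true annual exposures.
            Returns ('shared within worker', 'unshared') observed exposure arrays."""
            rng = np.random.default_rng(seed)
            n_workers, n_years = true_exposure.shape
            shared = rng.lognormal(0.0, sigma_shared, size=(n_workers, 1))        # one factor per worker
            unshared = rng.lognormal(0.0, sigma_unshared, size=(n_workers, n_years))
            return true_exposure * shared, true_exposure * unshared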

  18. Intertester agreement in refractive error measurements.

    PubMed

    Huang, Jiayan; Maguire, Maureen G; Ciner, Elise; Kulp, Marjean T; Quinn, Graham E; Orel-Bixler, Deborah; Cyert, Lynn A; Moore, Bruce; Ying, Gui-Shuang

    2013-10-01

    To determine the intertester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor and the SureSight Vision Screener. Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3 to 5 years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Intertester agreement between lay and nurse screeners was assessed for sphere, cylinder, and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean intertester difference (lay minus nurse) was compared between groups defined based on the child's age, cycloplegic refractive error, and the reading's confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Intereye correlation was accounted for in all analyses. The mean intertester differences (95% limits of agreement) were -0.04 (-1.63, 1.54) diopter (D) sphere, 0.00 (-0.52, 0.51) D cylinder, and -0.04 (-1.65, 1.56) D SE for the Retinomax and 0.05 (-1.48, 1.58) D sphere, 0.01 (-0.58, 0.60) D cylinder, and 0.06 (-1.45, 1.57) D SE for the SureSight. For either instrument, the mean intertester differences in sphere and SE did not differ by the child's age, cycloplegic refractive error, or the reading's confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading's confidence number was below the manufacturer's recommended value. Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar intertester agreement in refractive error measurements independent of the child's age. Significant refractive error and a reading with low confidence number were associated with worse intertester agreement.
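
    The agreement summary used above is the Bland-Altman style mean difference with 95% limits of agreement (mean difference ± 1.96 SD of the paired differences). A minimal sketch is below; it ignores the intereye correlation that the authors additionally account for.

        import numpy as np

        def limits_of_agreement(lay, nurse):
            """Mean intertester difference and 95% limits of agreement."""
            d = np.asarray(lay, float) - np.asarray(nurse, float)
            mean_d, sd_d = d.mean(), d.std(ddof=1)
            return mean_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)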

  19. Identification of 'Point A' as the prevalent source of error in cephalometric analysis of lateral radiographs.

    PubMed

    Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R

    2018-04-10

    Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion predominated error variance. SNB showed lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes in the highest number of tracings analysed up to now. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  20. A comparison of manual and controlled-force attachment-level measurements.

    PubMed

    Reddy, M S; Palcanis, K G; Geurs, N C

    1997-12-01

    This study compared the intra-examiner and inter-examiner error of 2 constant force probes to the reading of a conventional manual probe. 3 examiners made repeated examinations of attachment level using a modified Florida probe and a manual North Carolina probe (read to 1 mm or 0.5 mm); relative attachment level measurements were made using a Florida disk probe. One probe was used in each quadrant in 8 subjects with moderate to advanced periodontitis. Error was calculated as the mean of the absolute value of the difference between each examination, and the correlation between values at each examination calculated. Statistically-significant differences between probe type, examiners, and sites were detected using a repeated measures ANOVA accounting for the nesting within subjects. There was a significant difference in error by probe type (modified Florida probe 0.62 +/- 0.03 mm, r = 0.86; Florida stent probe 0.55 +/- 0.05 mm, r = 0.82; manual probe to 1 mm 0.39 +/- 0.02 mm, r = 0.88; manual probe to 0.5 mm 0.40 +/- 0.02 mm, r = 0.89; (p < 0.001). Significant differences were observed by examiners (p < 0.01). These data indicate that both manual and controlled-force probes can provide measurement within less than 1 mm of error; however, individual calibration of examiners remains important in the reduction of error.

  1. Estimations of ABL fluxes and other turbulence parameters from Doppler lidar data

    NASA Technical Reports Server (NTRS)

    Gal-Chen, Tzvi; Xu, Mei; Eberhard, Wynn

    1989-01-01

    Techniques for extracting boundary layer parameters from measurements of a short-pulse CO2 Doppler lidar are described. The measurements are those collected during the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE). By continuously operating the lidar for about an hour, stable statistics of the radial velocities can be extracted. Assuming that the turbulence is horizontally homogeneous, the mean wind, its standard deviations, and the momentum fluxes were estimated. Spectral analysis of the radial velocities is also performed, from which, by examining the amplitude of the power spectrum at the inertial range, the kinetic energy dissipation was deduced. Finally, using the statistical form of the Navier-Stokes equations, the surface heat flux is derived as the residual balance between the vertical gradient of the third moment of the vertical velocity and the kinetic energy dissipation. Combining many measurements would normally reduce the error, provided the error is unbiased and uncorrelated. The nature of some of the algorithms, however, is such that biased and correlated errors may be generated even though the raw measurements are not. Data processing procedures were developed that eliminate bias and minimize error correlation. Once bias and error correlations are accounted for, the large sample size is shown to reduce the errors substantially. The principal features of the derived turbulence statistics for the two cases studied are presented.

  2. Effects of sea maturity on satellite altimeter measurements

    NASA Technical Reports Server (NTRS)

    Glazman, Roman E.; Pilorz, Stuart H.

    1990-01-01

    For equilibrium and near-equilibrium sea states, the wave slope variance is a function of wind speed U and of the sea maturity. The influence of both factors on the altimeter measurements of wind speed, wave height, and radar cross section is studied experimentally on the basis of 1 year's worth of Geosat altimeter observations colocated with in situ wind and wave measurements by 20 NOAA buoys. Errors and biases in altimeter wind speed and wave height measurements are investigated. A geophysically significant error trend correlated with the sea maturity is found in wind-speed measurements. This trend is explained by examining the effect of the generalized wind fetch on the curves of the observed dependence. It is concluded that unambiguous measurements of wind speed by altimeter, in a wide range of sea states, are impossible without accounting for the actual degree of wave development.

  3. Temperature corrections in routine spirometry.

    PubMed Central

    Cramer, D; Peacock, A; Denison, D

    1984-01-01

    Forced expiratory volume (FEV1) and forced vital capacity (FVC) were measured in nine normal subjects with three Vitalograph and three rolling seal spirometers at three different ambient temperatures (4 degrees C, 22 degrees C, 32 degrees C). When the results obtained with the rolling seal spirometer were converted to BTPS the agreement between measurements in the three environments improved, but when the Vitalograph measurements obtained in the hot and cold rooms were converted an error of up to 13% was introduced. The error was similar whether ambient or spirometer temperatures were used to make the conversion. In an attempt to explain the behaviour of the Vitalograph spirometers the compliance of their bellows was measured at the three temperatures. It was higher at the higher temperature (32 degrees C) and lower at the lower temperature (4 degrees C) than at the normal room temperature. These changes in instrument compliance could account for the differences in measured values between the two types of spirometer. It is concluded that the ATPS-BTPS conversion is valid and necessary for measurements made with rolling seal spirometers, but can cause substantial error if it is used for Vitalograph measurements made under conditions other than normal room temperature. PMID:6495245
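
    For reference, the ATPS-to-BTPS conversion in question multiplies the measured volume by a temperature and water-vapour factor; a hedged sketch is below. The Magnus approximation for saturated water vapour pressure is an assumption for illustration (tables are normally used), and, as the abstract notes, the correction is only valid for spirometers whose own mechanics do not change with temperature.

        import math

        def atps_to_btps(volume_l, spiro_temp_c, baro_pressure_mmhg=760.0):
            """V_BTPS = V_ATPS * (Pb - PH2O(T)) / (Pb - 47) * 310 / (273 + T),
            with PH2O(T) the saturated water vapour pressure at spirometer
            temperature T (47 mmHg is its value at 37 C)."""
            ph2o_mmhg = 6.1094 * math.exp(17.625 * spiro_temp_c / (spiro_temp_c + 243.04)) * 0.750062
            return volume_l * ((baro_pressure_mmhg - ph2o_mmhg) / (baro_pressure_mmhg - 47.0)
                               * (273.0 + 37.0) / (273.0 + spiro_temp_c))

        # e.g. atps_to_btps(1.0, 22.0) -> ~1.09, the usual BTPS factor at 22 C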

  4. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters needing adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Y; Fullerton, G; Goins, B

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)·a·b·c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom respectively. Results: The slopes of regression lines for in-vivo tumor volumes measured by three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement errors during the animal study.
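    The volume formula and the regression comparison are easy to reproduce. The snippet below is a hedged illustration with made-up diameters, treating the spherical test objects as ellipsoids with equal axes; it is not the study's analysis code.

```python
import numpy as np

def ellipsoid_volume(a_mm, b_mm, c_mm):
    """Tumor volume from three perpendicular maximum diameters, V = (pi/6)*a*b*c."""
    return (np.pi / 6.0) * a_mm * b_mm * c_mm

# Hypothetical measured diameters (mm) for the five phantom test objects on one
# modality, compared against the known diameters -- values are made up for
# illustration only.
known_d    = np.array([2.0, 4.0, 7.0, 10.0, 14.0])
measured_d = np.array([2.1, 4.2, 7.1,  9.8, 13.9])

v_known    = ellipsoid_volume(known_d, known_d, known_d)      # spheres: a = b = c
v_measured = ellipsoid_volume(measured_d, measured_d, measured_d)

# Slope of the regression of image-based volume on reference volume;
# a slope near 1 indicates little systematic size error.
slope, intercept = np.polyfit(v_known, v_measured, 1)
print(f"regression slope = {slope:.3f}, intercept = {intercept:.2f} mm^3")
```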

  6. Accounting for hardware imperfections in EIT image reconstruction algorithms.

    PubMed

    Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert

    2007-07-01

    Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.

  7. Evaluation of in-vivo measurement errors associated with micro-computed tomography scans by means of the bone surface distance approach.

    PubMed

    Lu, Yongtao; Boudiffa, Maya; Dall'Ara, Enrico; Bellantuono, Ilaria; Viceconti, Marco

    2015-11-01

    In vivo micro-computed tomography (µCT) scanning is an important tool for longitudinal monitoring of the bone adaptation process in animal models. However, the errors associated with the usage of in vivo µCT measurements for the evaluation of bone adaptations remain unclear. The aim of this study was to evaluate the measurement errors using the bone surface distance approach. The right tibiae of eight 14-week-old C57BL/6 J female mice were consecutively scanned four times in an in vivo µCT scanner using a nominal isotropic image voxel size (10.4 µm) and the tibiae were repositioned between each scan. The repeated scan image datasets were aligned to the corresponding baseline (first) scan image dataset using rigid registration and a region of interest was selected in the proximal tibia metaphysis for analysis. The bone surface distances between the repeated and the baseline scan datasets were evaluated. It was found that the average (±standard deviation) median and 95th percentile bone surface distances were 3.10 ± 0.76 µm and 9.58 ± 1.70 µm, respectively. This study indicated that there were inevitable errors associated with the in vivo µCT measurements of bone microarchitecture and these errors should be taken into account for a better interpretation of bone adaptations measured with in vivo µCT. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
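    A minimal sketch of the bone surface distance metric, assuming the two surfaces are already rigidly registered and represented as point clouds (the point clouds below are synthetic placeholders):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distances(baseline_pts, repeat_pts):
    """Distance from each vertex of the repeat-scan bone surface to the nearest
    vertex of the baseline surface (both already rigidly registered).

    Returns the median and 95th-percentile distances, the two summary metrics
    reported in the study.
    """
    d, _ = cKDTree(baseline_pts).query(repeat_pts)
    return np.median(d), np.percentile(d, 95)

# Synthetic example: a noisy copy of a random surface patch (micrometre units)
rng = np.random.default_rng(1)
baseline = rng.uniform(0, 1000, size=(5000, 3))
repeat = baseline + rng.normal(scale=5.0, size=baseline.shape)
med, p95 = surface_distances(baseline, repeat)
print(f"median = {med:.2f} um, 95th percentile = {p95:.2f} um")
```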

  8. Alignment methods: strategies, challenges, benchmarking, and comparative overview.

    PubMed

    Löytynoja, Ari

    2012-01-01

    Comparative evolutionary analyses of molecular sequences are solely based on the identities and differences detected between homologous characters. Errors in this homology statement, that is errors in the alignment of the sequences, are likely to lead to errors in the downstream analyses. Sequence alignment and phylogenetic inference are tightly connected and many popular alignment programs use the phylogeny to divide the alignment problem into smaller tasks. They then neglect the phylogenetic tree, however, and produce alignments that are not evolutionarily meaningful. The use of phylogeny-aware methods reduces the error but the resulting alignments, with evolutionarily correct representation of homology, can challenge the existing practices and methods for viewing and visualising the sequences. The inter-dependency of alignment and phylogeny can be resolved by joint estimation of the two; methods based on statistical models allow for inferring the alignment parameters from the data and correctly take into account the uncertainty of the solution but remain computationally challenging. Widely used alignment methods are based on heuristic algorithms and unlikely to find globally optimal solutions. The whole concept of one correct alignment for the sequences is questionable, however, as there typically exist vast numbers of alternative, roughly equally good alignments that should also be considered. This uncertainty is hidden by many popular alignment programs and is rarely correctly taken into account in the downstream analyses. The quest for finding and improving the alignment solution is complicated by the lack of suitable measures of alignment goodness. The difficulty of comparing alternative solutions also affects benchmarks of alignment methods and the results strongly depend on the measure used. As the effects of alignment error cannot be predicted, comparing the alignments' performance in downstream analyses is recommended.

  9. Alternative Methods of Accounting for Underreporting and Overreporting When Measuring Dietary Intake-Obesity Relations

    PubMed Central

    Mendez, Michelle A.; Popkin, Barry M.; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R.; Sánchez, María-José; González, Carlos A

    2011-01-01

    Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29–65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = −0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes. PMID:21242302
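    A Goldberg-type screen reduces, in its simplest form, to comparing the ratio of reported energy intake to estimated basal metabolic rate against plausibility cut-offs. The sketch below uses illustrative cut-offs and made-up values; the methods compared in the paper derive the cut-offs from study-specific variance components and assumed activity levels.

```python
import numpy as np

def classify_misreporters(energy_intake_kcal, bmr_kcal, lower=1.05, upper=2.28):
    """Goldberg-type screening of implausible energy intakes.

    Each subject's reported energy intake is expressed as a multiple of the
    (estimated) basal metabolic rate and compared with plausibility cut-offs.
    The cut-offs used here are purely illustrative; in practice they depend on
    the assumed physical activity level and study-specific variance components.
    """
    ratio = np.asarray(energy_intake_kcal, float) / np.asarray(bmr_kcal, float)
    return np.where(ratio < lower, "under-reporter",
                    np.where(ratio > upper, "over-reporter", "plausible"))

ei  = [1200.0, 2100.0, 4200.0]   # reported intakes (kcal/day), made up
bmr = [1400.0, 1450.0, 1500.0]   # estimated BMR (kcal/day), made up
print(classify_misreporters(ei, bmr))
```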

  10. Alternative methods of accounting for underreporting and overreporting when measuring dietary intake-obesity relations.

    PubMed

    Mendez, Michelle A; Popkin, Barry M; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R; Sánchez, María-José; González, Carlos A

    2011-02-15

    Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29-65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = -0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes.

  11. Ordinary least squares regression is indicated for studies of allometry.

    PubMed

    Kilmer, J T; Rodríguez, R L

    2017-01-01

    When it comes to fitting simple allometric slopes through measurement data, evolutionary biologists have been torn between regression methods. On the one hand, there is the ordinary least squares (OLS) regression, which is commonly used across many disciplines of biology to fit lines through data, but which has a reputation for underestimating slopes when measurement error is present. On the other hand, there is the reduced major axis (RMA) regression, which is often recommended as a substitute for OLS regression in studies of allometry, but which has several weaknesses of its own. Here, we review statistical theory as it applies to evolutionary biology and studies of allometry. We point out that the concerns that arise from measurement error for OLS regression are small and straightforward to deal with, whereas RMA has several key properties that make it unfit for use in the field of allometry. The recommended approach for researchers interested in allometry is to use OLS regression on measurements taken with low (but realistically achievable) measurement error. If measurement error is unavoidable and relatively large, it is preferable to correct for slope attenuation rather than to turn to RMA regression, or to take the expected amount of attenuation into account when interpreting the data. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.
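    The attenuation correction recommended above can be illustrated with a small simulation: the OLS slope estimated on an error-prone predictor is shrunk by the reliability ratio var(x_true)/var(x_obs), so dividing by that ratio recovers the true slope. The numbers below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n, true_slope = 500, 0.75

# True trait values and error-contaminated measurements of the predictor
x_true = rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, 0.5, n)        # measurement error, sd = 0.5
y = true_slope * x_true + rng.normal(0.0, 0.3, n)

# OLS slope on the observed (error-prone) predictor is attenuated ...
b_ols = np.polyfit(x_obs, y, 1)[0]

# ... by the reliability ratio var(x_true) / var(x_obs), so dividing the OLS
# slope by an estimate of that ratio corrects the attenuation.
reliability = 1.0**2 / (1.0**2 + 0.5**2)        # assumes the error variance is known
print(f"OLS slope {b_ols:.3f}, corrected {b_ols / reliability:.3f}, true {true_slope}")
```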

  12. Sun Glint and Sea Surface Salinity Remote Sensing

    NASA Technical Reports Server (NTRS)

    Dinnat, Emmanuel P.; LeVine, David M.

    2007-01-01

    A new mission in space, called Aquarius/SAC-D, is being built to measure the salinity of the world's oceans. Salinity is an important parameter for understanding movement of the ocean water. This circulation results in the transportation of heat and is important for understanding climate and climate change. Measuring salinity from space requires precise instruments and a careful accounting for potential sources of error. One of these sources of error is radiation from the sun that is reflected from the ocean surface to the sensor in space. This paper examines this reflected radiation and presents an advanced model for describing this effect that includes the effects of ocean waves on the reflection.

  13. Guidance and navigation for rendezvous with an uncooperative target

    NASA Astrophysics Data System (ADS)

    Telaar, J.; Schlaile, C.; Sommer, J.

    2018-06-01

    This paper presents a guidance strategy for a rendezvous with an uncooperative target. In the applied design reference mission, a spiral approach is commanded ensuring a collision-free relative orbit due to e/i-vector separation. The dimensions of the relative orbit are successively reduced by Δv commands which at the same time improve the observability of the relative state. The navigation is based on line-of-sight measurements. The relative state is estimated by an extended Kalman filter (EKF). The performance of this guidance and navigation strategy is demonstrated by extensive Monte Carlo simulations taking into account all major uncertainties like measurement errors, Δv execution errors, and differential drag.

  14. Procrustes-based geometric morphometrics on MRI images: An example of inter-operator bias in 3D landmarks and its impact on big datasets.

    PubMed

    Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea

    2018-01-01

    Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, as those that are becoming increasingly common in the 'era of big data'.

  15. Procrustes-based geometric morphometrics on MRI images: An example of inter-operator bias in 3D landmarks and its impact on big datasets

    PubMed Central

    Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea

    2018-01-01

    Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, as those that are becoming increasingly common in the 'era of big data'. PMID:29787586

  16. The Longitudinal Association between Oppositional and Depressive Symptoms across Childhood

    ERIC Educational Resources Information Center

    Boylan, Khrista; Georgiades, Katholiki; Szatmari, Peter

    2010-01-01

    Objective: Symptoms of oppositional defiant disorder (ODD) and depression show high rates of co-occurrence, both cross-sectionally and longitudinally. This study examines the extent to which variation in oppositional symptoms predict, variation in depressive symptoms over time, accounting for co-occurring depressive symptoms and measurement error.…

  17. Longitudinal Rater Modeling with Splines

    ERIC Educational Resources Information Center

    Dobria, Lidia

    2011-01-01

    Performance assessments rely on the expert judgment of raters for the measurement of the quality of responses, and raters unavoidably introduce error in the scoring process. Defined as the tendency of a rater to assign higher or lower ratings, on average, than those assigned by other raters, even after accounting for differences in examinee…

  18. POD-based constrained sensor placement and field reconstruction from noisy wind measurements: A perturbation study

    DOE PAGES

    Zhang, Zhongqiang; Yang, Xiu; Lin, Guang

    2016-04-14

    Sensor placement at the extrema of Proper Orthogonal Decomposition (POD) modes is efficient and leads to accurate reconstruction of the wind field from a limited number of measurements. In this paper we extend this approach of sensor placement to take into account measurement errors and detect possible malfunctioning sensors. We use 48 hourly spatial wind field data sets simulated using the Weather Research and Forecasting (WRF) model applied to the Maine Bay to evaluate the performance of our methods. Specifically, we use an exclusion disk strategy to distribute sensors when the extrema of POD modes are close. It turns out that this strategy can also reduce the error of reconstruction from noisy measurements. Also, by a cross-validation technique, we successfully locate the malfunctioning sensors.
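    A simplified sketch of the placement strategy, assuming the POD modes are taken as the left singular vectors of the mean-removed snapshot matrix and the exclusion disk is enforced greedily (our reading of the approach, not the authors' code):

```python
import numpy as np

def pod_sensors_with_exclusion(snapshots, coords, n_modes, min_sep):
    """Place sensors at extrema of POD modes, skipping candidates that fall
    within an exclusion disk of radius `min_sep` around already-chosen sensors.

    `snapshots` is (n_points, n_times); `coords` is (n_points, 2).
    """
    # POD modes are the left singular vectors of the mean-removed snapshot matrix
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(X, full_matrices=False)

    chosen = []
    for m in range(n_modes):
        # Candidate locations ranked by mode amplitude (extrema first)
        for idx in np.argsort(-np.abs(U[:, m])):
            if all(np.linalg.norm(coords[idx] - coords[j]) >= min_sep for j in chosen):
                chosen.append(idx)
                break
    return chosen

# Tiny synthetic wind-like field on a 20x20 grid over 48 "hours"
rng = np.random.default_rng(3)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20)), -1).reshape(-1, 2)
snaps = np.sin(4 * grid[:, :1] * np.arange(48)) + 0.1 * rng.normal(size=(400, 48))
print(pod_sensors_with_exclusion(snaps, grid, n_modes=4, min_sep=0.15))
```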

  19. Errors Affect Hypothetical Intertemporal Food Choice in Women

    PubMed Central

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2014-01-01

    Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534

  20. Strategies for Detecting and Correcting Errors in Accounting Problems.

    ERIC Educational Resources Information Center

    James, Marianne L.

    2003-01-01

    Reviews common errors in accounting tests that students commit resulting from deficiencies in fundamental prior knowledge, ineffective test taking, and inattention to detail and provides solutions to the problems. (JOW)

  1. A bayesian approach for determining velocity and uncertainty estimates from seismic cone penetrometer testing or vertical seismic profiling data

    USGS Publications Warehouse

    Pidlisecky, Adam; Haines, S.S.

    2011-01-01

    Conventional processing methods for seismic cone penetrometer data present several shortcomings, most notably the absence of a robust velocity model uncertainty estimate. We propose a new seismic cone penetrometer testing (SCPT) data-processing approach that employs Bayesian methods to map measured data errors into quantitative estimates of model uncertainty. We first calculate travel-time differences for all permutations of seismic trace pairs. That is, we cross-correlate each trace at each measurement location with every trace at every other measurement location to determine travel-time differences that are not biased by the choice of any particular reference trace and to thoroughly characterize data error. We calculate a forward operator that accounts for the different ray paths for each measurement location, including refraction at layer boundaries. We then use a Bayesian inversion scheme to obtain the most likely slowness (the reciprocal of velocity) and a distribution of probable slowness values for each model layer. The result is a velocity model that is based on correct ray paths, with uncertainty bounds that are based on the data error. © NRC Research Press 2011.
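    The travel-time differencing step can be sketched as a plain cross-correlation between trace pairs; sub-sample refinement and the Bayesian slowness inversion itself are omitted here, and the traces below are synthetic.

```python
import numpy as np

def travel_time_difference(trace_a, trace_b, dt):
    """Travel-time difference between two seismic traces by cross-correlation.

    Positive values mean `trace_b` arrives later than `trace_a`.
    """
    xc = np.correlate(trace_b - trace_b.mean(), trace_a - trace_a.mean(), mode="full")
    lag = np.argmax(xc) - (len(trace_a) - 1)
    return lag * dt

# Synthetic wavelets 4 ms apart, sampled at 0.5 ms
dt = 0.0005
t = np.arange(0, 0.2, dt)
wavelet = lambda t0: np.exp(-((t - t0) / 0.005) ** 2)
print(travel_time_difference(wavelet(0.050), wavelet(0.054), dt))  # ~0.004 s
```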

  2. Global horizontal irradiance clear sky models : implementation and analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Joshua S.; Hansen, Clifford W.; Reno, Matthew J.

    2012-03-01

    Clear sky models estimate the terrestrial solar radiation under a cloudless sky as a function of the solar elevation angle, site altitude, aerosol concentration, water vapor, and various atmospheric conditions. This report provides an overview of a number of global horizontal irradiance (GHI) clear sky models from very simple to complex. Validation of clear-sky models requires comparison of model results to measured irradiance during clear-sky periods. To facilitate validation, we present a new algorithm for automatically identifying clear-sky periods in a time series of GHI measurements. We evaluate the performance of selected clear-sky models using measured data from 30 different sites, totaling about 300 site-years of data. We analyze the variation of these errors across time and location. In terms of error averaged over all locations and times, we found that complex models that correctly account for all the atmospheric parameters are slightly more accurate than other models, but, primarily at low elevations, comparable accuracy can be obtained from some simpler models. However, simpler models often exhibit errors that vary with time of day and season, whereas the errors for complex models vary less over time.
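    For concreteness, one of the simplest elevation-only formulations of the kind surveyed in the report is a Haurwitz-type model, GHI = a·cos(z)·exp(−b/cos(z)). The coefficients below are the commonly quoted values, stated from memory rather than taken from the report, and should be verified before use.

```python
import numpy as np

def haurwitz_ghi(zenith_deg):
    """Clear-sky global horizontal irradiance (W/m^2) from solar zenith angle.

    A Haurwitz-type model of the simple, elevation-only kind surveyed in the
    report: GHI = a * cos(z) * exp(-b / cos(z)).  The coefficients (a ~ 1098
    W/m^2, b ~ 0.057) are the commonly quoted values and should be checked.
    """
    cz = np.cos(np.radians(np.asarray(zenith_deg, float)))
    return np.where(cz > 0, 1098.0 * cz * np.exp(-0.057 / np.maximum(cz, 1e-6)), 0.0)

print(haurwitz_ghi([0.0, 30.0, 60.0, 85.0]))  # highest at small zenith angles
```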

  3. Identifying Bearing Rotordynamic Coefficients using an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Miller, Brad A.; Howard, Samuel A.

    2008-01-01

    An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor-dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter s performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor-bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
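    The estimation machinery can be summarized as a generic extended Kalman filter predict/update cycle in which the state vector is augmented with the unknown stiffness and damping coefficients. The skeleton below is our generic sketch, not the authors' implementation; the rotor-bearing dynamics are left as user-supplied functions.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P -- prior state estimate and covariance (the state would include the
            unknown stiffness/damping coefficients, modelled as Gauss-Markov states)
    z    -- noisy displacement measurement vector
    f, F -- state transition function and its Jacobian
    h, H -- measurement function and its Jacobian
    Q, R -- process- and measurement-noise covariances (the user-tuned parameters
            mentioned in the abstract)
    """
    # Predict
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # Update with the noisy measurement z
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Tiny usage example: estimating a single constant parameter from noisy readings
rng = np.random.default_rng(0)
x, P = np.array([0.0]), np.eye(1)
for z in 2.5 + 0.1 * rng.normal(size=20):
    x, P = ekf_step(x, P, np.array([z]),
                    f=lambda s: s, F=lambda s: np.eye(1),
                    h=lambda s: s, H=lambda s: np.eye(1),
                    Q=1e-6 * np.eye(1), R=0.01 * np.eye(1))
print(x)  # converges near 2.5
```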

  4. Impact of a reengineered electronic error-reporting system on medication event reporting and care process improvements at an urban medical center.

    PubMed

    McKaig, Donald; Collins, Christine; Elsaid, Khaled A

    2014-09-01

    A study was conducted to evaluate the impact of a reengineered approach to electronic error reporting at a 719-bed multidisciplinary urban medical center. The main outcome of interest was the monthly reported medication errors during the preimplementation (20 months) and postimplementation (26 months) phases. An interrupted time series analysis was used to describe baseline errors, immediate change following implementation of the current electronic error-reporting system (e-ERS), and trend of error reporting during postimplementation. Errors were categorized according to severity using the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Medication Error Index classifications. Reported errors were further analyzed by reporter and error site. During preimplementation, the monthly reported errors mean was 40.0 (95% confidence interval [CI]: 36.3-43.7). Immediately following e-ERS implementation, monthly reported errors significantly increased by 19.4 errors (95% CI: 8.4-30.5). The change in slope of reported errors trend was estimated at 0.76 (95% CI: 0.07-1.22). Near misses and no-patient-harm errors accounted for 90% of all errors, while errors that caused increased patient monitoring or temporary harm accounted for 9% and 1%, respectively. Nurses were the most frequent reporters, while physicians were more likely to report high-severity errors. Medical care units accounted for approximately half of all reported errors. Following the intervention, there was a significant increase in reporting of prevented errors and errors that reached the patient with no resultant harm. This improvement in reporting was sustained for 26 months and has contributed to designing and implementing quality improvement initiatives to enhance the safety of the medication use process.
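    The interrupted time series analysis amounts to a segmented regression with a level-change term at implementation and a slope-change term afterwards. Below is a hedged re-implementation on synthetic monthly counts, not the study's data or code.

```python
import numpy as np

def segmented_regression(y, intervention_month):
    """Interrupted time-series (segmented regression) fit for monthly error counts.

    Model: y_t = b0 + b1*t + b2*post_t + b3*(t - t_int)*post_t + e_t, where b2 is
    the immediate level change at implementation and b3 the change in slope --
    the two quantities reported in the abstract.
    """
    t = np.arange(len(y), dtype=float)
    post = (t >= intervention_month).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - intervention_month) * post])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return dict(zip(["baseline", "pre_slope", "level_change", "slope_change"], coef))

# Synthetic monthly error counts: 20 pre-implementation and 26 post-implementation months
rng = np.random.default_rng(7)
pre  = 40.0 + 0.1 * np.arange(20) + rng.normal(0, 3, 20)
post = 60.0 + 0.9 * np.arange(26) + rng.normal(0, 3, 26)
print(segmented_regression(np.concatenate([pre, post]), intervention_month=20))
```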

  5. Beyond the Mechanics of Spreadsheets: Using Design Instruction to Address Spreadsheet Errors

    ERIC Educational Resources Information Center

    Schneider, Kent N.; Becker, Lana L.; Berg, Gary G.

    2017-01-01

    Given that the usage and complexity of spreadsheets in the accounting profession are expected to increase, it is more important than ever to ensure that accounting graduates are aware of the dangers of spreadsheet errors and are equipped with design skills to minimize those errors. Although spreadsheet mechanics are prevalent in accounting…

  6. Multiparameter measurement utilizing poloidal polarimeter for burning plasma reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi

    2014-08-21

    The authors have carried out basic and applied research on the polarimeter for plasma diagnostics. Recently, the authors have proposed an application of multiparameter measurement (magnetic field B, electron density n_e, electron temperature T_e, and total plasma current I_p) utilizing the polarimeter to future fusion reactors. In these proceedings, a brief review of the polarimeter, the principle of the multiparameter measurement, and the progress of the research on the multiparameter measurement are given. The measurement method that the authors have proposed is suitable for the reactor for the following reasons: multiple parameters can be obtained from a small number of diagnostics, the proposed method does not depend on time history, and the far-infrared light utilized by the polarimeter is less sensitive to degradation of optical components. Taking into account the measuring error, a performance assessment of the proposed method was carried out. Assuming that the errors Δθ and Δε were 0.1° and 0.6°, respectively, the errors of the reconstructed j_φ, n_e and T_e were 12%, 8.4% and 31%, respectively. This study has shown that the reconstruction error can be decreased by increasing the number of wavelengths of the probing laser and by increasing the number of viewing chords. For example, by increasing the number of viewing chords to forty-five, the errors in j_φ, n_e and T_e were reduced to 4.4%, 4.4%, and 17%, respectively.

  7. 40 CFR 96.256 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR SO2 Allowance... her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within 10...

  8. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. L. Hoskinson; R C. Rope; L G. Blackwood

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analysis or better sampling methods may need to be done, or more samples collected, to ensure that the soil measurements are truly representative of the field’s spatial variability.

  9. Accurate prediction of retention in hydrophilic interaction chromatography by back calculation of high pressure liquid chromatography gradient profiles.

    PubMed

    Wang, Nu; Boswell, Paul G

    2017-10-20

    Gradient retention times are difficult to project from the underlying retention factor (k) vs. solvent composition (φ) relationships. A major reason for this difficulty is that gradients produced by HPLC pumps are imperfect - gradient delay, gradient dispersion, and solvent mis-proportioning are all difficult to account for in calculations. However, we recently showed that a gradient "back-calculation" methodology can measure these imperfections and take them into account. In RPLC, when the back-calculation methodology was used, error in projected gradient retention times was as low as could be expected based on repeatability in the k vs. φ relationships. HILIC, however, presents a new challenge: the selectivity of HILIC columns drifts strongly over time. Retention is repeatable over short periods, but selectivity frequently drifts over the course of weeks. In this study, we set out to understand whether the issue of selectivity drift can be avoided by doing our experiments quickly, and whether there are any other factors that make it difficult to predict gradient retention times from isocratic k vs. φ relationships when gradient imperfections are taken into account with the back-calculation methodology. While in past reports the error in retention projections was >5%, the back-calculation methodology brought our error down to ∼1%. This result was 6-43 times more accurate than projections made using ideal gradients and 3-5 times more accurate than the same retention projections made using offset gradients (i.e., gradients that only took gradient delay into account). Still, the error remained higher in our HILIC projections than in RPLC. Based on the shape of the back-calculated gradients, we suspect the higher error is a result of prominent gradient distortion caused by strong, preferential water uptake from the mobile phase into the stationary phase during the gradient - a factor our model did not properly take into account. It appears that, at least with the stationary phase we used, column distortion is an important factor to take into account in retention projection in HILIC that is not usually important in RPLC. Copyright © 2017 Elsevier B.V. All rights reserved.
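    The projection step itself can be sketched by numerically integrating the analyte's migration under the actual gradient profile using the isocratic k vs. φ relationship. The example below assumes an LSS-type retention model and ignores gradient delay inside the column, both simplifications of ours; the back-calculation methodology additionally supplies the measured φ(t) and accounts for the pump imperfections.

```python
import numpy as np

def gradient_retention_time(k_of_phi, phi_of_t, t0, dt=1e-3, t_max=120.0):
    """Project a gradient retention time from an isocratic k(phi) relationship.

    The analyte's fractional migration through the column advances at
    dx/dt = 1 / (t0 * (1 + k(phi(t)))); retention occurs when x reaches 1.
    phi_of_t should be the *actual* gradient profile (e.g. a back-calculated
    one), not the programmed one.  Gradient delay inside the column is ignored
    in this sketch.
    """
    x, t = 0.0, 0.0
    while x < 1.0 and t < t_max:
        x += dt / (t0 * (1.0 + k_of_phi(phi_of_t(t))))
        t += dt
    return t

# Illustrative LSS-type retention model and a simple linear gradient, 5 -> 95 % B in 20 min
k = lambda phi: 50.0 * 10.0 ** (-4.0 * phi)              # made-up k vs. phi relationship
phi = lambda t: np.clip(0.05 + 0.9 * t / 20.0, 0.05, 0.95)
print(gradient_retention_time(k, phi, t0=1.0))           # retention time in minutes
```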

  10. Exploring the Relationship of Task Performance and Physical and Cognitive Fatigue During a Daylong Light Precision Task.

    PubMed

    Yung, Marcus; Manji, Rahim; Wells, Richard P

    2017-11-01

    Our aim was to explore the relationship between fatigue and operation system performance during a simulated light precision task over an 8-hr period using a battery of physical (central and peripheral) and cognitive measures. Fatigue may play an important role in the relationship between poor ergonomics and deficits in quality and productivity. However, well-controlled laboratory studies in this area have several limitations, including the lack of work relevance of fatigue exposures and lack of both physical and cognitive measures. There remains a need to understand the relationship between physical and cognitive fatigue and task performance at exposure levels relevant to realistic production or light precision work. Errors and fatigue measures were tracked over the course of a micropipetting task. Fatigue responses from 10 measures and errors in pipetting technique, precision, and targeting were submitted to principal component analysis to descriptively analyze features and patterns. Fatigue responses and error rates contributed to three principal components (PCs), accounting for 50.9% of total variance. Fatigue responses grouped within the three PCs reflected central and peripheral upper extremity fatigue, postural sway, and changes in oculomotor behavior. In an 8-hr light precision task, error rates shared similar patterns to both physical and cognitive fatigue responses, and/or increases in arousal level. The findings provide insight toward the relationship between fatigue and operation system performance (e.g., errors). This study contributes to a body of literature documenting task errors and fatigue, reflecting physical (both central and peripheral) and cognitive processes.

  11. Robustness of reliable change indices to variability in Parkinson's disease with mild cognitive impairment.

    PubMed

    Turner, T H; Renfroe, J B; Elm, J; Duppstadt-Delambo, A; Hinson, V K

    2016-01-01

    Ability to identify change is crucial for measuring response to interventions and tracking disease progression. Beyond psychometrics, investigations of Parkinson's disease with mild cognitive impairment (PD-MCI) must consider fluctuating medication, motor, and mental status. One solution is to employ 90% reliable change indices (RCIs) from test manuals to account for measurement error and practice effects. The current study examined robustness of 90% RCIs for 19 commonly used executive function tests in 14 PD-MCI subjects assigned to the placebo arm of a 10-week randomized controlled trial of atomoxetine in PD-MCI. Using 90% RCIs, the typical participant showed spurious improvement on one measure, and spurious decline on another. Reliability estimates from healthy adult standardization samples and PD-MCI were similar. In contrast to healthy adult samples, practice effects were minimal in this PD-MCI group. Separate 90% RCIs based on the PD-MCI sample did not further reduce error rate. In the present study, application of 90% RCIs based on healthy adults in standardization samples effectively reduced misidentification of change in a sample of PD-MCI. Our findings support continued application of 90% RCIs when using executive function tests to assess change in neurological populations with fluctuating status.
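    The 90% RCI computation is compact enough to show directly. The sketch below uses the standard difference-score formulation with illustrative reliability and practice-effect values, not figures from any test manual.

```python
import numpy as np

def reliable_change(x1, x2, sd_baseline, reliability, practice_effect=0.0, crit=1.645):
    """90% reliable change index for a retest score.

    RCI = (x2 - x1 - practice_effect) / SEdiff, with
    SEdiff = sqrt(2) * SD * sqrt(1 - r).  |RCI| > 1.645 flags change beyond what
    measurement error and practice effects would explain at the 90% level.
    """
    se_diff = np.sqrt(2.0) * sd_baseline * np.sqrt(1.0 - reliability)
    rci = (x2 - x1 - practice_effect) / se_diff
    return rci, abs(rci) > crit

# Illustrative values: baseline 45, retest 52, SD 10, test-retest reliability 0.85
print(reliable_change(x1=45.0, x2=52.0, sd_baseline=10.0, reliability=0.85, practice_effect=2.0))
```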

  12. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.

  13. Accounting for unknown foster dams in the genetic evaluation of embryo transfer progeny.

    PubMed

    Suárez, M J; Munilla, S; Cantet, R J C

    2015-02-01

    Animals born by embryo transfer (ET) are usually not included in the genetic evaluation of beef cattle for preweaning growth if the recipient dam is unknown. This is primarily to avoid potential bias in the estimation of the unknown age of dam. We present a method that allows including records of calves with unknown age of dam. Assumptions are as follows: (i) foster cows belong to the same breed being evaluated, (ii) there is no correlation between the breeding value (BV) of the calf and the maternal BV of the recipient cow, and (iii) cows of all ages are used as recipients. We examine the issue of bias for the fixed level of unknown age of dam (AOD) and propose an estimator of the effect based on classical measurement error theory (MEM) and a Bayesian approach. Using stochastic simulation under random mating or selection, the MEM estimating equations were compared with BLUP in two situations as follows: (i) full information (FI); (ii) missing AOD information on some dams. Predictions of breeding value (PBV) from the FI situation had the smallest empirical average bias followed by PBV obtained without taking measurement error into account. In turn, MEM displayed the highest bias, although the differences were small. On the other hand, MEM showed the smallest MSEP, for either random mating or selection, followed by FI, whereas ignoring measurement error produced the largest MSEP. As a consequence from the smallest MSEP with a relatively small bias, empirical accuracies of PBV were larger for MEM than those for full information, which in turn showed larger accuracies than the situation ignoring measurement error. It is concluded that MEM equations are a useful alternative for analysing weaning weight data when recipient cows are unknown, as it mitigates the effects of bias in AOD by decreasing MSEP. © 2014 Blackwell Verlag GmbH.

  14. A complete representation of uncertainties in layer-counted paleoclimatic archives

    NASA Astrophysics Data System (ADS)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2017-09-01

    Accurate time series representation of paleoclimatic proxy records is challenging because such records involve dating errors in addition to proxy measurement errors. Rigorous attention is rarely given to age uncertainties in paleoclimatic research, although the latter can severely bias the results of proxy record analysis. Here, we introduce a Bayesian approach to represent layer-counted proxy records - such as ice cores, sediments, corals, or tree rings - as sequences of probability distributions on absolute, error-free time axes. The method accounts for both proxy measurement errors and uncertainties arising from layer-counting-based dating of the records. An application to oxygen isotope ratios from the North Greenland Ice Core Project (NGRIP) record reveals that the counting errors, although seemingly small, lead to substantial uncertainties in the final representation of the oxygen isotope ratios. In particular, for the older parts of the NGRIP record, our results show that the total uncertainty originating from dating errors has been seriously underestimated. Our method is next applied to deriving the overall uncertainties of the Suigetsu radiocarbon comparison curve, which was recently obtained from varved sediment cores at Lake Suigetsu, Japan. This curve provides the only terrestrial radiocarbon comparison for the time interval 12.5-52.8 kyr BP. The uncertainties derived here can be readily employed to obtain complete error estimates for arbitrary radiometrically dated proxy records of this recent part of the last glacial interval.

  15. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove posterior bias and obtain a more realistic characterization of uncertainty.

  16. Two wrongs make a right: linear increase of accuracy of visually-guided manual pointing, reaching, and height-matching with increase in hand-to-body distance.

    PubMed

    Li, Wenxun; Matin, Leonard

    2005-03-01

    Measurements were made of the accuracy of open-loop manual pointing and height-matching to a visual target whose elevation was perceptually mislocalized. Accuracy increased linearly with distance of the hand from the body, approaching complete accuracy at full extension; with the hand close to the body (within the midfrontal plane), the manual errors equaled the magnitude of the perceptual mislocalization. The visual inducing stimulus responsible for the perceptual errors was a single pitched-from-vertical line that was long (50 degrees), eccentrically-located (25 degrees horizontal), and viewed in otherwise total darkness. The line induced perceptual errors in the elevation of a small, circular visual target set to appear at eye level (VPEL), a setting that changed linearly with the change in the line's visual pitch as has been previously reported (pitch: -30 degrees top-backward to 30 degrees top-forward); the elevation errors measured by VPEL settings varied systematically with pitch through an 18 degrees range. In a fourth experiment the visual inducing stimulus responsible for the perceptual errors was shown to induce separately-measured errors in the manual setting of the arm to feel horizontal that were also distance-dependent. The distance-dependence of the visually-induced changes in felt arm position accounts quantitatively for the distance-dependence of the manual errors in pointing/reaching and height matching to the visual target: The near equality of the changes in felt horizontal and changes in pointing/reaching with the finger at the end of the fully extended arm is responsible for the manual accuracy of the fully-extended point; with the finger in the midfrontal plane their large difference is responsible for the inaccuracies of the midfrontal-plane point. The results are inconsistent with the widely-held but controversial theory that visual spatial information employed for perception and action are dissociated and different, with no illusory visual influence on action. A different two-system theory, the Proximal/Distal model, employing the same signals from vision and from the body-referenced mechanism with different weights for different hand-to-body distances, accounts for both the perceptual and the manual results in the present experiments.

  17. A dual-phantom system for validation of velocity measurements in stenosis models under steady flow.

    PubMed

    Blake, James R; Easson, William J; Hoskins, Peter R

    2009-09-01

    A dual-phantom system is developed for validation of velocity measurements in stenosis models. Pairs of phantoms with identical geometry and flow conditions are manufactured, one for ultrasound and one for particle image velocimetry (PIV). The PIV model is made from silicone rubber, and a new PIV fluid is made that matches the refractive index of 1.41 of silicone. Dynamic scaling was performed to correct for the increased viscosity of the PIV fluid compared with that of the ultrasound blood mimic. The degree of stenosis in the model pairs agreed to less than 1%. The velocities in the laminar flow region up to the peak velocity location agreed to within 15%, and the difference could be explained by errors in ultrasound velocity estimation. At low flow rates and in mild stenoses, good agreement was observed in the distal flow fields, excepting the maximum velocities. At high flow rates, there was considerable difference in velocities in the poststenosis flow field (maximum centreline differences of 30%), which would seem to represent real differences in hydrodynamic behavior between the two models. Sources of error included: variation of viscosity because of temperature (random error, which could account for differences of up to 7%); ultrasound velocity estimation errors (systematic errors); and geometry effects in each model, particularly because of imperfect connectors and corners (systematic errors, potentially affecting the inlet length and flow stability). The current system is best placed to investigate measurement errors in the laminar flow region rather than the poststenosis turbulent flow region.
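    The dynamic scaling mentioned above follows from Reynolds-number matching: for geometrically identical phantoms, Re depends on flow rate only through Q/ν, so the PIV flow rate is scaled by the ratio of kinematic viscosities. The viscosity values in the example are placeholders, not those of the study's fluids.

```python
def matched_flow_rate(q_ultrasound_mls, nu_blood_mimic, nu_piv_fluid):
    """Scale the volumetric flow rate so the PIV phantom runs at the same
    Reynolds number as the ultrasound phantom.

    For geometrically identical phantoms, Re = 4*Q / (pi * D * nu), so matching
    Re only requires scaling Q by the ratio of kinematic viscosities.
    """
    return q_ultrasound_mls * (nu_piv_fluid / nu_blood_mimic)

# e.g. a blood mimic at ~4 cSt and a refractive-index-matched PIV fluid at ~10 cSt (placeholders)
print(matched_flow_rate(q_ultrasound_mls=300.0, nu_blood_mimic=4.0e-6, nu_piv_fluid=10.0e-6))
```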

  18. [Medical errors: inevitable but preventable].

    PubMed

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention for and improvement of the organisational aspects of error are far more important than litigating the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  19. Uses and biases of volunteer water quality data

    USGS Publications Warehouse

    Loperfido, J.V.; Beyer, P.; Just, C.L.; Schnoor, J.L.

    2010-01-01

    State water quality monitoring has been augmented by volunteer monitoring programs throughout the United States. Although a significant effort has been put forth by volunteers, questions remain as to whether volunteer data are accurate and can be used by regulators. In this study, typical volunteer water quality measurements from laboratory and environmental samples in Iowa were analyzed for error and bias. Volunteer measurements of nitrate+nitrite were significantly lower (about 2-fold) than concentrations determined via standard methods in both laboratory-prepared and environmental samples. Total reactive phosphorus concentrations analyzed by volunteers were similar to measurements determined via standard methods in laboratory-prepared samples and environmental samples, but were statistically lower than the actual concentration in four of the five laboratory-prepared samples. Volunteer water quality measurements were successful in identifying and classifying most of the waters which violate United States Environmental Protection Agency recommended water quality criteria for total nitrogen (66%) and for total phosphorus (52%), with the accuracy improving when accounting for error and biases in the volunteer data. An understanding of the error and bias in volunteer water quality measurements can allow regulators to incorporate volunteer water quality data into total maximum daily load planning or state water quality reporting. © 2010 American Chemical Society.

  20. Measurement error in epidemiologic studies of air pollution based on land-use regression models.

    PubMed

    Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino

    2013-10-15

    Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
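
    A rough sense of this bias can be obtained by simulation. The sketch below (my illustration, with hypothetical variable names and made-up data, not the authors' analysis) builds an LUR model from a small number of monitoring sites, naively selects predictors, and compares the health-effect slope estimated from true versus LUR-predicted exposures; with few sites and many candidate covariates the second-stage coefficient is noticeably biased.

        # Hypothetical simulation: health-effect bias when LUR predictions built from
        # few monitoring sites and many candidate covariates replace true exposure.
        import numpy as np

        rng = np.random.default_rng(1)
        n_sites, n_subjects, n_cov, beta_true = 25, 20000, 15, 0.10
        w = np.zeros(n_cov)
        w[:3] = [1.0, -0.6, 0.4]                         # only 3 covariates are informative

        def exposure(z):
            return z @ w + rng.normal(0, 0.7, len(z))    # signal + unexplained variability

        z_site, z_subj = rng.normal(size=(n_sites, n_cov)), rng.normal(size=(n_subjects, n_cov))
        x_site, x_subj = exposure(z_site), exposure(z_subj)

        # Crude variable selection at the monitors: keep the 5 most correlated covariates.
        corr = np.abs([np.corrcoef(z_site[:, j], x_site)[0, 1] for j in range(n_cov)])
        keep = np.argsort(corr)[-5:]
        coef, *_ = np.linalg.lstsq(np.c_[np.ones(n_sites), z_site[:, keep]], x_site, rcond=None)
        x_pred = np.c_[np.ones(n_subjects), z_subj[:, keep]] @ coef      # LUR prediction

        y = beta_true * x_subj + rng.normal(0, 1, n_subjects)            # health outcome
        for label, x in [("true exposure", x_subj), ("LUR prediction", x_pred)]:
            print(f"{label:15s} beta = {np.polyfit(x, y, 1)[0]:+.3f}   (truth {beta_true})")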

  1. Argo Development Program.

    DTIC Science & Technology

    1986-06-01

    nonlinear form and account for uncertainties in model parameters, structural simplifications of the model, and disturbances. This technique summarizes...SHARPS system. *The take into account the coupling between axes two curves are nearly identical, except that the without becoming unwieldy. The low...are mainly caused by errors and control errors and accounts for the bandwidth limitations and the simulated current. observed offsets. The overshoot

  2. Improved methods for the measurement and analysis of stellar magnetic fields

    NASA Technical Reports Server (NTRS)

    Saar, Steven H.

    1988-01-01

    The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about + or - 20 percent.

  3. 40 CFR 97.256 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making such...

  4. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting.

    PubMed

    Khan, Tarik A; Friedensohn, Simon; Gorter de Vries, Arthur R; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T

    2016-03-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion-the intraclonal diversity index-which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology.
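
    The core of UID-based correction can be illustrated with a small sketch (a generic illustration only; the actual MAF pipeline additionally models amplification bias using tagging before and during multiplex PCR): reads sharing a unique molecular identifier are collapsed to a consensus sequence, and clone abundance is counted in molecules rather than reads.

        # Generic UID (UMI) collapsing sketch: not the authors' MAF pipeline.
        from collections import Counter, defaultdict

        def consensus(seqs):
            """Per-position majority vote across reads that share one UID."""
            return "".join(Counter(col).most_common(1)[0][0] for col in zip(*seqs))

        def collapse_by_uid(reads):
            """reads: iterable of (uid, sequence) pairs -> {uid: consensus sequence}."""
            groups = defaultdict(list)
            for uid, seq in reads:
                groups[uid].append(seq)
            return {uid: consensus(group) for uid, group in groups.items()}

        reads = [
            ("AAGT", "CAGGTCCAG"), ("AAGT", "CAGGTCCAG"), ("AAGT", "CAGCTCCAG"),  # 1 molecule, 1 sequencing error
            ("CCTA", "CAGGTCCAG"),                                                # same clone, 2nd molecule
            ("GGAC", "TTGGACCTG"),                                                # different clone
        ]
        molecules = collapse_by_uid(reads)
        # Molecule counts per clone; read-level counting would overweight the amplified molecule.
        print(Counter(molecules.values()))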

  5. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting

    PubMed Central

    Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.

    2016-01-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion—the intraclonal diversity index—which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518

  6. Pollution, Health, and Avoidance Behavior: Evidence from the Ports of Los Angeles

    ERIC Educational Resources Information Center

    Moretti, Enrico; Neidell, Matthew

    2011-01-01

    A pervasive problem in estimating the costs of pollution is that optimizing individuals may compensate for increases in pollution by reducing their exposure, resulting in estimates that understate the full welfare costs. To account for this issue, measurement error, and environmental confounding, we estimate the health effects of ozone using daily…

  7. Leveraging constraints and biotelemetry data to pinpoint repetitively used spatial features

    USGS Publications Warehouse

    Brost, Brian M.; Hooten, Mevin B.; Small, Robert J.

    2016-01-01

    Satellite telemetry devices collect valuable information concerning the sites visited by animals, including the location of central places like dens, nests, rookeries, or haul‐outs. Existing methods for estimating the location of central places from telemetry data require user‐specified thresholds and ignore common nuances like measurement error. We present a fully model‐based approach for locating central places from telemetry data that accounts for multiple sources of uncertainty and uses all of the available locational data. Our general framework consists of an observation model to account for large telemetry measurement error and animal movement, and a highly flexible mixture model specified using a Dirichlet process to identify the location of central places. We also quantify temporal patterns in central place use by incorporating ancillary behavioral data into the model; however, our framework is also suitable when no such behavioral data exist. We apply the model to a simulated data set as proof of concept. We then illustrate our framework by analyzing an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that exhibits fidelity to terrestrial haul‐out sites.

  8. Single-lens 3D digital image correlation system based on a bilateral telecentric lens and a bi-prism: Systematic error analysis and correction

    NASA Astrophysics Data System (ADS)

    Wu, Lifu; Zhu, Jianguo; Xie, Huimin; Zhou, Mengmeng

    2016-12-01

    Recently, we proposed a single-lens 3D digital image correlation (3D DIC) method and established a measurement system on the basis of a bilateral telecentric lens (BTL) and a bi-prism. This system can retrieve the 3D morphology of a target and measure its deformation using a single BTL with relatively high accuracy. Nevertheless, the system still suffers from systematic errors caused by manufacturing deficiency of the bi-prism and distortion of the BTL. In this study, in-depth evaluations of these errors and their effects on the measurement results are performed experimentally. The bi-prism deficiency and the BTL distortion are characterized by two in-plane rotation angles and several distortion coefficients, respectively. These values are obtained from a calibration process using a chessboard placed into the field of view of the system; this process is conducted after the measurement of tested specimen. A modified mathematical model is proposed, which takes these systematic errors into account and corrects them during 3D reconstruction. Experiments on retrieving the 3D positions of the chessboard grid corners and the morphology of a ceramic plate specimen are performed. The results of the experiments reveal that ignoring the bi-prism deficiency will induce attitude error to the retrieved morphology, and the BTL distortion can lead to its pseudo out-of-plane deformation. Correcting these problems can further improve the measurement accuracy of the bi-prism-based single-lens 3D DIC system.

  9. On error sources during airborne measurements of the ambient electric field

    NASA Technical Reports Server (NTRS)

    Evteev, B. F.

    1991-01-01

    The principal sources of errors during airborne measurements of the ambient electric field and charge are addressed. Results of their analysis are presented for critical survey. It is demonstrated that the volume electric charge has to be accounted for during such measurements, that charge being generated at the airframe and wing surface by droplets of clouds and precipitation colliding with the aircraft. The local effect of that space charge depends on the flight regime (air speed, altitude, particle size, and cloud elevation). Such a dependence is displayed in the relation between the collector conductivity of the aircraft discharging circuit, on the one hand, and the sum of all the residual conductivities contributing to aircraft discharge, on the other. Arguments are given in favor of variability in the aircraft electric capacitance. Techniques are suggested for measuring form factors to describe the aircraft charge.

  10. Metrological Support in Technosphere Safety

    NASA Astrophysics Data System (ADS)

    Akhobadze, G. N.

    2017-11-01

    The principle of metrological support in technosphere safety is considered. It is grounded in practical metrology. The theoretical aspects of accuracy and errors of measuring instruments intended for diagnostics and control of the technosphere under the influence of factors harmful to human beings are presented. The necessity of choosing measuring devices with high metrological characteristics, according to the accuracy class and the contact of sensitive elements with the medium under control, is shown. The types of additional errors in measuring instruments that arise when they are affected by environmental influences are described. A specific example of applying analyzers to control industrial emissions and to measure oil and particulate matter in wastewater is presented; it allows the advantages and disadvantages of the analyzers to be assessed. In addition, recommendations regarding the missing metrological characteristics of the instruments in use are provided. Continuous monitoring of the technosphere that takes these metrological principles into account is expected to support efficient forecasting of technosphere development and appropriate decision making.

  11. Media and human capital development: Can video game playing make you smarter?

    PubMed

    Suziedelyte, Agne

    2015-04-01

    According to the literature, video game playing can improve such cognitive skills as problem solving, abstract reasoning, and spatial logic. I test this hypothesis using The Child Development Supplement to the Panel Study of Income Dynamics. The endogeneity of video game playing is addressed by using panel data methods and controlling for an extensive list of child and family characteristics. To address the measurement error in video game playing, I instrument children's weekday time use with their weekend time use. After taking into account the endogeneity and measurement error, video game playing is found to positively affect children's problem solving ability. The effect of video game playing on problem solving ability is comparable to the effect of educational activities.
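
    The measurement-error logic of that instrument can be shown in a few lines (hypothetical data, not the PSID supplement; confounding is assumed to be handled separately by the panel methods and controls): reporting error attenuates the naive slope, while instrumenting one error-prone report with another restores it.

        # Weekend game time as an instrument for error-prone weekday game time.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 4000
        true_play = rng.normal(size=n)                   # true weekday play time
        weekday = true_play + rng.normal(0, 0.8, n)      # reported with measurement error
        weekend = true_play + rng.normal(0, 0.8, n)      # instrument: independent reporting error
        score = 0.3 * true_play + rng.normal(size=n)     # problem-solving outcome

        def slope(x, y):
            return np.linalg.lstsq(np.c_[np.ones(len(x)), x], y, rcond=None)[0][1]

        naive = slope(weekday, score)                                        # attenuated
        iv = np.cov(weekend, score)[0, 1] / np.cov(weekend, weekday)[0, 1]   # Wald/IV estimator
        print(f"naive OLS: {naive:.2f}   IV: {iv:.2f}   truth: 0.30")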

  12. Media and human capital development: Can video game playing make you smarter?1

    PubMed Central

    Suziedelyte, Agne

    2015-01-01

    According to the literature, video game playing can improve such cognitive skills as problem solving, abstract reasoning, and spatial logic. I test this hypothesis using The Child Development Supplement to the Panel Study of Income Dynamics. The endogeneity of video game playing is addressed by using panel data methods and controlling for an extensive list of child and family characteristics. To address the measurement error in video game playing, I instrument children's weekday time use with their weekend time use. After taking into account the endogeneity and measurement error, video game playing is found to positively affect children's problem solving ability. The effect of video game playing on problem solving ability is comparable to the effect of educational activities. PMID:25705064

  13. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
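
    The projection step at the heart of the approach can be sketched as follows (a schematic under my own assumptions about the neighbor metric and basis construction, not the authors' exact implementation): the K dictionary entries nearest the current proposal supply an orthonormal basis, and the residual's component in that subspace is treated as model error and removed before the likelihood is evaluated.

        # Schematic removal of the model-error component of a residual using a
        # local basis built from the K nearest entries of a model-error dictionary.
        import numpy as np

        def remove_model_error(residual, dict_params, dict_errors, params, k=5):
            """residual: (n_data,); dict_params: (n_dict, n_param);
            dict_errors: (n_dict, n_data); params: (n_param,) for the current proposal."""
            # K nearest dictionary entries in parameter space (Euclidean distance assumed).
            idx = np.argsort(np.linalg.norm(dict_params - params, axis=1))[:k]
            # Orthonormal basis spanning the neighboring model-error realizations.
            u, s, _ = np.linalg.svd(dict_errors[idx].T, full_matrices=False)
            basis = u[:, s > 1e-10]
            return residual - basis @ (basis.T @ residual)

        # Toy usage with random numbers standing in for travel-time residuals.
        rng = np.random.default_rng(0)
        dict_params = rng.normal(size=(50, 4))
        dict_errors = rng.normal(size=(50, 120))
        cleaned = remove_model_error(rng.normal(size=120), dict_params, dict_errors, rng.normal(size=4))
        print(cleaned.shape)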

  14. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    PubMed

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R/(1+R), where R = λ_0 + EAR·D, λ_0 is the baseline incidence rate, and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr · V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes · V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the true doses are lognormally distributed), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were taken from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
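
    The distinction between the two error structures can be made concrete with a short simulation (my illustration, not the paper's estimators): the activity error scatters the measurement around the truth, the mass error scatters the truth around the measurement, and both propagate multiplicatively into the reconstructed dose.

        # Classical error in activity (Q) and Berkson error in mass (M) entering D = f*Q/M.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200_000
        f = 1.0                                              # normalizing multiplier (error ignored)
        Q_tr = rng.lognormal(mean=3.0, sigma=0.6, size=n)    # true thyroid activity
        M_mes = rng.lognormal(mean=2.0, sigma=0.4, size=n)   # estimated thyroid mass

        V_Q = rng.lognormal(sigma=0.3, size=n)               # classical: Q_mes = Q_tr * V_Q
        V_M = rng.lognormal(sigma=0.3, size=n)               # Berkson:   M_tr  = M_mes * V_M
        Q_mes, M_tr = Q_tr * V_Q, M_mes * V_M

        D_mes = f * Q_mes / M_mes                            # dose used in the risk analysis
        D_tr = f * Q_tr / M_tr                               # dose actually received

        # Classical error is independent of the truth; Berkson error is independent of the measurement.
        print(np.corrcoef(np.log(Q_mes / Q_tr), np.log(Q_tr))[0, 1])    # ~0
        print(np.corrcoef(np.log(M_tr / M_mes), np.log(M_mes))[0, 1])   # ~0
        print(np.corrcoef(np.log(D_mes), np.log(D_tr))[0, 1])           # doses correlated but noisy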

  15. Ultrasonic density measurement cell design and simulation of non-ideal effects.

    PubMed

    Higuti, Ricardo Tokio; Buiochi, Flávio; Adamowski, Júlio Cezar; de Espinosa, Francisco Montero

    2006-07-01

    This paper presents a theoretical analysis of a density measurement cell using a one-dimensional model composed of acoustic and electroacoustic transmission lines in order to simulate non-ideal effects. The model is implemented using matrix operations and is used to design the cell, considering its geometry, the materials used in sensor assembly, the range of liquid sample properties, and the signal analysis techniques. The sensor performance under non-ideal conditions is studied, considering the thicknesses of the adhesive and metallization layers and the effect of liquid sample residue that can deposit on the sample chamber surfaces. These layers are taken into account in the model, and their effects are compensated to reduce the error in the density measurement. The results show the contribution of residue layer thickness to density error and its behavior when two signal analysis methods are used.
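
    The matrix formulation referred to above is typically a cascade of 2x2 transfer matrices; the sketch below shows only the acoustic part for lossless layers with assumed material values (the paper's model also includes the electroacoustic transducer section), which is enough to see how a thin adhesive or metallization layer perturbs the transfer function.

        # Acoustic transfer-matrix cascade for a layered path (lossless layers).
        import numpy as np

        def layer_matrix(freq, thickness, density, speed):
            """ABCD matrix relating pressure and velocity across one layer."""
            k = 2 * np.pi * freq / speed          # wavenumber
            z = density * speed                   # characteristic acoustic impedance
            kd = k * thickness
            return np.array([[np.cos(kd), 1j * z * np.sin(kd)],
                             [1j * np.sin(kd) / z, np.cos(kd)]])

        def cascade(freq, layers):
            """layers: list of (thickness_m, density_kg_m3, speed_m_s)."""
            m = np.eye(2, dtype=complex)
            for t, rho, c in layers:
                m = m @ layer_matrix(freq, t, rho, c)
            return m

        # Illustrative stack (assumed values): buffer + 10-um adhesive + liquid sample.
        stack = [(0.5e-3, 2700.0, 6320.0),
                 (10e-6, 1100.0, 2000.0),
                 (5e-3, 1000.0, 1480.0)]
        print(cascade(5e6, stack))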

  16. 25 CFR 115.618 - What happens if at the conclusion of the notice and hearing process we decide to encumber your...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process... hearing process we decide to encumber your IIM account because of an administrative error which resulted... process we decide to encumber your IIM account because of an administrative error which resulted in funds...

  17. 25 CFR 115.618 - What happens if at the conclusion of the notice and hearing process we decide to encumber your...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process... hearing process we decide to encumber your IIM account because of an administrative error which resulted... process we decide to encumber your IIM account because of an administrative error which resulted in funds...

  18. 25 CFR 115.618 - What happens if at the conclusion of the notice and hearing process we decide to encumber your...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process... hearing process we decide to encumber your IIM account because of an administrative error which resulted... process we decide to encumber your IIM account because of an administrative error which resulted in funds...

  19. 25 CFR 115.618 - What happens if at the conclusion of the notice and hearing process we decide to encumber your...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process... hearing process we decide to encumber your IIM account because of an administrative error which resulted... process we decide to encumber your IIM account because of an administrative error which resulted in funds...

  20. 25 CFR 115.618 - What happens if at the conclusion of the notice and hearing process we decide to encumber your...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... hearing process we decide to encumber your IIM account because of an administrative error which resulted... process we decide to encumber your IIM account because of an administrative error which resulted in funds... INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process...

  1. Refractive errors in Aminu Kano Teaching Hospital, Kano Nigeria.

    PubMed

    Lawan, Abdu; Eme, Okpo

    2011-12-01

    The aim of the study is to retrospectively determine the pattern of refractive errors seen in the eye clinic of Aminu Kano Teaching Hospital, Kano, Nigeria, from January to December 2008. The clinic refraction register was used to retrieve the case folders of all patients refracted during the review period. Information extracted included patients' age, sex, and type of refractive error. All patients had a basic eye examination (to rule out other causes of subnormal vision), including intraocular pressure measurement and streak retinoscopy at a two-thirds meter working distance. The final subjective refraction correction given to the patients was used to categorise the type of refractive error. Refractive errors were observed in 1584 patients and accounted for 26.9% of clinic attendance. There were more females than males (M:F = 1.0:1.2). The common types of refractive error were presbyopia in 644 patients (40%), various types of astigmatism in 527 patients (33%), myopia in 216 patients (14%), hypermetropia in 171 patients (11%) and aphakia in 26 patients (2%). Refractive errors are a common cause of presentation in the eye clinic. Identification and correction of refractive errors should be an integral part of eye care delivery.

  2. Assessing the Performance of Human-Automation Collaborative Planning Systems

    DTIC Science & Technology

    2011-06-01

    processing and incorporating vast amounts of incoming information into their solutions. However, these algorithms are brittle and unable to account for...planning system, a descriptive Mission Performance measure may address the total travel time on the path or the cost of the path (e.g. total work...minimizing costs or collisions [4, 32, 33]. Error measures for such a path planning system may track how many collisions occur or how much threat

  3. Predicting Error Bars for QSAR Models

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the last months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.
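
    A minimal sketch of the error-bar idea (synthetic descriptors and targets, not the Bayer Schering compound set) is shown below: a Gaussian Process model returns a predictive standard deviation for each compound, which can be checked against held-out residuals.

        # Gaussian Process regression with per-prediction error bars (scikit-learn).
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))                            # stand-in molecular descriptors
        y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.2, 200)    # stand-in logD values

        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gp.fit(X[:150], y[:150])

        mean, std = gp.predict(X[150:], return_std=True)         # std acts as the error bar
        inside = np.abs(y[150:] - mean) < 1.96 * std
        print(f"{inside.mean():.0%} of held-out points fall inside the 95% error bars")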

  4. Efficiency of coherent-state quantum cryptography in the presence of loss: Influence of realistic error correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heid, Matthias; Luetkenhaus, Norbert

    2006-05-15

    We investigate the performance of a continuous-variable quantum key distribution scheme in a practical setting. More specifically, we take a nonideal error reconciliation procedure into account. The quantum channel connecting the two honest parties is assumed to be lossy but noiseless. Secret key rates are given for the case that the measurement outcomes are postselected or a reverse reconciliation scheme is applied. The reverse reconciliation scheme loses its initial advantage in the practical setting. If one combines postselection with reverse reconciliation, however, much of this advantage can be recovered.

  5. Metainference: A Bayesian inference method for heterogeneous systems.

    PubMed

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.

  6. The error-related negativity as a state and trait measure: motivation, personality, and ERPs in response to errors.

    PubMed

    Pailing, Patricia E; Segalowitz, Sidney J

    2004-01-01

    This study examines changes in the error-related negativity (ERN/Ne) related to motivational incentives and personality traits. ERPs were gathered while adults completed a four-choice letter task during four motivational conditions. Monetary incentives for finger and hand accuracy were altered across motivation conditions to either be equal or favor one type of accuracy over the other in a 3:1 ratio. Larger ERN/Ne amplitudes were predicted with increased incentives, with personality moderating this effect. Results were as expected: Individuals higher on conscientiousness displayed smaller motivation-related changes in the ERN/Ne. Similarly, those low on neuroticism had smaller effects, with the effect of Conscientiousness absent after accounting for Neuroticism. These results emphasize an emotional/evaluative function for the ERN/Ne, and suggest that the ability to selectively invest in error monitoring is moderated by underlying personality.

  7. The Swiss cheese model of adverse event occurrence--Closing the holes.

    PubMed

    Stein, James E; Heiss, Kurt

    2015-12-01

    Traditional surgical attitude regarding error and complications has focused on individual failings. Human factors research has brought new and significant insights into the occurrence of error in healthcare, helping us identify systemic problems that injure patients while enhancing individual accountability and teamwork. This article introduces human factors science and its applicability to teamwork, surgical culture, medical error, and individual accountability. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Expensing stock options: a fair-value approach.

    PubMed

    Kaplan, Robert S; Palepu, Krishna G

    2003-12-01

    Now that companies such as General Electric and Citigroup have accepted the premise that employee stock options are an expense, the debate is shifting from whether to report options on income statements to how to report them. The authors present a new accounting mechanism that maintains the rationale underlying stock option expensing while addressing critics' concerns about measurement error and the lack of reconciliation to actual experience. A procedure they call fair-value expensing adjusts and eventually reconciles cost estimates made at grant date with subsequent changes in the value of the options, and it does so in a way that eliminates forecasting and measurement errors over time. The method captures the chief characteristic of stock option compensation--that employees receive part of their compensation in the form of a contingent claim on the value they are helping to produce. The mechanism involves creating entries on both the asset and equity sides of the balance sheet. On the asset side, companies create a prepaid-compensation account equal to the estimated cost of the options granted; on the owners'-equity side, they create a paid-in capital stock-option account for the same amount. The prepaid-compensation account is then expensed through the income statement, and the stock option account is adjusted on the balance sheet to reflect changes in the estimated fair value of the granted options. The amortization of prepaid compensation is added to the change in the option grant's value to provide the total reported expense of the options grant for the year. At the end of the vesting period, the company uses the fair value of the vested option to make a final adjustment on the income statement to reconcile any difference between that fair value and the total of the amounts already reported.

  9. Effect of cephalometer misalignment on calculations of facial asymmetry.

    PubMed

    Lee, Ki-Heon; Hwang, Hyeon-Shik; Curry, Sean; Boyd, Robert L; Norris, Kevin; Baumrind, Sheldon

    2007-07-01

    In this study, we evaluated errors introduced into the interpretation of facial asymmetry on posteroanterior (PA) cephalograms due to malpositioning of the x-ray emitter focal spot. We tested the hypothesis that horizontal displacements of the emitter from its ideal position would produce systematic displacements of skull landmarks that could be fully accounted for by the rules of projective geometry alone. A representative dry skull with 22 metal markers was used to generate a series of PA images from different emitter positions by using a fully calibrated stereo cephalometer. Empirical measurements of the resulting cephalograms were compared with mathematical predictions based solely on geometric rules. The empirical measurements matched the mathematical predictions within the limits of measurement error (x= 0.23 mm), thus supporting the hypothesis. Based upon this finding, we generated a completely symmetrical mathematical skull and calculated the expected errors for focal spots of several different magnitudes. Quantitative data were computed for focal spot displacements of different magnitudes. Misalignment of the x-ray emitter focal spot introduces systematic errors into the interpretation of facial asymmetry on PA cephalograms. For misalignments of less than 20 mm, the effect is small in individual cases. However, misalignments as small as 10 mm can introduce spurious statistical findings of significant asymmetry when mean values for large groups of PA images are evaluated.

  10. Identification of Carbon loss in the production of pilot-scale Carbon nanotube using gauze reactor

    NASA Astrophysics Data System (ADS)

    Wulan, P. P. D. K.; Purwanto, W. W.; Yeni, N.; Lestari, Y. D.

    2018-03-01

    Carbon loss of more than 65% was a major obstacle in carbon nanotube (CNT) production using a pilot-scale gauze reactor. The results showed an initial calculated carbon loss of 27.64%. The carbon loss calculation was then corrected for several factors: errors in the product flow rate measurement, changes in the feed flow rate, the gas product composition measured by gas chromatography with flame ionization detection (GC-FID), and particulate carbon collected on glass fiber filters. Error in the product flow rate due to measurement with a soap-bubble meter gives a carbon loss calculation error of about ±4.14%. Changes in the feed flow rate due to CNT growth in the reactor reduce the carbon loss by 4.97%. Detection of secondary hydrocarbons by GC-FID during the CNT production process reduces the carbon loss by 5.14%. Particulates carried by the product stream are very few and correct the carbon loss by only about 0.05%. Taking all these factors into account, the amount of carbon loss in this study is (17.21 ± 4.14)%. Assuming that 4.14% of the carbon loss is due to the error in measuring the product flow rate, the amount of carbon loss is 13.07%. This means that more than 57% of the carbon loss in this study has been identified.

  11. Correcting for deformation in skin-based marker systems.

    PubMed

    Alexander, E J; Andriacchi, T P

    2001-03-01

    A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in limb segment pose estimate by 33 and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.

  12. Estimating Teacher Effectiveness from Two-Year Changes in Students' Test Scores

    ERIC Educational Resources Information Center

    Leigh, Andrew

    2010-01-01

    Using a dataset covering over 10,000 Australian school teachers and over 90,000 pupils, I estimate how effective teachers are in raising students' test scores. Since the exams are biennial, it is necessary to take account of the teacher's work in the intervening year. Even adjusting for measurement error, the teacher fixed effects are widely…

  13. Using the Kernel Method of Test Equating for Estimating the Standard Errors of Population Invariance Measures

    ERIC Educational Resources Information Center

    Moses, Tim

    2008-01-01

    Equating functions are supposed to be population invariant, meaning that the choice of subpopulation used to compute the equating function should not matter. The extent to which equating functions are population invariant is typically assessed in terms of practical difference criteria that do not account for equating functions' sampling…

  14. The Transition From Event Reports to Measurable Organizational Impact: Workshop Proceedings Report

    DTIC Science & Technology

    2014-03-01

    Airlines; Colin Drury, Applied Ergonomics; Douglas Farrow, FAA AFS-280; Terry Gober... cost, at their Seattle location. Also, Boeing supports presentations regarding MEDA at international conferences, which greatly increased the number...approach to a true safety culture involves human factors and error management training that includes management; a Just Policy and accountability, an

  15. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, a standard statistical analysis shows how to determine the empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
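
    A simplified sketch in the spirit of this idea, though not the report's exact derivation, is to scale the theoretical weighted-least-squares covariance by the average weighted residual variance, so that any unmodeled error reflected in the residuals inflates the reported state covariance.

        # Theoretical vs. residual-scaled ("empirical") covariance in batch WLS.
        import numpy as np

        def batch_wls(H, y, W):
            """H: design matrix, y: observations, W: observation weight matrix."""
            P_theory = np.linalg.inv(H.T @ W @ H)       # maps only the assumed obs errors
            x_hat = P_theory @ H.T @ W @ y
            r = y - H @ x_hat                           # residuals carry all errors
            s2 = (r @ W @ r) / (len(y) - H.shape[1])    # average weighted residual variance
            return x_hat, P_theory, s2 * P_theory       # last item: empirical covariance

        rng = np.random.default_rng(0)
        H = rng.normal(size=(60, 3))
        x_true = np.array([1.0, -2.0, 0.5])
        sigma_assumed, sigma_actual = 0.1, 0.3          # mismodeled measurement noise
        y = H @ x_true + rng.normal(0, sigma_actual, 60)
        W = np.eye(60) / sigma_assumed**2

        x_hat, P_theory, P_emp = batch_wls(H, y, W)
        print(np.sqrt(np.diag(P_theory)))   # optimistic: reflects the assumed 0.1-sigma noise
        print(np.sqrt(np.diag(P_emp)))      # inflated toward the actual 0.3-sigma noise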

  16. Modeling habitat dynamics accounting for possible misclassification

    USGS Publications Warehouse

    Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.

    2012-01-01

    Land cover data are widely used in ecology as land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more difficult and more challenging, with error in classification being confused with change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel) and when true and observed states are obtained at one level of resolution, but transition probabilities estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling-up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.
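
    The direction of the bias is easy to reproduce with a two-state toy simulation (my illustration, not the authors' multi-state model): estimating transitions directly from misclassified observations inflates the apparent turnover.

        # Classification error inflates apparent habitat turnover.
        import numpy as np

        rng = np.random.default_rng(0)
        P_true = np.array([[0.95, 0.05],      # true transition probabilities (rows: state at time t)
                           [0.10, 0.90]])
        M = np.array([[0.90, 0.10],           # P(observed state | true state)
                      [0.20, 0.80]])

        n = 100_000
        s0 = rng.integers(0, 2, n)                              # true state, year 1
        s1 = (rng.random(n) < P_true[s0, 1]).astype(int)        # true state, year 2
        o0 = (rng.random(n) < M[s0, 1]).astype(int)             # observed states
        o1 = (rng.random(n) < M[s1, 1]).astype(int)

        def transitions(a, b):
            return np.array([[np.mean(b[a == i] == j) for j in (0, 1)] for i in (0, 1)]).round(3)

        print("true transitions:\n", transitions(s0, s1))
        print("apparent transitions from misclassified states:\n", transitions(o0, o1))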

  17. Inference of emission rates from multiple sources using Bayesian probability theory.

    PubMed

    Yee, Eugene; Flesch, Thomas K

    2010-03-01

    The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
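
    A stripped-down Gaussian version of the inference (my own toy with assumed numbers, not the paper's full treatment of the dispersion model) already shows the key ingredient: the likelihood covariance carries both measurement error and dispersion-model error, and the prior regularizes the otherwise ill-posed recovery.

        # Conjugate Gaussian posterior for emission rates q given concentrations c = A q + noise.
        import numpy as np

        rng = np.random.default_rng(0)
        n_src, n_sensor = 4, 8
        A_true = rng.uniform(0.1, 1.0, size=(n_sensor, n_src))            # true source-receptor matrix
        A_model = A_true * rng.lognormal(sigma=0.15, size=A_true.shape)   # imperfect dispersion model
        q_true = np.array([2.0, 0.5, 1.0, 3.0])

        sigma_meas, sigma_model = 0.05, 0.2
        c = A_true @ q_true + rng.normal(0, sigma_meas, n_sensor)         # measured concentrations

        q0, S0 = np.ones(n_src), 4.0 * np.eye(n_src)                      # Gaussian prior on rates
        R = (sigma_meas**2 + sigma_model**2) * np.eye(n_sensor)           # data + model error covariance

        S_post = np.linalg.inv(np.linalg.inv(S0) + A_model.T @ np.linalg.inv(R) @ A_model)
        q_post = S_post @ (np.linalg.inv(S0) @ q0 + A_model.T @ np.linalg.inv(R) @ c)
        print("posterior mean:", q_post.round(2), " truth:", q_true)

    A real application would typically also enforce nonnegative emission rates, for example with truncated priors and sampling rather than the closed-form Gaussian update used here.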

  18. Variational Assimilation of GOME Total-Column Ozone Satellite Data in a 2D Latitude-Longitude Tracer-Transport Model.

    NASA Astrophysics Data System (ADS)

    Eskes, H. J.; Piters, A. J. M.; Levelt, P. F.; Allaart, M. A. F.; Kelder, H. M.

    1999-10-01

    A four-dimensional data-assimilation method is described to derive synoptic ozone fields from total-column ozone satellite measurements. The ozone columns are advected by a 2D tracer-transport model, using ECMWF wind fields at a single pressure level. Special attention is paid to the modeling of the forecast error covariance and quality control. The temporal and spatial dependence of the forecast error is taken into account, resulting in a global error field at any instant in time that provides a local estimate of the accuracy of the assimilated field. The authors discuss the advantages of the 4D-variational (4D-Var) approach over sequential assimilation schemes. One of the attractive features of the 4D-Var technique is its ability to incorporate measurements at later times t > t0 in the analysis at time t0, in a way consistent with the time evolution as described by the model. This significantly improves the offline analyzed ozone fields.

  19. Radiometric analysis of the longwave infrared channel of the Thematic Mapper on LANDSAT 4 and 5

    NASA Technical Reports Server (NTRS)

    Schott, John R.; Volchok, William J.; Biegel, Joseph D.

    1986-01-01

    The first objective was to evaluate the postlaunch radiometric calibration of the LANDSAT Thematic Mapper (TM) band 6 data. The second objective was to determine to what extent surface temperatures could be computed from the TM band 6 data using atmospheric propagation models. To accomplish this, ground truth data were compared to a single TM-4 band 6 data set. This comparison indicated satisfactory agreement over a narrow temperature range. The atmospheric propagation model (modified LOWTRAN 5A) was used to predict surface temperature values based on the radiance at the spacecraft. The aircraft data were calibrated using a multi-altitude profile calibration technique which had been extensively tested in previous studies. This aircraft calibration permitted measurement of surface temperatures based on the radiance reaching the aircraft. When these temperature values are evaluated, an error in the satellite's ability to predict surface temperatures can be estimated. This study indicated that by carefully accounting for various sensor calibration and atmospheric propagation effects, an expected error (1 standard deviation) in surface temperature would be 0.9 K. This assumes no error in surface emissivity and no sampling error due to target location. These results indicate that the satellite calibration is within nominal limits to within this study's ability to measure error.

  20. Landmark-Based Drift Compensation Algorithm for Inertial Pedestrian Navigation

    PubMed Central

    Munoz Diaz, Estefania; Caamano, Maria; Fuentes Sánchez, Francisco Javier

    2017-01-01

    The navigation of pedestrians based on inertial sensors, i.e., accelerometers and gyroscopes, has grown considerably in recent years. However, the noise of medium- and low-cost sensors causes a high error in the orientation estimation, particularly in the yaw angle. This error, called drift, is due to the bias of the z-axis gyroscope and other slowly changing errors, such as temperature variations. We propose a seamless landmark-based drift compensation algorithm that only uses inertial measurements. The proposed algorithm adds great value to the state of the art, because the vast majority of drift elimination algorithms apply corrections to the estimated position, but not to the yaw angle estimation. Instead, the presented algorithm computes the drift value and uses it to prevent yaw errors and therefore position errors. In order to achieve this goal, a detector of landmarks, i.e., corners and stairs, and an association algorithm have been developed. The results of the experiments show that it is possible to reliably detect corners and stairs using only inertial measurements, eliminating the need for the user to take any action, e.g., pressing a button. Associations between re-visited landmarks are successfully made taking into account the uncertainty of the position. After that, the drift is computed from all associations and used during a post-processing stage to obtain a low-drift yaw angle estimate, which leads to successfully drift-compensated trajectories. The proposed algorithm has been tested with quasi-error-free turn rate measurements introducing known biases and with medium-cost gyroscopes in 3D indoor and outdoor scenarios. PMID:28671622
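
    The essence of the correction can be sketched in a few lines (hypothetical data and a constant-bias assumption; the paper's landmark detector and association step are not reproduced): the heading discrepancy accumulated between two visits to the same landmark gives a bias estimate, which is then removed from the whole yaw history in post-processing.

        # Estimate a z-gyro bias from a re-visited landmark and de-drift the yaw history.
        import numpy as np

        def estimate_bias(yaw_first, yaw_revisit, t_first, t_revisit, expected_diff=0.0):
            """Bias (rad/s) implied by the extra heading accumulated between two visits."""
            return ((yaw_revisit - yaw_first) - expected_diff) / (t_revisit - t_first)

        def dedrift(yaw, t, bias):
            return yaw - bias * (t - t[0])        # post-processing correction

        dt, n = 0.01, 60_000
        t = np.arange(n) * dt
        true_yaw = np.sin(2 * np.pi * t / t[-1])              # closed loop: ends where it started
        bias_true = np.deg2rad(0.02)                          # assumed constant z-gyro bias
        yaw_est = true_yaw + bias_true * t                    # drifting estimate

        bias_hat = estimate_bias(yaw_est[0], yaw_est[-1], t[0], t[-1])
        print(np.rad2deg(bias_hat), np.rad2deg(bias_true))
        print(np.max(np.abs(dedrift(yaw_est, t, bias_hat) - true_yaw)))   # ~0 after correction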

  1. Calibration of Safecast dose rate measurements.

    PubMed

    Cervone, Guido; Hultquist, Carolynne

    2018-10-01

    A methodology is presented to calibrate contributed Safecast dose rate measurements acquired between 2011 and 2016 in the Fukushima prefecture of Japan. The Safecast data are calibrated using observations acquired by the U.S. Department of Energy at the time of the 2011 Fukushima Daiichi power plant nuclear accident. The methodology performs a series of interpolations between the U.S. government and contributed datasets at specific temporal windows and at corresponding spatial locations. The coefficients found for all the different temporal windows are aggregated and interpolated using quadratic regressions to generate a time dependent calibration function. Normal background radiation, decay rates, and missing values are taken into account during the analysis. Results show that the standard Safecast static transformation function overestimates the official measurements because it fails to capture the presence of two different Cesium isotopes and their changing magnitudes with time. A model is created to predict the ratio of the isotopes from the time of the accident through 2020. The proposed time dependent calibration takes into account this Cesium isotopes ratio, and it is shown to reduce the error between U.S. government and contributed data. The proposed calibration is needed through 2020, after which date the errors introduced by ignoring the presence of different isotopes will become negligible. Copyright © 2018 Elsevier Ltd. All rights reserved.
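
    The time dependence comes largely from the decaying Cs-134/Cs-137 activity ratio; the sketch below uses literature half-lives and an assumed initial ratio of 1.0 (an assumption for illustration, not Safecast's fitted calibration) to show why a static conversion factor drifts and why the effect fades by around 2020.

        # Decay of the Cs-134/Cs-137 activity ratio after the accident.
        import numpy as np

        T_HALF_CS134 = 2.065    # years (literature value)
        T_HALF_CS137 = 30.17    # years (literature value)

        def cs_ratio(years_since_accident, initial_ratio=1.0):
            """Cs-134/Cs-137 activity ratio; initial_ratio = 1.0 is an assumption."""
            lam134 = np.log(2) / T_HALF_CS134
            lam137 = np.log(2) / T_HALF_CS137
            return initial_ratio * np.exp(-(lam134 - lam137) * years_since_accident)

        for yr in (0, 1, 3, 5, 9):
            print(f"{yr} y after the accident: Cs-134/Cs-137 activity ratio = {cs_ratio(yr):.2f}")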

  2. Are phonological influences on lexical (mis)selection the result of a monitoring bias?

    PubMed Central

    Ratinckx, Elie; Ferreira, Victor S.; Hartsuiker, Robert J.

    2009-01-01

    A monitoring bias account is often used to explain speech error patterns that seem to be the result of an interactive language production system, like phonological influences on lexical selection errors. A biased monitor is suggested to detect and covertly correct certain errors more often than others. For instance, this account predicts that errors which are phonologically similar to intended words are harder to detect than ones that are phonologically dissimilar. To test this, we tried to elicit phonological errors under the same conditions that show other kinds of lexical selection errors. In five experiments, we presented participants with high cloze probability sentence fragments followed by a picture that was either semantically related, a homophone of a semantically related word, or phonologically related to the (implicit) last word of the sentence. All experiments elicited semantic completions or homophones of semantic completions, but none elicited phonological completions. This finding is hard to reconcile with a monitoring bias account and is better explained with an interactive production system. Additionally, this finding constrains the amount of bottom-up information flow in interactive models. PMID:18942035

  3. Measurement error affects risk estimates for recruitment to the Hudson River stock of striped bass.

    PubMed

    Dunning, Dennis J; Ross, Quentin E; Munch, Stephan B; Ginzburg, Lev R

    2002-06-07

    We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006)--an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.

  4. Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.

    PubMed

    Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A

    2017-05-01

    Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure and then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring that the study design ensure spatial compatibility, that is, monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated date of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m³ difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.
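
    A generic two-stage bootstrap (a sketch only; the authors' correction is more involved than this) refits both the exposure model and the health model on resampled monitors and subjects, so that uncertainty carried over from the exposure predictions shows up in the confidence interval.

        # Two-stage bootstrap: resample monitors and subjects, refit exposure and health models.
        import numpy as np

        rng = np.random.default_rng(0)
        b = np.array([1.0, -0.5])

        def health_effect(mon_z, mon_x, sub_z, sub_y):
            """Stage 1: exposure model on monitors. Stage 2: health model on predicted exposure."""
            coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(mon_z)), mon_z], mon_x, rcond=None)
            x_pred = np.c_[np.ones(len(sub_z)), sub_z] @ coef
            return np.polyfit(x_pred, sub_y, 1)[0]

        # Toy data standing in for monitors (n=40) and birth records (n=2000).
        mz = rng.normal(size=(40, 2))
        mx = mz @ b + rng.normal(0, 0.5, 40)
        sz = rng.normal(size=(2000, 2))
        sx = sz @ b + rng.normal(0, 0.5, 2000)
        sy = -2.0 * sx + rng.normal(0, 5, 2000)

        boot = []
        for _ in range(500):
            mi = rng.integers(0, 40, 40)          # resample monitors
            si = rng.integers(0, 2000, 2000)      # resample subjects
            boot.append(health_effect(mz[mi], mx[mi], sz[si], sy[si]))
        print("estimate:", round(health_effect(mz, mx, sz, sy), 2),
              " bootstrap 95% CI:", np.percentile(boot, [2.5, 97.5]).round(2))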

  5. Estimating Concentrations of Road-Salt Constituents in Highway-Runoff from Measurements of Specific Conductance

    USGS Publications Warehouse

    Granato, Gregory E.; Smith, Kirk P.

    1999-01-01

Discrete or composite samples of highway runoff may not adequately represent in-storm water-quality fluctuations because continuous records of water stage, specific conductance, pH, and temperature of the runoff indicate that these properties fluctuate substantially during a storm. Continuous records of water-quality properties can be used to maximize the information obtained about the stormwater runoff system being studied and can provide the context needed to interpret analyses of water samples. Concentrations of the road-salt constituents calcium, sodium, and chloride in highway runoff were estimated from theoretical and empirical relations between specific conductance and the concentrations of these ions. These relations were examined using the analysis of 233 highway-runoff samples collected from August 1988 through March 1995 at four highway-drainage monitoring stations along State Route 25 in southeastern Massachusetts. Theoretically, the specific conductance of a water sample is the sum of the individual conductances attributed to each ionic species in solution, that is, the product of the concentration of each ion in milliequivalents per liter (meq/L) and its equivalent ionic conductance at infinite dilution, thereby establishing the principle of superposition. Superposition provides an estimate of actual specific conductance that is within measurement error throughout the conductance range of many natural waters, with errors of less than ±5 percent below 1,000 microsiemens per centimeter (μS/cm) and ±10 percent between 1,000 and 4,000 μS/cm if all major ionic constituents are accounted for. A semi-empirical method (adjusted superposition) was used to adjust for concentration effects (superposition-method prediction errors at high and low concentrations) and to relate measured specific conductance to that calculated using superposition. The adjusted superposition method, which was developed to interpret the State Route 25 highway-runoff records, accounts for contributions of constituents other than calcium, sodium, and chloride in dilute waters. The adjusted superposition method also accounts for the attenuation of each constituent's contribution to conductance as ionic strength increases. Use of the adjusted superposition method generally reduced predictive error to within measurement error throughout the range of specific conductance (from 37 to 51,500 μS/cm) in the highway runoff samples. The effects of pH, temperature, and organic constituents on the relation between concentrations of dissolved constituents and measured specific conductance were examined, but these properties did not substantially affect interpretation of the Route 25 data set. Predictive abilities of the adjusted superposition method were similar to results obtained by standard regression techniques, but the adjusted superposition method has several advantages. Adjusted superposition can be applied using available published data about the constituents in precipitation, highway runoff, and the deicing chemicals applied to a highway. This semi-empirical method can be used as a predictive and diagnostic tool before a substantial number of samples are collected, but the power of the regression method is based upon a large number of water-quality analyses that may be affected by a bias in the data.
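
    A minimal sketch of the superposition principle described above: specific conductance is approximated by summing, over the major ions, the concentration in meq/L times the equivalent ionic conductance at infinite dilution. The conductance and equivalent-weight values below are approximate textbook figures and should be checked against a reference table; the adjusted-superposition corrections of the report are not included.

```python
# Estimate specific conductance (in μS/cm) by superposition: for each ion, convert
# mg/L to meq/L and multiply by the equivalent ionic conductance at infinite
# dilution (S cm^2 per equivalent). meq/L x S cm^2/eq gives μS/cm directly.
LAMBDA = {"Ca": 59.5, "Na": 50.1, "Cl": 76.3}          # approximate S cm^2 per equivalent
EQ_WEIGHT = {"Ca": 20.04, "Na": 22.99, "Cl": 35.45}    # approximate mg per meq

def superposition_sc(mg_per_l):
    """mg_per_l: dict of ion -> concentration in mg/L; returns estimated μS/cm."""
    sc = 0.0
    for ion, c in mg_per_l.items():
        meq = c / EQ_WEIGHT[ion]        # mg/L -> meq/L
        sc += meq * LAMBDA[ion]         # contribution of this ion to conductance
    return sc

# Example road-salt-dominated sample (hypothetical concentrations).
print(superposition_sc({"Na": 120.0, "Cl": 185.0, "Ca": 15.0}))
```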

  6. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    DOE PAGES

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; ...

    2015-02-23

Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s-1) and errors in the vertical velocity measurement exceed the actual vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Furthermore, measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.
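
    A minimal sketch of the standard DBS retrieval assumed by such instruments (not the CFD sampling used in the study): horizontal wind components are formed from opposing line-of-sight velocities under the homogeneity assumption, so a beam-to-beam difference, as in a wake, biases the result. The beam geometry and the synthetic wake deficit are illustrative assumptions.

```python
import numpy as np

def los_velocity(u, v, w, az_deg, el_deg):
    """Line-of-sight velocity of wind (u east, v north, w up) along a beam with
    azimuth az (deg clockwise from north) and elevation el (deg above horizon)."""
    az, el = np.deg2rad(az_deg), np.deg2rad(el_deg)
    return u * np.sin(az) * np.cos(el) + v * np.cos(az) * np.cos(el) + w * np.sin(el)

def dbs_retrieval(vr_n, vr_e, vr_s, vr_w, el_deg):
    """Standard DBS estimate assuming the same wind is seen by all four beams."""
    el = np.deg2rad(el_deg)
    u = (vr_e - vr_w) / (2.0 * np.cos(el))
    v = (vr_n - vr_s) / (2.0 * np.cos(el))
    return u, v

# Homogeneous flow: the retrieval is exact.
u0, v0, w0, el = 6.5, 0.0, 0.0, 62.0
vr = [los_velocity(u0, v0, w0, az, el) for az in (0.0, 90.0, 180.0, 270.0)]
print(dbs_retrieval(*vr, el))          # ~ (6.5, 0.0)

# Inhomogeneous flow (e.g. a wake): the east beam samples a reduced wind speed.
vr_wake = [los_velocity(u, v0, w0, az, el)
           for u, az in ((6.5, 0.0), (4.0, 90.0), (6.5, 180.0), (6.5, 270.0))]
print(dbs_retrieval(*vr_wake, el))     # u is biased by the beam-to-beam difference
```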

  7. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-01

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s-1) and errors in the vertical velocity measurement exceed the actual vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.

  8. 34 CFR 682.410 - Fiscal, administrative, and enforcement requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... accordance with applicable legal and accounting standards; (iii) The Secretary's equitable share of... any other errors in its accounting or reporting as soon as practicable after the errors become known... guaranty agency's agreements with the Secretary; and (C) Market prices of comparable goods or services. (b...

  9. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data are applied at different spatial resolutions, including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ groundwater and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to fewer errors at all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) across all cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS estimates for improved data assimilation results.
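
    Local analysis ultimately rests on down-weighting spurious long-range covariances. A minimal sketch of that idea, a Gaspari-Cohn taper applied elementwise (Schur product) to a spatially correlated error covariance, is given below; the grid, length scales, and covariance model are illustrative assumptions, and this is not the W3RA/SQRA implementation.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order compactly supported correlation function.
    r = distance / localization half-width; the taper is zero for r >= 2."""
    r = np.abs(r)
    taper = np.zeros_like(r)
    a = r <= 1.0
    b = (r > 1.0) & (r < 2.0)
    taper[a] = (-0.25 * r[a]**5 + 0.5 * r[a]**4 + 0.625 * r[a]**3
                - (5.0 / 3.0) * r[a]**2 + 1.0)
    taper[b] = ((1.0 / 12.0) * r[b]**5 - 0.5 * r[b]**4 + 0.625 * r[b]**3
                + (5.0 / 3.0) * r[b]**2 - 5.0 * r[b] + 4.0 - (2.0 / 3.0) / r[b])
    return taper

# Example: taper a fully correlated (GRACE-like) error covariance on a 1-D grid.
ngrid = 50
x = np.linspace(0.0, 49.0, ngrid)                     # grid coordinates, arbitrary units
dist = np.abs(x[:, None] - x[None, :])
full_cov = 25.0 * np.exp(-dist / 10.0)                # synthetic spatially correlated errors
localized_cov = full_cov * gaspari_cohn(dist / 8.0)   # Schur (elementwise) product
print(localized_cov[0, :5])
```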

  10. Estimating Relative Positions of Outer-Space Structures

    NASA Technical Reports Server (NTRS)

    Balian, Harry; Breckenridge, William; Brugarolas, Paul

    2009-01-01

    A computer program estimates the relative position and orientation of two structures from measurements, made by use of electronic cameras and laser range finders on one structure, of distances and angular positions of fiducial objects on the other structure. The program was written specifically for use in determining errors in the alignment of large structures deployed in outer space from a space shuttle. The program is based partly on equations for transformations among the various coordinate systems involved in the measurements and on equations that account for errors in the transformation operators. It computes a least-squares estimate of the relative position and orientation. Sequential least-squares estimates, acquired at a measurement rate of 4 Hz, are averaged by passing them through a fourth-order Butterworth filter. The program is executed in a computer aboard the space shuttle, and its position and orientation estimates are displayed to astronauts on a graphical user interface.
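
    The smoothing step described above (sequential 4 Hz estimates passed through a fourth-order Butterworth filter) can be sketched as follows; the cutoff frequency and the synthetic estimate stream are assumptions, and a causal filter is used as a stand-in for the onboard implementation.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 4.0                  # sequential least-squares estimates arrive at 4 Hz
cutoff_hz = 0.2           # assumed low-pass cutoff; not taken from the original program
b, a = butter(N=4, Wn=cutoff_hz / (fs / 2.0), btype="low")

# Synthetic stream of noisy relative-position estimates along one axis (metres).
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(0)
raw = 5.0 + 0.02 * t + 0.05 * rng.normal(size=t.size)

smoothed = lfilter(b, a, raw)   # causal fourth-order Butterworth averaging of the stream
print(raw[-3:], smoothed[-3:])
```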

  11. Two-Photon Laser-Induced Fluorescence O and N Atoms for the Study of Heterogeneous Catalysis in a Diffusion Reactor

    NASA Technical Reports Server (NTRS)

    Pallix, Joan B.; Copeland, Richard A.; Arnold, James O. (Technical Monitor)

    1995-01-01

    Advanced laser-based diagnostics have been developed to examine catalytic effects and atom/surface interactions on thermal protection materials. This study establishes the feasibility of using laser-induced fluorescence for detection of O and N atom loss in a diffusion tube to measure surface catalytic activity. The experimental apparatus is versatile in that it allows fluorescence detection to be used for measuring species selective recombination coefficients as well as diffusion tube and microwave discharge diagnostics. Many of the potential sources of error in measuring atom recombination coefficients by this method have been identified and taken into account. These include scattered light, detector saturation, sample surface cleanliness, reactor design, gas pressure and composition, and selectivity of the laser probe. Recombination coefficients and their associated errors are reported for N and O atoms on a quartz surface at room temperature.

  12. Effects of instrument imperfections on quantitative scanning transmission electron microscopy.

    PubMed

    Krause, Florian F; Schowalter, Marco; Grieb, Tim; Müller-Caspary, Knut; Mehrtens, Thorsten; Rosenauer, Andreas

    2016-02-01

Several instrumental imperfections of transmission electron microscopes are characterized and their effects on the results of quantitative scanning transmission electron microscopy (STEM) are investigated and quantified using simulations. Methods to either avoid influences of these imperfections during acquisition or to include them in reference calculations are proposed. Particularly, distortions inflicted on the diffraction pattern by an image-aberration corrector can cause severe errors of more than 20% if not accounted for. A procedure for their measurement is proposed here. Furthermore, afterglow phenomena and nonlinear behavior of the detector itself can lead to incorrect normalization of measured intensities. Single electrons accidentally impinging on the detector are another source of error but can also be exploited for threshold-less calibration of STEM images to absolute dose, incident beam current determination and measurement of the detector sensitivity. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Accounting for spatial variation of trabecular anisotropy with subject-specific finite element modeling moderately improves predictions of local subchondral bone stiffness at the proximal tibia.

    PubMed

    Nazemi, S Majid; Kalajahi, S Mehrdad Hosseini; Cooper, David M L; Kontulainen, Saija A; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D

    2017-07-05

Previously, a finite element (FE) model of the proximal tibia was developed and validated against experimentally measured local subchondral stiffness. This model offered modest predictions of stiffness (R² = 0.77, normalized root mean squared error (RMSE%) = 16.6%). Trabecular bone, though, was modeled with isotropic material properties despite its orthotropic anisotropy. The objective of this study was to identify the anisotropic FE modeling approach which best predicted (with largest explained variance and least amount of error) local subchondral bone stiffness at the proximal tibia. Local stiffness was measured at the subchondral surface of 13 medial/lateral tibial compartments using in situ macro indentation testing. An FE model of each specimen was generated assuming uniform anisotropy with 14 different combinations of cortical- and tibial-specific density-modulus relationships taken from the literature. Two FE models of each specimen were also generated which accounted for the spatial variation of trabecular bone anisotropy directly from clinical CT images using the grey-level structure tensor and Cowin's fabric-elasticity equations. Stiffness was calculated using FE and compared to measured stiffness in terms of R² and RMSE%. The uniform anisotropic FE model explained 53-74% of the measured stiffness variance, with RMSE% ranging from 12.4 to 245.3%. The models which accounted for spatial variation of trabecular bone anisotropy predicted 76-79% of the variance in stiffness with RMSE% of 11.2-11.5%. Of the 16 finite element models evaluated in this study, the combination of Snyder and Schneider (for cortical bone) and Cowin's fabric-elasticity equations (for trabecular bone) best predicted local subchondral bone stiffness. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Using direct numerical simulation to improve experimental measurements of inertial particle radial relative velocities

    NASA Astrophysics Data System (ADS)

    Ireland, Peter J.; Collins, Lance R.

    2012-11-01

Turbulence-induced collision of inertial particles may contribute to the rapid onset of precipitation in warm cumulus clouds. The particle collision frequency is determined from two parameters: the radial distribution function g(r) and the mean inward radial relative velocity wr. These quantities have been measured in three dimensions computationally, using direct numerical simulation (DNS), and experimentally, using digital holographic particle image velocimetry (DHPIV). While good quantitative agreement has been attained between computational and experimental measures of g(r) (Salazar et al. 2008), measures of wr have not reached that stage (de Jong et al. 2010). We apply DNS to mimic the experimental image analysis used in the relative velocity measurement. To account for experimental errors, we add noise to the particle positions and 'measure' the velocity from these positions. Our DNS shows that the experimental errors are inherent to the DHPIV setup, and so we explore an alternate approach, in which velocities are measured along thin two-dimensional planes using standard PIV. We show that this technique better recovers the correct radial relative velocity PDFs and suggest optimal parameter ranges for the experimental measurements.

  15. Correcting reaction rates measured by saturation-transfer magnetic resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Gabr, Refaat E.; Weiss, Robert G.; Bottomley, Paul A.

    2008-04-01

    Off-resonance or spillover irradiation and incomplete saturation can introduce significant errors in the estimates of chemical rate constants measured by saturation-transfer magnetic resonance spectroscopy (MRS). Existing methods of correction are effective only over a limited parameter range. Here, a general approach of numerically solving the Bloch-McConnell equations to calculate exchange rates, relaxation times and concentrations for the saturation-transfer experiment is investigated, but found to require more measurements and higher signal-to-noise ratios than in vivo studies can practically afford. As an alternative, correction formulae for the reaction rate are provided which account for the expected parameter ranges and limited measurements available in vivo. The correction term is a quadratic function of experimental measurements. In computer simulations, the new formulae showed negligible bias and reduced the maximum error in the rate constants by about 3-fold compared to traditional formulae, and the error scatter by about 4-fold, over a wide range of parameters for conventional saturation transfer employing progressive saturation, and for the four-angle saturation-transfer method applied to the creatine kinase (CK) reaction in the human heart at 1.5 T. In normal in vivo spectra affected by spillover, the correction increases the mean calculated forward CK reaction rate by 6-16% over traditional and prior correction formulae.

  16. Measurements of evaporated perfluorocarbon during partial liquid ventilation by a zeolite absorber.

    PubMed

    Proquitté, Hans; Rüdiger, Mario; Wauer, Roland R; Schmalisch, Gerd

    2004-01-01

During partial liquid ventilation (PLV), knowledge of the quantity of exhaled perfluorocarbon (PFC) allows continuous substitution of the PFC loss to achieve a constant PFC level in the lungs. The aim of our in vitro study was to determine the PFC loss in the mixed expired gas by an absorber and to investigate the effect of the evaporated PFC on ventilatory measurements. To simulate the PFC loss during PLV, a heated flask was rinsed with a constant airflow of 4 L min(-1) and PFC was infused at different speeds (5, 10, 20 mL h(-1)). An absorber filled with PFC-selective zeolites was connected to the flask to measure the PFC in the gas. The evaporated PFC volume and the PFC concentration were determined from the weight gain of the absorber measured by an electronic scale. The PFC-dependent volume error of the CO2SMO Plus neonatal pneumotachograph was measured by manual movements of a syringe with volumes of 10 and 28 mL at a rate of 30 min(-1). Under steady-state conditions there was a strong correlation (r² = 0.999) between the infusion speed of PFC and the calculated PFC flow rate. The PFC flow rate was slightly underestimated, by 4.3% (p < 0.01); however, this bias was independent of the PFC infusion rate. The evaporated PFC volume was precisely measured, with errors < 1%. The volume error of the CO2SMO Plus pneumotachograph increased with increasing PFC content for both tidal volumes (p < 0.01). However, for PFC flow rates up to 20 mL/h the error of the measured tidal volumes was < 5%. PFC-selective zeolites can be used to accurately quantify the evaporated PFC volume during PLV. With increasing PFC concentrations in the exhaled air, the measurement errors of ventilatory parameters have to be taken into account.

  17. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities on an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind-prediction errors up to 40 kts at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.

  18. Accounting for nonsampling error in estimates of HIV epidemic trends from antenatal clinic sentinel surveillance

    PubMed Central

    Eaton, Jeffrey W.; Bao, Le

    2017-01-01

Objectives The aim of the study was to propose and demonstrate an approach to allow for additional nonsampling uncertainty about HIV prevalence measured at antenatal clinic sentinel surveillance (ANC-SS) in model-based inferences about trends in HIV incidence and prevalence. Design Mathematical model fitted to surveillance data with Bayesian inference. Methods We introduce a variance inflation parameter σinfl² that accounts for the uncertainty of nonsampling errors in ANC-SS prevalence. It is additive to the sampling error variance. Three approaches are tested for estimating σinfl² using ANC-SS and household survey data from 40 subnational regions in nine countries in sub-Saharan Africa, as defined in UNAIDS 2016 estimates. Methods were compared using in-sample fit and out-of-sample prediction of ANC-SS data, fit to household survey prevalence data, and the computational implications. Results Introducing the additional variance parameter σinfl² increased the error variance around ANC-SS prevalence observations by a median of 2.7 times (interquartile range 1.9–3.8). Using only sampling error in ANC-SS prevalence (σinfl² = 0), coverage of 95% prediction intervals was 69% in out-of-sample prediction tests. This increased to 90% after introducing the additional variance parameter σinfl². The revised probabilistic model improved model fit to household survey prevalence and increased epidemic uncertainty intervals most during the early epidemic period before 2005. Estimating σinfl² did not increase the computational cost of model fitting. Conclusions We recommend estimating nonsampling error in ANC-SS as an additional parameter in Bayesian inference using the Estimation and Projection Package model. This approach may prove useful for incorporating other data sources such as routine prevalence from Prevention of mother-to-child transmission testing into future epidemic estimates. PMID:28296801
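
    A minimal sketch of the additive variance-inflation idea: the likelihood of an observed ANC-SS prevalence uses the binomial sampling variance plus σinfl². The Gaussian form on the prevalence scale and the numerical values are simplifying assumptions; the Estimation and Projection Package works with transformed prevalence inside a full Bayesian model.

```python
import numpy as np
from scipy import stats

def anc_loglik(p_obs, p_model, n, sigma2_infl):
    """Log-likelihood of an observed ANC prevalence given the modelled prevalence,
    with the sampling variance p(1-p)/n inflated by an additive nonsampling term."""
    var = p_model * (1.0 - p_model) / n + sigma2_infl
    return stats.norm.logpdf(p_obs, loc=p_model, scale=np.sqrt(var))

p_obs, p_model, n = 0.18, 0.15, 400
print(anc_loglik(p_obs, p_model, n, sigma2_infl=0.0))     # sampling error only
print(anc_loglik(p_obs, p_model, n, sigma2_infl=0.001))   # inflated error variance
```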

  19. Sensitivity of regression calibration to non-perfect validation data with application to the Norwegian Women and Cancer Study.

    PubMed

    Buonaccorsi, John P; Dalen, Ingvild; Laake, Petter; Hjartåker, Anette; Engeset, Dagrun; Thoresen, Magne

    2015-04-15

Measurement error occurs when we observe error-prone surrogates rather than true values. It is common in observational studies, especially in epidemiology, and in nutritional epidemiology in particular. Correcting for measurement error has become common, and regression calibration is the most popular way to account for measurement error in continuous covariates. We consider its use in the context where there are validation data, which are used to calibrate the true values given the observed covariates. We allow for the case that the true value itself may not be observed in the validation data, but instead a so-called reference measure is observed. The regression calibration method relies on certain assumptions. This paper examines possible biases in regression calibration estimators when some of these assumptions are violated. More specifically, we allow for the fact that (i) the reference measure may not necessarily be an 'alloyed gold standard' (i.e., unbiased) for the true value; (ii) there may be correlated random subject effects contributing to the surrogate and reference measures in the validation data; and (iii) the calibration model itself may not be the same in the validation study as in the main study; that is, it is not transportable. We expand on previous work to provide a general result, which characterizes potential bias in the regression calibration estimators as a result of any combination of the aforementioned violations. We then illustrate some of the general results with data from the Norwegian Women and Cancer Study. Copyright © 2015 John Wiley & Sons, Ltd.
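
    A minimal sketch of standard regression calibration with external validation data, under the textbook assumptions (classical error, a linear calibration model, transportability) whose violation the paper studies. The data and coefficient values are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Main study: outcome y depends on true exposure x, but only a noisy surrogate w is seen.
n_main = 2000
x = rng.normal(size=n_main)
w = x + rng.normal(scale=0.8, size=n_main)           # surrogate with classical error
y = 2.0 * x + rng.normal(size=n_main)

# Validation study: both the surrogate and a reference measure r are observed.
n_val = 300
xv = rng.normal(size=n_val)
wv = xv + rng.normal(scale=0.8, size=n_val)
rv = xv + rng.normal(scale=0.2, size=n_val)          # reference measure (near gold standard)

# Step 1: estimate the calibration model E[X | W] from the validation data.
calib = LinearRegression().fit(wv[:, None], rv)
# Step 2: replace the surrogate with its calibrated value in the main-study regression.
x_hat = calib.predict(w[:, None])
naive = LinearRegression().fit(w[:, None], y).coef_[0]
corrected = LinearRegression().fit(x_hat[:, None], y).coef_[0]
print(naive, corrected)   # the naive slope is attenuated; the corrected slope is near 2
```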

  20. The Effects of Rainfall Inhomogeneity on Climate Variability of Rainfall Estimated from Passive Microwave Sensors

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Poyner, Philip; Berg, Wesley; Thomas-Stahle, Jody

    2007-01-01

Passive microwave rainfall estimates that exploit the emission signal of raindrops in the atmosphere are sensitive to the inhomogeneity of rainfall within the satellite field of view (FOV). In particular, the concave nature of the brightness temperature (T(sub b)) versus rainfall relations at frequencies capable of detecting the blackbody emission of raindrops causes retrieval algorithms to systematically underestimate precipitation unless the rainfall is homogeneous within a radiometer FOV, or the inhomogeneity is accounted for explicitly. This problem has a long history in the passive microwave community and has been termed the beam-filling error. While not a true error, correcting for it requires a priori knowledge about the actual distribution of the rainfall within the satellite FOV, or at least a statistical representation of this inhomogeneity. This study first examines the magnitude of this beam-filling correction when slant-path radiative transfer calculations are used to account for the oblique incidence of current radiometers. Because of the horizontal averaging that occurs away from the nadir direction, the beam-filling error is found to be only a fraction of what has been reported previously in the literature based upon plane-parallel calculations. For an FOV representative of the 19-GHz radiometer channel (18 km × 28 km) aboard the Tropical Rainfall Measuring Mission (TRMM), the mean beam-filling correction computed in this study for tropical atmospheres is 1.26 instead of 1.52 computed from plane-parallel techniques. The slant-path solution is also less sensitive to finescale rainfall inhomogeneity and is, thus, able to make use of 4-km radar data from the TRMM Precipitation Radar (PR) in order to map regional and seasonal distributions of observed rainfall inhomogeneity in the Tropics. The data are examined to assess the expected errors introduced into climate rainfall records by unresolved changes in rainfall inhomogeneity. Results show that global mean monthly errors introduced by not explicitly accounting for rainfall inhomogeneity do not exceed 0.5% if the beam-filling error is allowed to be a function of rainfall rate and freezing level, and do not exceed 2% if a universal beam-filling correction is applied that depends only upon the freezing level. Monthly regional errors can be significantly larger. Over the Indian Ocean, errors as large as 8% were found if the beam-filling correction is allowed to vary with rainfall rate and freezing level, while errors of 15% were found if a universal correction is used.
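
    The beam-filling effect follows from the concavity of the brightness-temperature versus rainfall relation: averaging brightness temperature over an inhomogeneous FOV and then inverting yields less rain than the true FOV mean. A minimal numerical illustration follows; the saturating Tb(R) curve and the lognormal rain distribution are schematic assumptions, not a radiative-transfer calculation.

```python
import numpy as np

def tb_of_rain(r):
    """Schematic concave brightness-temperature curve (K) versus rain rate (mm/h)."""
    return 150.0 + 130.0 * (1.0 - np.exp(-r / 8.0))

def rain_of_tb(tb):
    """Inverse of the schematic curve, as a homogeneous-FOV retrieval would apply it."""
    return -8.0 * np.log(1.0 - (tb - 150.0) / 130.0)

rng = np.random.default_rng(0)
# Inhomogeneous rain within one field of view (lognormal with mean ~5 mm/h).
r_fov = rng.lognormal(mean=np.log(5.0) - 0.5, sigma=1.0, size=10_000)
tb_fov = tb_of_rain(r_fov).mean()     # FOV-average brightness temperature
r_retrieved = rain_of_tb(tb_fov)      # retrieved rain rate assuming a homogeneous FOV
# The ratio of the true FOV mean to the retrieved value is the beam-filling correction.
print(r_fov.mean(), r_retrieved, r_fov.mean() / r_retrieved)
```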

  1. [Comparative quality measurements part 3: funnel plots].

    PubMed

    Kottner, Jan; Lahmann, Nils

    2014-02-01

Comparative quality measurements between organisations or institutions are common. Quality measures need to be standardised and risk adjusted, and random error must also be taken adequately into account. Rankings that do not consider precision lead to flawed interpretations and encourage "gaming". Applying confidence intervals is one way to take chance variation into account. Funnel plots are modified control charts based on Statistical Process Control (SPC) theory: the quality measures are plotted against their sample size, and warning and control limits at 2 or 3 standard deviations from the center line are added. With increasing group size the precision increases, so the control limits form a funnel. Data points within the control limits are considered to show common cause variation; data points outside them, special cause variation. The focus is thereby shifted away from spurious rankings. Funnel plots offer data-based information for evaluating institutional performance within quality management contexts.
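
    A minimal sketch of funnel-plot limits for a proportion-type quality indicator, with warning and control limits at 2 and 3 standard errors around an assumed overall proportion; the target value and institution data are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

p0 = 0.12                                    # assumed overall (target) event proportion
n = np.arange(20, 1001)                      # institution sample sizes
se = np.sqrt(p0 * (1.0 - p0) / n)            # binomial standard error

fig, ax = plt.subplots()
for z, style in ((2.0, "--"), (3.0, "-")):   # warning (2 SD) and control (3 SD) limits
    ax.plot(n, p0 + z * se, "k" + style)
    ax.plot(n, np.maximum(p0 - z * se, 0.0), "k" + style)
ax.axhline(p0, color="grey")                 # center line (overall proportion)

# Hypothetical institutions: observed proportion plotted against sample size.
ax.plot([40, 120, 300, 650], [0.25, 0.10, 0.16, 0.11], "o")
ax.set_xlabel("Number of cases (n)")
ax.set_ylabel("Observed proportion")
plt.show()
```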

  2. Stability of continuous-time quantum filters with measurement imperfections

    NASA Astrophysics Data System (ADS)

    Amini, H.; Pellegrini, C.; Rouchon, P.

    2014-07-01

    The fidelity between the state of a continuously observed quantum system and the state of its associated quantum filter, is shown to be always a submartingale. The observed system is assumed to be governed by a continuous-time Stochastic Master Equation (SME), driven simultaneously by Wiener and Poisson processes and that takes into account incompleteness and errors in measurements. This stability result is the continuous-time counterpart of a similar stability result already established for discrete-time quantum systems and where the measurement imperfections are modelled by a left stochastic matrix.

  3. Influence of video compression on the measurement error of the television system

    NASA Astrophysics Data System (ADS)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

Video data require a very large memory capacity, and the need to transfer large volumes of video over various networks makes the trade-off between encoding quality and data volume a pressing problem. Digital compression of the television signal reduces the amount of data needed to represent a video stream for transmission and storage, but when television systems are used for measurement, the uncertainties introduced by compressing the video signal must be taken into account. The aim of this work is to study the influence of video compression on the measurement error of television systems. The measurement error of an object parameter is the main characteristic of a television measuring system; accuracy characterizes the difference between the measured value and the actual parameter value. Both the optical system and the method used to process the received video signal are sources of error. When compression is performed at a constant data-stream rate, these errors lead to large distortions; when constant quality is maintained, they increase the amount of data required to transmit or record an image frame. Intra-frame coding reduces the spatial redundancy within a frame (or field) of the television image, which arises from the strong correlation between neighbouring image elements. With a suitable orthogonal transformation, an array of image samples can be converted into a matrix of nearly uncorrelated coefficients; entropy coding of these coefficients reduces the digital stream, and for typical images most coefficients are close to zero and can be discarded, reducing the stream further. The discrete cosine transform is the most widely used such transformation. This paper analyses the errors of television measuring systems and of data compression protocols, identifies the main system characteristics and sources of error, determines the most effective compression methods, and investigates the influence of compression error on television measuring systems; the results should improve the accuracy of such systems. Distortions associated with encoding and decoding include quantization noise, reduced resolution, blocking (mosaic) and "mosquito" artefacts, edging at sharp brightness transitions, colour blur, false patterns, and the "dirty window" effect, in addition to the distortions familiar from analogue systems and to errors in the transmission channel. Because the compression algorithms used in television measuring systems rely on intra- and inter-frame prediction of individual image fragments, the encoding/decoding process is nonlinear in space and time: the reproduced quality of a given frame depends on the preceding and succeeding frames, which can distort a sub-picture and the corresponding measuring signal in ways that do not faithfully reflect the original scene.
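
    A minimal illustration of the intra-frame coding step described above: a block of image samples is decorrelated with a discrete cosine transform, small coefficients are discarded, and the reconstruction error that a television measuring system would inherit is computed. The block size, threshold, and synthetic data are assumptions; real codecs quantize coefficients rather than simply zeroing them.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)
block = rng.normal(loc=128.0, scale=20.0, size=(8, 8))   # synthetic 8x8 luminance block

coeffs = dctn(block, norm="ortho")                       # decorrelating transform
mask = np.abs(coeffs) >= 10.0                            # keep only large coefficients
compressed = idctn(coeffs * mask, norm="ortho")          # reconstruct the block

kept = mask.sum() / mask.size
rmse = np.sqrt(np.mean((block - compressed) ** 2))
print(f"coefficients kept: {kept:.0%}, reconstruction RMSE: {rmse:.2f} grey levels")
```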

  4. Oak soil-site relationships in northwestern West Virginia

    Treesearch

    L.R. Auchmoody; H. Clay Smith

    1979-01-01

    An oak soil-site productivity equation was developed for the well-drained, upland soils in the northwestern portion of West Virginia adjacent to the Ohio River. The equation uses five easily measured soil and topographic variables and average precipitation to predict site index. It accounts for 69 percent of the variation in oak site index and has a standard error of 4...

  5. Observations of large parallel electric fields in the auroral ionosphere

    NASA Technical Reports Server (NTRS)

    Mozer, F. S.

    1976-01-01

    Rocket borne measurements employing a double probe technique were used to gather evidence for the existence of electric fields in the auroral ionosphere having components parallel to the magnetic field direction. An analysis of possible experimental errors leads to the conclusion that no known uncertainties can account for the roughly 10 mV/m parallel electric fields that are observed.

  6. Using Generalizability Theory to Examine Sources of Variance in Observed Behaviors within High School Classrooms

    ERIC Educational Resources Information Center

    Abry, Tashia; Cash, Anne H.; Bradshaw, Catherine P.

    2014-01-01

    Generalizability theory (GT) offers a useful framework for estimating the reliability of a measure while accounting for multiple sources of error variance. The purpose of this study was to use GT to examine multiple sources of variance in and the reliability of school-level teacher and high school student behaviors as observed using the tool,…

  7. Kalman Filter for Spinning Spacecraft Attitude Estimation

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Sedlak, Joseph E.

    2008-01-01

    This paper presents a Kalman filter using a seven-component attitude state vector comprising the angular momentum components in an inertial reference frame, the angular momentum components in the body frame, and a rotation angle. The relatively slow variation of these parameters makes this parameterization advantageous for spinning spacecraft attitude estimation. The filter accounts for the constraint that the magnitude of the angular momentum vector is the same in the inertial and body frames by employing a reduced six-component error state. Four variants of the filter, defined by different choices for the reduced error state, are tested against a quaternion-based filter using simulated data for the THEMIS mission. Three of these variants choose three of the components of the error state to be the infinitesimal attitude error angles, facilitating the computation of measurement sensitivity matrices and causing the usual 3x3 attitude covariance matrix to be a submatrix of the 6x6 covariance of the error state. These variants differ in their choice for the other three components of the error state. The variant employing the infinitesimal attitude error angles and the angular momentum components in an inertial reference frame as the error state shows the best combination of robustness and efficiency in the simulations. Attitude estimation results using THEMIS flight data are also presented.

  8. A model-independent comparison of the rates of uptake and short term retention of 47Ca and 85Sr by the skeleton.

    PubMed

    Reeve, J; Hesp, R

    1976-12-22

1. A method has been devised for comparing the impulse response functions of the skeleton for two or more bone-seeking tracers, and for estimating the contribution made by measurement errors to the differences between any pair of impulse response functions. 2. Comparisons were made between the calculated impulse response functions for 47Ca and 85Sr obtained in simultaneous double tracer studies in sixteen subjects. Collectively the differences between the 47Ca and 85Sr functions could be accounted for entirely by measurement errors. 3. Because the calculation of an impulse response function requires fewer a priori assumptions than other forms of mathematical analysis, and automatically corrects for differences induced by recycling of tracer and non-identical rates of excretory plasma clearance of tracer, it is concluded that differences shown in previous in vivo studies between the fluxes of Ca and Sr into bone can be fully accounted for by undetermined oversimplifications in the various mathematical models used to analyse the results of those studies. 85Sr is therefore an adequate tracer for bone calcium in most in vivo studies.

  9. Error function attack of chaos synchronization based encryption schemes.

    PubMed

    Wang, Xingang; Zhan, Meng; Lai, C-H; Gang, Hu

    2004-03-01

    Different chaos synchronization based encryption schemes are reviewed and compared from the practical point of view. As an efficient cryptanalysis tool for chaos encryption, a proposal based on the error function attack is presented systematically and used to evaluate system security. We define a quantitative measure (quality factor) of the effective applicability of a chaos encryption scheme, which takes into account the security, the encryption speed, and the robustness against channel noise. A comparison is made of several encryption schemes and it is found that a scheme based on one-way coupled chaotic map lattices performs outstandingly well, as judged from quality factor. Copyright 2004 American Institute of Physics.

  10. Predicting Error Bars for QSAR Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeter, Timon; Technische Universitaet Berlin, Department of Computer Science, Franklinstrasse 28/29, 10587 Berlin; Schwaighofer, Anton

    2007-09-18

Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the last months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance-based techniques for the other modelling approaches.
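
    A minimal sketch of per-compound error bars from Gaussian Process regression, of the kind discussed above, using scikit-learn; the one-dimensional synthetic data and kernel stand in for real molecular descriptors and log D7 measurements.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(80, 1))                 # stand-in descriptors
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=80)     # stand-in log D7 values

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.array([[0.5], [2.5], [4.0]])                  # last point lies outside the training range
mean, std = gp.predict(X_new, return_std=True)           # prediction and per-compound error bar
for m, s in zip(mean, std):
    print(f"prediction {m:+.2f} +/- {2.0 * s:.2f}")      # roughly a 95% error bar
```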

  11. Modeling longitudinal data, I: principles of multivariate analysis.

    PubMed

    Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick

    2009-01-01

    Statistical models are used to study the relationship between exposure and disease while accounting for the potential role of other factors' impact on outcomes. This adjustment is useful to obtain unbiased estimates of true effects or to predict future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error element of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).

  12. Treatment mechanism in the MRC preschool autism communication trial: implications for study design and parent-focussed therapy for children.

    PubMed

    Pickles, Andrew; Harris, Victoria; Green, Jonathan; Aldred, Catherine; McConachie, Helen; Slonims, Vicky; Le Couteur, Ann; Hudry, Kristelle; Charman, Tony

    2015-02-01

The PACT randomised-controlled trial evaluated a parent-mediated communication-focused treatment for children with autism, intended to reduce symptom severity as measured by a modified Autism Diagnostic Observation Schedule-Generic (ADOS-G) algorithm score. The therapy targeted parental behaviour, with no direct interaction between therapist and child. While nonsignificant group differences were found on ADOS-G score, significant group differences were found for both parent and child intermediate outcomes. This study aimed to better understand the mechanism by which the PACT treatment influenced changes in child behaviour through the targeted parent behaviour. Mediation analysis was used to assess the direct and indirect effects of treatment via parent behaviour on child behaviour and via child behaviour on ADOS-G score. Alternative mediation was explored to study whether the treatment effect acted as hypothesised or via another plausible pathway. Mediation models typically assume no unobserved confounding between mediator and outcome and no measurement error in the mediator. We show how to better exploit the information often available within a trial to begin to address these issues, examining scope for instrumental variable and measurement error models. Estimates of mediation changed substantially when account was taken of the confounder effects of the baseline value of the mediator and of measurement error. Our best estimates that accounted for both suggested that the treatment effect on the ADOS-G score was very substantially mediated by parent synchrony and child initiations. The results highlighted the value of repeated measurement of mediators during trials. The theoretical model underlying the PACT treatment was supported. However, the substantial fall-off in treatment effect highlighted both the need for additional data and for additional target behaviours for therapy. © 2014 The Authors. Journal of Child Psychology and Psychiatry. © 2014 Association for Child and Adolescent Mental Health.

  13. Numerical prediction of a draft tube flow taking into account uncertain inlet conditions

    NASA Astrophysics Data System (ADS)

    Brugiere, O.; Balarac, G.; Corre, C.; Metais, O.; Flores, E.; Pleroy

    2012-11-01

The swirling turbulent flow in a hydroturbine draft tube is computed with a non-intrusive uncertainty quantification (UQ) method coupled to Reynolds-Averaged Navier-Stokes (RANS) modelling, in order to take into account in the numerical prediction the physical uncertainties in the inlet flow conditions. The proposed approach yields not only mean velocity fields to be compared with measured profiles, as is customary in Computational Fluid Dynamics (CFD) practice, but also the variance of these quantities, from which error bars can be deduced on the computed profiles, thus making the comparison between experiment and computation more meaningful.

  14. Robust cubature Kalman filter for GNSS/INS with missing observations and colored measurement noise.

    PubMed

    Cui, Bingbo; Chen, Xiyuan; Tang, Xihua; Huang, Haoqian; Liu, Xiao

    2018-01-01

In order to improve the accuracy of GNSS/INS working in a GNSS-denied environment, a robust cubature Kalman filter (RCKF) is developed by considering colored measurement noise and missing observations. First, an improved cubature Kalman filter (CKF) is derived by considering colored measurement noise, where the time-differencing approach is applied to yield new observations. Then, after analyzing the disadvantages of existing methods, the measurement augmentation used in processing colored noise is translated into processing the uncertainties of the CKF, and a new sigma-point update framework is utilized to account for the bounded model uncertainties. By reusing the diffused sigma points and the approximation residual in the prediction stage of the CKF, the RCKF is developed and its error performance is analyzed theoretically. Results of numerical experiments and a field test reveal that the RCKF is more robust than the CKF and the extended Kalman filter (EKF), and compared with the EKF, the heading error of a land vehicle is reduced by about 72.4%. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Multi-Method Assessment of ADHD Characteristics in Preschool Children: Relations between Measures

    PubMed Central

    Sims, Darcey M.; Lonigan, Christopher J.

    2011-01-01

    Several forms of assessment tools, including behavioral rating scales and objective tests such as the Continuous Performance Test (CPT), can be used to measure inattentive and hyperactive/impulsive behaviors associated with Attention-Deficit/Hyperactivity Disorder (ADHD). However, research with school-age children has shown that the correlations between parent ratings, teacher ratings, and scores on objective measures of ADHD-characteristic behaviors are modest at best. In this study, we examined the relations between parent and teacher ratings of ADHD and CPT scores in a sample of 65 preschoolers ranging from 50 to 72 months of age. No significant associations between teacher and parent ratings of ADHD were found. Parent-ratings of both inattention and hyperactivity/impulsivity accounted for variance in CPT omission errors but not CPT commission errors. Teacher ratings showed evidence of convergent and discriminant validity when entered simultaneously in a hierarchical regression. These tools may be measuring different aspects of inattention and hyperactivity/impulsivity. PMID:22518069

  16. Evaluation of acidity estimation methods for mine drainage, Pennsylvania, USA.

    PubMed

    Park, Daeryong; Park, Byungtae; Mendinsky, Justin J; Paksuchon, Benjaphon; Suhataikul, Ratda; Dempsey, Brian A; Cho, Yunchul

    2015-01-01

Eighteen sites impacted by abandoned mine drainage (AMD) in Pennsylvania were sampled and measured for pH, acidity, alkalinity, metal ions, and sulfate. This study compared the accuracy of four acidity calculation methods with measured hot peroxide acidity and identified the most accurate calculation method for each site as a function of pH and sulfate concentration. Method E1 was the sum of proton acidity and acidity based on total metal concentrations; method E2 added alkalinity; method E3 also accounted for aluminum speciation and temperature effects; and method E4 accounted for sulfate speciation. To evaluate errors between measured and predicted acidity, the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²), and the root mean square error to standard deviation ratio (RSR) methods were applied. The error evaluation results show that methods E1, E2, E3, and E4 were most accurate at 0, 9, 4, and 5 of the sites, respectively. Sites where E2 was most accurate had pH greater than 4.0 and less than 400 mg/L of sulfate. Sites where E3 was most accurate had pH greater than 4.0 and sulfate greater than 400 mg/L, with two exceptions. Sites where E4 was most accurate had pH less than 4.0 and more than 400 mg/L sulfate, with one exception. The results indicate that acidity in AMD-affected streams can be accurately predicted by using pH, alkalinity, sulfate, Fe(II), Mn(II), and Al(III) concentrations in one or more of the identified equations, and that the appropriate equation for prediction can be selected based on pH and sulfate concentration.
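
    The error statistics used in the comparison (NSE, R², and RSR) can be computed as below; the measured and predicted acidity values are arbitrary placeholders.

```python
import numpy as np

def error_stats(measured, predicted):
    """Nash-Sutcliffe efficiency, coefficient of determination, and RMSE/SD ratio."""
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    resid = measured - predicted
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    nse = 1.0 - ss_res / ss_tot                      # Nash-Sutcliffe efficiency
    r = np.corrcoef(measured, predicted)[0, 1]
    rmse = np.sqrt(np.mean(resid ** 2))
    rsr = rmse / measured.std(ddof=0)                # RMSE to standard deviation ratio
    return {"NSE": nse, "R2": r ** 2, "RSR": rsr}

measured = [120.0, 85.0, 40.0, 210.0, 15.0, 95.0]    # hot peroxide acidity (placeholder values)
predicted = [110.0, 90.0, 55.0, 190.0, 20.0, 100.0]  # one of the calculation methods
print(error_stats(measured, predicted))
```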

  17. A framework for simulating map error in ecosystem models

    Treesearch

    Sean P. Healey; Shawn P. Urbanski; Paul L. Patterson; Chris Garrard

    2014-01-01

    The temporal depth and spatial breadth of observations from platforms such as Landsat provide unique perspective on ecosystem dynamics, but the integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential map errors in broader...
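
    A minimal sketch of the Monte Carlo idea: map classification error is simulated repeatedly and propagated to a summary quantity (here, forest area), giving an uncertainty attributable to map error. The accuracy figures and the binary map are hypothetical, and this is not the authors' framework.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pixels = 100_000
mapped_forest = rng.random(n_pixels) < 0.35        # hypothetical forest / non-forest map

# Hypothetical probabilities that a mapped label is correct (per-class accuracy).
acc_forest, acc_nonforest = 0.90, 0.95

areas = []
for _ in range(500):                               # Monte Carlo map realisations
    u = rng.random(n_pixels)
    # A pixel is forest in this realisation if it was mapped forest and the label is
    # correct, or mapped non-forest and the label is wrong.
    forest = np.where(mapped_forest, u < acc_forest, u >= acc_nonforest)
    areas.append(forest.sum())

areas = np.asarray(areas)
print(areas.mean(), areas.std())                   # forest pixels and map-error spread
```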

  18. Estimating Effects of Multipath Propagation on GPS Signals

    NASA Technical Reports Server (NTRS)

    Byun, Sung; Hajj, George; Young, Lawrence

    2005-01-01

    Multipath Simulator Taking into Account Reflection and Diffraction (MUSTARD) is a computer program that simulates effects of multipath propagation on received Global Positioning System (GPS) signals. MUSTARD is a very efficient means of estimating multipath-induced position and phase errors as functions of time, given the positions and orientations of GPS satellites, the GPS receiver, and any structures near the receiver as functions of time. MUSTARD traces each signal from a GPS satellite to the receiver, accounting for all possible paths the signal can take, including all paths that include reflection and/or diffraction from surfaces of structures near the receiver and on the satellite. Reflection and diffraction are modeled by use of the geometrical theory of diffraction. The multipath signals are added to the direct signal after accounting for the gain of the receiving antenna. Then, in a simulation of a delay-lock tracking loop in the receiver, the multipath-induced range and phase errors as measured by the receiver are estimated. All of these computations are performed for both right circular polarization and left circular polarization of both the L1 (1.57542-GHz) and L2 (1.2276-GHz) GPS signals.

  19. Statistical approaches to account for false-positive errors in environmental DNA samples.

    PubMed

    Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid

    2016-05-01

    Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. © 2015 John Wiley & Sons Ltd.
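
    A minimal sketch of a site-occupancy model that allows false-positive detections, fitted by maximum likelihood: each site contributes a two-component mixture over occupied and unoccupied states with separate true-positive and false-positive detection probabilities. The simulated data, parameter values, and the unconstrained parameterisation are assumptions; in practice one constrains the false-positive rate to be smaller than the true-positive rate or uses the prior information and ancillary-data designs described above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def neg_loglik(params, y, K):
    """y: detections per site out of K replicates. params (on the logit scale):
    occupancy psi, true-positive detection p11, false-positive detection p10."""
    psi, p11, p10 = 1.0 / (1.0 + np.exp(-np.asarray(params)))
    lik = psi * binom.pmf(y, K, p11) + (1.0 - psi) * binom.pmf(y, K, p10)
    return -np.sum(np.log(lik + 1e-300))

rng = np.random.default_rng(1)
n_sites, K = 200, 6
z = rng.random(n_sites) < 0.4                  # true occupancy state (psi = 0.4)
p = np.where(z, 0.7, 0.03)                     # detection prob.: occupied vs false positive
y = rng.binomial(K, p)                         # detections per site

fit = minimize(neg_loglik, x0=np.zeros(3), args=(y, K), method="Nelder-Mead")
print(1.0 / (1.0 + np.exp(-fit.x)))            # estimated psi, p11, p10
```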

  20. Metainference: A Bayesian inference method for heterogeneous systems

    PubMed Central

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300

  1. Improved tests of extra-dimensional physics and thermal quantum field theory from new Casimir force measurements

    NASA Astrophysics Data System (ADS)

    Decca, R. S.; Fischbach, E.; Klimchitskaya, G. L.; Krause, D. E.; López, D.; Mostepanenko, V. M.

    2003-12-01

    We report new constraints on extra-dimensional models and other physics beyond the standard model based on measurements of the Casimir force between two dissimilar metals for separations in the range 0.2–1.2 μm. The Casimir force between a Au-coated sphere and a Cu-coated plate of a microelectromechanical torsional oscillator was measured statically with an absolute error of 0.3 pN. In addition, the Casimir pressure between two parallel plates was determined dynamically with an absolute error of ≈0.6 mPa. Within the limits of experimental and theoretical errors, the results are in agreement with a theory that takes into account the finite conductivity and roughness of the two metals. The level of agreement between experiment and theory was then used to set limits on the predictions of extra-dimensional physics and thermal quantum field theory. It is shown that two theoretical approaches to the thermal Casimir force which predict effects linear in temperature are ruled out by these experiments. Finally, constraints on Yukawa corrections to Newton’s law of gravity are strengthened by more than an order of magnitude in the range 56–330 nm.

  2. Noncalcified Lung Nodules: Volumetric Assessment with Thoracic CT

    PubMed Central

    Gavrielides, Marios A.; Kinnard, Lisa M.; Myers, Kyle J.; Petrick, Nicholas

    2009-01-01

    Lung nodule volumetry is used for nodule diagnosis, as well as for monitoring tumor response to therapy. Volume measurement precision and accuracy depend on a number of factors, including image-acquisition and reconstruction parameters, nodule characteristics, and the performance of algorithms for nodule segmentation and volume estimation. The purpose of this article is to provide a review of published studies relevant to the computed tomographic (CT) volumetric analysis of lung nodules. A number of underexamined areas of research regarding volumetric accuracy are identified, including the measurement of nonsolid nodules, the effects of pitch and section overlap, and the effect of respiratory motion. The need for public databases of phantom scans, as well as of clinical data, is discussed. The review points to the need for continued research to examine volumetric accuracy as a function of a multitude of interrelated variables involved in the assessment of lung nodules. Understanding and quantifying the sources of volumetric measurement error in the assessment of lung nodules with CT would be a first step toward the development of methods to minimize that error through system improvements and to correctly account for any remaining error. © RSNA, 2009 PMID:19332844

  3. Number-counts slope estimation in the presence of Poisson noise

    NASA Technical Reports Server (NTRS)

    Schmitt, Juergen H. M. M.; Maccacaro, Tommaso

    1986-01-01

    The slope determination of a power-law number-flux relationship is considered for the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with signal-to-noise ratio five or greater are considered. However, if sources with signal-to-noise ratio five or less are included, the derived bias corrections depend sensitively on the shape of the error distribution.

  4. The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2016-01-01

    Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.

  5. What is the acceptable hemolysis index for the measurements of plasma potassium, LDH and AST?

    PubMed

    Rousseau, Nathalie; Pige, Raphaëlle; Cohen, Richard; Pecquet, Matthieu

    2016-06-01

    Hemolysis is a cause of variability in test results for plasma potassium, LDH and AST and is a non-negligible part of measurement uncertainty. However, allowable levels of hemolysis provided by reagent suppliers take neither analytical variability (trueness and precision) nor the measurand into account. Using a calibration range of hemolysis, we measured the plasma concentrations of potassium, LDH and AST, and hemolysis indices with a Cobas C501 analyzer (Roche Diagnostics®, Meylan, France). Based on the allowable total error (according to Ricós et al.) and the expanded measurement uncertainty equation we calculated the maximum allowable bias for two concentrations of each measurand. Finally, we determined the allowable hemolysis indices for all three measurands. We observed a linear relationship between the observed increases of concentration and hemolysis indices. The LDH measurement was the most sensitive to hemolysis, followed by AST and potassium measurements. The determination of the allowable hemolysis index depends on the targeted measurand, its concentration and the chosen level of requirement of allowable total error.
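
    A hedged sketch of the kind of arithmetic described above: an allowable bias is budgeted from the allowable total error and the expanded measurement uncertainty, and an (assumed linear) hemolysis calibration slope converts that bias into an allowable hemolysis index. The subtraction used for the bias budget and all numerical values are illustrative assumptions, not the paper's equations or data.

        # Derive an allowable hemolysis index (HI) from an allowable total error (TEa)
        # budget and a linear HI-vs-concentration-increase relationship (all values assumed).

        def allowable_hi(tea_pct, expanded_uncertainty_pct, slope_pct_per_hi):
            """
            tea_pct: allowable total error (% of the measurand concentration)
            expanded_uncertainty_pct: expanded measurement uncertainty (%)
            slope_pct_per_hi: % concentration increase per HI unit, from a
                              hemolysis calibration series (assumed linear)
            """
            allowable_bias_pct = max(tea_pct - expanded_uncertainty_pct, 0.0)
            return allowable_bias_pct / slope_pct_per_hi

        # Example with potassium-like but purely illustrative figures:
        # TEa = 5.6 %, expanded uncertainty = 2.0 %, 0.04 % increase per HI unit.
        print(allowable_hi(5.6, 2.0, 0.04))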

  6. Effect of the mandible on mouthguard measurements of head kinematics.

    PubMed

    Kuo, Calvin; Wu, Lyndia C; Hammoor, Brad T; Luck, Jason F; Cutcliffe, Hattie C; Lynall, Robert C; Kait, Jason R; Campbell, Kody R; Mihalik, Jason P; Bass, Cameron R; Camarillo, David B

    2016-06-14

    Wearable sensors are becoming increasingly popular for measuring head motions and detecting head impacts. Many sensors are worn on the skin or in headgear and can suffer from motion artifacts introduced by the compliance of soft tissue or decoupling of headgear from the skull. The instrumented mouthguard is designed to couple directly to the upper dentition, which is made of hard enamel and anchored in a bony socket by stiff ligaments. This gives the mouthguard superior coupling to the skull compared with other systems. However, multiple validation studies have yielded conflicting results with respect to the mouthguard's head kinematics measurement accuracy. Here, we demonstrate that imposing different constraints on the mandible (lower jaw) can alter mouthguard kinematic accuracy in dummy headform testing. In addition, post mortem human surrogate tests utilizing the worst-case unconstrained mandible condition yield 40% and 80% normalized root mean square error in angular velocity and angular acceleration respectively. These errors can be modeled using a simple spring-mass system in which the soft mouthguard material near the sensors acts as a spring and the mandible as a mass. However, the mouthguard can be designed to mitigate these disturbances by isolating sensors from mandible loads, improving accuracy to below 15% normalized root mean square error in all kinematic measures. Thus, while current mouthguards would suffer from measurement errors in the worst-case unconstrained mandible condition, future mouthguards should be designed to account for these disturbances and future validation testing should include unconstrained mandibles to ensure proper accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
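
    To make the spring-mass picture concrete, the toy simulation below drives a mandible mass coupled to the mouthguard through a compliant element with an impact-like skull acceleration pulse, and adds the resulting load back onto the "measured" signal. The parameter values, pulse shape, and the way the disturbance is injected are guesses for illustration only, not the authors' model.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Assumed mandible mass (kg), mouthguard stiffness (N/m) and damping (N*s/m).
        m, k, c = 0.4, 2.0e5, 30.0

        def skull_accel(t):                    # haversine-like impact pulse, 10 ms wide
            return 1000.0 * np.sin(np.pi * t / 0.01) ** 2 * (t < 0.01)   # m/s^2

        def rhs(t, y):
            x, v = y                           # mandible displacement/velocity rel. to skull
            a_rel = (-c * v - k * x) / m - skull_accel(t)
            return [v, a_rel]

        t_eval = np.linspace(0.0, 0.05, 2000)
        sol = solve_ivp(rhs, (0.0, 0.05), [0.0, 0.0], t_eval=t_eval, max_step=1e-4)
        x, v = sol.y
        true = skull_accel(t_eval)
        measured = true + (k * x + c * v) / m   # sensor corrupted by mandible load (toy)

        nrmse = np.sqrt(np.mean((measured - true) ** 2)) / (true.max() - true.min())
        print(f"normalized RMS error: {100 * nrmse:.1f} %")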

  7. A test of a linear model of glaucomatous structure-function loss reveals sources of variability in retinal nerve fiber and visual field measurements.

    PubMed

    Hood, Donald C; Anderson, Susan C; Wall, Michael; Raza, Ali S; Kardon, Randy H

    2009-09-01

    Retinal nerve fiber (RNFL) thickness and visual field loss data from patients with glaucoma were analyzed in the context of a model, to better understand individual variation in structure versus function. Optical coherence tomography (OCT) RNFL thickness and standard automated perimetry (SAP) visual field loss were measured in the arcuate regions of one eye of 140 patients with glaucoma and 82 normal control subjects. An estimate of within-individual (measurement) error was obtained by repeat measures made on different days within a short period in 34 patients and 22 control subjects. A linear model, previously shown to describe the general characteristics of the structure-function data, was extended to predict the variability in the data. For normal control subjects, between-individual error (individual differences) accounted for 87% and 71% of the total variance in OCT and SAP measures, respectively. SAP within-individual error increased and then decreased with increased SAP loss, whereas OCT error remained constant. The linear model with variability (LMV) described much of the variability in the data. However, 12.5% of the patients' points fell outside the 95% boundary. An examination of these points revealed factors that can contribute to the overall variability in the data. These factors include epiretinal membranes, edema, individual variation in field-to-disc mapping, and the location of blood vessels and degree to which they are included by the RNFL algorithm. The model and the partitioning of within- versus between-individual variability helped elucidate the factors contributing to the considerable variability in the structure-versus-function data.

  8. Evaluation of Two Computational Techniques of Calculating Multipath Using Global Positioning System Carrier Phase Measurements

    NASA Technical Reports Server (NTRS)

    Gomez, Susan F.; Hood, Laura; Panneton, Robert J.; Saunders, Penny E.; Adkins, Antha; Hwu, Shian U.; Lu, Ba P.

    1996-01-01

    Two computational techniques are used to calculate differential phase errors on Global Positioning System (GPS) carrier phase measurements due to certain multipath-producing objects. The two computational techniques are a rigorous computational electromagnetics technique called the Geometric Theory of Diffraction (GTD) and a simple ray tracing method. The GTD technique has been used successfully to predict microwave propagation characteristics by taking into account the dominant multipath components due to reflections and diffractions from scattering structures. The ray tracing technique only solves for reflected signals. The results from the two techniques are compared to GPS differential carrier phase measurements taken on the ground using a GPS receiver in the presence of typical International Space Station (ISS) interference structures. The calculations produced using the GTD code compared to the measured results better than the ray tracing technique. The agreement was good, demonstrating that the phase errors due to multipath can be modeled and characterized using the GTD technique and characterized to a lesser fidelity using the DECAT technique. However, some discrepancies were observed. Most of the discrepancies occurred at lower elevations and were due either to phase center deviations of the antenna, the background multipath environment, or the receiver itself. Selected measured and predicted differential carrier phase error results are presented and compared. Results indicate that reflections and diffractions caused by the multipath producers, located near the GPS antennas, can produce phase shifts of greater than 10 mm, and as high as 95 mm. It should be noted that the field test configuration was meant to simulate typical ISS structures, but the two environments are not identical. The GTD and DECAT techniques have been used to calculate phase errors due to multipath on the ISS configuration to quantify the expected attitude determination errors.

  9. Performance Data Errors in Air Carrier Operations: Causes and Countermeasures

    NASA Technical Reports Server (NTRS)

    Berman, Benjamin A.; Dismukes, R Key; Jobe, Kimberly K.

    2012-01-01

    Several airline accidents have occurred in recent years as the result of erroneous weight or performance data used to calculate V-speeds, flap/trim settings, required runway lengths, and/or required climb gradients. In this report we consider 4 recent studies of performance data error, report our own study of ASRS-reported incidents, and provide countermeasures that can reduce vulnerability to accidents caused by performance data errors. Performance data are generated through a lengthy process involving several employee groups and computer and/or paper-based systems. Although much of the airline industry's concern has focused on errors pilots make in entering FMS data, we determined that errors occur at every stage of the process and that errors by ground personnel are probably at least as frequent and certainly as consequential as errors by pilots. Most of the errors we examined could in principle have been trapped by effective use of existing procedures or technology; however, the fact that they were not trapped anywhere indicates the need for better countermeasures. Existing procedures are often inadequately designed to mesh with the ways humans process information. Because procedures often do not take into account the ways in which information flows in actual flight operations and the time pressures and interruptions experienced by pilots and ground personnel, vulnerability to error is greater. Some aspects of NextGen operations may exacerbate this vulnerability. We identify measures to reduce the number of errors and to help catch the errors that occur.

  10. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Stâhl, Fredrik; Gulliksson, Mârten

    2008-09-01

    Inverse simulations of musculoskeletal models compute the internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which would imply that muscles push rather than pull; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a first step toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.

  11. The solvability of quantum k-pair network in a measurement-based way.

    PubMed

    Li, Jing; Xu, Gang; Chen, Xiu-Bo; Qu, Zhiguo; Niu, Xin-Xin; Yang, Yi-Xian

    2017-12-01

    Network coding is an effective means to enhance communication efficiency. The characterization of network solvability is one of the most important topics in this field. However, for general networks, the solvability conditions are still a challenge. In this paper, we consider the solvability of the general quantum k-pair network in a measurement-based framework. For the first time, a detailed account of measurement-based quantum network coding (MB-QNC) is specified systematically. Differing from existing coding schemes, single-qubit measurements on a pre-shared graph state are the only allowed coding operations. Since no controlled operations are included, MB-QNC schemes are more feasible to implement. Further, sufficient conditions formulated by eigenvalue equations and the stabilizer matrix are presented, which establish an unambiguous relation between solvability and the general network. This result can also be used to analyze the feasibility of sharing k EPR pairs in large-scale networks. Finally, in the presence of noise, we analyze the advantage of MB-QNC in contrast to the gate-based approach. Using an instance network [Formula: see text], we show that MB-QNC allows higher error thresholds. In particular, for X errors the error threshold is about 30%, higher than the 10% of the gate-based approach. In addition, specific expressions for the fidelity subject to some constraint conditions are given.

  12. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
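
    A compact Floyd-Steinberg error-diffusion loop against a fixed 256-color palette is sketched below. The uniform 8x8x4 RGB palette and the plain nearest-neighbour search stand in for the paper's SSQ-designed palette, lookup tables, and opponent-color error weighting, so this shows only the surrounding halftoning machinery, not the proposed optimization.

        import numpy as np

        # Assumed universal palette: an 8x8x4 uniform RGB grid (256 entries).
        levels_r = np.linspace(0, 255, 8)
        levels_g = np.linspace(0, 255, 8)
        levels_b = np.linspace(0, 255, 4)
        palette = np.array([[r, g, b] for r in levels_r for g in levels_g for b in levels_b])

        def error_diffuse(img):
            """Floyd-Steinberg error diffusion of an RGB image onto the fixed palette."""
            work = img.astype(float)
            out = np.zeros_like(work)
            h, w, _ = work.shape
            for y in range(h):
                for x in range(w):
                    old = work[y, x]
                    new = palette[np.argmin(((palette - old) ** 2).sum(axis=1))]  # nearest colour
                    out[y, x] = new
                    err = old - new
                    if x + 1 < w:               work[y, x + 1]     += err * 7 / 16
                    if y + 1 < h and x > 0:     work[y + 1, x - 1] += err * 3 / 16
                    if y + 1 < h:               work[y + 1, x]     += err * 5 / 16
                    if y + 1 < h and x + 1 < w: work[y + 1, x + 1] += err * 1 / 16
            return out.round().astype(np.uint8)

        quantized = error_diffuse(np.random.default_rng(0).integers(0, 256, (64, 64, 3)))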

  13. Probability misjudgment, cognitive ability, and belief in the paranormal.

    PubMed

    Musch, Jochen; Ehrenberg, Katja

    2002-05-01

    According to the probability misjudgment account of paranormal belief (Blackmore & Troscianko, 1985), believers in the paranormal tend to wrongly attribute remarkable coincidences to paranormal causes rather than chance. Previous studies have shown that belief in the paranormal is indeed positively related to error rates in probabilistic reasoning. General cognitive ability could account for a relationship between these two variables without assuming a causal role of probabilistic reasoning in the forming of paranormal beliefs, however. To test this alternative explanation, a belief in the paranormal scale (BPS) and a battery of probabilistic reasoning tasks were administered to 123 university students. Confirming previous findings, a significant correlation between BPS scores and error rates in probabilistic reasoning was observed. This relationship disappeared, however, when cognitive ability as measured by final examination grades was controlled for. Lower cognitive ability correlated substantially with belief in the paranormal. This finding suggests that differences in general cognitive performance rather than specific probabilistic reasoning skills provide the basis for paranormal beliefs.

  14. On modeling animal movements using Brownian motion with measurement error.

    PubMed

    Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun

    2014-02-01

    Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
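
    The tractable likelihood mentioned above can be written down directly for the simplest case: a one-dimensional Brownian motion with diffusion parameter sigma2, observed with independent normal measurement error tau2, starting from a known location at time zero. The dense-covariance evaluation and the simulated track below are illustrative; the paper works with sparse-matrix computations and the full movement setting.

        import numpy as np
        from scipy.stats import multivariate_normal

        def loglik(obs, times, sigma2, tau2):
            """Exact Gaussian log-likelihood of noisy Brownian-motion observations."""
            cov = sigma2 * np.minimum.outer(times, times) + tau2 * np.eye(len(times))
            return multivariate_normal(mean=np.zeros(len(times)), cov=cov).logpdf(obs)

        rng = np.random.default_rng(0)
        times = np.cumsum(rng.exponential(1.0, 50))                 # irregular observation times
        dt = np.diff(np.insert(times, 0, 0.0))
        true_path = np.cumsum(rng.normal(0.0, np.sqrt(dt)))         # Brownian path (sigma2 = 1)
        obs = true_path + rng.normal(0.0, 0.3, times.size)          # add measurement error
        print(loglik(obs, times, sigma2=1.0, tau2=0.09))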

  15. 34 CFR 668.95 - Reimbursements, refunds, and offsets.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... institution's violation in paragraph (a) of this section results from an administrative, accounting, or recordkeeping error, and that error was not part of a pattern of error, and there is no evidence of fraud or...

  16. Measuring discharge with ADCPs: Inferences from synthetic velocity profiles

    USGS Publications Warehouse

    Rehmann, C.R.; Mueller, D.S.; Oberg, K.A.

    2009-01-01

    Synthetic velocity profiles are used to determine guidelines for sampling discharge with acoustic Doppler current profilers (ADCPs). The analysis allows the effects of instrument characteristics, sampling parameters, and properties of the flow to be studied systematically. For mid-section measurements, the averaging time required for a single profile measurement always exceeded the 40 s usually recommended for velocity measurements, and it increased with increasing sample interval and increasing time scale of the large eddies. Similarly, simulations of transect measurements show that discharge error decreases as the number of large eddies sampled increases. The simulations allow sampling criteria that account for the physics of the flow to be developed. © 2009 ASCE.

  17. New dimension analyses with error analysis for quaking aspen and black spruce

    NASA Technical Reports Server (NTRS)

    Woods, K. D.; Botkin, D. B.; Feiveson, A. H.

    1987-01-01

    Dimension analyses for black spruce in wetland stands and for trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.

  18. Detecting errors and anomalies in computerized materials control and accountability databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiteson, R.; Hench, K.; Yarbro, T.

    The Automated MC and A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC and A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduces the value of the database information to users. Therefore it is essential that MC and A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC and A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert systems to the needs of the users of the data and reports on their results.

  19. An Item Fit Statistic Based on Pseudocounts from the Generalized Graded Unfolding Model: A Preliminary Report.

    ERIC Educational Resources Information Center

    Roberts, James S.

    Stone and colleagues (C. Stone, R. Ankenman, S. Lane, and M. Liu, 1993; C. Stone, R. Mislevy and J. Mazzeo, 1994; C. Stone, 2000) have proposed a fit index that explicitly accounts for the measurement error inherent in an estimated theta value, here called χ²i*. The elements of this statistic are natural…

  20. Structural Estimation of Family Labor Supply with Taxes: Estimating a Continuous Hours Model Using a Direct Utility Specification

    ERIC Educational Resources Information Center

    Heim, Bradley T.

    2009-01-01

    This paper proposes a new method for estimating family labor supply in the presence of taxes. This method accounts for continuous hours choices, measurement error, unobserved heterogeneity in tastes for work, the nonlinear form of the tax code, and fixed costs of work in one comprehensive specification. Estimated on data from the 2001 PSID, the…

  1. Mathematical Writing Errors in Expository Writings of College Mathematics Students

    ERIC Educational Resources Information Center

    Guce, Ivee K.

    2017-01-01

    Despite the efforts to confirm the effectiveness of writing in learning mathematics, analysis on common errors in mathematical writings has not received sufficient attention. This study aimed to provide an account of the students' procedural explanations in terms of their commonly committed errors in mathematical writing. Nine errors in…

  2. A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments

    Treesearch

    S. Healey; P. Patterson; S. Urbanski

    2014-01-01

    Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...

  3. Optimal estimation of suspended-sediment concentrations in streams

    USGS Publications Warehouse

    Holtschlag, D.J.

    2001-01-01

    Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
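
    A skeleton of the on-line/off-line pairing described above is sketched here: a Kalman filter and a Rauch-Tung-Striebel smoother applied to a log-concentration series modelled as a random walk observed with noise on sampled days. The streamflow and seasonal regression terms fitted by generalized least squares in the paper are omitted, and all variances and the simulated record are illustrative assumptions.

        import numpy as np

        def kalman_local_level(y, q, r, x0=0.0, p0=1e3):
            """On-line (filter) and off-line (RTS smoother) estimates for a local-level model."""
            n = len(y)
            xf, pf, xp, pp = np.zeros(n), np.zeros(n), np.zeros(n), np.zeros(n)
            x, p = x0, p0
            for t in range(n):
                p = p + q                              # predict (random-walk state)
                xp[t], pp[t] = x, p
                if not np.isnan(y[t]):                 # update only on sampled days
                    kgain = p / (p + r)
                    x, p = x + kgain * (y[t] - x), (1 - kgain) * p
                xf[t], pf[t] = x, p
            xs, ps = xf.copy(), pf.copy()              # Rauch-Tung-Striebel smoother
            for t in range(n - 2, -1, -1):
                g = pf[t] / pp[t + 1]
                xs[t] += g * (xs[t + 1] - xp[t + 1])
                ps[t] += g * g * (ps[t + 1] - pp[t + 1])
            return xf, xs

        rng = np.random.default_rng(2)
        truth = np.cumsum(rng.normal(0, 0.1, 120))            # daily log-concentration (toy)
        y = truth + rng.normal(0, 0.3, 120)
        y[np.arange(120) % 7 != 0] = np.nan                   # sampled once a week
        online, offline = kalman_local_level(y, q=0.01, r=0.09)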

  4. A general framework for the regression analysis of pooled biomarker assessments.

    PubMed

    Liu, Yan; McMahan, Christopher; Gallagher, Colin

    2017-07-10

    As a cost-efficient data collection mechanism, the process of assaying pooled biospecimens is becoming increasingly common in epidemiological research; for example, pooling has been proposed for the purpose of evaluating the diagnostic efficacy of biological markers (biomarkers). To this end, several authors have proposed techniques that allow for the analysis of continuous pooled biomarker assessments. Regretfully, most of these techniques proceed under restrictive assumptions, are unable to account for the effects of measurement error, and fail to control for confounding variables. These limitations are understandably attributable to the complex structure that is inherent to measurements taken on pooled specimens. Consequently, in order to provide practitioners with the tools necessary to accurately and efficiently analyze pooled biomarker assessments, herein, a general Monte Carlo maximum likelihood-based procedure is presented. The proposed approach allows for the regression analysis of pooled data under practically all parametric models and can be used to directly account for the effects of measurement error. Through simulation, it is shown that the proposed approach can accurately and efficiently estimate all unknown parameters and is more computationally efficient than existing techniques. This new methodology is further illustrated using monocyte chemotactic protein-1 data collected by the Collaborative Perinatal Project in an effort to assess the relationship between this chemokine and the risk of miscarriage. Copyright © 2017 John Wiley & Sons, Ltd.

  5. A method for optimizing the cosine response of solar UV diffusers

    NASA Astrophysics Data System (ADS)

    Pulli, Tomi; Kärhä, Petri; Ikonen, Erkki

    2013-07-01

    Instruments measuring global solar ultraviolet (UV) irradiance at the surface of the Earth need to collect radiation from the entire hemisphere. Entrance optics with angular response as close as possible to the ideal cosine response are necessary to perform these measurements accurately. Typically, the cosine response is obtained using a transmitting diffuser. We have developed an efficient method based on a Monte Carlo algorithm to simulate radiation transport in the solar UV diffuser assembly. The algorithm takes into account propagation, absorption, and scattering of the radiation inside the diffuser material. The effects of the inner sidewalls of the diffuser housing, the shadow ring, and the protective weather dome are also accounted for. The software implementation of the algorithm is highly optimized: a simulation of 109 photons takes approximately 10 to 15 min to complete on a typical high-end PC. The results of the simulations agree well with the measured angular responses, indicating that the algorithm can be used to guide the diffuser design process. Cost savings can be obtained when simulations are carried out before diffuser fabrication as compared to a purely trial-and-error-based diffuser optimization. The algorithm was used to optimize two types of detectors, one with a planar diffuser and the other with a spherically shaped diffuser. The integrated cosine errors—which indicate the relative measurement error caused by the nonideal angular response under isotropic sky radiance—of these two detectors were calculated to be f2=1.4% and 0.66%, respectively.
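
    The integrated cosine error can be evaluated numerically from a tabulated angular response, as sketched below. The specific definition used (radiance-weighted absolute deviation from the cosine response, assuming azimuthal symmetry) and the toy response shape are assumptions; the paper's Monte Carlo photon-transport model of the diffuser is not reproduced.

        import numpy as np

        def integrated_cosine_error(theta_deg, response):
            """theta_deg: incidence angles (0..90 deg); response: signal normalised to 1 at 0 deg."""
            theta = np.radians(np.asarray(theta_deg, dtype=float))
            dtheta = np.gradient(theta)                 # integration weights (handles uneven grids)
            num = np.sum(np.abs(response - np.cos(theta)) * np.sin(theta) * dtheta)
            den = np.sum(np.cos(theta) * np.sin(theta) * dtheta)
            return num / den

        theta = np.linspace(0.0, 90.0, 91)
        # Toy diffuser response, slightly "over-cosine" at large angles (assumed shape).
        resp = np.cos(np.radians(theta)) * (1.0 + 0.02 * np.sin(np.radians(theta)) ** 2)
        print(f"integrated cosine error = {100 * integrated_cosine_error(theta, resp):.2f} %")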

  6. Impacts of uncertainties in weather and streamflow observations in calibration and evaluation of an elevation distributed HBV-model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Steinsland, I.; Petersen-Øverleir, A.; Johansen, S.

    2012-04-01

    The aim of this study is to assess the uncertainties in streamflow simulations when uncertainties in both observed inputs (precipitation and temperature) and streamflow observations used in the calibration of the hydrological model are explicitly accounted for. To achieve this goal we applied the elevation distributed HBV model operating on daily time steps to a small catchment at high elevation in Southern Norway where the seasonal snow cover is important. The uncertainties in precipitation inputs were quantified using conditional simulation. This procedure accounts for the uncertainty related to the density of the precipitation network, but neglects uncertainties related to measurement bias/errors and eventual elevation gradients in precipitation. The uncertainties in temperature inputs were quantified using a Bayesian temperature interpolation procedure where the temperature lapse rate is re-estimated every day. The uncertainty in the lapse rate was accounted for whereas the sampling uncertainty related to network density was neglected. For every day a random sample of precipitation and temperature inputs were drawn to be applied as inputs to the hydrologic model. The uncertainties in observed streamflow were assessed based on the uncertainties in the rating curve model. A Bayesian procedure was applied to estimate the probability for rating curve models with 1 to 3 segments and the uncertainties in their parameters. This method neglects uncertainties related to errors in observed water levels. Note that one rating curve was drawn to make one realisation of a whole time series of streamflow, thus the rating curve errors lead to a systematic bias in the streamflow observations. All these uncertainty sources were linked together in both calibration and evaluation of the hydrologic model using a DREAM based MCMC routine. Effects of having less information (e.g. missing one streamflow measurement for defining the rating curve or missing one precipitation station) were also investigated.

  7. Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong

    2018-06-01

    This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.

  8. A Likelihood-Based Framework for Association Analysis of Allele-Specific Copy Numbers.

    PubMed

    Hu, Y J; Lin, D Y; Sun, W; Zeng, D

    2014-10-01

    Copy number variants (CNVs) and single nucleotide polymorphisms (SNPs) co-exist throughout the human genome and jointly contribute to phenotypic variations. Thus, it is desirable to consider both types of variants, as characterized by allele-specific copy numbers (ASCNs), in association studies of complex human diseases. Current SNP genotyping technologies capture the CNV and SNP information simultaneously via fluorescent intensity measurements. The common practice of calling ASCNs from the intensity measurements and then using the ASCN calls in downstream association analysis has important limitations. First, the association tests are prone to false-positive findings when differential measurement errors between cases and controls arise from differences in DNA quality or handling. Second, the uncertainties in the ASCN calls are ignored. We present a general framework for the integrated analysis of CNVs and SNPs, including the analysis of total copy numbers as a special case. Our approach combines the ASCN calling and the association analysis into a single step while allowing for differential measurement errors. We construct likelihood functions that properly account for case-control sampling and measurement errors. We establish the asymptotic properties of the maximum likelihood estimators and develop EM algorithms to implement the corresponding inference procedures. The advantages of the proposed methods over the existing ones are demonstrated through realistic simulation studies and an application to a genome-wide association study of schizophrenia. Extensions to next-generation sequencing data are discussed.

  9. Analysis and correction of gradient nonlinearity bias in apparent diffusion coefficient measurements.

    PubMed

    Malyarenko, Dariya I; Ross, Brian D; Chenevert, Thomas L

    2014-03-01

    Gradient nonlinearity of MRI systems leads to spatially dependent b-values and consequently high non-uniformity errors (10-20%) in apparent diffusion coefficient (ADC) measurements over clinically relevant field-of-views. This work seeks practical correction procedure that effectively reduces observed ADC bias for media of arbitrary anisotropy in the fewest measurements. All-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame where spatial bias of b-matrix can be approximated by its Euclidean norm. Correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Spatial dependence of nonlinearity correction terms accounts for the bulk (75-95%) of ADC bias for FA = 0.3-0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic medium is achieved with non-lab-based diffusion gradients. Copyright © 2013 Wiley Periodicals, Inc.
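
    The b-value rescaling idea can be caricatured as follows: a gradient-nonlinearity tensor scales the b-value actually applied along each diffusion direction, the per-direction ADC is rescaled accordingly, and the three orthogonal measurements are averaged. The tensor value, the reduction of the full b-matrix treatment to a simple norm, and the single-voxel setting are illustrative assumptions, not the published correction procedure.

        import numpy as np

        def corrected_adc(adc_meas, directions, L):
            """Rescale per-direction ADCs by the b-value scale factor ||L g||^2 and average."""
            scale = np.array([np.sum((L @ g) ** 2) for g in directions])  # b_actual / b_nominal
            return np.mean(adc_meas / scale)

        directions = np.eye(3)                      # three orthogonal diffusion directions
        L = np.diag([1.03, 0.98, 1.05])             # illustrative nonlinearity tensor at one voxel
        adc_true = 1.0e-3                           # mm^2/s
        scale = np.array([np.sum((L @ g) ** 2) for g in directions])
        adc_meas = adc_true * scale                 # what the scanner would report per direction
        print(corrected_adc(adc_meas, directions, L))   # recovers ~1.0e-3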

  10. Conversion of radius of curvature to power (and vice versa)

    NASA Astrophysics Data System (ADS)

    Wickenhagen, Sven; Endo, Kazumasa; Fuchs, Ulrike; Youngworth, Richard N.; Kiontke, Sven R.

    2015-09-01

    Manufacturing optical components relies on good measurements and specifications. One of the most precise measurements routinely required is the form accuracy. In practice, form deviation from the ideal surface consists essentially of low-frequency errors, with the form error most often amounting to no more than a few undulations across a surface. These types of errors are measured in a variety of ways including interferometry and tactile methods like profilometry, with the latter often being employed for aspheres and general surface shapes such as freeforms. This paper provides a basis for a correct description of power and radius of curvature tolerances, including best practices and calculating the power value with respect to the radius deviation (and vice versa) of the surface form. A consistent definition of the sagitta is presented, along with different cases in manufacturing that are of interest to fabricators and designers. The results make clear how the definitions and results should be documented, for all measurement setups. Relationships between power and radius of curvature are shown that allow specifying the preferred metric based on final accuracy and measurement method. Results shown include all necessary equations for conversion to give optical designers and manufacturers a consistent and robust basis for decision-making. The paper also gives guidance on preferred methods for different scenarios for surface types, accuracy required, and metrology methods employed.
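
    A small sketch of the radius-to-power bookkeeping discussed above: the exact spherical sag over the test aperture is evaluated for the nominal and the perturbed radius, and the sag difference is optionally expressed in interferometric fringes. The sign convention and the lambda/2-per-fringe (double-pass) assumption are mine for illustration; the paper develops the definitive prescription.

        import numpy as np

        def sag(radius, semi_aperture):
            """Exact spherical sag at the edge of the test aperture (same units as inputs)."""
            return radius - np.sign(radius) * np.sqrt(radius**2 - semi_aperture**2)

        def radius_deviation_to_power(r_nominal, delta_r, semi_aperture, wavelength=632.8e-6):
            """Return (sag difference in mm, power in fringes) for a radius error delta_r (mm)."""
            dz = sag(r_nominal + delta_r, semi_aperture) - sag(r_nominal, semi_aperture)
            return dz, 2.0 * abs(dz) / wavelength     # fringes, assuming lambda/2 per fringe

        dz, fringes = radius_deviation_to_power(r_nominal=100.0, delta_r=0.05, semi_aperture=12.5)
        print(f"sag change = {1e3 * dz:.3f} um, power = {fringes:.2f} fringes")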

  11. Analysis and correction of gradient nonlinearity bias in ADC measurements

    PubMed Central

    Malyarenko, Dariya I.; Ross, Brian D.; Chenevert, Thomas L.

    2013-01-01

    Purpose Gradient nonlinearity of MRI systems leads to spatially-dependent b-values and consequently high non-uniformity errors (10–20%) in ADC measurements over clinically relevant field-of-views. This work seeks practical correction procedure that effectively reduces observed ADC bias for media of arbitrary anisotropy in the fewest measurements. Methods All-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame where spatial bias of b-matrix can be approximated by its Euclidean norm. Correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Results Spatial dependence of nonlinearity correction terms accounts for the bulk (75–95%) of ADC bias for FA = 0.3–0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. Conclusions The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic medium is achieved with non-lab-based diffusion gradients. PMID:23794533

  12. Reporting the accuracy of biochemical measurements for epidemiologic and nutrition studies.

    PubMed

    McShane, L M; Clark, L C; Combs, G F; Turnbull, B W

    1991-06-01

    Procedures for reporting and monitoring the accuracy of biochemical measurements are presented. They are proposed as standard reporting procedures for laboratory assays for epidemiologic and clinical-nutrition studies. The recommended procedures require identification and estimation of all major sources of variability and explanations of laboratory quality control procedures employed. Variance-components techniques are used to model the total variability and calculate a maximum percent error that provides an easily understandable measure of laboratory precision accounting for all sources of variability. This avoids ambiguities encountered when reporting an SD that may take into account only a few of the potential sources of variability. Other proposed uses of the total-variability model include estimating precision of laboratory methods for various replication schemes and developing effective quality control-checking schemes. These procedures are demonstrated with an example of the analysis of alpha-tocopherol in human plasma by using high-performance liquid chromatography.
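
    A minimal variance-components summary for replicates nested in analytical runs is sketched below, with a "maximum percent error" taken as roughly twice the total coefficient of variation. The one-way random-effects layout, that factor of two, and the simulated data are assumptions for illustration; the proposed reporting procedures involve more components and explicit quality-control documentation.

        import numpy as np

        def variance_components(data):
            """data: 2-D array, rows = analytical runs, columns = replicates within run."""
            runs, reps = data.shape
            grand = data.mean()
            ms_between = reps * ((data.mean(axis=1) - grand) ** 2).sum() / (runs - 1)
            ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (runs * (reps - 1))
            var_within = ms_within
            var_between = max((ms_between - ms_within) / reps, 0.0)
            total_cv = np.sqrt(var_within + var_between) / grand
            return var_within, var_between, 200.0 * total_cv   # "max percent error" ~ 2 * CV * 100

        rng = np.random.default_rng(3)
        data = rng.normal(30.0, 1.0, size=(10, 1)) + rng.normal(0.0, 0.5, size=(10, 3))
        print(variance_components(data))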

  13. Development of real-time rotating waveplate Stokes polarimeter using multi-order retardation for ITER poloidal polarimeter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imazawa, R., E-mail: imazawa.ryota@jaea.go.jp; Kawano, Y.; Ono, T.

    The rotating waveplate Stokes polarimeter was developed for ITER (International Thermonuclear Experimental Reactor) poloidal polarimeter. The generalized model of the rotating waveplate Stokes polarimeter and the algorithm suitable for real-time field-programmable gate array (FPGA) processing were proposed. Since the generalized model takes into account each component associated with the rotation of the waveplate, the Stokes parameters can be accurately measured even in unideal condition such as non-uniformity of the waveplate retardation. Experiments using a He-Ne laser showed that the maximum error and the precision of the Stokes parameter were 3.5% and 1.2%, respectively. The rotation speed of the waveplate was 20 000 rpm and time resolution of measuring the Stokes parameter was 3.3 ms. Software emulation showed that the real-time measurement of the Stokes parameter with time resolution of less than 10 ms is possible by using several FPGA boards. Evaluation of measurement capability using a far-infrared laser which ITER poloidal polarimeter will use concluded that measurement error will be reduced by a factor of nine.

  14. Development of real-time rotating waveplate Stokes polarimeter using multi-order retardation for ITER poloidal polarimeter.

    PubMed

    Imazawa, R; Kawano, Y; Ono, T; Itami, K

    2016-01-01

    The rotating waveplate Stokes polarimeter was developed for ITER (International Thermonuclear Experimental Reactor) poloidal polarimeter. The generalized model of the rotating waveplate Stokes polarimeter and the algorithm suitable for real-time field-programmable gate array (FPGA) processing were proposed. Since the generalized model takes into account each component associated with the rotation of the waveplate, the Stokes parameters can be accurately measured even in unideal condition such as non-uniformity of the waveplate retardation. Experiments using a He-Ne laser showed that the maximum error and the precision of the Stokes parameter were 3.5% and 1.2%, respectively. The rotation speed of waveplate was 20 000 rpm and time resolution of measuring the Stokes parameter was 3.3 ms. Software emulation showed that the real-time measurement of the Stokes parameter with time resolution of less than 10 ms is possible by using several FPGA boards. Evaluation of measurement capability using a far-infrared laser which ITER poloidal polarimeter will use concluded that measurement error will be reduced by a factor of nine.
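
    The core retrieval step of a rotating-waveplate Stokes polarimeter can be sketched with Mueller matrices: intensities behind a fixed linear polarizer are modelled for a retarder of arbitrary retardance (a multi-order plate simply carries a different effective retardance), and the four Stokes parameters are recovered by linear least squares. The generalized instrument model, calibration, and real-time FPGA algorithm of the paper are not reproduced; the angles, retardance, and test polarization state below are illustrative.

        import numpy as np

        def rot(theta):
            """Mueller rotation matrix for the Stokes frame."""
            c, s = np.cos(2 * theta), np.sin(2 * theta)
            return np.array([[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]])

        def retarder(delta, theta):
            """Linear retarder of retardance delta with fast axis at angle theta."""
            m = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                          [0, 0, np.cos(delta), np.sin(delta)],
                          [0, 0, -np.sin(delta), np.cos(delta)]])
            return rot(-theta) @ m @ rot(theta)

        POLARIZER = 0.5 * np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])

        def measure_and_recover(stokes_in, delta, angles):
            rows = np.array([(POLARIZER @ retarder(delta, th))[0] for th in angles])
            intensities = rows @ stokes_in               # forward model ("measurement")
            recovered, *_ = np.linalg.lstsq(rows, intensities, rcond=None)
            return recovered

        angles = np.linspace(0.0, np.pi, 36, endpoint=False)
        s_true = np.array([1.0, 0.3, -0.2, 0.5])         # illustrative input polarization
        print(measure_and_recover(s_true, delta=np.pi / 2, angles=angles))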

  15. Comparison of two surface temperature measurement using thermocouples and infrared camera

    NASA Astrophysics Data System (ADS)

    Michalski, Dariusz; Strąk, Kinga; Piasecka, Magdalena

    This paper compares two methods applied to measure surface temperatures at an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were: the contact method, which involved mounting thermocouples at several points in one minichannel, and the contactless method applied to the other minichannel, where the results were provided by an infrared camera. Calculations were necessary to compare the temperature results. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, the method error and the method accuracy. The experimental error and the method accuracy were taken into account. The comparative analysis showed that the values and distributions of the surface temperatures obtained with the two methods were similar, but both methods had certain limitations.

  16. Microenvironment Tracker (MicroTrac) | Science Inventory ...

    EPA Pesticide Factsheets

    Epidemiologic studies have shown associations between air pollution concentrations measured at central-site ambient monitors and adverse health outcomes. Using central-site concentrations as exposure surrogates, however, can lead to exposure errors due to time spent in various indoor and outdoor microenvironments (ME) with pollutant concentrations that can be substantially different from central-site concentrations. These exposure errors can introduce bias and incorrect confidence intervals in health effect estimates, which diminish the power of such studies to establish correct conclusions about the exposure and health effects association. The significance of this issue was highlighted in the National Research Council (NRC) Report “Research Priorities for Airborne Particulate Matter”, which recommends that EPA address exposure error in health studies. To address this limitation, we developed MicroTrac, an automated classification model that estimates time of day and duration spent in eight ME (indoors and outdoors at home, work, school; inside vehicles; other locations) from personal global positioning system (GPS) data and geocoded boundaries of buildings (e.g., home, work, school). MicroTrac has several innovative design features: (1) using GPS signal quality to account for GPS signal loss inside certain buildings, (2) spatial buffering of building boundaries to account for the spatial inaccuracy of the GPS device, and (3) temporal buffering of GPS positions

  17. Sentence imitation as a marker of SLI in Czech: disproportionate impairment of verbs and clitics.

    PubMed

    Smolík, Filip; Vávru, Petra

    2014-06-01

    The authors examined sentence imitation as a potential clinical marker of specific language impairment (SLI) in Czech and its use to identify grammatical markers of SLI. Children with SLI and the age- and language-matched control groups (total N = 57) were presented with a sentence imitation task, a receptive vocabulary task, and digit span and nonword repetition tasks. Sentence imitations were scored for accuracy and error types. A separate count of inaccuracies for individual part-of-speech categories was performed. Children with SLI had substantially more inaccurate imitations than the control groups. The differences in the memory measures could not account for the differences between children with SLI and the control groups in imitation accuracy, even though they accounted for the differences between the language-matched and age-matched control groups. The proportion of grammatical errors was larger in children with SLI than in the control groups. The categories that were most affected in imitations of children with SLI were verbs and clitics. Sentence imitation is a sensitive marker of SLI. Verbs and clitics are the most vulnerable categories in Czech SLI. The pattern of errors suggests that impaired syntactic representations are the most likely source of difficulties in children with SLI.

  18. Quantitative measurement of mitochondrial membrane potential in cultured cells: calcium-induced de- and hyperpolarization of neuronal mitochondria

    PubMed Central

    Gerencser, Akos A; Chinopoulos, Christos; Birket, Matthew J; Jastroch, Martin; Vitelli, Cathy; Nicholls, David G; Brand, Martin D

    2012-01-01

    Mitochondrial membrane potential (ΔΨM) is a central intermediate in oxidative energy metabolism. Although ΔΨM is routinely measured qualitatively or semi-quantitatively using fluorescent probes, its quantitative assay in intact cells has been limited mostly to slow, bulk-scale radioisotope distribution methods. Here we derive and verify a biophysical model of fluorescent potentiometric probe compartmentation and dynamics using a bis-oxonol-type indicator of plasma membrane potential (ΔΨP) and the ΔΨM probe tetramethylrhodamine methyl ester (TMRM) using fluorescence imaging and voltage clamp. Using this model we introduce a purely fluorescence-based quantitative assay to measure absolute values of ΔΨM in millivolts as they vary in time in individual cells in monolayer culture. The ΔΨP-dependent distribution of the probes is modelled by Eyring rate theory. Solutions of the model are used to deconvolute ΔΨP and ΔΨM in time from the probe fluorescence intensities, taking into account their slow, ΔΨP-dependent redistribution and Nernstian behaviour. The calibration accounts for matrix:cell volume ratio, high- and low-affinity binding, activity coefficients, background fluorescence and optical dilution, allowing comparisons of potentials in cells or cell types differing in these properties. In cultured rat cortical neurons, ΔΨM is −139 mV at rest, and is regulated between −108 mV and −158 mV by concerted increases in ATP demand and Ca2+-dependent metabolic activation. Sensitivity analysis showed that the standard error of the mean in the absolute calibrated values of resting ΔΨM including all biological and systematic measurement errors introduced by the calibration parameters is less than 11 mV. Between samples treated in different ways, the typical equivalent error is ∼5 mV. PMID:22495585
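
    The Nernstian core of such probe-based estimates is shown below for a monovalent cation like TMRM: the equilibrium matrix-to-cytosol concentration ratio reports the membrane potential. The full calibration in the paper (probe binding, volume ratios, plasma membrane potential, slow redistribution kinetics) is far richer; the 180-fold ratio used in the example is illustrative.

        import numpy as np

        R, F = 8.314, 96485.0                      # J/(mol*K), C/mol

        def delta_psi_mV(ratio_in_over_out, temp_C=37.0):
            """Nernst potential (mV) for a monovalent cation at the given accumulation ratio."""
            return -1e3 * R * (temp_C + 273.15) / F * np.log(ratio_in_over_out)

        print(delta_psi_mV(180.0))                 # roughly -139 mV for a 180-fold accumulation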

  19. A First Look at the Navigation Design and Analysis for the Orion Exploration Mission 2

    NASA Technical Reports Server (NTRS)

    D'Souza, Chris D.; Zenetti, Renato

    2017-01-01

    This paper will detail the navigation and dispersion design and analysis of the first Orion crewed mission. The optical navigation measurement model will be described. The vehicle noise includes the residual acceleration from attitude deadbanding, attitude maneuvers, CO2 venting, wastewater venting, ammonia sublimator venting and solar radiation pressure. The maneuver execution errors account for the contribution of accelerometer scale-factor on the accuracy of the maneuver execution. Linear covariance techniques are used to obtain the navigation errors and the trajectory dispersions as well as the DV performance. Particular attention will be paid to the accuracy of the delivery at Earth Entry Interface and at the Lunar Flyby.

  20. A quantitative comparison of simultaneous BOLD fMRI and NIRS recordings during functional brain activation

    NASA Technical Reports Server (NTRS)

    Strangman, Gary; Culver, Joseph P.; Thompson, John H.; Boas, David A.; Sutton, J. P. (Principal Investigator)

    2002-01-01

    Near-infrared spectroscopy (NIRS) has been used to noninvasively monitor adult human brain function in a wide variety of tasks. While rough spatial correspondences with maps generated from functional magnetic resonance imaging (fMRI) have been found in such experiments, the amplitude correspondences between the two recording modalities have not been fully characterized. To do so, we simultaneously acquired NIRS and blood-oxygenation level-dependent (BOLD) fMRI data and compared Δ(1/BOLD) (approximately ΔR2*) to changes in oxyhemoglobin, deoxyhemoglobin, and total hemoglobin concentrations derived from the NIRS data from subjects performing a simple motor task. We expected the correlation with deoxyhemoglobin to be strongest, due to the causal relation between changes in deoxyhemoglobin concentrations and BOLD signal. Instead we found highly variable correlations, suggesting the need to account for individual subject differences in our NIRS calculations. We argue that the variability resulted from systematic errors associated with each of the signals, including: (1) partial volume errors due to focal concentration changes, (2) wavelength dependence of this partial volume effect, (3) tissue model errors, and (4) possible spatial incongruence between oxy- and deoxyhemoglobin concentration changes. After such effects were accounted for, strong correlations were found between fMRI changes and all optical measures, with oxyhemoglobin providing the strongest correlation. Importantly, this finding held even when including scalp, skull, and inactive brain tissue in the average BOLD signal. This may reflect, at least in part, the superior contrast-to-noise ratio for oxyhemoglobin relative to deoxyhemoglobin (from optical measurements), rather than physiology related to BOLD signal interpretation.
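
    The hemoglobin concentration changes referred to above are conventionally obtained from optical density changes via the modified Beer-Lambert law; the sketch below shows that standard conversion step under assumed extinction coefficients and path-length factors, which are placeholders rather than the values used in this study.

      # Sketch: modified Beer-Lambert conversion of NIRS optical density changes
      # at two wavelengths into [dHbO2, dHbR]. All numbers are placeholders.
      import numpy as np

      # rows: wavelengths (e.g. ~690 nm, ~830 nm); cols: [HbO2, HbR] extinction coefficients
      E = np.array([[0.35, 2.10],
                    [1.00, 0.80]])       # placeholder units: 1/(mM*cm)
      DPF = 6.0                          # differential path-length factor (assumed)
      L = 3.0                            # source-detector separation (cm)

      def hb_changes(delta_OD):
          """Solve delta_OD = E * (DPF*L) * delta_C for the concentration changes."""
          return np.linalg.solve(E * DPF * L, delta_OD)

      print(hb_changes(np.array([0.01, 0.02])))   # illustrative input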

  1. The Neural Basis of Error Detection: Conflict Monitoring and the Error-Related Negativity

    ERIC Educational Resources Information Center

    Yeung, Nick; Botvinick, Matthew M.; Cohen, Jonathan D.

    2004-01-01

    According to a recent theory, anterior cingulate cortex is sensitive to response conflict, the coactivation of mutually incompatible responses. The present research develops this theory to provide a new account of the error-related negativity (ERN), a scalp potential observed following errors. Connectionist simulations of response conflict in an…

  2. Optimization of Aimpoints for Coordinate Seeking Weapons

    DTIC Science & Technology

    2015-09-01

    aiming) and independent (ballistic) errors are taken into account, before utilizing each of the three damage functions representing the weapon. A Monte... characteristics such as the radius of the circle containing the weapon aimpoint, impact angle, dependent (aiming) and independent (ballistic) errors are taken... Dependent (Aiming) Error... Single Weapon Independent (Ballistic) Error...

  3. Hydrologic Design in the Anthropocene

    NASA Astrophysics Data System (ADS)

    Vogel, R. M.; Farmer, W. H.; Read, L.

    2014-12-01

    In an era dubbed the Anthropocene, the natural world is being transformed by a myriad of human influences. As anthropogenic impacts permeate hydrologic systems, hydrologists are challenged to fully account for such changes and develop new methods of hydrologic design. Deterministic watershed models (DWM), which can account for the impacts of changes in land use, climate and infrastructure, are becoming increasingly popular for the design of flood and/or drought protection measures. As with all models that are calibrated to existing datasets, DWMs are subject to model error or uncertainty. In practice, the model error component of DWM predictions is typically ignored, yet DWM simulations which ignore model error produce model output which cannot reproduce the statistical properties of the observations they are intended to replicate. In the context of hydrologic design, we demonstrate how ignoring model error can lead to systematic downward bias in flood quantiles, upward bias in drought quantiles and upward bias in water supply yields. By reincorporating model error, we document how DWMs can be used to generate results that mimic actual observations and preserve their statistical behavior. In addition to the use of DWMs for improved predictions in a changing world, improved communication of risk and reliability is also needed. Traditional statements of risk and reliability in hydrologic design have been characterized by return periods, but such statements often assume that the annual probability of experiencing a design event remains constant throughout the project horizon. We document the general impact of nonstationarity on the average return period and reliability in the context of hydrologic design. Our analyses reveal that return periods do not provide meaningful expressions of the likelihood of future hydrologic events. Instead, knowledge of system reliability over future planning horizons can more effectively prepare society and communicate the likelihood of future hydrologic events of interest.
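
    A minimal sketch of the reliability idea discussed above: when the annual exceedance probability is allowed to change from year to year, reliability over a planning horizon is the product of the annual non-exceedance probabilities, which a constant return period cannot convey. The probabilities and trend below are assumed for illustration.

      # Sketch: reliability over a planning horizon with time-varying annual
      # exceedance probabilities p_t, R = prod(1 - p_t). Illustrative values.
      import numpy as np

      def reliability(p):
          """Probability of no design-event exceedance over the horizon."""
          return float(np.prod(1.0 - np.asarray(p)))

      horizon = 50
      p_constant = np.full(horizon, 0.01)                # stationary "100-year" event
      p_trending = 0.01 * 1.03 ** np.arange(horizon)     # assumed 3%/yr increase
      print(reliability(p_constant), reliability(p_trending))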

  4. Study on the influence of stochastic properties of correction terms on the reliability of instantaneous network RTK

    NASA Astrophysics Data System (ADS)

    Próchniewicz, Dominik

    2014-03-01

    The reliability of precision GNSS positioning primarily depends on correct carrier-phase ambiguity resolution. Optimal estimation and correct validation of ambiguities necessitate a proper definition of the mathematical positioning model. Of particular importance in the model definition is accounting for atmospheric errors (ionospheric and tropospheric refraction) as well as orbital errors. The use of a network of reference stations in kinematic positioning, known as the Network-based Real-Time Kinematic (Network RTK) solution, facilitates the modeling of such errors and their incorporation, in the form of correction terms, into the functional description of the positioning model. Lowered accuracy of corrections, especially during atmospheric disturbances, results in the occurrence of unaccounted biases, the so-called residual errors. Such errors can be taken into account in the Network RTK positioning model by incorporating the accuracy characteristics of the correction terms into the stochastic model of observations. In this paper we investigate the impact of expanding the stochastic model to include correction term variances on the reliability of the model solution. In particular, the results of an instantaneous solution that utilizes only a single epoch of GPS observations are analyzed. Due to its low number of degrees of freedom, such a solution mode is very sensitive to an inappropriate mathematical model definition, and a high level of solution reliability is therefore difficult to achieve. Numerical tests performed for a test network located in a mountainous area during ionospheric disturbances allow the described method to be verified under poor measurement conditions. The results of the ambiguity resolution as well as the rover positioning accuracy show that the proposed method of stochastic modeling can increase the reliability of instantaneous Network RTK performance.
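
    As a hedged sketch of the stochastic-model expansion described above, the snippet adds satellite-dependent correction-term variances to a nominal phase-noise covariance, which down-weights satellites with poorly interpolated corrections. The noise levels are invented for illustration and are not from the paper.

      # Sketch: expanding a GNSS observation covariance with correction variances.
      import numpy as np

      n_sat = 5
      sigma_phase = 0.003                                           # carrier-phase noise (m)
      sigma_corr = np.array([0.005, 0.012, 0.008, 0.020, 0.007])    # correction std (m), assumed

      C_classic = sigma_phase**2 * np.eye(n_sat)        # phase noise only
      C_expanded = C_classic + np.diag(sigma_corr**2)   # add correction-term variances

      W_classic, W_expanded = np.linalg.inv(C_classic), np.linalg.inv(C_expanded)
      print(np.diag(W_classic))
      print(np.diag(W_expanded))   # lower weights for satellites with poor corrections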

  5. Analysis of spatial correlation in predictive models of forest variables that use LiDAR auxiliary information

    Treesearch

    F. Mauro; Vicente J. Monleon; H. Temesgen; L.A. Ruiz

    2017-01-01

    Accounting for spatial correlation of LiDAR model errors can improve the precision of model-based estimators. To estimate spatial correlation, sample designs that provide close observations are needed, but their implementation might be prohibitively expensive. To quantify the gains obtained by accounting for the spatial correlation of model errors, we examined (

  6. Accounting for misclassification error in retrospective smoking data.

    PubMed

    Kenkel, Donald S; Lillard, Dean R; Mathios, Alan D

    2004-10-01

    Recent waves of major longitudinal surveys in the US and other countries include retrospective questions about the timing of smoking initiation and cessation, creating a potentially important but under-utilized source of information on smoking behavior over the life course. In this paper, we explore the extent of, consequences of, and possible solutions to misclassification errors in models of smoking participation that use data generated from retrospective reports. In our empirical work, we exploit the fact that the National Longitudinal Survey of Youth 1979 provides both contemporaneous and retrospective information about smoking status in certain years. We compare the results from four sets of models of smoking participation. The first set of results are from baseline probit models of smoking participation from contemporaneously reported information. The second set of results are from models that are identical except that the dependent variable is based on retrospective information. The last two sets of results are from models that take a parametric approach to account for a simple form of misclassification error. Our preliminary results suggest that accounting for misclassification error is important. However, the adjusted maximum likelihood estimation approach to account for misclassification does not always perform as expected. Copyright 2004 John Wiley & Sons, Ltd.
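
    To illustrate the general flavor of a parametric misclassification adjustment (not the authors' exact specification), the sketch below maximizes a probit likelihood in which the observed report equals the true smoking status only with fixed probabilities; the misclassification rates, data and parameter names are assumptions for illustration.

      # Sketch: misclassification-adjusted probit likelihood for a binary outcome.
      # alpha0 = P(report smoker | true non-smoker), alpha1 = P(report non-smoker | true smoker).
      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      def neg_loglik(params, X, y_obs, alpha0=0.02, alpha1=0.05):
          p_true = norm.cdf(X @ params)                        # probit model for true status
          p_obs = (1 - alpha1) * p_true + alpha0 * (1 - p_true)
          return -np.sum(y_obs * np.log(p_obs) + (1 - y_obs) * np.log(1 - p_obs))

      rng = np.random.default_rng(0)
      X = np.column_stack([np.ones(500), rng.normal(size=500)])
      y = (norm.cdf(X @ np.array([-0.5, 1.0])) > rng.uniform(size=500)).astype(float)
      print(minimize(neg_loglik, x0=np.zeros(2), args=(X, y)).x)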

  7. Accounting for Relatedness in Family Based Genetic Association Studies

    PubMed Central

    McArdle, P.F.; O’Connell, J.R.; Pollin, T.I.; Baumgarten, M.; Shuldiner, A.R.; Peyser, P.A.; Mitchell, B.D.

    2007-01-01

    Objective Assess the differences in point estimates, power and type 1 error rates when accounting for and ignoring family structure in genetic tests of association. Methods We compare by simulation the performance of analytic models using variance components to account for family structure and regression models that ignore relatedness for a range of possible family based study designs (i.e., sib pairs vs. large sibships vs. nuclear families vs. extended families). Results Our analyses indicate that effect size estimates and power are not significantly affected by ignoring family structure. Type 1 error rates increase when family structure is ignored, as density of family structures increases, and as trait heritability increases. For discrete traits with moderate levels of heritability and across many common sampling designs, type 1 error rates rise from a nominal 0.05 to 0.11. Conclusion Ignoring family structure may be useful in screening although it comes at the cost of an increased type 1 error rate, the magnitude of which depends on trait heritability and pedigree configuration. PMID:17570925

  8. Assessment of the measurement performance of the in-vessel system of gap 6 of the ITER plasma position reflectometer using a finite-difference time-domain Maxwell full-wave code.

    PubMed

    da Silva, F; Heuraux, S; Ricardo, E; Quental, P; Ferreira, J

    2016-11-01

    We conducted a first assessment of the measurement performance of the in-vessel components at gap 6 of the ITER plasma position reflectometer with the aid of a synthetic Ordinary Mode (O-mode) broadband frequency-modulated continuous-wave reflectometer implemented with REFMUL, a 2D finite-difference time-domain full-wave Maxwell code. These simulations take into account the system location within the vacuum vessel as well as its access to the plasma. The plasma case considered is a baseline scenario from Fusion for Energy. We concluded that for the analyzed scenario, (i) the plasma curvature and non-equatorial position of the antenna have negligible impact on the measurements; (ii) the cavity-like space surrounding the antenna can cause deflection and splitting of the probing beam; and (iii) multi-reflections on the blanket wall cause a substantial error preventing the system from operating within the required error margin.

  9. Levels of asymmetry in Formica pratensis Retz. (Hymenoptera, Insecta) from a chronic metal-contaminated site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabitsch, W.B.

    1997-07-01

    Asymmetries of bilaterally symmetrical morphological traits in workers of the ant Formica pratensis Retzius were compared at sites with different levels of metal contamination and between mature and pre-mature colonies. Statistical analyses of the right-minus-left differences revealed that their distributions fit assumptions of fluctuating asymmetry (FA). No directional asymmetry or antisymmetry was present. Mean measurement error accounted for a third of the variation, but the maximum measurement error was 65%. Although significant differences of FA in ants were observed, the inconsistent results render uncovering a clear pattern difficult. Lead, cadmium, and zinc concentrations in the ants decreased with the distance from the contamination source, but no relation was found between FA and the heavy metal levels. Ants from the premature colonies were more asymmetrical than those from mature colonies but accumulated less metals. The use of asymmetry measures in ecotoxicology and biomonitoring is criticized, but should remain widely applicable if statistical assumptions are complemented by genetic and historical data.

  10. Improved memory for error feedback.

    PubMed

    Van der Borght, Liesbet; Schouppe, Nathalie; Notebaert, Wim

    2016-11-01

    Surprising feedback in a general knowledge test leads to an improvement in memory for both the surface features and the content of the feedback (Psychon Bull Rev 16:88-92, 2009). Based on the idea that in cognitive tasks, error is surprising (the orienting account, Cognition 111:275-279, 2009), we tested whether error feedback would be better remembered than correct feedback. Colored words were presented as feedback signals in a flanker task, where the color indicated the accuracy. Subsequently, these words were again presented during a recognition task (Experiment 1) or a lexical decision task (Experiments 2 and 3). In all experiments, memory was improved for words seen as error feedback. These results are compared to the attentional boost effect (J Exp Psychol Learn Mem Cogn 39:1223-1231, 2013) and related to the orienting account for post-error slowing (Cognition 111:275-279, 2009).

  11. Cognitive flexibility correlates with gambling severity in young adults.

    PubMed

    Leppink, Eric W; Redden, Sarah A; Chamberlain, Samuel R; Grant, Jon E

    2016-10-01

    Although gambling disorder (GD) is often characterized as a problem of impulsivity, compulsivity has recently been proposed as a potentially important feature of addictive disorders. The present analysis assessed the neurocognitive and clinical relationship between compulsivity and gambling behavior. A sample of 552 non-treatment seeking gamblers aged 18-29 was recruited from the community for a study on gambling in young adults. Gambling severity levels included both casual and disordered gamblers. All participants completed the Intra/Extra-Dimensional Set Shift (IED) task, from which the total adjusted errors were correlated with gambling severity measures, and linear regression modeling was used to assess three error measures from the task. The present analysis found significant positive correlations between problems with cognitive flexibility and gambling severity (reflected by the number of DSM-5 criteria, gambling frequency, amount of money lost in the past year, and gambling urge/behavior severity). IED errors also showed a positive correlation with self-reported compulsive behavior scores. A significant correlation was also found between IED errors and non-planning impulsivity from the BIS. Linear regression models based on total IED errors, extra-dimensional (ED) shift errors, or pre-ED shift errors indicated that these factors accounted for a significant portion of the variance noted in several variables. These findings suggest that cognitive flexibility may be an important consideration in the assessment of gamblers. Results from correlational and linear regression analyses support this possibility, but the exact contributions of both impulsivity and cognitive flexibility remain entangled. Future studies will ideally be able to assess the longitudinal relationships between gambling, compulsivity, and impulsivity, helping to clarify the relative contributions of both impulsive and compulsive features. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Improving the twilight model for polar cap absorption nowcasts

    NASA Astrophysics Data System (ADS)

    Rogers, N. C.; Kero, A.; Honary, F.; Verronen, P. T.; Warrington, E. M.; Danskin, D. W.

    2016-11-01

    During solar proton events (SPE), energetic protons ionize the polar mesosphere causing HF radio wave attenuation, more strongly on the dayside where the effective recombination coefficient, αeff, is low. Polar cap absorption models predict the 30 MHz cosmic noise absorption, A, measured by riometers, based on real-time measurements of the integrated proton flux-energy spectrum, J. However, empirical models in common use cannot account for regional and day-to-day variations in the daytime and nighttime profiles of αeff(z) or the related sensitivity parameter, m = A/√J. Large prediction errors occur during twilight when m changes rapidly, and due to errors locating the rigidity cutoff latitude. Modeling the twilight change in m as a linear or Gauss error-function transition over a range of solar-zenith angles (χl < χ < χu) provides a better fit to measurements than selecting day or night αeff profiles based on the Earth-shadow height. Optimal model parameters were determined for several polar cap riometers for large SPEs in 1998-2005. The optimal χl parameter was found to be most variable, with smaller values (as low as 60°) postsunrise compared with presunset and with positive correlation between riometers over a wide area. Day and night values of m exhibited higher correlation for closely spaced riometers. A nowcast simulation is presented in which rigidity boundary latitude and twilight model parameters are optimized by assimilating age-weighted measurements from 25 riometers. The technique reduces model bias, and root-mean-square errors are reduced by up to 30% compared with a model employing no riometer data assimilation.
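
    A minimal sketch of the twilight transition idea above: the sensitivity m = A/√J is interpolated between day and night values with a Gauss error-function step over a solar-zenith-angle window (χl, χu). The numerical values of m_day, m_night and the window are assumptions chosen only to make the example run.

      # Sketch: smooth twilight transition of riometer sensitivity m(chi).
      import numpy as np
      from scipy.special import erf

      def sensitivity(chi_deg, m_day=0.20, m_night=0.05, chi_l=85.0, chi_u=100.0):
          """m as a function of solar zenith angle chi (degrees); day for small chi."""
          mid, half = 0.5 * (chi_l + chi_u), 0.5 * (chi_u - chi_l)
          s = 0.5 * (1.0 + erf((chi_deg - mid) / half))   # 0 on the dayside, 1 on the nightside
          return m_day + (m_night - m_day) * s

      chi = np.linspace(60, 120, 7)
      A_predicted = sensitivity(chi) * np.sqrt(500.0)     # J = 500 (illustrative flux)
      print(A_predicted)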

  13. Dynamic characterization of Galfenol

    NASA Astrophysics Data System (ADS)

    Scheidler, Justin J.; Asnani, Vivake M.; Deng, Zhangxian; Dapino, Marcelo J.

    2015-04-01

    A novel and precise characterization of the constitutive behavior of solid and laminated research-grade, polycrystalline Galfenol (Fe81.6Ga18.4) under quasi-static (1 Hz) and dynamic (4 to 1000 Hz) stress loadings was recently conducted by the authors. This paper summarizes the characterization by focusing on the experimental design and the dynamic sensing response of the solid Galfenol specimen. Mechanical loads are applied using a high frequency load frame. The dynamic stress amplitude for minor and major loops is 2.88 and 31.4 MPa, respectively. Dynamic minor and major loops are measured for the bias condition resulting in maximum, quasi-static sensitivity. Three key sources of error in the dynamic measurements are accounted for: (1) electromagnetic noise in strain signals due to Galfenol's magnetic response, (2) error in load signals due to the inertial force of fixturing, and (3) time delays imposed by conditioning electronics. For dynamic characterization, strain error is kept below 1.2 % of full scale by wiring two collocated gauges in series (noise cancellation) and through lead wire weaving. Inertial force error is kept below 0.41 % by measuring the dynamic force in the specimen using a nearly collocated piezoelectric load washer. The phase response of all conditioning electronics is explicitly measured and corrected for. In general, as frequency increases, the sensing response becomes more linear due to an increase in eddy currents. The location of positive and negative saturation is the same at all frequencies. As frequency increases above about 100 Hz, the elbow in the strain versus stress response disappears as the active (soft) regime stiffens toward the passive (hard) regime.

  14. Dynamic Characterization of Galfenol

    NASA Technical Reports Server (NTRS)

    Scheidler, Justin; Asnani, Vivake M.; Deng, Zhangxian; Dapino, Marcelo J.

    2015-01-01

    A novel and precise characterization of the constitutive behavior of solid and laminated research-grade, polycrystalline Galfenol (Fe81.6Ga18.4) under quasi-static (1 Hz) and dynamic (4 to 1000 Hz) stress loadings was recently conducted by the authors. This paper summarizes the characterization by focusing on the experimental design and the dynamic sensing response of the solid Galfenol specimen. Mechanical loads are applied using a high frequency load frame. The dynamic stress amplitude for minor and major loops is 2.88 and 31.4 MPa, respectively. Dynamic minor and major loops are measured for the bias condition resulting in maximum, quasi-static sensitivity. Three key sources of error in the dynamic measurements are accounted for: (1) electromagnetic noise in strain signals due to Galfenol's magnetic response, (2) error in load signals due to the inertial force of fixturing, and (3) time delays imposed by conditioning electronics. For dynamic characterization, strain error is kept below 1.2 % of full scale by wiring two collocated gauges in series (noise cancellation) and through lead wire weaving. Inertial force error is kept below 0.41 % by measuring the dynamic force in the specimen using a nearly collocated piezoelectric load washer. The phase response of all conditioning electronics is explicitly measured and corrected for. In general, as frequency increases, the sensing response becomes more linear due to an increase in eddy currents. The location of positive and negative saturation is the same at all frequencies. As frequency increases above about 100 Hz, the elbow in the strain versus stress response disappears as the active (soft) regime stiffens toward the passive (hard) regime.

  15. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices and is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, taking into account the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion has been proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions and it can satisfy the stringent requirement for high-accuracy star trackers. PMID:26948412

  16. An accuracy measurement method for star trackers based on direct astronomic observation.

    PubMed

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-03-07

    The star tracker is one of the most promising optical attitude measurement devices and is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, taking into account the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion has been proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions and it can satisfy the stringent requirement for high-accuracy star trackers.

  17. Uncertainty Analysis of Sonic Boom Levels Measured in a Simulator at NASA Langley

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Ely, Jeffry W.

    2012-01-01

    A sonic boom simulator has been constructed at NASA Langley Research Center for testing the human response to sonic booms heard indoors. Like all measured quantities, sonic boom levels in the simulator are subject to systematic and random errors. To quantify these errors, and their net influence on the measurement result, a formal uncertainty analysis is conducted. Knowledge of the measurement uncertainty, or range of values attributable to the quantity being measured, enables reliable comparisons among measurements at different locations in the simulator as well as comparisons with field data or laboratory data from other simulators. The analysis reported here accounts for acoustic excitation from two sets of loudspeakers: one loudspeaker set at the facility exterior that reproduces the exterior sonic boom waveform and a second set of interior loudspeakers for reproducing indoor rattle sounds. The analysis also addresses the effect of pressure fluctuations generated when exterior doors of the building housing the simulator are opened. An uncertainty budget is assembled to document each uncertainty component, its sensitivity coefficient, and the combined standard uncertainty. The latter quantity will be reported alongside measurement results in future research reports to indicate data reliability.
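
    As a generic illustration of how an uncertainty budget of the kind described above is combined (not the actual budget from this facility), the sketch below scales each component's standard uncertainty by its sensitivity coefficient and combines them in quadrature; the component names and values are invented placeholders.

      # Sketch: combined standard uncertainty u_c = sqrt(sum((c_i * u_i)^2)).
      import math

      budget = [
          # (component, sensitivity coefficient, standard uncertainty in dB) -- illustrative
          ("exterior loudspeaker calibration",  1.0, 0.30),
          ("interior rattle loudspeakers",      1.0, 0.20),
          ("door-induced pressure fluctuation", 1.0, 0.15),
          ("microphone position",               1.0, 0.25),
      ]

      u_c = math.sqrt(sum((c * u) ** 2 for _, c, u in budget))
      print(f"combined standard uncertainty: {u_c:.2f} dB")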

  18. Analysis of imperfections in the coherent optical excitation of single atoms to Rydberg states

    NASA Astrophysics Data System (ADS)

    de Léséleuc, Sylvain; Barredo, Daniel; Lienhard, Vincent; Browaeys, Antoine; Lahaye, Thierry

    2018-05-01

    We study experimentally various physical limitations and technical imperfections that lead to damping and finite contrast of optically driven Rabi oscillations between ground and Rydberg states of a single atom. Finite contrast is due to preparation and detection errors, and we show how to model and measure them accurately. Part of these errors originates from the finite lifetime of Rydberg states, and we observe its n³ scaling with the principal quantum number n. To explain the damping of Rabi oscillations, we use simple numerical models taking into account independently measured experimental imperfections and show that the observed damping actually results from the accumulation of several small effects, each at the level of a few percent. We discuss prospects for improving the coherence of ground-Rydberg Rabi oscillations in view of applications in quantum simulation and quantum information processing with arrays of single Rydberg atoms.

  19. Residential magnetic fields predicted from wiring configurations: II. Relationships To childhood leukemia.

    PubMed

    Thomas, D C; Bowman, J D; Jiang, L; Jiang, F; Peters, J M

    1999-10-01

    Case-control data on childhood leukemia in Los Angeles County were reanalyzed with residential magnetic fields predicted from the wiring configurations of nearby transmission and distribution lines. As described in a companion paper, the 24-h means of the magnetic field's magnitude in subjects' homes were predicted by a physically based regression model that had been fitted to 24-h measurements and wiring data. In addition, magnetic field exposures were adjusted for the most likely form of exposure assessment errors: classic errors for the 24-h measurements and Berkson errors for the predictions from wire configurations. Although the measured fields had no association with childhood leukemia (P for trend=.88), the risks were significant for predicted magnetic fields above 1.25 mG (odds ratio=2.00, 95% confidence interval=1.03-3.89), and a significant dose-response was seen (P for trend=.02). When exposures were determined by a combination of predictions and measurements that corrects for errors, the odds ratio (odds ratio=2.19, 95% confidence interval=1.12-4.31) and the trend (P=.007) showed somewhat greater significance. These findings support the hypothesis that magnetic fields from electrical lines are causally related to childhood leukemia but that this association has been inconsistent among epidemiologic studies due to different types of exposure assessment error. In these data, the leukemia risks from a child's residential magnetic field exposure appear to be better assessed by wire configurations than by 24-h area measurements. However, the predicted fields only partially account for the effect of the Wertheimer-Leeper wire code in a multivariate analysis and do not completely explain why these wire codes have been so often associated with childhood leukemia. The most plausible explanation for our findings is that the causal factor is another magnetic field exposure metric correlated to both wire code and the field's time-averaged magnitude. Copyright 1999 Wiley-Liss, Inc.

  20. On the Ability of Space-Based Passive and Active Remote Sensing Observations of CO2 to Detect Flux Perturbations to the Carbon Cycle

    NASA Technical Reports Server (NTRS)

    Crowell, Sean M. R.; Kawa, S. Randolph; Browell, Edward V.; Hammerling, Dorit M.; Moore, Berrien; Schaefer, Kevin; Doney, Scott C.

    2018-01-01

    Space-borne observations of CO2 are vital to gaining understanding of the carbon cycle in regions of the world that are difficult to measure directly, such as the tropical terrestrial biosphere, the high northern and southern latitudes, and in developing nations such as China. Measurements from passive instruments such as GOSAT (Greenhouse Gases Observing Satellite) and OCO-2 (Orbiting Carbon Observatory 2), however, are constrained by solar zenith angle limitations as well as sensitivity to the presence of clouds and aerosols. Active measurements such as those in development for the Active Sensing of CO2 Emissions over Nights, Days and Seasons (ASCENDS) mission show strong potential for making measurements in the high-latitude winter and in cloudy regions. In this work we examine the enhanced flux constraint provided by the improved coverage from an active measurement such as ASCENDS. The simulation studies presented here show that with sufficient precision, ASCENDS will detect permafrost thaw and fossil fuel emissions shifts at annual and seasonal time scales, even in the presence of transport errors, representativeness errors, and biogenic flux errors. While OCO-2 can detect some of these perturbations at the annual scale, the seasonal sampling provided by ASCENDS provides the stronger constraint. Plain Language Summary: Active and passive remote sensors show the potential to provide unprecedented information on the carbon cycle. With the all-season sampling, active remote sensors are more capable of constraining high-latitude emissions. The reduced sensitivity to cloud and aerosol also makes active sensors more capable of providing information in cloudy and polluted scenes with sufficient accuracy. These experiments account for errors that are fundamental to the top-down approach for constraining emissions, and even including these sources of error, we show that satellite remote sensors are critical for understanding the carbon cycle.

  1. Operator Variability in Scan Positioning is a Major Component of HR-pQCT Precision Error and is Reduced by Standardized Training

    PubMed Central

    Bonaretti, Serena; Vilayphiou, Nicolas; Chan, Caroline Mai; Yu, Andrew; Nishiyama, Kyle; Liu, Danmei; Boutroy, Stephanie; Ghasem-Zadeh, Ali; Boyd, Steven K.; Chapurlat, Roland; McKay, Heather; Shane, Elizabeth; Bouxsein, Mary L.; Black, Dennis M.; Majumdar, Sharmila; Orwoll, Eric S.; Lang, Thomas F.; Khosla, Sundeep; Burghardt, Andrew J.

    2017-01-01

    Introduction HR-pQCT is increasingly used to assess bone quality, fracture risk and anti-fracture interventions. The contribution of the operator has not been adequately accounted for in measurement precision. Operators acquire a 2D projection (“scout view image”) and define the region to be scanned by positioning a “reference line” on a standard anatomical landmark. In this study, we (i) evaluated the contribution of positioning variability to in vivo measurement precision, (ii) measured intra- and inter-operator positioning variability, and (iii) tested if custom training software led to superior reproducibility in new operators compared to experienced operators. Methods To evaluate the operator contribution to in vivo measurement precision, we compared precision errors calculated from 64 co-registered and non-co-registered scan-rescan images. To quantify operator variability, we developed software that simulates the positioning process of the scanner’s software. Eight experienced operators positioned reference lines on scout view images designed to test intra- and inter-operator reproducibility. Finally, we developed modules for training and evaluation of reference line positioning. We enrolled 6 new operators to participate in a common training, followed by the same reproducibility experiments performed by the experienced group. Results In vivo precision errors were up to three-fold greater (Tt.BMD and Ct.Th) when variability in scan positioning was included. Inter-operator precision errors were significantly greater than short-term intra-operator precision (p<0.001). Newly trained operators achieved intra-operator reproducibility comparable to experienced operators, and lower inter-operator variability (p<0.001). Precision errors were significantly greater for the radius than for the tibia. Conclusion Operator reference line positioning contributes significantly to in vivo measurement precision and is significantly greater for multi-operator datasets. Inter-operator variability can be significantly reduced using a systematic training platform, now available online (http://webapps.radiology.ucsf.edu/refline/). PMID:27475931
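
    Short-term precision errors of the kind reported above are commonly summarized as a root-mean-square coefficient of variation over repeat scan pairs; the sketch below computes that quantity for an invented set of duplicate Tt.BMD values, purely as an illustration of the metric rather than the study's data.

      # Sketch: RMS-CV% precision error from duplicate measurements.
      import numpy as np

      def rms_cv_percent(scan1, scan2):
          scan1, scan2 = np.asarray(scan1, float), np.asarray(scan2, float)
          means = (scan1 + scan2) / 2.0
          sds = np.abs(scan1 - scan2) / np.sqrt(2.0)     # SD of a duplicate pair
          return float(np.sqrt(np.mean((sds / means) ** 2)) * 100.0)

      tt_bmd_scan1 = [310.2, 289.5, 335.8, 301.1]   # illustrative values
      tt_bmd_scan2 = [305.9, 292.3, 331.0, 306.4]
      print(rms_cv_percent(tt_bmd_scan1, tt_bmd_scan2))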

  2. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. Here, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by the classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with the error model gives significantly more accurate predictions along with reasonable credible intervals.

  3. The early maximum likelihood estimation model of audiovisual integration in speech perception.

    PubMed

    Andersen, Tobias S

    2015-05-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP.

  4. Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations.

    PubMed

    Tornøe, Christoffer W; Overgaard, Rune V; Agersø, Henrik; Nielsen, Henrik A; Madsen, Henrik; Jonsson, E Niclas

    2005-08-01

    The objective of the present analysis was to explore the use of stochastic differential equations (SDEs) in population pharmacokinetic/pharmacodynamic (PK/PD) modeling. The intra-individual variability in nonlinear mixed-effects models based on SDEs is decomposed into two types of noise: a measurement and a system noise term. The measurement noise represents uncorrelated error due to, for example, assay error while the system noise accounts for structural misspecifications, approximations of the dynamical model, and true random physiological fluctuations. Since the system noise accounts for model misspecifications, the SDEs provide a diagnostic tool for model appropriateness. The focus of the article is on the implementation of the Extended Kalman Filter (EKF) in NONMEM for parameter estimation in SDE models. Various applications of SDEs in population PK/PD modeling are illustrated through a systematic model development example using clinical PK data of the gonadotropin releasing hormone (GnRH) antagonist degarelix. The dynamic noise estimates were used to track variations in model parameters and systematically build an absorption model for subcutaneously administered degarelix. The EKF-based algorithm was successfully implemented in NONMEM for parameter estimation in population PK/PD models described by systems of SDEs. The example indicated that it was possible to pinpoint structural model deficiencies, and that valuable information may be obtained by tracking unexplained variations in parameters.
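
    A minimal sketch of the noise decomposition described above: a one-compartment PK model is written as an SDE with system noise on the state (standing in for structural misspecification and physiological fluctuation) and a separate measurement error on the sampled concentrations. The parameters are invented and do not correspond to the degarelix model, and the Euler-Maruyama simulation below is only for intuition, not the EKF estimation itself.

      # Sketch: one-compartment PK model as an SDE with system + measurement noise.
      import numpy as np

      rng = np.random.default_rng(1)
      CL, V = 2.0, 10.0                 # clearance (L/h), volume (L) -- assumed
      sigma_w, sigma_e = 0.05, 0.10     # system noise, proportional measurement error
      dt, n_steps = 0.1, 240

      x = np.empty(n_steps); x[0] = 100.0 / V      # concentration after a 100 mg bolus
      for k in range(1, n_steps):
          drift = -(CL / V) * x[k - 1]
          x[k] = x[k - 1] + drift * dt + sigma_w * np.sqrt(dt) * rng.normal()

      t_obs = np.arange(0, n_steps, 20)
      y_obs = x[t_obs] * (1 + sigma_e * rng.normal(size=t_obs.size))   # noisy samples
      print(np.round(y_obs, 3))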

  5. Photothermal measurement of absorption and scattering losses in thin films excited by surface plasmons.

    PubMed

    Domené, Esteban A; Balzarotti, Francisco; Bragas, Andrea V; Martínez, Oscar E

    2009-12-15

    We present a novel noncontact, photothermal technique, based on the focus error signal of a commercial CD pickup head that allows direct determination of absorption in thin films. Combined with extinction methods, this technique yields the scattering contribution to the losses. Surface plasmon polaritons are excited using the Kretschmann configuration in thin Au films of varying thickness. By measuring the extinction and absorption simultaneously, it is shown that dielectric constants and thickness retrieval leads to inconsistencies if the model does not account for scattering.

  6. A mathematical method for verifying the validity of measured information about the flows of energy resources based on the state estimation theory

    NASA Astrophysics Data System (ADS)

    Pazderin, A. V.; Sof'in, V. V.; Samoylenko, V. O.

    2015-11-01

    Efforts aimed at improving energy efficiency in all branches of the fuel and energy complex should begin with setting up a high-tech automated system for monitoring and accounting energy resources. Malfunctions and failures in the measurement and information parts of this system may distort commercial measurements of energy resources and lead to financial risks for power supplying organizations. In addition, measurement errors may be connected with intentional distortion of measurements aimed at reducing payment for energy resources on the consumer's side, which leads to commercial loss of energy resources. The article presents a universal mathematical method, based on state estimation theory, for verifying the validity of measurement information in networks that transport energy resources such as electricity, heat, petroleum, and gas. The energy resource transportation network is represented by a graph whose nodes correspond to producers and consumers and whose branches stand for transportation mains (power lines, pipelines, and heat network elements). The main idea of state estimation is to obtain calculated analogs of the energy resource flows for all available measurements. Unlike "raw" measurements, which contain inaccuracies, the calculated flows of energy resources, called estimates, fully satisfy all the state equations describing the energy resource transportation network. The state equations written in terms of the calculated estimates are therefore free from residuals. The difference between a measurement and its calculated analog (estimate) is called, in estimation theory, an estimation remainder. Large values of the estimation remainders indicate large errors in particular energy resource measurements. By using the presented method it is possible to improve the validity of energy resource measurements, to estimate the observability of the transportation network, to eliminate imbalances in the measured energy resource flows, and to filter out invalid measurements at the data acquisition and processing stage of an automated energy resource monitoring and accounting system.
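
    To make the estimation-remainder idea concrete, the sketch below runs a weighted least-squares state estimate on a tiny, invented three-meter network: the branch and injection meters are reconciled against a lossless balance constraint, and a large remainder flags the suspect meter. The topology, measurement values and weights are all illustrative assumptions, not from the article.

      # Sketch: weighted least-squares state estimation with estimation remainders.
      import numpy as np

      # Unknown state: flows on branches 1->2 and 2->3.
      # Measurements: both branch meters plus an injection meter at node 2
      # (injection = flow_23 - flow_12, assuming a lossless node).
      H = np.array([[ 1.0, 0.0],
                    [ 0.0, 1.0],
                    [-1.0, 1.0]])
      z = np.array([100.0, 96.0, -9.0])                 # the injection meter is inconsistent
      W = np.diag(1.0 / np.array([1.0, 1.0, 4.0])**2)   # weights = 1 / measurement variance

      x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)  # weighted least squares
      remainders = z - H @ x_hat
      print(x_hat, remainders)                           # large remainder -> invalid measurement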

  7. Characterization of a 6 kW high-flux solar simulator with an array of xenon arc lamps capable of concentrations of nearly 5000 suns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gill, Robert; Bush, Evan; Loutzenhiser, Peter, E-mail: peter.loutzenhiser@me.gatech.edu

    2015-12-15

    A systematic methodology for characterizing a novel and newly fabricated high-flux solar simulator is presented. The high-flux solar simulator consists of seven xenon short-arc lamps mounted in truncated ellipsoidal reflectors. Characterization of spatial radiative heat flux distribution was performed using calorimetric measurements of heat flow coupled with CCD camera imaging of a Lambertian target mounted in the focal plane. The calorimetric measurements and images of the Lambertian target were obtained in two separate runs under identical conditions. Detailed modeling in the high-flux solar simulator was accomplished using Monte Carlo ray tracing to capture radiative heat transport. A least-squares regression model was used on the Monte Carlo radiative heat transfer analysis with the experimental data to account for manufacturing defects. The Monte Carlo ray tracing was calibrated by regressing modeled radiative heat flux as a function of specular error and electric power to radiation conversion onto measured radiative heat flux from experimental results. Specular error and electric power to radiation conversion efficiency were 5.92 ± 0.05 mrad and 0.537 ± 0.004, respectively. An average radiative heat flux with 95% error bounds of 4880 ± 223 kW⋅m⁻² was measured over a 40 mm diameter with a cavity-type calorimeter with an apparent absorptivity of 0.994. The Monte Carlo ray-tracing resulted in an average radiative heat flux of 893.3 kW⋅m⁻² for a single lamp, comparable to the measured radiative heat fluxes with 95% error bounds of 892.5 ± 105.3 kW⋅m⁻² from calorimetry.

  8. VizieR Online Data Catalog: 5 Galactic GC proper motions from Gaia DR1 (Watkins+, 2017)

    NASA Astrophysics Data System (ADS)

    Watkins, L. L.; van der Marel, R. P.

    2017-11-01

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories. (4 data files).

  9. Tycho-Gaia Astrometric Solution Parallaxes and Proper Motions for Five Galactic Globular Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, Laura L.; Van der Marel, Roeland P., E-mail: lwatkins@stsci.edu

    2017-04-20

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories.

  10. Accuracy of a Basketball Indoor Tracking System Based on Standard Bluetooth Low Energy Channels (NBN23®).

    PubMed

    Figueira, Bruno; Gonçalves, Bruno; Folgado, Hugo; Masiulis, Nerijus; Calleja-González, Julio; Sampaio, Jaime

    2018-06-14

    The present study aims to identify the accuracy of the NBN23® system, an indoor tracking system based on radio-frequency and standard Bluetooth Low Energy channels. Twelve capture tags were attached to a custom cart with fixed distances of 0.5, 1.0, 1.5, and 1.8 m. The cart was pushed along a predetermined course following the lines of a standard-dimensions basketball court. The course was performed at low speed (<10.0 km/h), medium speed (>10.0 km/h and <20.0 km/h) and high speed (>20.0 km/h). Root mean square error (RMSE) and percentage of variance accounted for (%VAF) were used as accuracy measures. The obtained data showed acceptable accuracy results for both RMSE and %VAF, despite the expected degree of error in position measurement at higher speeds. Across all distances and velocities, the RMSE corresponded to an average absolute error of 0.30 ± 0.13 cm with a %VAF of 90.61 ± 8.34, in line with most available systems and considered acceptable for indoor sports. The processing of data with filter correction seemed to reduce the noise and promote a lower relative error, increasing the %VAF for each measured distance. Research using positional-derived variables in basketball is still very scarce; thus, this independent test of the NBN23® tracking system provides accuracy details and opens up opportunities to develop new performance indicators that help to optimize training adaptations and performance.
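
    For reference, the two accuracy metrics used above can be computed as below; the measured and reference distance arrays are invented placeholders, not the study's data, and the %VAF definition shown (variance of the residual relative to the variance of the reference) is one common formulation rather than necessarily the authors' exact implementation.

      # Sketch: RMSE and percentage of variance accounted for (%VAF).
      import numpy as np

      def rmse(measured, reference):
          measured, reference = np.asarray(measured), np.asarray(reference)
          return float(np.sqrt(np.mean((measured - reference) ** 2)))

      def vaf_percent(measured, reference):
          measured, reference = np.asarray(measured), np.asarray(reference)
          return float(100.0 * (1.0 - np.var(reference - measured) / np.var(reference)))

      ref  = [0.5, 1.0, 1.5, 1.8, 0.5, 1.0, 1.5, 1.8]      # illustrative distances (m)
      meas = [0.52, 0.97, 1.55, 1.76, 0.48, 1.04, 1.46, 1.83]
      print(rmse(meas, ref), vaf_percent(meas, ref))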

  11. The Pot Calling the Kettle Black? A Comparison of Measures of Current Tobacco Use

    PubMed Central

    ROSENMAN, ROBERT

    2014-01-01

    Researchers often use the discrepancy between self-reported and biochemically assessed active smoking status to argue that self-reported smoking status is not reliable, ignoring the limitations of biochemically assessed measures and treating it as the gold standard in their comparisons. Here, we employ econometric techniques to compare the accuracy of self-reported and biochemically assessed current tobacco use, taking into account measurement errors with both methods. Our approach allows estimating and comparing the sensitivity and specificity of each measure without directly observing true smoking status. The results, robust to several alternative specifications, suggest that there is no clear reason to think that one measure dominates the other in accuracy. PMID:25587199

  12. Attention in the predictive mind.

    PubMed

    Ransom, Madeleine; Fazelpour, Sina; Mole, Christopher

    2017-01-01

    It has recently become popular to suggest that cognition can be explained as a process of Bayesian prediction error minimization. Some advocates of this view propose that attention should be understood as the optimization of expected precisions in the prediction-error signal (Clark, 2013, 2016; Feldman & Friston, 2010; Hohwy, 2012, 2013). This proposal successfully accounts for several attention-related phenomena. We claim that it cannot account for all of them, since there are certain forms of voluntary attention that it cannot accommodate. We therefore suggest that, although the theory of Bayesian prediction error minimization introduces some powerful tools for the explanation of mental phenomena, its advocates have been wrong to claim that Bayesian prediction error minimization is 'all the brain ever does'. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    ERIC Educational Resources Information Center

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  14. A bi-articular model for scapular-humeral rhythm reconstruction through data from wearable sensors.

    PubMed

    Lorussi, Federico; Carbonaro, Nicola; De Rossi, Danilo; Tognetti, Alessandro

    2016-04-23

    Patient-specific performance assessment of arm movements in daily life activities is fundamental for neurological rehabilitation therapy. In most applications, the shoulder movement is simplified through a ball-and-socket joint, neglecting the movement of the scapular-thoracic complex. This may lead to significant errors. We propose an innovative bi-articular model of the human shoulder for estimating the position of the hand in relation to the sternum. The model takes into account both the scapular-thoracic and gleno-humeral movements and their ratio governed by the scapular-humeral rhythm, fusing the information of inertial and textile-based strain sensors. To feed the reconstruction algorithm based on the bi-articular model, an ad-hoc sensing shirt was developed. The shirt was equipped with two inertial measurement units (IMUs) and an integrated textile strain sensor. We built the bi-articular model starting from the data obtained in two planar movements (arm abduction and flexion in the sagittal plane) and analysing the error between the reference data - measured through an optical reference system - and the ball-and-socket approximation of the shoulder. The 3D model was developed by extending the behaviour of the kinematic chain revealed in the planar trajectories through a parameter identification that takes into account the body structure of the subject. The bi-articular model was evaluated in five subjects in comparison with the optical reference system. The errors were computed in terms of distance between the reference position of the trochlea (end-effector) and the corresponding model estimation. The introduced method remarkably improved the estimation of the position of the trochlea (and consequently the estimation of the hand position during reaching activities), reducing position errors from 11.5 cm to 1.8 cm. Thanks to the developed bi-articular model, we demonstrated a reliable estimation of the upper arm kinematics with a minimal sensing system suitable for daily life monitoring of recovery.

  15. Comparing Different Accounts of Inversion Errors in Children's Non-Subject Wh-Questions: "What Experimental Data Can Tell Us?"

    ERIC Educational Resources Information Center

    Ambridge, Ben; Rowland, Caroline F.; Theakston, Anna L.; Tomasello, Michael

    2006-01-01

    This study investigated different accounts of children's acquisition of non-subject wh-questions. Questions using each of 4 wh-words ("what," "who," "how" and "why"), and 3 auxiliaries (BE, DO and CAN) in 3sg and 3pl form were elicited from 28 children aged 3;6-4;6. Rates of non-inversion error ("Who…

  16. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets

    NASA Astrophysics Data System (ADS)

    Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.

    2017-08-01

    The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Often, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
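
    A minimal sketch of the de-biasing idea, as I understand it from the description above: both snapshot matrices are projected onto the leading right-singular subspace of the augmented matrix [X; Y] (a total-least-squares treatment of noise in all snapshots) before a standard projected DMD step. The truncation rank and the random placeholder data are assumptions for illustration only.

      # Sketch: total-least-squares de-biased DMD (TDMD) eigenvalues.
      import numpy as np

      def tdmd_eigs(X, Y, r):
          """X, Y: snapshot matrices (states x times), Y one step ahead of X."""
          # Stage 1: noise-aware subspace projection using the augmented matrix.
          _, _, Vh = np.linalg.svd(np.vstack([X, Y]), full_matrices=False)
          P = Vh[:r].conj().T @ Vh[:r]          # projector onto leading right-singular subspace
          Xp, Yp = X @ P, Y @ P
          # Stage 2: standard (projected) DMD on the de-biased snapshots.
          U, S, Wh = np.linalg.svd(Xp, full_matrices=False)
          U, S, Wh = U[:, :r], S[:r], Wh[:r]
          A_tilde = U.conj().T @ Yp @ Wh.conj().T @ np.diag(1.0 / S)
          return np.linalg.eigvals(A_tilde)

      rng = np.random.default_rng(2)
      data = np.cumsum(rng.normal(size=(8, 51)), axis=1)   # placeholder snapshot sequence
      print(tdmd_eigs(data[:, :-1], data[:, 1:], r=3))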

  17. The impact of 14-nm photomask uncertainties on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-04-01

    Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, through a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while the variations assumed for the other parameters are speculative, highlighting the need for improved metrology and awareness.

  18. Trauma center maturity measured by an analysis of preventable and potentially preventable deaths: there is always something to be learned….

    PubMed

    Matsumoto, Shokei; Jung, Kyoungwon; Smith, Alan; Coimbra, Raul

    2018-06-23

    To establish the preventable and potentially preventable death rates in a mature trauma center and to identify the causes of death and highlight the lessons learned from these cases. We analyzed data from a Level-1 Trauma Center Registry, collected over a 15-year period. Data on demographics, timing of death, and potential errors were collected. Deaths were judged as preventable (PD), potentially preventable (PPD), or non-preventable (NPD), following a strict external peer-review process. During the 15-year period, there were 874 deaths, 15 (1.7%) and 6 (0.7%) of which were considered PPDs and PDs, respectively. Patients in the PD and PPD groups were not sicker and had less severe head injuries than those in the NPD group. The time-death distribution differed according to preventability. We identified 21 errors in the PD and PPD groups, but only 61 (7.3%) errors in the NPD group (n = 853). Errors in judgment accounted for the majority of errors overall and for 90.5% of the errors in the PD and PPD groups. Although the numbers of PDs and PPDs were low, denoting maturity of our trauma center, there are important lessons to be learned about how errors in judgment led to deaths that could have been prevented.

  19. Novel approximation of misalignment fading modeled by Beckmann distribution on free-space optical links.

    PubMed

    Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2016-10-03

    A novel accurate and useful approximation of the well-known Beckmann distribution is presented here, which is used to model generalized pointing errors in the context of free-space optical (FSO) communication systems. We derive an approximate closed-form probability density function (PDF) for the composite gamma-gamma (GG) atmospheric turbulence with the pointing error model using the proposed approximation of the Beckmann distribution, which is valid for most practical terrestrial FSO links. This approximation takes into account the effect of the beam width, different jitters for the elevation and the horizontal displacement and the simultaneous effect of nonzero boresight errors for each axis at the receiver plane. Additionally, the proposed approximation allows us to delimit two different FSO scenarios. The first of them is when atmospheric turbulence is the dominant effect in relation to generalized pointing errors, and the second one when generalized pointing error is the dominant effect in relation to atmospheric turbulence. The second FSO scenario has not been studied in-depth by the research community. Moreover, the accuracy of the method is measured both visually and quantitatively using curve-fitting metrics. Simulation results are further included to confirm the analytical results.

  20. A new model of Ishikawa diagram for quality assessment

    NASA Astrophysics Data System (ADS)

    Liliana, Luca

    2016-11-01

    The paper presents the results of a study concerning the use of the Ishikawa diagram in analyzing the causes that determine errors in the evaluation of the precision of parts in the machine construction field. The studied problem was "errors in the evaluation of part precision", and this constitutes the head of the Ishikawa diagram skeleton. All the possible main and secondary causes that could generate the studied problem were identified. The best-known Ishikawa models are 4M, 5M and 6M, the initials standing for: materials, methods, man, machines, mother nature, measurement. The paper shows the potential causes of the studied problem, which were first grouped into three categories, as follows: causes that lead to errors in assessing dimensional accuracy, causes that determine errors in the evaluation of shape and position deviations, and causes of errors in roughness evaluation. We took into account the main components of part precision in the machine construction field. For each of the three categories of causes, potential secondary causes were distributed over the M groups (man, methods, machines, materials, environment). We opted for a new model of Ishikawa diagram, resulting from the composition of three fish skeletons corresponding to the main categories of part accuracy.

  1. Effects of Correlated and Uncorrelated Gamma Rays on Neutron Multiplicity Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cowles, Christian C.; Behling, Richard S.; Imel, George R.

    Neutron multiplicity counting relies on time correlation between neutron events to assay the fissile mass, (α,n) to spontaneous fission neutron ratio, and neutron self-multiplication of samples. Gamma-ray sensitive neutron multiplicity counters may misidentify gamma rays as neutrons and therefore miscalculate sample characteristics. Time correlated and uncorrelated gamma-ray-like signals were added into gamma-ray free neutron multiplicity counter data to examine the effects of gamma ray signals being misidentified as neutron signals on assaying sample characteristics. Multiplicity counter measurements with and without gamma-ray-like signals were compared to determine the assay error associated with gamma-ray-like signals at various gamma-ray and neutron rates. Correlated and uncorrelated gamma-ray signals each produced consistent but different measurement errors. Correlated gamma-ray signals most strongly led to fissile mass overestimates, whereas uncorrelated gamma-ray signals most strongly led to (α,n) neutron overestimates. Gamma-ray sensitive neutron multiplicity counters may be able to account for the effects of gamma rays on measurements to mitigate measurement uncertainties.

  2. Modeling the influence of LASIK surgery on optical properties of the human eye

    NASA Astrophysics Data System (ADS)

    Szul-Pietrzak, Elżbieta; Hachoł, Andrzej; Cieślak, Krzysztof; Drożdż, Ryszard; Podbielska, Halina

    2011-11-01

    The aim was to model the influence of LASIK surgery on the optical parameters of the human eye and to ascertain which factors besides the central corneal radius of curvature and central thickness play the major role in postsurgical refractive change. Ten patients were included in the study. Pre- and postsurgical measurements included standard refraction, anterior corneal curvature and pachymetry. The optical model used in the analysis was based on the Le Grand and El Hage schematic eye, modified by the measured individual parameters of corneal geometry. A substantial difference between eye refractive error measured after LASIK and estimated from the eye model was observed. In three patients, full correction of the refractive error was achieved. However, analysis of the visual quality in terms of spot diagrams and optical transfer functions of the eye optical system revealed some differences in these measurements. This suggests that other factors besides corneal geometry may play a major role in postsurgical refraction. In this paper we investigated whether the biomechanical properties of the eyeball and changes in intraocular pressure could account for the observed discrepancies.

  3. Accurate characterisation of hole size and location by projected fringe profilometry

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Dantanarayana, Harshana G.; Yue, Huimin; Huntley, Jonathan M.

    2018-06-01

    The ability to accurately estimate the location and geometry of holes is often required in the field of quality control and automated assembly. Projected fringe profilometry is a potentially attractive technique on account of being non-contacting, of lower cost, and orders of magnitude faster than the traditional coordinate measuring machine. However, we demonstrate in this paper that fringe projection is susceptible to significant (hundreds of µm) measurement artefacts in the neighbourhood of hole edges, which give rise to errors of a similar magnitude in the estimated hole geometry. A mechanism for the phenomenon is identified based on the finite size of the imaging system’s point spread function and the resulting bias produced near to sample discontinuities in geometry and reflectivity. A mathematical model is proposed, from which a post-processing compensation algorithm is developed to suppress such errors around the holes. The algorithm includes a robust and accurate sub-pixel edge detection method based on a Fourier descriptor of the hole contour. The proposed algorithm was found to reduce significantly the measurement artefacts near the hole edges. As a result, the errors in estimated hole radius were reduced by up to one order of magnitude, to a few tens of µm for hole radii in the range 2–15 mm, compared to those from the uncompensated measurements.

  4. Antisaccade and smooth pursuit eye movements in healthy subjects receiving sertraline and lorazepam.

    PubMed

    Green, J F; King, D J; Trimble, K M

    2000-03-01

    Patients suffering from some psychiatric and neurological disorders demonstrate abnormally high levels of saccadic distractibility when carrying out the antisaccade task. This has been particularly thoroughly demonstrated in patients with schizophrenia. A large body of evidence has been accumulated from studies of patients which suggests that such eye movement abnormalities may arise from frontal lobe dysfunction. The psychopharmacology of saccadic distractibility is less well understood, but is relevant both to interpreting patient studies and to establishing the neurological basis of their findings. Twenty healthy subjects received lorazepam 0.5 mg, 1 mg and 2 mg, sertraline 50 mg and placebo in a balanced, repeated measures study design. Antisaccade, no-saccade, visually guided saccade and smooth pursuit tasks were carried out and the effects of practice and drugs measured. Lorazepam increased direction errors in the antisaccade and no-saccade tasks in a dose-dependent manner. Sertraline had no effect on these measures. Correlation showed a statistically significant, but rather weak, association between direction errors and smooth pursuit measures. Practice was shown to have a powerful effect on antisaccade direction errors. This study supports our previous work by confirming that lorazepam reliably worsens saccadic distractibility, in contrast to other psychotropic drugs such as sertraline and chlorpromazine. Our results also suggest that other studies in this field, particularly those using parallel groups design, should take account of practice effects.

  5. Accounting for Errors in Low Coverage High-Throughput Sequencing Data When Constructing Genetic Maps Using Biparental Outcrossed Populations

    PubMed Central

    Bilton, Timothy P.; Schofield, Matthew R.; Black, Michael A.; Chagné, David; Wilcox, Phillip L.; Dodds, Ken G.

    2018-01-01

    Next-generation sequencing is an efficient method that allows for substantially more markers than previous technologies, providing opportunities for building high-density genetic linkage maps, which facilitate the development of nonmodel species’ genomic assemblies and the investigation of their genes. However, constructing genetic maps using data generated via high-throughput sequencing technology (e.g., genotyping-by-sequencing) is complicated by the presence of sequencing errors and genotyping errors resulting from missing parental alleles due to low sequencing depth. If unaccounted for, these errors lead to inflated genetic maps. In addition, map construction in many species is performed using full-sibling family populations derived from the outcrossing of two individuals, where unknown parental phase and varying segregation types further complicate construction. We present a new methodology for modeling low coverage sequencing data in the construction of genetic linkage maps using full-sibling populations of diploid species, implemented in a package called GUSMap. Our model is based on the Lander–Green hidden Markov model but extended to account for errors present in sequencing data. We were able to obtain accurate estimates of the recombination fractions and overall map distance using GUSMap, while most existing mapping packages produced inflated genetic maps in the presence of errors. Our results demonstrate the feasibility of using low coverage sequencing data to produce genetic maps without requiring extensive filtering of potentially erroneous genotypes, provided that the associated errors are correctly accounted for in the model. PMID:29487138
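
    A minimal sketch of the kind of read-count model that underlies such methods is shown below: at low depth, the probability of observing only reference-allele reads from a true heterozygote is substantial, which is why unmodelled errors inflate map distances. The binomial form and the error rate are generic assumptions, not GUSMap's exact implementation.

```python
from math import comb

def read_count_likelihood(k_ref, depth, dosage, eps=0.01):
    """
    Illustrative genotype likelihood for low-coverage sequencing data
    (a generic binomial read-count model, not GUSMap's exact formulation).
    dosage: number of reference alleles carried (0, 1 or 2).
    eps:    per-read sequencing error rate (assumed value).
    """
    p_ref = (dosage / 2) * (1 - eps) + (1 - dosage / 2) * eps
    return comb(depth, k_ref) * p_ref**k_ref * (1 - p_ref)**(depth - k_ref)

# A heterozygote sampled at depth 2 looks homozygous for the reference a quarter
# of the time, so apparent recombinations appear unless the model allows for it:
for dosage in (0, 1, 2):
    print(dosage, round(read_count_likelihood(k_ref=2, depth=2, dosage=dosage), 4))
```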

  6. Accounting for Errors in Low Coverage High-Throughput Sequencing Data When Constructing Genetic Maps Using Biparental Outcrossed Populations.

    PubMed

    Bilton, Timothy P; Schofield, Matthew R; Black, Michael A; Chagné, David; Wilcox, Phillip L; Dodds, Ken G

    2018-05-01

    Next-generation sequencing is an efficient method that allows for substantially more markers than previous technologies, providing opportunities for building high-density genetic linkage maps, which facilitate the development of nonmodel species' genomic assemblies and the investigation of their genes. However, constructing genetic maps using data generated via high-throughput sequencing technology ( e.g. , genotyping-by-sequencing) is complicated by the presence of sequencing errors and genotyping errors resulting from missing parental alleles due to low sequencing depth. If unaccounted for, these errors lead to inflated genetic maps. In addition, map construction in many species is performed using full-sibling family populations derived from the outcrossing of two individuals, where unknown parental phase and varying segregation types further complicate construction. We present a new methodology for modeling low coverage sequencing data in the construction of genetic linkage maps using full-sibling populations of diploid species, implemented in a package called GUSMap. Our model is based on the Lander-Green hidden Markov model but extended to account for errors present in sequencing data. We were able to obtain accurate estimates of the recombination fractions and overall map distance using GUSMap, while most existing mapping packages produced inflated genetic maps in the presence of errors. Our results demonstrate the feasibility of using low coverage sequencing data to produce genetic maps without requiring extensive filtering of potentially erroneous genotypes, provided that the associated errors are correctly accounted for in the model. Copyright © 2018 Bilton et al.

  7. Pilot-controller communication errors : an analysis of Aviation Safety Reporting System (ASRS) reports

    DOT National Transportation Integrated Search

    1998-08-01

    The purpose of this study was to identify the factors that contribute to pilot-controller communication errors. Reports submitted to the Aviation Safety Reporting System (ASRS) offer detailed accounts of specific types of errors and a great deal of ...

  8. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    PubMed

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  9. Dynamically corrected gates for singlet-triplet spin qubits with control-dependent errors

    NASA Astrophysics Data System (ADS)

    Jacobson, N. Tobias; Witzel, Wayne M.; Nielsen, Erik; Carroll, Malcolm S.

    2013-03-01

    Magnetic field inhomogeneity due to random polarization of quasi-static local magnetic impurities is a major source of environmentally induced error for singlet-triplet double quantum dot (DQD) spin qubits. Moreover, for singlet-triplet qubits this error may depend on the applied controls. This effect is significant when a static magnetic field gradient is applied to enable full qubit control. Through a configuration interaction analysis, we observe that the dependence of the field inhomogeneity-induced error on the DQD bias voltage can vary systematically as a function of the controls for certain experimentally relevant operating regimes. To account for this effect, we have developed a straightforward prescription for adapting dynamically corrected gate sequences that assume control-independent errors into sequences that compensate for systematic control-dependent errors. We show that accounting for such errors may lead to a substantial increase in gate fidelities. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  10. The effectiveness of risk management program on pediatric nurses' medication error.

    PubMed

    Dehghan-Nayeri, Nahid; Bayat, Fariba; Salehi, Tahmineh; Faghihzadeh, Soghrat

    2013-09-01

    Medication therapy is one of the most complex and high-risk clinical processes that nurses deal with. Medication error is the most common type of error that brings about damage and death to patients, especially pediatric ones. However, these errors are preventable. Identifying and preventing undesirable events leading to medication errors are the main risk management activities. The aim of this study was to investigate the effectiveness of a risk management program on the pediatric nurses' medication error rate. This study is a quasi-experimental one with a comparison group. In this study, 200 nurses were recruited from two main pediatric hospitals in Tehran. In the experimental hospital, we applied the risk management program for a period of 6 months. Nurses of the control hospital followed the hospital's routine schedule. A pre- and post-test was performed to measure the frequency of medication error events. SPSS software, t-test, and regression analysis were used for data analysis. After the intervention, the medication error rate of nurses at the experimental hospital was significantly lower (P < 0.001) and the error-reporting rate was higher (P < 0.007) compared to before the intervention and also in comparison to the nurses of the control hospital. Based on the results of this study and taking into account the high-risk nature of the medical environment, applying quality-control programs such as risk management can effectively prevent the occurrence of undesirable hospital events. Nursing managers can reduce the medication error rate by applying risk management programs. However, this program cannot succeed without nurses' cooperation.

  11. Energy Storage Sizing Taking Into Account Forecast Uncertainties and Receding Horizon Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Hug, Gabriela; Li, Xin

    Energy storage systems (ESS) have the potential to be very beneficial for applications such as reducing the ramping of generators, peak shaving, and balancing not only the variability introduced by renewable energy sources, but also the uncertainty introduced by errors in their forecasts. Optimal usage of storage may result in reduced generation costs and an increased use of renewable energy. However, optimally sizing these devices is a challenging problem. This paper aims to provide the tools to optimally size an ESS under the assumption that it will be operated under a model predictive control scheme and that the forecasts of the renewable energy resources include prediction errors. A two-stage stochastic model predictive control is formulated and solved, where the optimal usage of the storage is simultaneously determined along with the optimal generation outputs and size of the storage. Wind forecast errors are taken into account in the optimization problem via probabilistic constraints for which an analytical form is derived. This allows the stochastic optimization problem to be solved directly, without using sampling-based approaches, and the storage to be sized to account not only for a wide range of potential scenarios, but also for a wide range of potential forecast errors. In the proposed formulation, we account for the fact that errors in the forecast affect how the device is operated later in the horizon and that a receding horizon scheme is used in operation to optimally use the available storage.
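
    The analytical handling of forecast-error constraints can be illustrated with the textbook Gaussian chance-constraint reformulation below. It is a generic single-period sketch under assumed numbers, not the paper's two-stage stochastic MPC formulation.

```python
import numpy as np
from scipy.stats import norm

# Generic Gaussian chance-constraint reformulation (illustrative numbers).
# Require P(generation + wind_forecast + error >= load) >= 1 - alpha, with the
# wind forecast error ~ N(0, sigma^2). The probabilistic constraint becomes the
# deterministic constraint: generation >= load - wind_forecast + z_(1-alpha) * sigma.

alpha = 0.05                        # allowed violation probability
sigma = 12.0                        # std. dev. of wind forecast error [MW] (assumed)
load, wind_forecast = 180.0, 60.0   # [MW] (assumed)

z = norm.ppf(1 - alpha)             # Gaussian quantile
gen_required = load - wind_forecast + z * sigma
print(f"deterministic equivalent: generation >= {gen_required:.1f} MW")
```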

  12. Spatial interpolation of solar global radiation

    NASA Astrophysics Data System (ADS)

    Lussana, C.; Uboldi, F.; Antoniazzi, C.

    2010-09-01

    Solar global radiation is defined as the radiant flux incident onto an area element of the terrestrial surface. Its direct knowledge plays a crucial role in many applications, from agrometeorology to environmental meteorology. The ARPA Lombardia meteorological network includes about one hundred pyranometers, mostly distributed in the southern part of the Alps and in the centre of the Po Plain. A statistical interpolation method based on an implementation of Optimal Interpolation is applied to the hourly averages of the solar global radiation observations measured by the ARPA Lombardia network. The background field is obtained using SMARTS (The Simple Model of the Atmospheric Radiative Transfer of Sunshine, Gueymard, 2001). The model is initialised by assuming clear sky conditions and it takes into account the solar position and orography-related effects (shade and reflection). The interpolation of pyranometric observations introduces into the analysis fields information about the presence and influence of clouds. A particular effort is devoted to preventing observations affected by large errors of different kinds (representativity errors, systematic errors, gross errors) from entering the analysis procedure. The inclusion of direct cloud information from satellite observations is also planned.
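
    The analysis step itself has the standard Optimal Interpolation form sketched below; the snippet is the textbook update, not the ARPA Lombardia operational implementation, and the grid, covariances and radiation values are invented for illustration.

```python
import numpy as np

def optimal_interpolation(xb, y, H, B, R):
    """
    Generic Optimal Interpolation / statistical analysis update:
        xa = xb + K (y - H xb),   K = B H^T (H B H^T + R)^(-1)
    xb: background field, y: observations, H: observation operator,
    B/R: background- and observation-error covariance matrices.
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Tiny example: 3 grid points, 2 pyranometer observations (illustrative numbers)
xb = np.array([500.0, 480.0, 450.0])           # clear-sky background [W m^-2]
y = np.array([300.0, 320.0])                   # cloudy observations [W m^-2]
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
B = 100.0**2 * np.exp(-np.abs(np.subtract.outer(range(3), range(3))) / 2.0)
R = 30.0**2 * np.eye(2)
print(optimal_interpolation(xb, y, H, B, R))   # cloud information spreads to point 3
```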

  13. Taking error into account when fitting models using Approximate Bayesian Computation.

    PubMed

    van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M

    2018-03-01

    Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
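
    The flavour of the approach can be conveyed with a toy probabilistic ABC accept step in which the acceptance probability is the Gaussian error likelihood of the repeated measures given the model prediction. This is a hedged sketch of the general idea only; the published algorithm's acceptance probabilities and coverage test are more elaborate, and every name and number below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_model(theta):
    """Toy deterministic model prediction for the measured quantity (illustrative)."""
    return theta**2 / 10.0

def error_calibrated_abc(y_obs, sigma_err, n_draws=50000):
    """
    Sketch of a probabilistic ABC accept step when the data are repeated
    measures of one quantity with independent N(0, sigma_err^2) error: a prior
    draw is accepted with probability equal to the (unnormalised) Gaussian
    error likelihood of the observations given the model prediction.
    """
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 10.0)              # prior draw (assumed prior)
        resid = y_obs - toy_model(theta)
        log_w = -0.5 * np.sum(resid**2) / sigma_err**2
        if np.log(rng.uniform()) < log_w:           # accept with prob exp(log_w) <= 1
            accepted.append(theta)
    return np.array(accepted)

true_theta, sigma = 6.0, 0.3
y_obs = toy_model(true_theta) + rng.normal(0.0, sigma, size=3)   # 3 repeated measures
post = error_calibrated_abc(y_obs, sigma_err=sigma)
print(f"{len(post)} accepted, posterior mean {post.mean():.2f} (true {true_theta})")
```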

  14. Some Deep Structure Manifestations in Second Language Errors of English Voiced and Voiceless "th."

    ERIC Educational Resources Information Center

    Moustafa, Margaret Heiss

    Native speakers of Egyptian Arabic make errors in their pronunciation of English that cannot always be accounted for by a contrastive analysis of Egyptian Arabic and English. This study focuses on three types of errors in the pronunciation of voiced and voiceless "th" made by fluent speakers of English. These errors were noted…

  15. [The surgeon and deontology].

    PubMed

    Sucila, Antanas

    2002-01-01

    The aim of this study is to recall surgeons' deontological principles and errors. The article demonstrates some specific deontological errors committed by surgeons towards patients and colleagues, and points out the painful sequelae of these errors. CONCLUSION. The surgeon should rigorously take deontological principles into account in routine daily practice.

  16. Analysis and calibration of Safecast data relative to the 2011 Fukushima Daiichi nuclear accident

    NASA Astrophysics Data System (ADS)

    Cervone, G.; Hultquist, C.

    2017-12-01

    Citizen-led movements producing scientific hazard data during disasters are increasingly common. After the Japanese earthquake-triggered tsunami in 2011, and the resulting radioactive releases at the damaged Fukushima Daiichi nuclear power plants, citizens monitored on-ground levels of radiation with innovative mobile devices built from off-the-shelf components. To date, the citizen-led Safecast project has recorded 50 million radiation measurements worldwide, with the majority of these measurements from Japan. A robust methodology is presented to calibrate contributed Safecast radiation measurements acquired between 2011 and 2016 in the Fukushima prefecture of Japan. The Safecast data are calibrated using official observations acquired by the U.S. Department of Energy at the time of the 2011 Fukushima Daiichi power plant nuclear accident. The methodology performs a series of interpolations between the official and contributed datasets at specific time windows and at corresponding spatial locations. The coefficients found are aggregated and interpolated using cubic and linear methods to generate a time-dependent calibration function. Normal background radiation, decay rates and missing values are taken into account during the analysis. Results show that the official Safecast static transformation function overestimates the official measurements because it fails to capture the presence of two different cesium isotopes and their changing ratio with time. The new time-dependent calibration function takes into account the presence of the different cesium isotopes and minimizes the error between official and contributed data. This time-dependent Safecast calibration function is necessary until 2030, after which the error caused by the isotope ratio will become negligible.

  17. Laser damage metrology in biaxial nonlinear crystals using different test beams

    NASA Astrophysics Data System (ADS)

    Hildenbrand, Anne; Wagner, Frank R.; Akhouayri, Hassan; Natoli, Jean-Yves; Commandre, Mireille

    2008-01-01

    Laser damage measurements in nonlinear optical crystals, in particular in biaxial crystals, may be influenced by several effects proper to these materials or greatly enhanced in these materials. Before discussion of these effects, we address the topic of error bar determination for probability measurements. Error bars for the damage probabilities are important because nonlinear crystals are often small and expensive, thus only few sites are used for a single damage probability measurement. We present the mathematical basics and a flow diagram for the numerical calculation of error bars for probability measurements that correspond to a chosen confidence level. Effects that possibly modify the maximum intensity in a biaxial nonlinear crystal are: focusing aberration, walk-off and self-focusing. Depending on focusing conditions, propagation direction, polarization of the light and the position of the focus point in the crystal, strong aberrations may change the beam profile and drastically decrease the maximum intensity in the crystal. A correction factor for this effect is proposed, but quantitative corrections are not possible without taking into account the experimental beam profile after the focusing lens. The characteristics of walk-off and self-focusing have quickly been reviewed for the sake of completeness of this article. Finally, parasitic second harmonic generation may influence the laser damage behavior of crystals. The important point for laser damage measurements is that the amount of externally observed SHG after the crystal does not correspond to the maximum amount of second harmonic light inside the crystal.
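
    For probability measurements made on only a handful of test sites, exact binomial (Clopper-Pearson) intervals give confidence-level-matched error bars of the kind discussed above. The snippet below is a standard construction offered for illustration; the paper's own numerical procedure may differ in detail.

```python
from scipy.stats import beta

def damage_probability_ci(n_damaged, n_sites, confidence=0.95):
    """
    Clopper-Pearson (exact binomial) confidence interval for a laser-damage
    probability estimated from few test sites.
    """
    alpha = 1.0 - confidence
    lo = 0.0 if n_damaged == 0 else beta.ppf(alpha / 2, n_damaged, n_sites - n_damaged + 1)
    hi = 1.0 if n_damaged == n_sites else beta.ppf(1 - alpha / 2, n_damaged + 1, n_sites - n_damaged)
    return lo, hi

# With only 10 sites per fluence, the error bars on an estimate p = 3/10 are wide:
print(damage_probability_ci(3, 10))   # roughly (0.07, 0.65) at 95% confidence
```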

  18. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. The aim was to evaluate the frequency and nature of non-clinical transcription errors using VR dictation software. A retrospective audit was performed of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72 %) reports contained ≥1 errors, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', and 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' was the most common error sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and per sentence: computed tomography contained 0.040 errors per sentence compared to 0.030 for plain film. Longer reports had a higher error rate, with reports of more than 25 sentences containing an average of 1.23 errors per report compared to 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, there were occurrences of errors with the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.

  19. Ionospheric range-rate effects in satellite-to-satellite tracking

    NASA Technical Reports Server (NTRS)

    Lipofsky, J. R.; Bent, R. B.; Llewellyn, S. K.; Schmid, P. E.

    1977-01-01

    Ionospheric range and range-rate corrections in satellite-to-satellite tracking were investigated. Major problems were cited and the magnitude of errors that have to be considered for communications between satellites and related experiments was defined. The results point to the need for a sophisticated modeling approach incorporating daily solar data and, where possible, actual ionospheric measurements as update information, as a simple median model cannot possibly account for the complex interaction of the many variables. The findings provide a basis from which the residual errors can be estimated after ionospheric modeling is incorporated in the reduction. Simulations were performed for satellites at various heights: Apollo, Geos, and Nimbus tracked by ATS-6; and in two different geometric configurations: coplanar and perpendicular orbits.

  20. Accuracy of measurement of star images on a pixel array

    NASA Technical Reports Server (NTRS)

    King, I. R.

    1983-01-01

    Algorithms are developed for predicting the accuracy with which the brightness of a star can be determined from its image on a digital detector array, as a function of the brightness of the background. The assumption is made that a known profile is being fitted by least squares. The two profiles used correspond to ST images and to ground-based observations. The first result is an approximate rule of thumb for equivalent noise area. More rigorous results are then given in tabular form. The size of the pixels, relative to the image size, is taken into account. Astrometric accuracy is also discussed briefly; the error, relative to image size, is very similar to the photometric error relative to brightness.
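
    The equivalent-noise-area rule of thumb can be sketched numerically as below: for background-limited least-squares profile fitting, the flux error is roughly the background rms times the square root of the equivalent noise area, which for a circular Gaussian profile is about 4*pi*sigma^2 pixels. The PSF width and background level are assumed numbers; the paper's specific ST and ground-based profiles are not reproduced.

```python
import numpy as np

def equivalent_noise_area(profile):
    """Equivalent noise area of a fitted profile: (sum p)^2 / (sum p^2)."""
    return profile.sum() ** 2 / (profile**2).sum()

sigma_psf = 1.5                                 # PSF width in pixels (assumed)
yy, xx = np.mgrid[-15:16, -15:16]
psf = np.exp(-(xx**2 + yy**2) / (2 * sigma_psf**2))

ena = equivalent_noise_area(psf)
print(f"equivalent noise area: {ena:.1f} px "
      f"(analytic 4*pi*sigma^2 = {4 * np.pi * sigma_psf**2:.1f})")

sigma_bg = 5.0                                  # background rms per pixel (assumed)
print(f"background-limited flux error ~ {sigma_bg * np.sqrt(ena):.1f} counts")
```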

  1. Potential, velocity, and density fields from sparse and noisy redshift-distance samples - Method

    NASA Technical Reports Server (NTRS)

    Dekel, Avishai; Bertschinger, Edmund; Faber, Sandra M.

    1990-01-01

    A method for recovering the three-dimensional potential, velocity, and density fields from large-scale redshift-distance samples is described. Galaxies are taken as tracers of the velocity field, not of the mass. The density field and the initial conditions are calculated using an iterative procedure that applies the no-vorticity assumption at an initial time and uses the Zel'dovich approximation to relate initial and final positions of particles on a grid. The method is tested using a cosmological N-body simulation 'observed' at the positions of real galaxies in a redshift-distance sample, taking into account their distance measurement errors. Malmquist bias and other systematic and statistical errors are extensively explored using both analytical techniques and Monte Carlo simulations.

  2. Photon migration through a turbid slab described by a model based on diffusion approximation. II. Comparison with Monte Carlo results.

    PubMed

    Martelli, F; Contini, D; Taddeucci, A; Zaccanti, G

    1997-07-01

    In our companion paper we presented a model to describe photon migration through a diffusing slab. The model, developed for a homogeneous slab, is based on the diffusion approximation and is able to take into account reflection at the boundaries resulting from the refractive index mismatch. In this paper the predictions of the model are compared with solutions of the radiative transfer equation obtained by Monte Carlo simulations in order to determine the applicability limits of the approximated theory in different physical conditions. A fitting procedure, carried out with the optical properties as fitting parameters, is used to check the application of the model to the inverse problem. The results show that significant errors can be made if the effect of the refractive index mismatch is not properly taken into account. Errors are more important when measurements of transmittance are used. The effects of using a receiver with a limited angular field of view and the angular distribution of the radiation that emerges from the slab have also been investigated.

  3. An Improved Algorithm for Retrieving Surface Downwelling Longwave Radiation from Satellite Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.

    2007-01-01

    Zhou and Cess [2001] developed an algorithm for retrieving surface downwelling longwave radiation (SDLW) based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear sky SDLW with surface upwelling longwave flux and column precipitable water vapor. For cloudy sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite the simplicity of their algorithm, it performed very well for most geographical regions except for those regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for scenes that were covered with ice clouds. An improved version of the algorithm prevents the large errors in the SDLW at low water vapor amounts by taking into account that under such conditions the SDLW and water vapor amount are nearly linear in their relationship. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths available from the Cloud and the Earth's Radiant Energy System (CERES) single scanner footprint (SSF) product to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better or comparable to more sophisticated algorithms currently implemented in the CERES processing and will be incorporated as one of the CERES empirical surface radiation algorithms.

  4. Quality improvement of diagnosis of the electromyography data based on statistical characteristics of the measured signals

    NASA Astrophysics Data System (ADS)

    Selivanova, Karina G.; Avrunin, Oleg G.; Zlepko, Sergii M.; Romanyuk, Sergii O.; Zabolotna, Natalia I.; Kotyra, Andrzej; Komada, Paweł; Smailova, Saule

    2016-09-01

    Research into and systematization of motor disorders, taking into account clinical and neurophysiological phenomena, is an important and topical problem in neurology. The article describes a technique for decomposing surface electromyography (EMG) signals using Principal Component Analysis. The decomposition is achieved by a set of algorithms developed specifically for EMG analysis. The accuracy was verified by calculating the Mahalanobis distance and the probability of error.
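
    A generic sketch of such a pipeline is given below: multi-channel surface EMG is decomposed with PCA and samples are screened with a Mahalanobis-distance criterion. The synthetic signals, channel count and threshold are placeholders; the authors' specially developed algorithms are not reproduced.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Synthetic multi-channel "EMG" with a 2-dimensional latent structure.
rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 8
latent = rng.standard_normal((n_samples, 2)) @ rng.standard_normal((2, n_channels))
emg = latent + 0.1 * rng.standard_normal((n_samples, n_channels))   # noisy channels

# PCA via SVD of the centred data matrix.
X = emg - emg.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
scores = X @ Vt[:2].T                        # projection onto the first two components
print("variance explained by 2 PCs:", explained[:2].sum().round(3))

# Mahalanobis distance of each sample in the reduced space, used as a screening statistic.
cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
centre = scores.mean(axis=0)
d = np.array([mahalanobis(row, centre, cov_inv) for row in scores])
print("fraction of samples beyond 3 Mahalanobis units:", np.mean(d > 3).round(4))
```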

  5. "Fragment errors" in deep dysgraphia: further support for a lexical hypothesis.

    PubMed

    Bormann, Tobias; Wallesch, Claus-W; Blanken, Gerhard

    2008-07-01

    In addition to various lexical errors, the writing of patients with deep dysgraphia may include a large number of segmental spelling errors, which increase towards the end of the word. Frequently, these errors involve deletion of two or more letters resulting in so-called "fragment errors". Different positions have been brought forward regarding their origin, including rapid decay of activation in the graphemic buffer and an impairment of more central (i.e., lexical or semantic) processing. We present data from a patient (M.D.) with deep dysgraphia who showed an increase of segmental spelling errors towards the end of the word. Several tasks were carried out to explore M.D.'s underlying functional impairment. Errors affected word-final positions in tasks like backward spelling and fragment completion. In a delayed copying task, length of the delay had no influence. In addition, when asked to recall three serially presented letters, a task which had not been carried out before, M.D. exhibited a preference for the first and the third letter and poor performance for the second letter. M.D.'s performance on these tasks contradicts the rapid decay account and instead supports a lexical-semantic account of segmental errors in deep dysgraphia. In addition, the results fit well with an implemented computational model of deep dysgraphia and segmental spelling errors.

  6. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error.

    PubMed

    Carroll, Raymond J; Delaigle, Aurore; Hall, Peter

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  7. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors to estimate the negative log-likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved the predictive performance of the individual models and of model averaging in both the synthetic and experimental studies.
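
    The information-criterion weighting that the study starts from has the standard closed form sketched below; the numbers are invented to show how modest criterion differences already drive one model's weight towards 100%, which is the behaviour the two-stage method is designed to temper.

```python
import numpy as np

def model_averaging_weights(ic_values):
    """
    Information-criterion-based model averaging weights (standard AIC/BIC/KIC-type form):
        w_k = exp(-0.5 * Delta_k) / sum_j exp(-0.5 * Delta_j),
    where Delta_k is the criterion value minus the minimum over all models.
    """
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Example: criterion differences of 10-25 already give the best model ~99% weight.
print(model_averaging_weights([120.0, 130.0, 145.0]).round(4))
```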

  8. Identifiability Of Systems With Modeling Errors

    NASA Technical Reports Server (NTRS)

    Hadaegh, Yadolah " fred" ; Bekey, George A.

    1988-01-01

    Advances in theory of modeling errors reported. Recent paper on errors in mathematical models of deterministic linear or weakly nonlinear systems. Extends theoretical work described in NPO-16661 and NPO-16785. Presents concrete way of accounting for difference in structure between mathematical model and physical process or system that it represents.

  9. Preventability of early vs. late readmissions in an academic medical center

    PubMed Central

    Graham, Kelly L.; Dike, Ogechi; Doctoroff, Lauren; Jupiter, Marisa; Vanka, Anita

    2017-01-01

    Background: It is unclear if the 30-day unplanned hospital readmission rate is a plausible accountability metric. Objective: Compare preventability of hospital readmissions between an early period [0–7 days post-discharge] and a late period [8–30 days post-discharge]. Compare causes of readmission, and frequency of markers of clinical instability 24h prior to discharge, between early and late readmissions. Design, setting, patients: 120 patient readmissions in an academic medical center between 1/1/2009 and 12/31/2010. Measures: Sum-score based on a standard algorithm that assesses preventability of each readmission based on blinded hospitalist review; average causation score for seven types of adverse events; rates of markers of clinical instability within 24h prior to discharge. Results: Readmissions were significantly more preventable in the early compared to the late period [median preventability sum score 8.5 vs. 8.0, p = 0.03]. There were significantly more management errors as causative events for the readmission in the early compared to the late period [mean causation score [scale 1–6, 6 most causal] 2.0 vs. 1.5, p = 0.04], and these errors were significantly more preventable in the early compared to the late period [mean preventability score 1.9 vs 1.5, p = 0.03]. Patients readmitted in the early period were significantly more likely to have mental status changes documented 24h prior to hospital discharge than patients readmitted in the late period [12% vs. 0%, p = 0.01]. Conclusions: Readmissions occurring in the early period were significantly more preventable. Early readmissions were associated with more management errors, and mental status changes 24h prior to discharge. Seven-day readmissions may be a better accountability measure. PMID:28622384

  10. Accounting for Dependence Induced by Weighted KNN Imputation in Paired Samples, Motivated by a Colorectal Cancer Study

    PubMed Central

    Suyundikov, Anvar; Stevens, John R.; Corcoran, Christopher; Herrick, Jennifer; Wolff, Roger K.; Slattery, Martha L.

    2015-01-01

    Missing data can arise in bioinformatics applications for a variety of reasons, and imputation methods are frequently applied to such data. We are motivated by a colorectal cancer study where miRNA expression was measured in paired tumor-normal samples of hundreds of patients, but data for many normal samples were missing due to lack of tissue availability. We compare the precision and power performance of several imputation methods, and draw attention to the statistical dependence induced by K-Nearest Neighbors (KNN) imputation. This imputation-induced dependence has not previously been addressed in the literature. We demonstrate how to account for this dependence, and show through simulation how the choice to ignore or account for this dependence affects both power and type I error rate control. PMID:25849489
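
    The dependence in question can be seen in a small synthetic sketch of distance-weighted KNN imputation on paired tumour-normal data: each imputed normal profile is a weighted average of donor patients' values and is therefore correlated with them. The data, dimensions and library call below are illustrative assumptions; the study's own adjustment for this dependence is not reproduced.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
n_patients, n_mirna = 60, 15
normal = rng.normal(5.0, 1.0, (n_patients, n_mirna))
tumour = normal + rng.normal(1.0, 0.5, (n_patients, n_mirna))   # paired samples

missing = rng.random(n_patients) < 0.3           # ~30% of normal samples unavailable
normal_obs = normal.copy()
normal_obs[missing, :] = np.nan

# Stack tumour and normal profiles as features so a patient with a missing normal
# sample is imputed from the patients whose tumour profiles are most similar.
paired = np.hstack([tumour, normal_obs])         # tumour columns are always observed
imputed = KNNImputer(n_neighbors=5, weights="distance").fit_transform(paired)
normal_imputed = imputed[:, n_mirna:]

rmse = np.sqrt(np.mean((normal_imputed[missing] - normal[missing]) ** 2))
print(f"{missing.sum()} normal samples imputed, RMSE vs truth = {rmse:.2f}")
# Imputed rows are weighted averages of donor patients' values, so they are
# statistically dependent on those donors - the dependence the paper accounts for.
```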

  11. Open-ocean boundary conditions from interior data: Local and remote forcing of Massachusetts Bay

    USGS Publications Warehouse

    Bogden, P.S.; Malanotte-Rizzoli, P.; Signell, R.

    1996-01-01

    Massachusetts and Cape Cod Bays form a semienclosed coastal basin that opens onto the much larger Gulf of Maine. Subtidal circulation in the bay is driven by local winds and remotely driven flows from the gulf. The local-wind forced flow is estimated with a regional shallow water model driven by wind measurements. The model uses a gravity wave radiation condition along the open-ocean boundary. Results compare reasonably well with observed currents near the coast. In some offshore regions however, modeled flows are an order of magnitude less energetic than the data. Strong flows are observed even during periods of weak local wind forcing. Poor model-data comparisons are attributable, at least in part, to open-ocean boundary conditions that neglect the effects of remote forcing. Velocity measurements from within Massachusetts Bay are used to estimate the remotely forced component of the flow. The data are combined with shallow water dynamics in an inverse-model formulation that follows the theory of Bennett and McIntosh [1982], who considered tides. We extend their analysis to consider the subtidal response to transient forcing. The inverse model adjusts the a priori open-ocean boundary condition, thereby minimizing a combined measure of model-data misfit and boundary condition adjustment. A "consistency criterion" determines the optimal trade-off between the two. The criterion is based on a measure of plausibility for the inverse solution. The "consistent" inverse solution reproduces 56% of the average squared variation in the data. The local-wind-driven flow alone accounts for half of the model skill. The other half is attributable to remotely forced flows from the Gulf of Maine. The unexplained 44% comes from measurement errors and model errors that are not accounted for in the analysis. 

  12. A systematic review of the measurement properties of the European Organisation for Research and Treatment of Cancer In-patient Satisfaction with Care Questionnaire, the EORTC IN-PATSAT32.

    PubMed

    Neijenhuijs, Koen I; Jansen, Femke; Aaronson, Neil K; Brédart, Anne; Groenvold, Mogens; Holzner, Bernhard; Terwee, Caroline B; Cuijpers, Pim; Verdonck-de Leeuw, Irma M

    2018-05-07

    The EORTC IN-PATSAT32 is a patient-reported outcome measure (PROM) to assess cancer patients' satisfaction with in-patient health care. The aim of this study was to investigate whether the initial good measurement properties of the IN-PATSAT32 are confirmed in new studies. Within the scope of a larger systematic review study (Prospero ID 42017057237), a systematic search was performed of Embase, Medline, PsycINFO, and Web of Science for studies that investigated measurement properties of the IN-PATSAT32 up to July 2017. Study quality was assessed, and data were extracted and synthesized according to the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) methodology. Nine studies were included in this review. The evidence on reliability and construct validity was rated as sufficient, and the quality of this evidence as moderate. The evidence on structural validity was rated as insufficient and of low quality. The evidence on internal consistency was indeterminate. Measurement error, responsiveness, criterion validity, and cross-cultural validity were not reported in the included studies. Measurement error could be calculated for two studies and was judged indeterminate. In summary, the IN-PATSAT32 performs as expected with respect to reliability and construct validity. No firm conclusions can yet be drawn as to whether the IN-PATSAT32 also performs well with respect to structural validity and internal consistency. Further research on these measurement properties of the PROM is therefore needed, as well as on measurement error, responsiveness, criterion validity, and cross-cultural validity. For future studies, it is recommended to take the COSMIN methodology into account.

  13. Analysis of vestibular schwannoma size in multiple dimensions: a comparative cohort study of different measurement techniques.

    PubMed

    Varughese, J K; Wentzel-Larsen, T; Vassbotn, F; Moen, G; Lund-Johansen, M

    2010-04-01

    In this volumetric study of the vestibular schwannoma, we evaluated the accuracy and reliability of several approximation methods that are in use, and determined the minimum volume difference that needs to be measured for it to be attributable to an actual difference rather than a retest error. We also found empirical proportionality coefficients for the different methods. DESIGN/SETTING AND PARTICIPANTS: Methodological study with investigation of three different VS measurement methods compared to a reference method that was based on serial slice volume estimates. These volume estimates were based on: (i) one single diameter, (ii) three orthogonal diameters or (iii) the maximal slice area. Altogether 252 T1-weighted MRI images with gadolinium contrast, from 139 VS patients, were examined. The retest errors, in terms of relative percentages, were determined by undertaking repeated measurements on 63 scans for each method. Intraclass correlation coefficients were used to assess the agreement between each of the approximation methods and the reference method. The tendency for approximation methods to systematically overestimate/underestimate different-sized tumours was also assessed, with the help of Bland-Altman plots. The most commonly used approximation method, the maximum diameter, was the least reliable measurement method and has inherent weaknesses that need to be considered. This includes greater retest errors than area-based measurements (25% and 15%, respectively), and that it was the only approximation method that could not easily be converted into volumetric units. Area-based measurements can furthermore be more reliable for smaller volume differences than diameter-based measurements. All our findings suggest that the maximum diameter should not be used as an approximation method. We propose the use of measurement modalities that take into account growth in multiple dimensions instead.

  14. Reverberant acoustic energy in auditoria that comprise systems of coupled rooms

    NASA Astrophysics Data System (ADS)

    Summers, Jason E.

    2003-11-01

    A frequency-dependent model for reverberant energy in coupled rooms is developed and compared with measurements for a 1:10 scale model and for Bass Hall, Ft. Worth, TX. At high frequencies, prior statistical-acoustics models are improved by geometrical-acoustics corrections for decay within sub-rooms and for energy transfer between sub-rooms. Comparisons of computational geometrical-acoustics predictions based on beam-axis tracing with scale model measurements indicate errors resulting from a tail correction that assumes constant quadratic growth of the reflection density. Using ray tracing in the late part corrects this error. For mid-frequencies, the models are modified to account for wave effects at coupling apertures by including power transmission coefficients. Similarly, statistical-acoustics models are improved through more accurate estimates of power transmission. Scale model measurements are in accord with the predicted behavior. The edge-diffraction model is adapted to study transmission through apertures. Multiple-order scattering is shown, theoretically and experimentally, to be inaccurate due to the neglect of slope diffraction. At low frequencies, perturbation models qualitatively explain scale model measurements. Measurements confirm the relation of coupling strength to the unperturbed pressure distribution on coupling surfaces. Measurements in Bass Hall exhibit effects of the coupled stage house. High-frequency predictions of the statistical-acoustics and geometrical-acoustics models and the predicted coupling-aperture effects all agree with measurements.

  15. Absolute and proportional measures of potential markers of rehearsal, and their implications for accounts of its development

    PubMed Central

    Jarrold, Christopher; Danielsson, Henrik; Wang, Xiaoli

    2015-01-01

    Previous studies of the development of phonological similarity and word length effects in children have shown that these effects are small or absent in young children, particularly when measured using visual presentation of the memoranda. This has often been taken as support for the view that young children do not rehearse. The current paper builds on recent evidence that instead suggests that absent phonological similarity and word length effects in young children reflect the same proportional cost of these effects in children of all ages. Our aims are to explore the conditions under which this proportional scaling account can reproduce existing developmental data, and in turn suggest ways that future studies might measure and model phonological similarity and word length effects in children. To that end, we first fit a single mathematical function through previously reported data that simultaneously captures absent and negative proportional effects of phonological similarity in young children plus constant proportional similarity effects in older children. This developmental function therefore provides the benchmark that we seek to reproduce in a series of subsequent simulations that test the proportional scaling account. These simulations reproduce the developmental function well, provided that they take into account the influence of floor effects and of measurement error. Our simulations suggest that future empirical studies examining these effects in the context of the development of rehearsal need to take into account proportional scaling. They also provide a demonstration of how proportional costs can be explored, and of the possible developmental functions associated with such an analysis. PMID:25852615
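
    To make the proportional-scaling idea concrete, the short simulation below assumes a constant 20% similarity cost at every age, adds measurement error and a floor at zero, and shows that the absolute effect grows with span while the proportional effect stays roughly constant. The ages, spans, noise levels and the cost itself are arbitrary illustrative assumptions, not the fitted developmental function from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

ages = np.array([5, 7, 9, 11])              # hypothetical age groups
base_span = np.array([2.0, 3.0, 4.0, 5.0])  # assumed dissimilar-item spans
proportional_cost = 0.20                    # same 20% cost at every age (assumption)

n_children = 200
for age, span in zip(ages, base_span):
    # true scores with a constant *proportional* similarity cost
    dissimilar = rng.normal(span, 0.5, n_children)
    similar = dissimilar * (1.0 - proportional_cost)
    # measurement error and a floor at zero
    noise = rng.normal(0.0, 0.5, (2, n_children))
    dissimilar_obs = np.clip(dissimilar + noise[0], 0.0, None)
    similar_obs = np.clip(similar + noise[1], 0.0, None)
    abs_effect = dissimilar_obs.mean() - similar_obs.mean()
    prop_effect = abs_effect / dissimilar_obs.mean()
    print(f"age {age}: absolute effect {abs_effect:.2f}, proportional effect {prop_effect:.2f}")
```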

  16. Parametric decadal climate forecast recalibration (DeFoReSt 1.0)

    NASA Astrophysics Data System (ADS)

    Pasternack, Alexander; Bhend, Jonas; Liniger, Mark A.; Rust, Henning W.; Müller, Wolfgang A.; Ulbrich, Uwe

    2018-01-01

    Near-term climate predictions such as decadal climate forecasts are increasingly being used to guide adaptation measures. For near-term probabilistic predictions to be useful, systematic errors of the forecasting systems have to be corrected. While methods for the calibration of probabilistic forecasts are readily available, these have to be adapted to the specifics of decadal climate forecasts including the long time horizon of decadal climate forecasts, lead-time-dependent systematic errors (drift) and the errors in the representation of long-term changes and variability. These features are compounded by small ensemble sizes to describe forecast uncertainty and a relatively short period for which typically pairs of reforecasts and observations are available to estimate calibration parameters. We introduce the Decadal Climate Forecast Recalibration Strategy (DeFoReSt), a parametric approach to recalibrate decadal ensemble forecasts that takes the above specifics into account. DeFoReSt optimizes forecast quality as measured by the continuous ranked probability score (CRPS). Using a toy model to generate synthetic forecast observation pairs, we demonstrate the positive effect on forecast quality in situations with pronounced and limited predictability. Finally, we apply DeFoReSt to decadal surface temperature forecasts from the MiKlip prototype system and find consistent, and sometimes considerable, improvements in forecast quality compared with a simple calibration of the lead-time-dependent systematic errors.
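
    The lead-time-dependent drift mentioned above is one of the components that DeFoReSt models parametrically. The sketch below is not the DeFoReSt formulation itself; it is a minimal, assumption-laden illustration of the underlying idea, in which the mean forecast-minus-observation bias of a synthetic reforecast archive is fitted with a low-order polynomial in lead time and then removed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reforecast archive: n_starts initialisations x n_lead lead years
n_starts, n_lead = 20, 10
lead = np.arange(1, n_lead + 1)

obs = rng.normal(0.0, 0.5, (n_starts, n_lead))
true_drift = 0.8 - 0.15 * lead + 0.01 * lead**2          # assumed lead-dependent bias
fcst = obs + true_drift + rng.normal(0.0, 0.3, (n_starts, n_lead))

# Fit a low-order polynomial in lead time to the mean bias (drift) ...
mean_bias = (fcst - obs).mean(axis=0)
coeffs = np.polyfit(lead, mean_bias, deg=3)

# ... and subtract it from the forecasts
drift_correction = np.polyval(coeffs, lead)
recalibrated = fcst - drift_correction
print("residual mean bias per lead year:", np.round((recalibrated - obs).mean(axis=0), 3))
```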

  17. Classical vs. evolved quenching parameters and procedures in scintillation measurements: The importance of ionization quenching

    NASA Astrophysics Data System (ADS)

    Bagán, H.; Tarancón, A.; Rauret, G.; García, J. F.

    2008-07-01

    The quenching parameters used to model detection efficiency variations in scintillation measurements have not evolved since the 1970s. Meanwhile, computer capabilities have increased enormously and ionization quenching has appeared in practical measurements using plastic scintillation. This study compares the results obtained in activity quantification by plastic scintillation of 14C samples that contain colour and ionization quenchers, using classical (SIS, SCR-limited, SCR-non-limited, SIS(ext), SQP(E)) and evolved (MWA-SCR and WDW) parameters and following three calibration approaches: single step, which does not take into account the quenching mechanism; two steps, which takes into account the quenching phenomena; and multivariate calibration. Two-step calibration (ionization followed by colour) yielded the lowest relative errors, which means that each quenching phenomenon must be specifically modelled. In addition, the sample activity was quantified more accurately when the evolved parameters were used. Multivariate calibration-PLS also yielded better results than those obtained using classical parameters, which confirms that the quenching phenomena must be taken into account. The detection limits for each calibration method and each parameter were close to those obtained theoretically using the Currie approach.

  18. Estimation of uncertainty for contour method residual stress measurements

    DOE PAGES

    Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; ...

    2014-12-03

    This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).

  19. From the Lab to the real world: sources of error in UF{sub 6} gas enrichment monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lombardi, Marcie L.

    2012-03-01

    Safeguarding uranium enrichment facilities is a serious concern for the International Atomic Energy Agency (IAEA). Safeguards methods have changed over the years, most recently switching to an improved safeguards model that calls for new technologies to help keep up with the increasing size and complexity of today’s gas centrifuge enrichment plants (GCEPs). One of the primary goals of the IAEA is to detect the production of uranium at levels greater than those an enrichment facility may have declared. In order to accomplish this goal, new enrichment monitors need to be as accurate as possible. This dissertation will look at the Advanced Enrichment Monitor (AEM), a new enrichment monitor designed at Los Alamos National Laboratory. Specifically explored are various factors that could potentially contribute to errors in a final enrichment determination delivered by the AEM. There are many factors that can cause errors in the determination of uranium hexafluoride (UF{sub 6}) gas enrichment, especially during the period when the enrichment is being measured in an operating GCEP. To measure enrichment using the AEM, a passive 186-keV (kiloelectronvolt) measurement is used to determine the {sup 235}U content in the gas, and a transmission measurement or a gas pressure reading is used to determine the total uranium content. A transmission spectrum is generated using an x-ray tube and a “notch” filter. In this dissertation, changes that could occur in the detection efficiency and the transmission errors that could result from variations in pipe-wall thickness will be explored. Additional factors that could contribute to errors in enrichment measurement will also be examined, including changes in the gas pressure, ambient and UF{sub 6} temperature, instrumental errors, and the effects of uranium deposits on the inside of the pipe walls. The sensitivity of the enrichment calculation to these various parameters will then be evaluated. Previously, UF{sub 6} gas enrichment monitors have required empty pipe measurements to accurately determine the pipe attenuation (the pipe attenuation is typically much larger than the attenuation in the gas). This dissertation reports on a method for determining the thickness of a pipe in a GCEP when obtaining an empty pipe measurement may not be feasible. This dissertation studies each of the components that may add to the final error in the enrichment measurement, and the factors that were taken into account to mitigate these issues are also detailed and tested. The use of an x-ray generator as a transmission source and the attending stability issues are addressed. Both analytical calculations and experimental measurements have been used. For completeness, some real-world analysis results from the URENCO Capenhurst enrichment plant have been included, where the final enrichment error has remained well below 1% for approximately two months.
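
    As a back-of-the-envelope illustration of one of the error sources discussed above, the sketch below propagates an error in the assumed pipe-wall thickness through the 186-keV attenuation correction into the enrichment result. The attenuation coefficient and thicknesses are assumed, illustrative values, not parameters of the AEM.

```python
import numpy as np

# Illustrative sensitivity of an enrichment result to an error in the assumed pipe-wall
# thickness. The 186-keV gammas from U-235 are attenuated by the pipe wall, so the
# inferred U-235 signal scales roughly as exp(+mu * t_assumed). Numbers are assumptions.
mu_wall = 1.3          # cm^-1, assumed linear attenuation coefficient of the wall at 186 keV
t_true = 0.40          # cm, true wall thickness
delta_t = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])   # cm, error in the assumed thickness

# If the assumed thickness is t_true + delta_t, the attenuation correction is wrong by
# a factor exp(mu * delta_t), which propagates directly into the 235U term of the enrichment.
relative_error = np.exp(mu_wall * delta_t) - 1.0
for dt, err in zip(delta_t, relative_error):
    print(f"wall-thickness error {dt:+.2f} cm -> relative enrichment error {100*err:+.1f}%")
```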

  20. Error analysis and assessment of unsteady forces acting on a flapping wing micro air vehicle: free flight versus wind-tunnel experimental methods.

    PubMed

    Caetano, J V; Percin, M; van Oudheusden, B W; Remes, B; de Wagter, C; de Croon, G C H E; de Visser, C C

    2015-08-20

    An accurate knowledge of the unsteady aerodynamic forces acting on a bio-inspired, flapping-wing micro air vehicle (FWMAV) is crucial in the design development and optimization cycle. Two different types of experimental approaches are often used: determination of forces from position data obtained from external optical tracking during free flight, or direct measurements of forces by attaching the FWMAV to a force transducer in a wind-tunnel. This study compares the quality of the forces obtained from both methods as applied to a 17.4 gram FWMAV capable of controlled flight. A comprehensive analysis of various error sources is performed. The effects of different factors, e.g., measurement errors, error propagation, numerical differentiation, filtering frequency selection, and structural eigenmode interference, are assessed. For the forces obtained from free flight experiments it is shown that a data acquisition frequency below 200 Hz and an accuracy in the position measurements lower than ± 0.2 mm may considerably hinder determination of the unsteady forces. In general, the force component parallel to the fuselage determined by the two methods compares well for identical flight conditions; however, a significant difference was observed for the forces along the stroke plane of the wings. This was found to originate from the restrictions applied by the clamp to the dynamic oscillations observed in free flight and from the structural resonance of the clamped FWMAV structure, which generates loads that cannot be distinguished from the external forces. Furthermore, the clamping position was found to have a pronounced influence on the eigenmodes of the structure, and this effect should be taken into account for accurate force measurements.

  1. Soil moisture optimal sampling strategy for Sentinel 1 validation super-sites in Poland

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Lipiec, Jerzy; Usowicz, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan

    2014-05-01

    Soil moisture (SM) exhibits high temporal and spatial variability that depends not only on the rainfall distribution, but also on the topography of the area, the physical properties of the soil and the vegetation characteristics. This large variability does not allow reliable estimation of SM in the surface layer from ground point measurements alone, especially at large spatial scales. Remote sensing measurements estimate the spatial distribution of SM in the surface layer better than point measurements do, but they require validation. This study attempts to characterize the SM distribution by determining its spatial variability in relation to the number and location of ground point measurements. The strategy takes into account gravimetric and TDR measurements with different sampling steps, numbers and distributions of measuring points at the scales of an arable field, a wetland and a commune (areas of 0.01, 1 and 140 km2, respectively), under different SM conditions. Mean values of SM were only weakly sensitive to changes in the number and arrangement of sampling points, whereas parameters describing the dispersion responded more strongly. Spatial analysis showed autocorrelations of SM whose lengths depended on the number and distribution of points within the adopted grids. Directional analysis revealed a differentiated anisotropy of SM for different grids and numbers of measuring points. It can therefore be concluded that both the number of samples and their layout over the experimental area were reflected in the parameters characterizing the SM distribution. This suggests the need for at least two sampling variants, differing in the number and positioning of the measurement points, with at least 20 points in each. This figure follows from the standard error and the range of spatial variability, which change little as the number of samples increases beyond it. The gravimetric method gives a more varied distribution of SM than the TDR measurements. It should be noted that reducing the number of samples in the measuring grid flattens the SM distribution from both methods and increases the estimation error at the same time. A grid of sensors for permanent measurement points should include points whose surroundings have similar SM distributions. The results of the analysis, including the number of points, the maximum correlation ranges and the acceptable estimation error, should be taken into account when choosing the measurement points. Adoption or adjustment of the distribution of the measurement points should be verified by additional measuring campaigns during dry and wet periods. The presented approach seems appropriate for the creation of regional-scale test (super) sites to validate products of satellites equipped with SAR (Synthetic Aperture Radar) operating in C-band, with spatial resolution suited to the single-field scale, for example ERS-1, ERS-2, Radarsat and Sentinel-1, which is going to be launched in the next few months. The work was partially funded by the Government of Poland through an ESA Contract under the PECS ELBARA_PD project No. 4000107897/13/NL/KML.

  2. Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing

    PubMed Central

    Lefebvre, Germain; Blakemore, Sarah-Jayne

    2017-01-01

    Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. However, whether or not the prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two groups of participants on reinforcement learning tasks using a computational model that was adapted to test if prediction error valence influences learning. We carried out two experiments: in the factual learning experiment, participants learned from partial feedback (i.e., the outcome of the chosen option only); in the counterfactual learning experiment, participants learned from complete feedback information (i.e., the outcomes of both the chosen and unchosen option were displayed). In the factual learning experiment, we replicated previous findings of a valence-induced bias, whereby participants learned preferentially from positive, relative to negative, prediction errors. In contrast, for counterfactual learning, we found the opposite valence-induced bias: negative prediction errors were preferentially taken into account, relative to positive ones. When considering valence-induced bias in the context of both factual and counterfactual learning, it appears that people tend to preferentially take into account information that confirms their current choice. PMID:28800597

  3. Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing.

    PubMed

    Palminteri, Stefano; Lefebvre, Germain; Kilford, Emma J; Blakemore, Sarah-Jayne

    2017-08-01

    Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. However, whether or not the prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two groups of participants on reinforcement learning tasks using a computational model that was adapted to test if prediction error valence influences learning. We carried out two experiments: in the factual learning experiment, participants learned from partial feedback (i.e., the outcome of the chosen option only); in the counterfactual learning experiment, participants learned from complete feedback information (i.e., the outcomes of both the chosen and unchosen option were displayed). In the factual learning experiment, we replicated previous findings of a valence-induced bias, whereby participants learned preferentially from positive, relative to negative, prediction errors. In contrast, for counterfactual learning, we found the opposite valence-induced bias: negative prediction errors were preferentially taken into account, relative to positive ones. When considering valence-induced bias in the context of both factual and counterfactual learning, it appears that people tend to preferentially take into account information that confirms their current choice.

  4. Development of a Methodology to Optimally Allocate Visual Inspection Time

    DTIC Science & Technology

    1989-06-01

    Model and then takes into account the costs of the errors. The purpose of the Alternative Model is to not make 104 costly mistakes while meeting the...James Buck, and Virgil Anderson, AIIE Transactions, Volume 11, No.4, December 1979. 26. "Inspection of Sheet Materials - Model and Data", Colin G. Drury ...worker error, the probability of inspector error, and the cost of system error. Paired comparisons of error phenomena from operational personnel are

  5. Scoping a field experiment: error diagnostics of TRMM precipitation radar estimates in complex terrain as a basis for IPHEx2014

    NASA Astrophysics Data System (ADS)

    Duan, Y.; Wilson, A. M.; Barros, A. P.

    2014-10-01

    A diagnostic analysis of the space-time structure of error in Quantitative Precipitation Estimates (QPE) from the Precipitation Radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the Southern Appalachian Mountains, USA since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 V7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA, and missed detection, MD) and magnitude errors (underestimation, UND, and overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the Southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter), and especially in the inner region. Although UND dominates the magnitude error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total consistent with regional hydrometeorology. The 2A25 V7 product underestimates low level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the terrain topography mask used to remove ground clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to under-catch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground clutter correction.

  6. Scoping a field experiment: error diagnostics of TRMM precipitation radar estimates in complex terrain as a basis for IPHEx2014

    NASA Astrophysics Data System (ADS)

    Duan, Y.; Wilson, A. M.; Barros, A. P.

    2015-03-01

    A diagnostic analysis of the space-time structure of error in quantitative precipitation estimates (QPEs) from the precipitation radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the southern Appalachian Mountains, USA, since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 Version 7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA; missed detection, MD) and magnitude errors (underestimation, UND; overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter) and especially in the inner region. Although UND dominates the error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total, consistent with regional hydrometeorology. The 2A25 V7 product underestimates low-level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the topography mask used to remove ground-clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to undercatch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and a local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non-uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground-clutter correction.

  7. 12 CFR 205.15 - Electronic fund transfer of government benefits.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Account balance. The means by which the consumer may obtain information concerning the account balance... history or other account information, under paragraph (c) of this section, in which the error is first... consumer for use in initiating an electronic fund transfer of government benefits from an account, other...

  8. Highlights of TOMS Version 9 Total Ozone Algorithm

    NASA Technical Reports Server (NTRS)

    Bhartia, Pawan; Haffner, David

    2012-01-01

    The fundamental basis of the TOMS total ozone algorithm was developed some 45 years ago by Dave and Mateer. It was designed to estimate total ozone from satellite measurements of the backscattered UV radiances at a few discrete wavelengths in the Huggins ozone absorption band (310-340 nm). Over the years, as the need for higher accuracy in measuring total ozone from space has increased, several improvements to the basic algorithms have been made. They include: better correction for the effects of aerosols and clouds, an improved method to account for the variation in shape of ozone profiles with season, latitude, and total ozone, and a multi-wavelength correction for remaining profile shape errors. These improvements have made it possible to retrieve total ozone with just 3 spectral channels of moderate spectral resolution (approx. 1 nm) with accuracy comparable to state-of-the-art spectral fitting algorithms like DOAS that require high spectral resolution measurements at a large number of wavelengths. One of the deficiencies of the TOMS algorithm has been that it doesn't provide an error estimate. This is a particular problem at high latitudes, where the profile shape errors become significant and vary with latitude, season, total ozone, and instrument viewing geometry. The primary objective of the TOMS V9 algorithm is to account for these effects in estimating the error bars. This is done by a straightforward implementation of the Rodgers optimum estimation method using a priori ozone profiles and their error covariance matrices constructed using Aura MLS and ozonesonde data. The algorithm produces a vertical ozone profile that contains 1-2.5 pieces of information (degrees of freedom of signal) depending upon solar zenith angle (SZA). The profile is integrated to obtain the total column. We provide information that shows the altitude range in which the profile is best determined by the measurements. One can use this information in data assimilation and analysis. A side benefit of this algorithm is that it is considerably simpler than the present algorithm that uses a database of 1512 profiles to retrieve total ozone. These profiles are tedious to construct and modify. Though conceptually similar to the SBUV V8 algorithm that was developed about a decade ago, the SBUV and TOMS V9 algorithms differ in detail. The TOMS algorithm uses 3 wavelengths to retrieve the profile while the SBUV algorithm uses 6-9 wavelengths, so TOMS provides less profile information. However, both algorithms have comparable total ozone information and TOMS V9 can be easily adapted to use additional wavelengths from instruments like GOME, OMI and OMPS to provide better profile information at smaller SZAs. The other significant difference between the two algorithms is that while the SBUV algorithm has been optimized for deriving monthly zonal means by making an appropriate choice of the a priori error covariance matrix, the TOMS algorithm has been optimized for tracking short-term variability using month- and latitude-dependent covariance matrices.

  9. Adaptive tracking of a time-varying field with a quantum sensor

    NASA Astrophysics Data System (ADS)

    Bonato, Cristian; Berry, Dominic W.

    2017-05-01

    Sensors based on single spins can enable magnetic-field detection with very high sensitivity and spatial resolution. Previous work has concentrated on sensing of a constant magnetic field or a periodic signal. Here, we instead investigate the problem of estimating a field with nonperiodic variation described by a Wiener process. We propose and study, by numerical simulations, an adaptive tracking protocol based on Bayesian estimation. The tracking protocol updates the probability distribution for the magnetic field based on measurement outcomes and adapts the choice of sensing time and phase in real time. By taking the statistical properties of the signal into account, our protocol strongly reduces the required measurement time. This leads to a reduction of the error in the estimation of a time-varying signal by up to a factor of four compared with protocols that do not take this information into account.
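
    The sketch below is a toy version of such a tracking loop under several simplifying assumptions: a grid posterior over the field (expressed as a frequency f), an idealised Ramsey-type binary measurement with outcome probability (1 + cos(2*pi*f*tau + phi))/2, a sensing time adapted to the current posterior width, an alternating phase in place of the full adaptive phase rule, and a Gaussian convolution to represent the Wiener-process drift between measurements. None of the numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def prob0(f, tau, phi):
    """Probability of measuring outcome 0 for signal frequency f (idealised Ramsey model)."""
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * f * tau + phi))

# Grid posterior over the signal frequency f (proportional to the magnetic field)
f_grid = np.linspace(-5.0, 5.0, 801)
df = f_grid[1] - f_grid[0]
posterior = np.ones_like(f_grid) / (f_grid.size * df)

f_true = 1.0
diffusion = 0.01                        # Wiener-process variance added per step (assumed)
kernel_x = np.arange(-40, 41) * df      # Gaussian kernel modelling that diffusion
kernel = np.exp(-0.5 * (kernel_x / np.sqrt(diffusion)) ** 2)
kernel /= kernel.sum()

for step in range(300):
    f_true += rng.normal(0.0, np.sqrt(diffusion))          # the field drifts

    # adapt the sensing time to the current uncertainty; alternate the phase
    mean = np.sum(f_grid * posterior) * df
    std = np.sqrt(np.sum((f_grid - mean) ** 2 * posterior) * df)
    tau = 1.0 / (2.0 * std + 1e-9)
    phi = 0.0 if step % 2 == 0 else np.pi / 2.0

    # single binary measurement outcome, then Bayes update of the grid posterior
    outcome0 = rng.random() < prob0(f_true, tau, phi)
    likelihood = prob0(f_grid, tau, phi) if outcome0 else 1.0 - prob0(f_grid, tau, phi)
    posterior *= likelihood

    # broaden the posterior to track the drifting signal, then renormalise
    posterior = np.convolve(posterior, kernel, mode="same")
    posterior /= posterior.sum() * df

estimate = np.sum(f_grid * posterior) * df
print(f"true f = {f_true:.3f}, tracked estimate = {estimate:.3f}")
```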

  10. Spatial Assessment of Model Errors from Four Regression Techniques

    Treesearch

    Lianjun Zhang; Jeffrey H. Gove; Jeffrey H. Gove

    2005-01-01

    Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...

  11. An improved triple collocation algorithm for decomposing autocorrelated and white soil moisture retrieval errors

    USDA-ARS?s Scientific Manuscript database

    If not properly accounted for, auto-correlated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...

  12. Chromosomal locus tracking with proper accounting of static and dynamic errors

    PubMed Central

    Backlund, Mikael P.; Joyner, Ryan; Moerner, W. E.

    2015-01-01

    The mean-squared displacement (MSD) and velocity autocorrelation (VAC) of tracked single particles or molecules are ubiquitous metrics for extracting parameters that describe the object’s motion, but they are both corrupted by experimental errors that hinder the quantitative extraction of underlying parameters. For the simple case of pure Brownian motion, the effects of localization error due to photon statistics (“static error”) and motion blur due to finite exposure time (“dynamic error”) on the MSD and VAC are already routinely treated. However, particles moving through complex environments such as cells, nuclei, or polymers often exhibit anomalous diffusion, for which the effects of these errors are less often sufficiently treated. We present data from tracked chromosomal loci in yeast that demonstrate the necessity of properly accounting for both static and dynamic error in the context of an anomalous diffusion that is consistent with a fractional Brownian motion (FBM). We compare these data to analytical forms of the expected values of the MSD and VAC for a general FBM in the presence of these errors. PMID:26172745
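
    For the pure-Brownian case that the abstract notes is routinely treated, the sketch below simulates a blurred, noisy trajectory and fits the standard one-dimensional model MSD_obs(tau) = 2*D*tau + 2*sigma^2 - (2/3)*D*t_E, which absorbs both the static (localization) and dynamic (motion-blur) error terms. All parameter values are assumptions for illustration; the fractional Brownian motion case treated in the paper requires the more general expressions derived there.

```python
import numpy as np

rng = np.random.default_rng(3)

# --- simulate 1D Brownian motion with motion blur and localization noise ---
D, dt, sigma_loc = 0.05, 0.1, 0.04      # um^2/s, s, um (assumed values)
n_frames, n_sub = 20000, 50             # substeps per frame emulate the finite exposure
fine = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt / n_sub), n_frames * n_sub))
blurred = fine.reshape(n_frames, n_sub).mean(axis=1)          # "dynamic" error (motion blur)
observed = blurred + rng.normal(0.0, sigma_loc, n_frames)     # "static" error (photon noise)

# --- empirical MSD at a few lags ---
lags = np.arange(1, 11)
msd = np.array([np.mean((observed[k:] - observed[:-k]) ** 2) for k in lags])

# --- fit the Brownian model that accounts for both error terms ---
# MSD_obs(tau) = 2*D*tau + 2*sigma^2 - (2/3)*D*t_E   (1D, exposure t_E = dt)
A = np.column_stack([2 * lags * dt - (2.0 / 3.0) * dt, np.full(lags.size, 2.0)])
D_fit, sig2_fit = np.linalg.lstsq(A, msd, rcond=None)[0]
print(f"D_fit = {D_fit:.4f} (true {D}),  sigma_fit = {np.sqrt(max(sig2_fit, 0)):.4f} (true {sigma_loc})")
```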

  13. Analyzing temozolomide medication errors: potentially fatal.

    PubMed

    Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee

    2014-10-01

    The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP), and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports including hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product. The strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 or 47% of errors, followed by dispensing errors, which accounted for 13 or 29%. Seven reports or 16% were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%), to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.

  14. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  15. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In particular, e.g. in climate models, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.

  16. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed up together and counted as observation error. We identify some sources of observation errors (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
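
    A minimal offline analogue of this bias-correction step is sketched below: observation-minus-background departures are regressed on a small set of bias predictors, and the fitted part is removed. The predictors, coefficients and noise level are invented for illustration; the sketch also makes the abstract's limitation visible, since any background or forward-operator error that projects onto the predictors would be absorbed into the same coefficients.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical radiance departures (observation minus background) for one channel,
# and a few bias predictors (a constant, a scan-angle term, a layer-thickness proxy).
n_obs = 5000
scan_angle = rng.uniform(-50, 50, n_obs)
thickness = rng.normal(0.0, 1.0, n_obs)

true_bias = 0.4 + 0.01 * scan_angle + 0.2 * thickness     # assumed "instrument" bias
departures = true_bias + rng.normal(0.0, 0.5, n_obs)      # everything else lumped as noise

# Offline analogue of the variational step: least-squares fit of the departures
# on the predictors gives the bias coefficients.
predictors = np.column_stack([np.ones(n_obs), scan_angle, thickness])
beta, *_ = np.linalg.lstsq(predictors, departures, rcond=None)
corrected = departures - predictors @ beta
print("estimated coefficients:", np.round(beta, 3))
print("mean departure before/after:", round(departures.mean(), 3), round(corrected.mean(), 3))
```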

  17. Finite difference schemes for long-time integration

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1993-01-01

    Finite difference schemes for the evaluation of first and second derivatives are presented. These second order compact schemes were designed for long-time integration of evolution equations by solving a quadratic constrained minimization problem. The quadratic cost function measures the global truncation error while taking into account the initial data. The resulting schemes are applicable for integration times fourfold, or more, longer than similar previously studied schemes. A similar approach was used to obtain improved integration schemes.

  18. Toward refined estimates of ambient PM2.5 exposure: Evaluation of a physical outdoor-to-indoor transport model

    NASA Astrophysics Data System (ADS)

    Hodas, Natasha; Meng, Qingyu; Lunden, Melissa M.; Turpin, Barbara J.

    2014-02-01

    Because people spend the majority of their time indoors, the variable efficiency with which ambient PM2.5 penetrates and persists indoors is a source of error in epidemiologic studies that use PM2.5 concentrations measured at central-site monitors as surrogates for ambient PM2.5 exposure. To reduce this error, practical methods to model indoor concentrations of ambient PM2.5 are needed. Toward this goal, we evaluated and refined an outdoor-to-indoor transport model using measured indoor and outdoor PM2.5 species concentrations and air exchange rates from the Relationships of Indoor, Outdoor, and Personal Air Study. Herein, we present model evaluation results, discuss what data are most critical to prediction of residential exposures at the individual-subject and population levels, and make recommendations for the application of the model in epidemiologic studies. This paper demonstrates that not accounting for certain human activities (air conditioning and heating use, opening windows) leads to bias in predicted residential PM2.5 exposures at the individual-subject level, but not the population level. The analyses presented also provide quantitative evidence that shifts in the gas-particle partitioning of ambient organics with outdoor-to-indoor transport contribute significantly to variability in indoor ambient organic carbon concentrations and suggest that methods to account for these shifts will further improve the accuracy of outdoor-to-indoor transport models.
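
    Outdoor-to-indoor transport of ambient PM2.5 is often summarised by a steady-state infiltration factor F_inf = P*a/(a + k), with penetration efficiency P, air exchange rate a and deposition rate k; the sketch below evaluates it for a few air exchange rates to show why activities such as air conditioning use or opening windows matter at the individual-subject level. The parameter values are generic assumptions, not the study's fitted values.

```python
# Minimal mass-balance sketch of outdoor-to-indoor transport for ambient PM2.5.
# Parameter values are illustrative assumptions, not the study's fitted values.
def infiltration_factor(a: float, P: float = 0.8, k: float = 0.2) -> float:
    """Steady-state fraction of ambient PM2.5 found indoors.
    a: air exchange rate (1/h); P: penetration efficiency; k: deposition rate (1/h)."""
    return P * a / (a + k)

outdoor_pm25 = 12.0                      # ug/m^3 at the central-site monitor
for a in (0.2, 0.5, 1.5):                # closed-up home vs. open windows, roughly
    indoor_ambient = infiltration_factor(a) * outdoor_pm25
    print(f"air exchange {a:.1f} /h -> indoor ambient PM2.5 ~ {indoor_ambient:.1f} ug/m^3")
```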

  19. Effects of past and recent blood pressure and cholesterol level on coronary heart disease and stroke mortality, accounting for measurement error.

    PubMed

    Boshuizen, Hendriek C; Lanti, Mariapaola; Menotti, Alessandro; Moschandreas, Joanna; Tolonen, Hanna; Nissinen, Aulikki; Nedeljkovic, Srecko; Kafatos, Anthony; Kromhout, Daan

    2007-02-15

    The authors aimed to quantify the effects of current systolic blood pressure (SBP) and serum total cholesterol on the risk of mortality in comparison with SBP or serum cholesterol 25 years previously, taking measurement error into account. The authors reanalyzed 35-year follow-up data on mortality due to coronary heart disease and stroke among subjects aged 65 years or more from nine cohorts of the Seven Countries Study. The two-step method of Tsiatis et al. (J Am Stat Assoc 1995;90:27-37) was used to adjust for regression dilution bias, and results were compared with those obtained using more commonly applied methods of adjustment for regression dilution bias. It was found that the commonly used univariate adjustment for regression dilution bias overestimates the effects of both SBP and cholesterol compared with multivariate methods. Also, the two-step method makes better use of the information available, resulting in smaller confidence intervals. Results comparing recent and past exposure indicated that past SBP is more important than recent SBP in terms of its effect on coronary heart disease mortality, while both recent and past values seem to be important for effects of cholesterol on coronary heart disease mortality and effects of SBP on stroke mortality. Associations between serum cholesterol concentration and risk of stroke mortality are weak.
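
    The contrast between the common univariate adjustment and a multivariate correction can be illustrated with a small simulation, sketched below under assumed values (two standardised exposures with correlation 0.5 and reliability 0.7). Dividing each naive coefficient by its own reliability ignores the correlation between the error-prone exposures and overshoots, whereas a multivariate regression-calibration step does not; this reproduces the direction of the difference reported above, not the authors' two-step method itself.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two correlated "true" exposures (think SBP and cholesterol, standardised),
# each observed with classical measurement error. All values are assumptions.
n = 200_000
rho, reliability = 0.5, 0.7
cov_true = np.array([[1.0, rho], [rho, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov_true, n)
err_var = 1.0 / reliability - 1.0
W = X + rng.normal(0.0, np.sqrt(err_var), (n, 2))
y = X @ np.array([1.0, 1.0]) + rng.normal(0.0, 1.0, n)

# Naive regression on the error-prone measurements
Wd = np.column_stack([np.ones(n), W])
beta_naive = np.linalg.lstsq(Wd, y, rcond=None)[0][1:]

# (a) Common univariate correction: divide each coefficient by its own reliability
beta_univ = beta_naive / reliability

# (b) Multivariate regression calibration: regress y on E[X | W]
cov_W = cov_true + np.eye(2) * err_var
X_hat = W @ np.linalg.solve(cov_W, cov_true)       # E[X|W] for centred data
Xd = np.column_stack([np.ones(n), X_hat])
beta_multi = np.linalg.lstsq(Xd, y, rcond=None)[0][1:]

print("true effects:         [1.0, 1.0]")
print("univariate-corrected:", np.round(beta_univ, 2))    # overshoots when exposures correlate
print("multivariate RC:     ", np.round(beta_multi, 2))
```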

  20. Climate Model Ensemble Methodology: Rationale and Challenges

    NASA Astrophysics Data System (ADS)

    Vezer, M. A.; Myrvold, W.

    2012-12-01

    A tractable model of the Earth's atmosphere, or, indeed, any large, complex system, is inevitably unrealistic in a variety of ways. This will have an effect on the model's output. Nonetheless, we want to be able to rely on certain features of the model's output in studies aiming to detect, attribute, and project climate change. For this, we need assurance that these features reflect the target system, and are not artifacts of the unrealistic assumptions that go into the model. One technique for overcoming these limitations is to study ensembles of models which employ different simplifying assumptions and different methods of modelling. One then either takes as reliable certain outputs on which models in the ensemble agree, or takes the average of these outputs as the best estimate. Since the Intergovernmental Panel on Climate Change's Fourth Assessment Report (IPCC AR4) modellers have aimed to improve ensemble analysis by developing techniques to account for dependencies among models, and to ascribe unequal weights to models according to their performance. The goal of this paper is to present as clearly and cogently as possible the rationale for climate model ensemble methodology, the motivation of modellers to account for model dependencies, and their efforts to ascribe unequal weights to models. The method of our analysis is as follows. We will consider a simpler, well-understood case of taking the mean of a number of measurements of some quantity. Contrary to what is sometimes said, it is not a requirement of this practice that the errors of the component measurements be independent; one must, however, compensate for any lack of independence. We will also extend the usual accounts to include cases of unknown systematic error. We draw parallels between this simpler illustration and the more complex example of climate model ensembles, detailing how ensembles can provide more useful information than any of their constituent models. This account emphasizes the epistemic importance of considering degrees of model dependence, and the practice of ascribing unequal weights to models of unequal skill.
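
    The measurement analogy in the abstract can be made concrete with a short sketch: for measurements with a known (here assumed) error covariance, the best linear unbiased combination uses inverse-covariance weights, and correlated errors simply change the weights and inflate the achievable variance rather than invalidating the averaging.

```python
import numpy as np

# Best linear unbiased combination of measurements whose errors are correlated.
# The covariance below is an assumed illustration, not a fitted model ensemble.
cov = np.array([[1.0, 0.6, 0.6],
                [0.6, 1.0, 0.6],
                [0.6, 0.6, 4.0]])      # third "measurement" is less skilful
ones = np.ones(3)

w = np.linalg.solve(cov, ones)
w /= ones @ w                          # optimal (inverse-covariance) weights
var_opt = 1.0 / (ones @ np.linalg.solve(cov, ones))
var_equal = (ones @ cov @ ones) / 9.0  # variance of the simple unweighted mean

print("optimal weights:", np.round(w, 3))
print(f"variance of weighted mean {var_opt:.3f} vs simple mean {var_equal:.3f}")
```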

  1. Evidence-based pathology: umbilical cord coiling.

    PubMed

    Khong, T Y

    2010-12-01

    The generation of a pathology test result must be based on criteria that are proven to be acceptably reproducible and clinically relevant to be evidence-based. This review de-constructs the umbilical cord coiling index to illustrate how it can stray from being evidence-based. Publications related to umbilical cord coiling were retrieved and analysed with regard to how the umbilical coiling index was calculated, abnormal coiling was defined and reference ranges were constructed. Errors and other influences that can occur with the measurement of the length of the umbilical cord or of the number of coils can compromise the generation of the coiling index. Definitions of abnormal coiling are not consistent in the literature. Reference ranges defining hypocoiling or hypercoiling have not taken those potential errors or the possible effect of gestational age into account. Even the way numerical test results in anatomical pathology are generated, as illustrated by the umbilical coiling index, warrants a critical analysis into its evidence base to ensure that they are reproducible or free from errors.

  2. The validity of two clinical tests of visual-motor perception.

    PubMed

    Wallbrown, J D; Wallbrown, F H; Engin, A W

    1977-04-01

    The study investigated the relative efficiency of the Bender and MPD as assessors of achievement-related errors in visual-motor perception. Clinical experience with these two tests suggests that beyond first grade the MPD is more sensitive than the Bender for purposes of measuring deficits in visual-motor perception that interfere with effective classroom learning. The sample was composed of 153 third-grade children from two upper-middle-class elementary schools in a suburban school system in central Ohio. For three of the four achievement criteria, the results were clearly congruent with the hypothesis stated above. That is, SpCD errors from the MPD not only showed significantly higher negative rs with the criteria (reading vocabulary, reading comprehension, and mathematics computation) than Koppitz errors from the Bender, but also accounted for a much higher proportion of the variance in these criteria. Thus, the findings suggest that psychologists engaged in the assessment of older children should seriously consider adding the MPD to their assessment battery.

  3. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.

  4. Relationship between Recent Flight Experience and Pilot Error General Aviation Accidents

    NASA Astrophysics Data System (ADS)

    Nilsson, Sarah J.

    Aviation insurance agents and fixed-base operation (FBO) owners use recent flight experience, as implied by the 90-day rule, to measure pilot proficiency in physical airplane skills, and to assess the likelihood of a pilot error accident. The generally accepted premise is that more experience in a recent timeframe predicts less of a propensity for an accident, all other factors excluded. Some of these aviation industry stakeholders measure pilot proficiency solely by using time flown within the past 90, 60, or even 30 days, not accounting for extensive research showing aeronautical decision-making and situational awareness training decrease the likelihood of a pilot error accident. In an effort to reduce the pilot error accident rate, the Federal Aviation Administration (FAA) has seen the need to shift pilot training emphasis from proficiency in physical airplane skills to aeronautical decision-making and situational awareness skills. However, current pilot training standards still focus more on the former than on the latter. The relationship between pilot error accidents and recent flight experience implied by the FAA's 90-day rule has not been rigorously assessed using empirical data. The intent of this research was to relate recent flight experience, in terms of time flown in the past 90 days, to pilot error accidents. A quantitative ex post facto approach, focusing on private pilots of single-engine general aviation (GA) fixed-wing aircraft, was used to analyze National Transportation Safety Board (NTSB) accident investigation archival data. The data were analyzed using t-tests and binary logistic regression. T-tests between the mean number of hours of recent flight experience of tricycle gear pilots involved in pilot error accidents (TPE) and non-pilot error accidents (TNPE), t(202) = -.200, p = .842, and conventional gear pilots involved in pilot error accidents (CPE) and non-pilot error accidents (CNPE), t(111) = -.271, p = .787, indicate there is no statistically significant relationship between groups. Binary logistic regression indicate that recent flight experience does not reliably distinguish between pilot error and non-pilot error accidents for TPE/TNPE, chi2 = 0.040 (df=1, p = .841) and CPE/CNPE, chi2= 0.074 (df =1, p = .786). Future research could focus on different pilot populations, and to broaden the scope, analyze several years of data.

  5. UMI-tools: modeling sequencing errors in Unique Molecular Identifiers to improve quantification accuracy

    PubMed Central

    2017-01-01

    Unique Molecular Identifiers (UMIs) are random oligonucleotide barcodes that are increasingly used in high-throughput sequencing experiments. Through a UMI, identical copies arising from distinct molecules can be distinguished from those arising through PCR amplification of the same molecule. However, bioinformatic methods to leverage the information from UMIs have yet to be formalized. In particular, sequencing errors in the UMI sequence are often ignored or else resolved in an ad hoc manner. We show that errors in the UMI sequence are common and introduce network-based methods to account for these errors when identifying PCR duplicates. Using these methods, we demonstrate improved quantification accuracy both under simulated conditions and real iCLIP and single-cell RNA-seq data sets. Reproducibility between iCLIP replicates and single-cell RNA-seq clustering are both improved using our proposed network-based method, demonstrating the value of properly accounting for errors in UMIs. These methods are implemented in the open source UMI-tools software package. PMID:28100584
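
    A much-simplified sketch of a network-based deduplication step is given below: UMIs one mismatch apart are linked when the more abundant one could plausibly have seeded the other as a sequencing error (here using the count(parent) >= 2*count(child) - 1 rule of the directional method). It ignores the traversal and cluster-resolution details of the full UMI-tools implementation and is meant only to convey the idea.

```python
from itertools import combinations

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def dedup_directional(umi_counts: dict) -> int:
    """Simplified sketch of a directional network dedup: a UMI is absorbed into a
    more abundant UMI one mismatch away if count(parent) >= 2*count(child) - 1."""
    umis = sorted(umi_counts, key=umi_counts.get, reverse=True)
    parent = {u: u for u in umis}
    for a, b in combinations(umis, 2):                 # a is at least as abundant as b
        if hamming(a, b) == 1 and umi_counts[a] >= 2 * umi_counts[b] - 1:
            if parent[b] == b:                         # absorb b only once
                parent[b] = a
    # number of UMIs that were not absorbed = estimated number of molecules
    return sum(1 for u in umis if parent[u] == u)

# toy counts at one genomic position: ATAT likely seeded the error-derived ATAA
counts = {"ATAT": 120, "ATAA": 2, "CCGG": 45, "CCGT": 40}
print(dedup_directional(counts))   # -> 3 (CCGT is too abundant to be an error of CCGG)
```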

  6. Correction of Measured Taxicab Exhaust Emission Data Based on CMEM Model

    NASA Astrophysics Data System (ADS)

    Li, Q.; Jia, T.

    2017-09-01

    Carbon dioxide emissions from urban road traffic come mainly from automobile exhaust. However, the carbon dioxide emissions recorded by the measuring instruments are unreliable due to time delay error. To improve the reliability of the data, we propose a method to correct the measured vehicle carbon dioxide emissions based on the CMEM model. First, synthetic time series of carbon dioxide emissions are simulated with the CMEM model and GPS velocity data. Then, taking the simulated data as the control group, the time delay error of the measured carbon dioxide emissions is estimated by asynchronous correlation analysis, and outliers are automatically identified and corrected using the principle of the DTW algorithm. Taking taxi trajectory data from Wuhan as an example, the results show that (1) the correlation coefficient between the measured data and the control group can be improved from 0.52 to 0.59 by mitigating the systematic time delay error, and (2) by further adjusting the outliers, which account for 4.73 % of the total data, the correlation coefficient rises to 0.63, which indicates strong correlation. The construction of low-carbon traffic has become a focus of the local government. In response to calls for energy saving and emission reduction, the distribution of carbon emissions from motor vehicle exhaust was studied, and our corrected data can be used for further air quality analysis.
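
    A toy version of the asynchronous-correlation step is sketched below: a simulated (control) series is compared with a delayed, noisy "measured" series, and the systematic delay is estimated as the shift that maximises the correlation between the two. The series, noise level and lag are fabricated for illustration, and the DTW-based outlier adjustment used in the study is not shown.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated (control) CO2 series vs. a "measured" series that lags behind it.
n, true_lag = 600, 7                    # samples; lag in samples (assumption)
simulated = np.convolve(rng.normal(0, 1, n), np.ones(10) / 10, mode="same")
measured = np.roll(simulated, true_lag) + rng.normal(0, 0.2, n)

# Asynchronous (lagged) correlation: pick the shift that maximises the correlation
max_shift = 30
shifts = np.arange(-max_shift, max_shift + 1)
corrs = [np.corrcoef(np.roll(measured, -s), simulated)[0, 1] for s in shifts]
best = shifts[int(np.argmax(corrs))]

aligned = np.roll(measured, -best)      # remove the systematic time-delay error
print(f"estimated delay: {best} samples, correlation {max(corrs):.2f}")
```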

  7. Are gestational age, birth weight, and birth length indicators of favorable fetal growth conditions? A structural equation analysis of Filipino infants.

    PubMed

    Bollen, Kenneth A; Noble, Mark D; Adair, Linda S

    2013-07-30

    The fetal origins hypothesis emphasizes the life-long health impacts of prenatal conditions. Birth weight, birth length, and gestational age are indicators of the fetal environment. However, these variables often have missing data and are subject to random and systematic errors caused by delays in measurement, differences in measurement instruments, and human error. With data from the Cebu (Philippines) Longitudinal Health and Nutrition Survey, we use structural equation models, to explore random and systematic errors in these birth outcome measures, to analyze how maternal characteristics relate to birth outcomes, and to take account of missing data. We assess whether birth weight, birth length, and gestational age are influenced by a single latent variable that we call favorable fetal growth conditions (FFGC) and if so, which variable is most closely related to FFGC. We find that a model with FFGC as a latent variable fits as well as a less parsimonious model that has birth weight, birth length, and gestational age as distinct individual variables. We also demonstrate that birth weight is more reliably measured than is gestational age. FFGCs were significantly influenced by taller maternal stature, better nutritional stores indexed by maternal arm fat and muscle area during pregnancy, higher birth order, avoidance of smoking, and maternal age 20-35 years. Effects of maternal characteristics on newborn weight, length, and gestational age were largely indirect, operating through FFGC. Copyright © 2013 John Wiley & Sons, Ltd.

  8. Prevalence of refractive error and visual impairment among rural school-age children of Goro District, Gurage Zone, Ethiopia.

    PubMed

    Kedir, Jafer; Girma, Abonesh

    2014-10-01

    Refractive error is one of the major causes of blindness and visual impairment in children, but community-based studies are scarce, especially in rural parts of Ethiopia. This study therefore aims to assess the prevalence of refractive error and its magnitude as a cause of visual impairment among school-age children of a rural community. This community-based cross-sectional descriptive study was conducted from March 1 to April 30, 2009 in rural villages of Goro district of Gurage Zone, located southwest of Addis Ababa, the capital of Ethiopia. A multistage cluster sampling method was used with simple random selection of representative villages in the district. Chi-square and t-tests were used in the data analysis. A total of 570 school-age children (ages 7-15) were evaluated, 54% boys and 46% girls. The prevalence of refractive error was 3.5% (myopia 2.6% and hyperopia 0.9%). Refractive error was the major cause of visual impairment, accounting for 54% of all causes in the study group. No child was found wearing corrective spectacles during the study period. Refractive error was the commonest cause of visual impairment in children of the district, but no measures were taken to reduce the burden in the community. Large-scale, community-level screening for refractive error should therefore be conducted and integrated with regular school eye screening programs. Effective strategies need to be devised to provide low-cost corrective spectacles in the rural community.

  9. Retrieval of ice cloud properties using an optimal estimation algorithm and MODIS infrared observations: 1. Forward model, error analysis, and information content

    NASA Astrophysics Data System (ADS)

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2016-05-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary data sets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.
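    The OE retrieval minimizes the standard optimal-estimation cost function; the generic (Rodgers-type) form below is given for orientation, with symbols as conventionally defined rather than quoted from the paper:

        J(\mathbf{x}) = \left[\mathbf{y} - \mathbf{F}(\mathbf{x}, \mathbf{b})\right]^{\top}\mathbf{S}_{\epsilon}^{-1}\left[\mathbf{y} - \mathbf{F}(\mathbf{x}, \mathbf{b})\right] + (\mathbf{x} - \mathbf{x}_a)^{\top}\mathbf{S}_a^{-1}(\mathbf{x} - \mathbf{x}_a),

    where x = (τ, reff, h) is the retrieved state, y the vector of MODIS IR observations, F the fast RT forward model with ancillary parameters b (atmospheric state, surface, assumed ice crystal habit), S_ε the combined error covariance into which the four error sources listed above enter, and (x_a, S_a) the a priori state and its covariance.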

  10. Retrieval of ice cloud properties using an optimal estimation algorithm and MODIS infrared observations. Part I: Forward model, error analysis, and information content.

    PubMed

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2016-05-27

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.

  11. Momentum Flux Determination Using the Multi-beam Poker Flat Incoherent Scatter Radar

    NASA Technical Reports Server (NTRS)

    Nicolls, M. J.; Fritts, D. C.; Janches, Diego; Heinselman, C. J.

    2012-01-01

    In this paper, we develop an estimator for the vertical flux of horizontal momentum that is applicable to systems with arbitrary but fixed beam pointing, such as the Poker Flat Incoherent Scatter Radar (PFISR). This method uses information from all available beams to resolve the variances of the wind field in addition to the vertical flux of both meridional and zonal momentum, targeted for high-frequency wave motions. The estimator utilises the full covariance of the distributed measurements, which provides a significant reduction in errors over the direct extension of previously developed techniques and allows for the calculation of an error covariance matrix of the estimated quantities. We find that for the PFISR experiment, we can construct an unbiased and robust estimator of the momentum flux if sufficient and proper beam orientations are chosen, which can in the future be optimized for the expected frequency distribution of momentum-containing scales. However, there is a potential trade-off between biases and standard errors introduced with the new approach, which must be taken into account when assessing the momentum fluxes. We apply the estimator to PFISR measurements on 23 April 2008 and 21 December 2007, from 60-85 km altitude, and show expected results as compared to mean winds and in relation to the measured vertical velocity variances.

  12. The Reliability of a Three-Dimensional Photo System- (3dMDface-) Based Evaluation of the Face in Cleft Lip Infants

    PubMed Central

    Ort, Rebecca; Metzler, Philipp; Kruse, Astrid L.; Matthews, Felix; Zemann, Wolfgang; Grätz, Klaus W.; Luebbers, Heinz-Theo

    2012-01-01

    Ample data exist about the high precision of three-dimensional (3D) scanning devices and their data acquisition of the facial surface. However, a question remains regarding which facial landmarks are reliable if identified in 3D images taken under clinical circumstances. Sources of error may be technical, user-dependent, or related to the patient's anatomy. Based on clinical 3D photos taken with the 3dMDface system, the intra-observer repeatability of 27 facial landmarks in six cleft lip (CL) infants and one non-CL infant was evaluated based on a total of over 1,100 measurements. Data acquisition was sometimes challenging but successful in all patients. The mean error was 0.86 mm, with a range of 0.39 mm (exocanthion) to 2.21 mm (soft gonion). Typically, a landmark could have a small mean error yet still show high variance across measurements; for example, exocanthion ranged from 0.04 mm to 0.93 mm. Conversely, relatively imprecise landmarks can still provide accurate data in specific spatial planes. The degree of precision therefore depends on the landmarks and spatial planes in question. In clinical investigations, the degree of reliability for the landmarks evaluated should be taken into account. Additional reliability can be achieved by taking multiple measurements. PMID:22919476

  13. Claims, errors, and compensation payments in medical malpractice litigation.

    PubMed

    Studdert, David M; Mello, Michelle M; Gawande, Atul A; Gandhi, Tejal K; Kachalia, Allen; Yoon, Catherine; Puopolo, Ann Louise; Brennan, Troyen A

    2006-05-11

    In the current debate over tort reform, critics of the medical malpractice system charge that frivolous litigation--claims that lack evidence of injury, substandard care, or both--is common and costly. Trained physicians reviewed a random sample of 1452 closed malpractice claims from five liability insurers to determine whether a medical injury had occurred and, if so, whether it was due to medical error. We analyzed the prevalence, characteristics, litigation outcomes, and costs of claims that lacked evidence of error. For 3 percent of the claims, there were no verifiable medical injuries, and 37 percent did not involve errors. Most of the claims that were not associated with errors (370 of 515 [72 percent]) or injuries (31 of 37 [84 percent]) did not result in compensation; most that involved injuries due to error did (653 of 889 [73 percent]). Payment of claims not involving errors occurred less frequently than did the converse form of inaccuracy--nonpayment of claims associated with errors. When claims not involving errors were compensated, payments were significantly lower on average than were payments for claims involving errors (313,205 dollars vs. 521,560 dollars, P=0.004). Overall, claims not involving errors accounted for 13 to 16 percent of the system's total monetary costs. For every dollar spent on compensation, 54 cents went to administrative expenses (including those involving lawyers, experts, and courts). Claims involving errors accounted for 78 percent of total administrative costs. Claims that lack evidence of error are not uncommon, but most are denied compensation. The vast majority of expenditures go toward litigation over errors and payment of them. The overhead costs of malpractice litigation are exorbitant. Copyright 2006 Massachusetts Medical Society.

  14. Comparing errors in Medicaid reporting across surveys: evidence to date.

    PubMed

    Call, Kathleen T; Davern, Michael E; Klerman, Jacob A; Lynch, Victoria

    2013-04-01

    To synthesize evidence on the accuracy of Medicaid reporting across state and federal surveys. All available validation studies. Compare results from existing research to understand variation in reporting across surveys. Synthesize all available studies validating survey reports of Medicaid coverage. Across all surveys, reporting some type of insurance coverage is better than reporting Medicaid specifically. Therefore, estimates of uninsurance are less biased than estimates of specific sources of coverage. The CPS stands out as being particularly inaccurate. Measuring health insurance coverage is prone to some level of error, yet survey overstatements of uninsurance are modest in most surveys. Accounting for all forms of bias is complex. Researchers should consider adjusting estimates of Medicaid and uninsurance in surveys prone to high levels of misreporting. © Health Research and Educational Trust.

  15. Propeller aircraft interior noise model. II - Scale-model and flight-test comparisons

    NASA Technical Reports Server (NTRS)

    Willis, C. M.; Mayes, W. H.

    1987-01-01

    A program for predicting the sound levels inside propeller driven aircraft arising from sidewall transmission of airborne exterior noise is validated through comparisons of predictions with both scale-model test results and measurements obtained in flight tests on a turboprop aircraft. The program produced unbiased predictions for the case of the scale-model tests, with a standard deviation of errors of about 4 dB. For the case of the flight tests, the predictions revealed a bias of 2.62-4.28 dB (depending upon whether or not the data for the fourth harmonic were included) and the standard deviation of the errors ranged between 2.43 and 4.12 dB. The analytical model is shown to be capable of taking changes in the flight environment into account.

  16. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    PubMed Central

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  17. Performance improvement of a binary quantized all-digital phase-locked loop with a new aided-acquisition technique

    NASA Astrophysics Data System (ADS)

    Sandoz, J.-P.; Steenaart, W.

    1984-12-01

    The nonuniform sampling digital phase-locked loop (DPLL) with sequential loop filter, in which the correction sizes are controlled by the accumulated differences of two additional phase comparators, is graphically analyzed. In the absence of noise and frequency drift, the analysis gives some physical insight into the acquisition and tracking behavior. Taking noise into account, a mathematical model is derived and a random walk technique is applied to evaluate the rms phase error and the mean acquisition time. Experimental results confirm the appropriate simplifying hypotheses used in the numerical analysis. Two related performance measures defined in terms of the rms phase error and the acquisition time for a given SNR are used. These measures provide a common basis for comparing different digital loops and, to a limited extent, also with a first-order linear loop. Finally, the behavior of a modified DPLL under frequency deviation in the presence of Gaussian noise is tested experimentally and by computer simulation.

  18. Does Field Reliability for Static-99 Scores Decrease as Scores Increase?

    PubMed Central

    Rice, Amanda K.; Boccaccini, Marcus T.; Harris, Paige B.; Hawes, Samuel W.

    2015-01-01

    This study examined the field reliability of Static-99 (Hanson & Thornton, 2000) scores among 21,983 sex offenders and focused on whether rater agreement decreased as scores increased. As expected, agreement was lowest for high-scoring offenders. Initial and most recent Static-99 scores were identical for only about 40% of offenders who had been assigned a score of 6 during their initial evaluations, but for more than 60% of offenders who had been assigned a score of 2 or lower. In addition, the size of the difference between scores increased as scores increased, with pairs of scores differing by 2 or more points for about 30% of offenders scoring in the high-risk range. Because evaluators and systems use high Static-99 scores to identify sexual offenders who may require intensive supervision or even postrelease civil commitment, it is important to recognize that there may be more measurement error for high scores than low scores and to consider adopting procedures for minimizing or accounting for measurement error. PMID:24932647

  19. Understanding seasonal variability of uncertainty in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Li, M.; Wang, Q. J.

    2012-04-01

    Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with error models within a Bayesian joint probability framework to investigate the seasonal dependence of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and allows no seasonal variation. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance, and autocorrelation for each calendar month. Potential connections among parameters from similar months are not considered within the seasonally variant model, which can result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model shows better reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The parameters of the seasonally variant error model are very sensitive to each cross-validation period, whereas the hierarchical error model produces much more robust and reliable parameter estimates. Furthermore, the hierarchical error model shows that most parameters are not seasonally variant except for the error bias; the seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
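    Schematically, the three error models can be contrasted as follows for the (standardized) prediction error δ_t in month m(t); this is a sketch of the model family, not the paper's exact parameterization:

        seasonally invariant:  \delta_t = \mu + \rho\,\delta_{t-1} + \sigma\,\eta_t
        seasonally variant:    \delta_t = \mu_{m(t)} + \rho_{m(t)}\,\delta_{t-1} + \sigma_{m(t)}\,\eta_t
        hierarchical:          as the seasonally variant model, but with priors such as
                               \mu_m \sim N(\mu_0, \tau_\mu^2) (and similarly for \rho_m, \sigma_m),

    with η_t ~ N(0, 1). The hierarchical priors shrink the monthly parameters toward common values, which is what guards against the over-fitting of the unrestricted seasonally variant model.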

  20. Accuracy and Precision of a Surgical Navigation System: Effect of Camera and Patient Tracker Position and Number of Active Markers.

    PubMed

    Gundle, Kenneth R; White, Jedediah K; Conrad, Ernest U; Ching, Randal P

    2017-01-01

    Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario, when using a bidirectional infrared surgical navigation system. Accuracy and precision were assessed using a computerized-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247 cm), 2) the distance from the grid to the patient tracker device (range 20 to 40 cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120 mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance between each measured point and the mean three-dimensional coordinate of the six points for each cluster. Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (p=0.36) or precision (p=0.97). In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system.
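    The two outcome metrics can be computed with a few lines of numpy; this is a generic sketch of the definitions given above, with illustrative numbers rather than study data.

        import numpy as np

        def rms_error(navigated_mm, true_mm):
            """Accuracy: root-mean-square difference between navigated and CNC grid distances."""
            navigated_mm = np.asarray(navigated_mm, dtype=float)
            true_mm = np.asarray(true_mm, dtype=float)
            return float(np.sqrt(np.mean((navigated_mm - true_mm) ** 2)))

        def precision_sd(points_mm):
            """Precision: SD of distances from repeated 3D picks to their mean coordinate."""
            points_mm = np.asarray(points_mm, dtype=float)
            d = np.linalg.norm(points_mm - points_mm.mean(axis=0), axis=1)
            return float(d.std(ddof=1))

        print(rms_error([10.2, 20.1, 29.7], [10, 20, 30]))               # accuracy (mm)
        print(precision_sd([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                            [-0.1, 0.0, 0.1], [0.0, -0.1, 0.0]]))        # precision (mm)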

  1. Is the Speech Transmission Index (STI) a robust measure of sound system speech intelligibility performance?

    NASA Astrophysics Data System (ADS)

    Mapp, Peter

    2002-11-01

    Although RaSTI is a good indicator of the speech intelligibility capability of auditoria and similar spaces, during the past 2-3 years it has been shown that RaSTI is not a robust predictor of sound system intelligibility performance. Instead, it is now recommended, within both national and international codes and standards, that full STI measurement and analysis be employed. However, new research is reported that indicates that STI is neither as flawless nor as robust as many believe. The paper highlights a number of potential error mechanisms. It is shown that the measurement technique and signal excitation stimulus can have a significant effect on the overall result and accuracy, particularly where DSP-based equipment is employed. It is also shown that in its current state of development, STI is not capable of appropriately accounting for a number of fundamental speech and system attributes, including typical sound system frequency response variations and anomalies. This is particularly shown to be the case when a system is operating under reverberant conditions. Comparisons between actual system measurements and corresponding word score data are reported, with errors of up to 50%; the implications for VA and PA system performance verification will be discussed.

  2. Using cognitive status to predict crash risk: blazing new trails?

    PubMed

    Staplin, Loren; Gish, Kenneth W; Sifrit, Kathy J

    2014-02-01

    A computer-based version of an established neuropsychological paper-and-pencil assessment tool, the Trail-Making Test, was applied with approximately 700 drivers aged 70 years and older in offices of the Maryland Motor Vehicle Administration. This was a volunteer sample that received a small compensation for study participation, with an assurance that their license status would not be affected by the results. Analyses revealed that the study sample was representative of Maryland older drivers with respect to age and indices of prior driving safety. The relationship between drivers' scores on the Trail-Making Test and prospective crash experience was analyzed using a new outcome measure that explicitly takes into account error responses as well as correct responses, the error-compensated completion time. For the only reliable predictor of crash risk, Trail-Making Test Part B, this measure demonstrated a modest gain in specificity and was a more significant predictor of future safety risk than the simple time-to-completion measure. Improved specificity and the potential for autonomous test administration are particular advantages of this measure for use with large populations, in settings such as health care or driver licensing. © 2013.

  3. Cosmological measurements with general relativistic galaxy correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raccanelli, Alvise; Montanari, Francesco; Durrer, Ruth

    We investigate the cosmological dependence and the constraining power of large-scale galaxy correlations, including all redshift-distortions, wide-angle, lensing and gravitational potential effects on linear scales. We analyze the cosmological information present in the lensing convergence and in the gravitational potential terms describing the so-called 'relativistic effects', and we find that, while smaller than the information contained in intrinsic galaxy clustering, it is not negligible. We investigate how neglecting them biases cosmological measurements performed by future spectroscopic and photometric large-scale surveys such as SKA and Euclid. We perform a Fisher analysis using the CLASS code, modified to include scale-dependent galaxy bias and redshift-dependent magnification and evolution bias. Our results show that neglecting relativistic terms, especially lensing convergence, introduces an error in the forecasted precision in measuring cosmological parameters of the order of a few tens of percent, in particular when measuring the matter content of the Universe and primordial non-Gaussianity parameters. The analysis suggests a possible substantial systematic error in cosmological parameter constraints. Therefore, we argue that radial correlations and integrated relativistic terms need to be taken into account when forecasting the constraining power of future large-scale number counts of galaxy surveys.

  4. Potential and Limitations of an Improved Method to Produce Dynamometric Wheels

    PubMed Central

    García de Jalón, Javier

    2018-01-01

    A new methodology for the estimation of tyre-contact forces is presented. The new procedure is an evolution of a previous method based on harmonic elimination techniques developed with the aim of producing low cost dynamometric wheels. While the original method required stress measurement in many rim radial lines and the fulfillment of some rigid conditions of symmetry, the new methodology described in this article significantly reduces the number of required measurement points and greatly relaxes symmetry constraints. This can be done without compromising the estimation error level. The reduction of the number of measuring radial lines increases the ripple of demodulated signals due to non-eliminated higher order harmonics. Therefore, it is necessary to adapt the calibration procedure to this new scenario. A new calibration procedure that takes into account the angular position of the wheel is described in full. This new methodology is tested on a standard commercial five-spoke car wheel. The results are qualitatively compared with those derived from the former methodology, leading to the conclusion that the new method is both simpler and more robust due to the reduction in the number of measuring points, while the contact force estimation error remains at an acceptable level. PMID:29439427

  5. Research on material removal accuracy analysis and correction of removal function during ion beam figuring

    NASA Astrophysics Data System (ADS)

    Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin

    2016-09-01

    Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors suppressing the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed during each iterative process could help to improve material removal accuracy. The removal function correction principle can effectively compensate for the removal function deviation between the actual figuring and simulated processes, while experiments indicate that material removal accuracy decreases with long machining times, so removing only a small amount of material in each iterative process is suggested. However, more clamping and measuring steps will be introduced in this way, which will also generate machining errors and suppress the improvement of material removal accuracy. On this account, a measurement-free iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ100-mm planar Zerodur element is performed, which shows that, in similar figuring time, three measurement-free iterative processes could improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.

  6. A Liberal Account of Addiction

    PubMed Central

    Foddy, Bennett; Savulescu, Julian

    2014-01-01

    Philosophers and psychologists have been attracted to two differing accounts of addictive motivation. In this paper, we investigate these two accounts and challenge their mutual claim that addictions compromise a person’s self-control. First, we identify some incompatibilities between this claim of reduced self-control and the available evidence from various disciplines. A critical assessment of the evidence weakens the empirical argument for reduced autonomy. Second, we identify sources of unwarranted normative bias in the popular theories of addiction that introduce systematic errors in interpreting the evidence. By eliminating these errors, we are able to generate a minimal, but correct, account of addiction that presumes addicts to be autonomous in their addictive behavior, absent further evidence to the contrary. Finally, we explore some of the implications of this minimal, correct view. PMID:24659901

  7. Role of Grammatical Gender and Semantics in German Word Production

    ERIC Educational Resources Information Center

    Vigliocco, Gabriella; Vinson, David P.; Indefrey, Peter; Levelt, Willem J. M.; Hellwig, Frauke

    2004-01-01

    Semantic substitution errors (e.g., saying "arm" when "leg" is intended) are among the most common types of errors occurring during spontaneous speech. It has been shown that grammatical gender of German target nouns is preserved in the errors (E. Mane, 1999). In 3 experiments, the authors explored different accounts of the grammatical gender…

  8. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    ERIC Educational Resources Information Center

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…

  9. 26 CFR 301.6213-1 - Restrictions applicable to deficiencies; petition to Tax Court.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... restrictions on assessment of deficiencies—(1) Mathematical errors. If a taxpayer is notified of an additional amount of tax due on account of a mathematical error appearing upon the return, such notice is not deemed... to a mathematical error appearing on the return. That is, the district director or the director of...

  10. Some effects of finite spatial resolution on skin friction measurements in turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    Westphal, Russell V.

    1988-01-01

    The effects of finite spatial resolution often cause serious errors in measurements in turbulent boundary layers, with particularly large effects for measurements of fluctuating skin friction and velocities within the sublayer. However, classical analyses of finite spatial resolution effects have generally not accounted for the substantial inhomogeneity and anisotropy of near-wall turbulence. The present study has made use of results from recent computational simulations of wall-bounded turbulent flows to examine spatial resolution effects for measurements made at a wall using both single-sensor probes and those employing two sensing volumes in a V shape. Results are presented to show the effects of finite spatial resolution on a variety of quantities deduced from the skin friction field.

  11. A method for accounting for test fixture compliance when estimating proximal femur stiffness.

    PubMed

    Rossman, Timothy; Dragomir-Daescu, Dan

    2016-09-06

    Fracture testing of cadaveric femora to obtain strength and stiffness information is an active area of research in developing tools for diagnostic prediction of bone strength. These measurements are often used in the estimation and validation of companion finite element models constructed from CT scan data of the femora; therefore, the accuracy of the data is of paramount importance. However, experimental stiffness calculated from force-displacement data has largely been ignored by most researchers due to the inherent error in the differential displacement measurement obtained when not accounting for testing apparatus compliance, even though such information is necessary for validation of computational models. Even in the few cases when fixture compliance was considered, the measurements showed large lab-to-lab variation due to lack of standardization in fixture design. We examined the compliance of our in-house designed cadaveric femur test fixture to determine the errors we could expect when calculating stiffness from the collected experimental force-displacement data and determined the stiffness of the test fixture to be more than 10 times the stiffness of the stiffest femur in a sample of 44 femora. When correcting the apparent femur stiffness derived from the original data, we found that the largest stiffness was underestimated by about 10%. The study confirmed that considering test fixture compliance is a necessary step in improving the accuracy of fracture test data for characterizing femur stiffness, and highlighted the need for test fixture design standardization for proximal femur fracture testing. Copyright © 2016 Elsevier Ltd. All rights reserved.
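    The size of the correction follows from treating the femur and the fixture as springs in series; this is the standard series-compliance relation rather than a formula quoted from the paper, but it is consistent with the roughly 10% figure reported above:

        \frac{1}{k_\mathrm{apparent}} = \frac{1}{k_\mathrm{femur}} + \frac{1}{k_\mathrm{fixture}}
        \quad\Longrightarrow\quad
        k_\mathrm{femur} = \frac{k_\mathrm{apparent}}{1 - k_\mathrm{apparent}/k_\mathrm{fixture}}.

    For a fixture ten times stiffer than the femur, k_apparent = (10/11) k_femur ≈ 0.91 k_femur, i.e., the uncorrected force-displacement data understate the stiffness of the stiffest specimens by roughly 9-10%.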

  12. Accounting for sampling error when inferring population synchrony from time-series data: a Bayesian state-space modelling approach with applications.

    PubMed

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony as well as to underestimating the extinction risk of a metapopulation. The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value and that the common practice of averaging a few replicates of population size estimates performed poorly at decreasing the bias of the classical estimator of the synchrony strength. The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates.
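    A generic state-space formulation of this idea (a Gompertz-type sketch on the log scale, not necessarily the authors' exact model) separates the sampling error from the process variation on which synchrony is measured:

        observation:  y_{i,t} = x_{i,t} + \varepsilon_{i,t}, \qquad \varepsilon_{i,t} \sim N(0, \sigma^2_{\mathrm{obs},i,t})
        process:      x_{i,t+1} = a_i + b_i\,x_{i,t} + \eta_{i,t}, \qquad (\eta_{1,t},\ldots,\eta_{p,t}) \sim N(\mathbf{0}, \boldsymbol{\Sigma})

    Here x_{i,t} is the true log abundance of population i, y_{i,t} its estimate with known or jointly estimated sampling variance, and synchrony is quantified on the process noise as \rho_{ij} = \Sigma_{ij}/\sqrt{\Sigma_{ii}\Sigma_{jj}}, which avoids the downward bias that independent sampling errors induce in the raw zero-lag correlation.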

  13. Accounting for Sampling Error When Inferring Population Synchrony from Time-Series Data: A Bayesian State-Space Modelling Approach with Applications

    PubMed Central

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Background Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value and that the common practice of averaging a few replicates of population size estimates performed poorly at decreasing the bias of the classical estimator of the synchrony strength. Conclusion/Significance The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates. PMID:24489839

  14. Improved accuracy of solar energy system testing and measurements

    NASA Astrophysics Data System (ADS)

    Waterman, R. E.

    1984-12-01

    A real world example is provided of recovery of data on the performance of a solar collector system in the field. Kalman filters were devised to reconstruct data from sensors which had functioned only intermittently over the 3-day trial period designed to quantify phenomena in the collector loop, i.e., hot water delivered to storage. The filter was configured to account for errors in data on the heat exchanger coil differential temperature and mass flow rate. Data were then generated based on a matrix of state equations, taking into account the presence of time delays due to tank stratification and convective flows. Good correlations were obtained with data from other sensors for the flow rate, system temperatures and the energy delivered to storage.
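    A minimal one-dimensional Kalman filter illustrates the kind of reconstruction described above: a random-walk state model with noisy, intermittently available readings. This is a generic sketch, not the filter configuration used in the report, and the signal is synthetic.

        import numpy as np

        def kalman_1d(measurements, q=0.01, r=0.25, x0=0.0, p0=1.0):
            """Reconstruct a slowly varying signal from noisy, intermittent readings.
            measurements: sequence with np.nan marking samples the sensor missed.
            q: process-noise variance (random-walk drift per step); r: measurement-noise variance."""
            x, p = x0, p0
            estimates = []
            for z in measurements:
                p = p + q                      # predict: random-walk state model
                if not np.isnan(z):            # update only when the sensor reported a value
                    k = p / (p + r)            # Kalman gain
                    x = x + k * (z - x)
                    p = (1.0 - k) * p
                estimates.append(x)
            return np.array(estimates)

        # Synthetic example: a temperature ramp with noise and a 30-sample dropout.
        rng = np.random.default_rng(1)
        truth = np.linspace(20.0, 60.0, 100)            # e.g., tank temperature, deg C
        obs = truth + 0.5 * rng.standard_normal(100)
        obs[30:60] = np.nan                              # sensor offline
        est = kalman_1d(obs, x0=20.0)
        print(np.round(est[[29, 59, 99]], 1))            # estimate holds through the gap, then recovers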

  15. Experimental test of visuomotor updating models that explain perisaccadic mislocalization.

    PubMed

    Van Wetter, Sigrid M C I; Van Opstal, A John

    2008-10-23

    Localization of a brief visual target is inaccurate when presented around saccade onset. Perisaccadic mislocalization is maximal in the saccade direction and varies systematically with the target-saccade onset disparity. It has been hypothesized that this effect is either due to a sluggish representation of eye position, to low-pass filtering of the visual event, to saccade-induced compression of visual space, or to a combination of these effects. Despite their differences, these schemes all predict that the pattern of localization errors varies systematically with the saccade amplitude and kinematics. We tested these predictions for the double-step paradigm by analyzing the errors for saccades of widely varying amplitudes. Our data show that the measured error patterns are only mildly influenced by the primary-saccade amplitude over a large range of saccade properties. An alternative possibility, better accounting for the data, assumes that around saccade onset perceived target location undergoes a uniform shift in the saccade direction that varies with amplitude only for small saccades. The strength of this visual effect saturates at about 10 deg and also depends on target duration. Hence, we propose that perisaccadic mislocalization results from errors in visual-spatial perception rather than from sluggish oculomotor feedback.

  16. Evaluation of FNS control systems: software development and sensor characterization.

    PubMed

    Riess, J; Abbas, J J

    1997-01-01

    Functional Neuromuscular Stimulation (FNS) systems activate paralyzed limbs by electrically stimulating motor neurons. These systems have been used to restore functions such as standing and stepping in people with thoracic level spinal cord injury. Research in our laboratory is directed at the design and evaluation of the control algorithms for generating posture and movement. This paper describes software developed for implementing FNS control systems and the characterization of a sensor system used to implement and evaluate controllers in the laboratory. In order to assess FNS control algorithms, we have developed a versatile software package using LabVIEW (National Instruments Corp.). This package provides the ability to interface with sensor systems via serial port or A/D board, implement data processing and real-time control algorithms, and interface with neuromuscular stimulation devices. In our laboratory, we use the Flock of Birds (Ascension Technology Corp.) motion tracking sensor system to monitor limb segment position and orientation (6 degrees of freedom). Errors in the sensor system have been characterized and nonlinear polynomial models have been developed to account for these errors. With this compensation, the error in the distance measurement is reduced by 90%, so that the maximum error is less than 1 cm.
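    The compensation idea (fit a polynomial to the sensor error as a function of the reading, then subtract the predicted error) can be sketched with numpy.polyfit. The calibration pairs below are illustrative, and the real system models nonlinear errors in all six degrees of freedom rather than a single distance.

        import numpy as np

        # Illustrative calibration pairs: sensor-reported distance vs. ground truth (cm).
        reported = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
        truth    = np.array([20.3, 40.9, 61.8, 83.1, 104.6])

        # Fit a low-order polynomial mapping the reading to its error.
        error_model = np.polyfit(reported, reported - truth, deg=2)

        def compensate(reading):
            """Subtract the modelled error from a raw sensor reading."""
            return reading - np.polyval(error_model, reading)

        print(round(compensate(90.0), 2))  # corrected estimate of the true distance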

  17. Cost-effectiveness of the streamflow-gaging program in Wyoming

    USGS Publications Warehouse

    Druse, S.A.; Wahl, K.L.

    1988-01-01

    This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate the suitability of these techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits using the same budget could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget would increase the optimal average standard error per station from 11.6 to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)

  18. An algorithm for management of deep brain stimulation battery replacements: devising a web-based battery estimator and clinical symptom approach.

    PubMed

    Montuno, Michael A; Kohner, Andrew B; Foote, Kelly D; Okun, Michael S

    2013-01-01

    Deep brain stimulation (DBS) is an effective technique that has been utilized to treat advanced and medication-refractory movement and psychiatric disorders. In order to avoid implanted pulse generator (IPG) failure and consequent adverse symptoms, a better understanding of IPG battery longevity and management is necessary. Existing methods for battery estimation lack the specificity required for clinical incorporation. Technical challenges prevent higher accuracy longevity estimations, and a better approach to managing end of DBS battery life is needed. The literature was reviewed and DBS battery estimators were constructed by the authors and made available on the web at http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator. A clinical algorithm for management of DBS battery life was constructed. The algorithm takes into account battery estimations and clinical symptoms. Existing methods of DBS battery life estimation utilize an interpolation of averaged current drains to calculate how long a battery will last. Unfortunately, this technique can only provide general approximations. There are inherent errors in this technique, and these errors compound with each iteration of the battery estimation. Some of these errors cannot be accounted for in the estimation process, and some of the errors stem from device variation, battery voltage dependence, battery usage, battery chemistry, impedance fluctuations, interpolation error, usage patterns, and self-discharge. We present web-based battery estimators along with an algorithm for clinical management. We discuss the perils of using a battery estimator without taking into account the clinical picture. Future work will be needed to provide more reliable management of implanted device batteries; however, implementation of a clinical algorithm that accounts for both estimated battery life and for patient symptoms should improve the care of DBS patients. © 2012 International Neuromodulation Society.
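    The kind of interpolation-based estimate the authors critique amounts to dividing remaining battery capacity by an averaged current drain. The deliberately simplified sketch below makes the limitation obvious: it ignores voltage dependence, impedance fluctuations, self-discharge, and changing stimulation settings, which is precisely why such estimates compound error over successive clinic visits. All values are illustrative and not taken from any IPG datasheet.

        def naive_battery_longevity(remaining_capacity_mah, avg_current_drain_ua):
            """Crude longevity estimate: remaining capacity / averaged current drain.
            Ignores voltage dependence, impedance fluctuations, usage-pattern changes
            and self-discharge, so it can only give a rough approximation.
            Returns an estimate in years."""
            hours = remaining_capacity_mah * 1000.0 / avg_current_drain_ua
            return hours / (24 * 365.25)

        # Illustrative values only.
        print(round(naive_battery_longevity(remaining_capacity_mah=1800,
                                            avg_current_drain_ua=75), 1))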

  19. Quantifying acoustic doppler current profiler discharge uncertainty: A Monte Carlo based tool for moving-boat measurements

    USGS Publications Warehouse

    Mueller, David S.

    2017-01-01

    This paper presents a method using Monte Carlo simulations for assessing uncertainty of moving-boat acoustic Doppler current profiler (ADCP) discharge measurements using a software tool known as QUant, which was developed for this purpose. Analysis was performed on 10 data sets from four Water Survey of Canada gauging stations in order to evaluate the relative contribution of a range of error sources to the total estimated uncertainty. The factors that differed among data sets included the fraction of unmeasured discharge relative to the total discharge, flow nonuniformity, and operator decisions about instrument programming and measurement cross section. As anticipated, it was found that the estimated uncertainty is dominated by uncertainty of the discharge in the unmeasured areas, highlighting the importance of appropriate selection of the site, the instrument, and the user inputs required to estimate the unmeasured discharge. The main contributor to uncertainty was invalid data, but spatial inhomogeneity in water velocity and bottom-track velocity also contributed, as did variation in the edge velocity, uncertainty in the edge distances, edge coefficients, and the top and bottom extrapolation methods. To a lesser extent, spatial inhomogeneity in the bottom depth also contributed to the total uncertainty, as did uncertainty in the ADCP draft at shallow sites. The estimated uncertainties from QUant can be used to assess the adequacy of standard operating procedures. They also provide quantitative feedback to the ADCP operators about the quality of their measurements, indicating which parameters are contributing most to uncertainty, and perhaps even highlighting ways in which uncertainty can be reduced. Additionally, QUant can be used to account for self-dependent error sources such as heading errors, which are a function of heading. The results demonstrate the importance of a Monte Carlo method tool such as QUant for quantifying random and bias errors when evaluating the uncertainty of moving-boat ADCP measurements.
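    The Monte Carlo principle behind a tool like QUant can be illustrated by repeatedly perturbing the inputs of a simplified discharge computation within assumed error distributions and summarizing the spread of the results. This toy mid-section-plus-edges model and its error magnitudes are illustrative only; QUant itself perturbs the full set of ADCP processing parameters (edge distances, extrapolation methods, invalid-data handling, and so on).

        import numpy as np

        rng = np.random.default_rng(42)

        def discharge(width_m, depth_m, velocity_ms, edge_fraction):
            """Toy total discharge: measured mid-section plus estimated edge/unmeasured part."""
            measured = width_m * depth_m * velocity_ms
            return measured * (1.0 + edge_fraction)

        def monte_carlo_uncertainty(n=20000):
            # Perturb each input within an assumed (illustrative) standard error.
            width = rng.normal(50.0, 0.5, n)        # m
            depth = rng.normal(2.0, 0.04, n)        # m
            velocity = rng.normal(1.2, 0.03, n)     # m/s
            edge_frac = rng.normal(0.08, 0.02, n)   # unmeasured fraction, largest contributor
            q = discharge(width, depth, velocity, edge_frac)
            return q.mean(), q.std(ddof=1) / q.mean() * 100.0

        mean_q, rel_unc_pct = monte_carlo_uncertainty()
        print(round(mean_q, 1), "m^3/s,", round(rel_unc_pct, 1), "% relative uncertainty")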

  20. Correlators in simultaneous measurement of non-commuting qubit observables

    NASA Astrophysics Data System (ADS)

    Atalaya, Juan; Hacohen-Gourgy, Shay; Martin, Leigh S.; Siddiqi, Irfan; Korotkov, Alexander N.

    We consider simultaneous continuous measurement of non-commuting qubit observables and analyze multi-time correlators ⟨i_{κ_1}(t_1) ⋯ i_{κ_N}(t_N)⟩ for the output signals i_κ(t) from the detectors. Both informational ('spooky') and phase backactions from cQED-type measurements with phase-sensitive amplifiers are taken into account. We find an excellent agreement between analytical results and experimental data for two-time correlators of the output signals from simultaneous measurement of the qubit observables σ_x and σ_φ = σ_x cos φ + σ_y sin φ. The correlators can be used to extract small deviations of experimental parameters, e.g., phase backaction and residual Rabi frequency. The multi-time correlators are important in analysis of Bacon-Shor error correction/detection codes, operated with continuous measurements.

  1. Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up.

    PubMed

    White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E

    2018-05-01

    Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another trajectory at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point (a 'change' and a 'no change' class), and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
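    Schematically, the extended broken-stick model can be written as follows (a sketch of the model family, not the authors' exact likelihood or priors):

        y_{ij} = \beta_0 + \beta_1 t_{ij} + c_i\,\beta_2\,(t_{ij} - \tau)_{+} + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \sigma^2),

    where (t - \tau)_{+} = \max(t - \tau, 0), \tau is the change-point, and c_i \in \{0, 1\} is the latent, possibly unlabelled, class indicator for whether individual i switches to the second linear segment; a separate missingness model then governs the probability that observations after \tau are missing.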

  2. Biases in Planet Occurrence Caused by Unresolved Binaries in Transit Surveys

    NASA Astrophysics Data System (ADS)

    Bouma, L. G.; Masuda, Kento; Winn, Joshua N.

    2018-06-01

    Wide-field surveys for transiting planets, such as the NASA Kepler and TESS missions, are usually conducted without knowing which stars have binary companions. Unresolved and unrecognized binaries give rise to systematic errors in planet occurrence rates, including misclassified planets and mistakes in completeness corrections. The individual errors can have different signs, making it difficult to anticipate the net effect on inferred occurrence rates. Here, we use simplified models of signal-to-noise limited transit surveys to try to clarify the situation. We derive a formula for the apparent occurrence rate density measured by an observer who falsely assumes all stars are single. The formula depends on the binary fraction, the mass function of the secondary stars, and the true occurrence of planets around primaries, secondaries, and single stars. It also takes into account the Malmquist bias by which binaries are over-represented in flux-limited samples. Application of the formula to an idealized Kepler-like survey shows that for planets larger than 2 R⊕, the net systematic error is of order 5%. In particular, unrecognized binaries are unlikely to be the reason for the apparent discrepancies between hot-Jupiter occurrence rates measured in different surveys. For smaller planets the errors are potentially larger: the occurrence of Earth-sized planets could be overestimated by as much as 50%. We also show that whenever high-resolution imaging reveals a transit host star to be a binary, the planet is usually more likely to orbit the primary star than the secondary star.

  3. Reducing random measurement error in assessing postural load on the back in epidemiologic surveys.

    PubMed

    Burdorf, A

    1995-02-01

    The goal of this study was to design strategies to assess postural load on the back in occupational epidemiology by taking into account the reliability of measurement methods and the variability of exposure among the workers under study. Intermethod reliability studies were evaluated to estimate the systematic bias (accuracy) and random measurement error (precision) of various methods to assess postural load on the back. Intramethod reliability studies were reviewed to estimate random variability of back load over time. Intermethod surveys have shown that questionnaires have a moderate reliability for gross activities such as sitting, whereas duration of trunk flexion and rotation should be assessed by observation methods or inclinometers. Intramethod surveys indicate that exposure variability can markedly affect the reliability of estimates of back load if the estimates are based upon a single measurement over a certain time period. Equations have been presented to evaluate various study designs according to the reliability of the measurement method, the optimum allocation of the number of repeated measurements per subject, and the number of subjects in the study. Prior to a large epidemiologic study, an exposure-oriented survey should be conducted to evaluate the performance of measurement instruments and to estimate sources of variability for back load. The strategy for assessing back load can be optimized by balancing the number of workers under study and the number of repeated measurements per worker.
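    A standard design relation of the kind used for such trade-offs (a textbook exposure-assessment result, given here for orientation rather than quoted from the paper): if the between-worker variance of back load is σ_b² and the within-worker (day-to-day plus measurement) variance is σ_w², the reliability of the mean of k repeated measurements per worker is

        R_k = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_w^2 / k},

    and a linear exposure-response coefficient estimated with that exposure measure is attenuated to roughly \beta_\mathrm{obs} \approx R_k\,\beta_\mathrm{true}. This is the mechanism by which the number of workers and the number of repeats per worker can be balanced within a fixed measurement budget.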

  4. Verifying and Validating Simulation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M.

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.

  5. Benefit transfer and spatial heterogeneity of preferences for water quality improvements.

    PubMed

    Martin-Ortega, J; Brouwer, R; Ojea, E; Berbel, J

    2012-09-15

    The improvement in water quality resulting from the implementation of the EU Water Framework Directive is expected to generate substantial non-market benefits. Widespread estimation of these benefits across Europe will require the application of benefit transfer. We use a spatially explicit valuation design to account for the spatial heterogeneity of preferences to help generate lower transfer errors. A map-based choice experiment is applied in the Guadalquivir River Basin (Spain), accounting simultaneously for the spatial distribution of water quality improvements and beneficiaries. Our results show that accounting for the spatial heterogeneity of preferences generally produces lower transfer errors. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. A Study of Upper Error Limits in Accounting Populations.

    DTIC Science & Technology

    1986-09-01

    The error amount intensity is a population characteristic obtained by dividing the total...423.36/$763,931.19). This population characteristic is of interest because the results of the simulation done for research questions four through v.o

  7. Weights and measures: a new look at bisection behaviour in neglect.

    PubMed

    McIntosh, Robert D; Schindler, Igor; Birchall, Daniel; Milner, A David

    2005-12-01

    Horizontal line bisection is a ubiquitous task in the investigation of visual neglect. Patients with left neglect typically make rightward errors that increase with line length and for lines at more leftward positions. For short lines, or for lines presented in right space, these errors may 'cross over' to become leftward. We have taken a new approach to these phenomena by employing a different set of dependent and independent variables for their description. Rather than recording bisection error, we record the lateral position of the response within the workspace. We have studied how this varies when the locations of the left and right endpoints are manipulated independently. Across 30 patients with left neglect, we have observed a characteristic asymmetry between the 'weightings' accorded to the two endpoints, such that responses are less affected by changes in the location of the left endpoint than by changes in the location of the right. We show that a simple endpoint weightings analysis accounts readily for the effects of line length and spatial position, including cross-over effects, and leads to an index of neglect that is more sensitive than the standard measure. We argue that this novel approach is more parsimonious than the standard model and yields fresh insights into the nature of neglect impairment.
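
    The endpoint-weightings idea can be written as a simple linear model, x_response ≈ w_L·x_L + w_R·x_R + c, fitted across trials in which the two endpoint locations are varied independently. The sketch below estimates the weights by least squares, with simulated data standing in for patient responses (the weight values are illustrative assumptions):

      import numpy as np

      # Endpoint-weightings analysis: regress the lateral position of the
      # bisection response on the (independently manipulated) left and right
      # endpoint positions. Simulated data stand in for patient responses.
      rng = np.random.default_rng(2)

      n_trials = 120
      x_left = rng.uniform(-20, 0, n_trials)    # cm, workspace coordinates
      x_right = rng.uniform(0, 20, n_trials)

      # Simulate a "neglect-like" responder: right endpoint weighted more heavily.
      w_left_true, w_right_true, bias = 0.25, 0.65, 1.0
      x_resp = (w_left_true * x_left + w_right_true * x_right + bias
                + rng.normal(0, 0.5, n_trials))

      # Least-squares fit of the two weights and a constant.
      design = np.column_stack([x_left, x_right, np.ones(n_trials)])
      (w_left, w_right, const), *_ = np.linalg.lstsq(design, x_resp, rcond=None)
      print(f"estimated weights: left = {w_left:.2f}, right = {w_right:.2f}, "
            f"constant = {const:.2f}")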

  8. The impact of modelling errors on interferometer calibration for 21 cm power spectra

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, Aaron; Dillon, Joshua S.; Liu, Adrian; Hewitt, Jacqueline

    2017-09-01

    We study the impact of sky-based calibration errors from source mismodelling on 21 cm power spectrum measurements with an interferometer and propose a method for suppressing their effects. While emission from faint sources that are not accounted for in calibration catalogues is believed to be spectrally smooth, deviations of true visibilities from model visibilities are not, due to the inherent chromaticity of the interferometer's sky response (the 'wedge'). Thus, unmodelled foregrounds, below the confusion limit of many instruments, introduce frequency structure into gain solutions on the same line-of-sight scales on which we hope to observe the cosmological signal. We derive analytic expressions describing these errors using linearized approximations of the calibration equations and estimate the impact of this bias on measurements of the 21 cm power spectrum during the epoch of reionization. Given our current precision in primary beam and foreground modelling, this noise will significantly impact the sensitivity of existing experiments that rely on sky-based calibration. Our formalism describes the scaling of calibration with array and sky-model parameters and can be used to guide future instrument design and calibration strategy. We find that sky-based calibration that downweights long baselines can eliminate contamination in most of the region outside of the wedge with only a modest increase in instrumental noise.

  9. Uncertainty Analysis in Large Area Aboveground Biomass Mapping

    NASA Astrophysics Data System (ADS)

    Baccini, A.; Carvalho, L.; Dubayah, R.; Goetz, S. J.; Friedl, M. A.

    2011-12-01

    Satellite and aircraft-based remote sensing observations are being more frequently used to generate spatially explicit estimates of aboveground carbon stock of forest ecosystems. Because deforestation and forest degradation account for circa 10% of anthropogenic carbon emissions to the atmosphere, policy mechanisms are increasingly recognized as a low-cost mitigation option to reduce carbon emissions. They are, however, contingent upon the capacity to accurately measure carbon stored in the forests. Here we examine the sources of uncertainty and error propagation in generating maps of aboveground biomass. We focus on characterizing uncertainties associated with maps at the pixel and spatially aggregated national scales. We pursue three strategies to describe the error and uncertainty properties of aboveground biomass maps, including: (1) model-based assessment using confidence intervals derived from linear regression methods; (2) data-mining algorithms such as regression trees and ensembles of these; (3) empirical assessments using independently collected data sets. The latter effort explores error propagation using field data acquired within satellite-based lidar (GLAS) acquisitions versus alternative in situ methods that rely upon field measurements that have not been systematically collected for this purpose (e.g. from forest inventory data sets). A key goal of our effort is to provide multi-level characterizations that yield both pixel- and biome-level estimates of uncertainty at different scales.

  10. Competitive action video game players display rightward error bias during on-line video game play.

    PubMed

    Roebuck, Andrew J; Dubnyk, Aurora J B; Cochran, David; Mandryk, Regan L; Howland, John G; Harms, Victoria

    2017-09-12

    Research in asymmetrical visuospatial attention has identified a leftward bias in the general population across a variety of measures including visual attention and line-bisection tasks. In addition, increases in rightward collisions, or bumping, during visuospatial navigation tasks have been demonstrated in real-world and virtual environments. However, little research has investigated these biases beyond the laboratory. The present study uses a semi-naturalistic approach and the online video game streaming service Twitch to examine navigational errors and assaults as skilled action video game players (n = 60) compete in Counter Strike: Global Offensive. This study showed a significant rightward bias in both fatal assaults and navigational errors. Analysis using the in-game ranking system as a measure of skill failed to show a relationship between bias and skill. These results suggest that a leftward visuospatial bias may exist in skilled players during online video game play. However, the present study was unable to account for some factors such as environmental symmetry and player handedness. In conclusion, video game streaming is a promising method for future behavioural research; however, further study is required to determine whether these results are an artefact of the method applied or representative of a genuine rightward bias.

  11. Detecting and Correcting Errors in Rapid Aiming Movements: Effects of Movement Time, Distance, and Velocity

    ERIC Educational Resources Information Center

    Sherwood, David E.

    2010-01-01

    According to closed-loop accounts of motor control, movement errors are detected by comparing sensory feedback to an acquired reference state. Differences between the reference state and the movement-produced feedback result in an error signal that serves as a basis for a correction. The main question addressed in the current study was how…

  12. Accounting for substitution and spatial heterogeneity in a labelled choice experiment.

    PubMed

    Lizin, S; Brouwer, R; Liekens, I; Broeckx, S

    2016-10-01

    Many environmental valuation studies using stated preferences techniques are single-site studies that ignore essential spatial aspects, including possible substitution effects. In this paper substitution effects are captured explicitly in the design of a labelled choice experiment and the inclusion of different distance variables in the choice model specification. We test the effect of spatial heterogeneity on welfare estimates and transfer errors for minor and major river restoration works, and the transferability of river specific utility functions, accounting for key variables such as site visitation, spatial clustering and income. River specific utility functions appear to be transferable, resulting in low transfer errors. However, ignoring spatial heterogeneity increases transfer errors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A video multitracking system for quantification of individual behavior in a large fish shoal: advantages and limits.

    PubMed

    Delcourt, Johann; Becco, Christophe; Vandewalle, Nicolas; Poncin, Pascal

    2009-02-01

    The capability of a new multitracking system to track a large number of unmarked fish (up to 100) is evaluated. This system extrapolates a trajectory from each individual and analyzes recorded sequences that are several minutes long. This system is very efficient in statistical individual tracking, where the individual's identity is important for a short period of time in comparison with the duration of the track. Individual identification is typically greater than 99%. Identification is largely efficient (more than 99%) when the fish images do not cross the image of a neighbor fish. When the images of two fish merge (occlusion), we consider that the spot on the screen has a double identity. Consequently, there are no identification errors during occlusions, even though the measurement of the positions of each individual is imprecise. When the images of these two merged fish separate (separation), individual identification errors are more frequent, but their effect is very low in statistical individual tracking. On the other hand, in complete individual tracking, where individual fish identity is important for the entire trajectory, each identification error invalidates the results. In such cases, the experimenter must observe whether the program assigns the correct identification, and, when an error is made, must edit the results. This work is not too costly in time because it is limited to the separation events, accounting for fewer than 0.1% of individual identifications. Consequently, in both statistical and rigorous individual tracking, this system allows the experimenter to gain time by measuring the individual position automatically. It can also analyze the structural and dynamic properties of an animal group with a very large sample, with precision and sampling that are impossible to obtain with manual measures.

  14. Noninvasive Intracranial Pressure Determination in Patients with Subarachnoid Hemorrhage.

    PubMed

    Noraky, James; Verghese, George C; Searls, David E; Lioutas, Vasileios A; Sonni, Shruti; Thomas, Ajith; Heldt, Thomas

    2016-01-01

    Intracranial pressure (ICP) should ideally be measured in many conditions affecting the brain. The invasiveness and associated risks of the measurement modalities in current clinical practice restrict ICP monitoring to a small subset of patients whose diagnosis and treatment could benefit from ICP measurement. To expand validation of a previously proposed model-based approach to continuous, noninvasive, calibration-free, and patient-specific estimation of ICP to patients with subarachnoid hemorrhage (SAH), we made waveform recordings of cerebral blood flow velocity in several major cerebral arteries during routine, clinically indicated transcranial Doppler examinations for vasospasm, along with time-locked waveform recordings of radial artery blood pressure (ABP) and of ICP measured via an intraventricular drain catheter. We also recorded the locations to which ICP and ABP were calibrated, to account for a possible hydrostatic pressure difference between measured ABP and the ABP value at a major cerebral vessel. We analyzed 21 data records from five patients and were able to identify 28 data windows from the middle cerebral artery that were of sufficient data quality for the ICP estimation approach. Across these windows, we obtained a mean estimation error of -0.7 mmHg and a standard deviation of the error of 4.0 mmHg. Our estimates show a low bias and reduced variability compared with those we have reported before.

  15. Elimination of single-beam substitution error in diffuse reflectance measurements using an integrating sphere.

    PubMed

    Vidovic, Luka; Majaron, Boris

    2014-02-01

    Diffuse reflectance spectra (DRS) of biological samples are commonly measured using an integrating sphere (IS). To account for the incident light spectrum, measurement begins by placing a highly reflective white standard against the IS sample opening and collecting the reflected light. After replacing the white standard with the test sample of interest, DRS of the latter is determined as the ratio of the two values at each involved wavelength. However, such a substitution may alter the fluence rate inside the IS. This leads to distortion of measured DRS, which is known as single-beam substitution error (SBSE). Barring the use of more complex experimental setups, the literature states that only approximate corrections of the SBSE are possible, e.g., by using look-up tables generated with calibrated low-reflectivity standards. We present a practical method for elimination of SBSE when using IS equipped with an additional reference port. Two additional measurements performed at this port enable a rigorous elimination of SBSE. Our experimental characterization of SBSE is replicated by theoretical derivation. This offers an alternative possibility of computational removal of SBSE based on advance characterization of a specific DRS setup. The influence of SBSE on quantitative analysis of DRS is illustrated in one application example.
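
    One plausible reading of the correction is sketched below, under the assumption that the reference-port signal is proportional to the fluence rate inside the sphere, so that the two extra measurements simply calibrate out the fluence change when the white standard is replaced by the sample; this is not necessarily the authors' exact formulation, and all numbers are placeholders.

      import numpy as np

      # Sketch of a single-beam substitution-error (SBSE) correction using an
      # integrating sphere with an auxiliary reference port. Assumption (not
      # necessarily the authors' formulation): the reference-port signal tracks
      # the sphere fluence, so the ratio of reference-port readings measures how
      # much the fluence drops when the sample replaces the white standard.
      wavelengths = np.array([450.0, 550.0, 650.0, 750.0])   # nm, placeholder grid

      S_std = np.array([0.95, 0.97, 0.98, 0.96])     # sample port, white standard
      S_smp = np.array([0.30, 0.45, 0.55, 0.50])     # sample port, test sample
      R_std = 0.99                                   # certified standard reflectance

      Ref_std = np.array([1.00, 1.00, 1.00, 1.00])   # reference port, standard in place
      Ref_smp = np.array([0.90, 0.92, 0.94, 0.93])   # reference port, sample in place

      # Naive ratio: carries the single-beam substitution error.
      R_naive = R_std * S_smp / S_std

      # Fluence-corrected ratio: divide out the drop seen at the reference port.
      R_corrected = R_naive / (Ref_smp / Ref_std)

      for wl, rn, rc in zip(wavelengths, R_naive, R_corrected):
          print(f"{wl:5.0f} nm: naive {rn:.3f} -> corrected {rc:.3f}")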

  16. Use of Tekscan K-Scan Sensors for Retropatellar Pressure Measurement Avoiding Errors during Implantation and the Effects of Shear Forces on the Measurement Precision

    PubMed Central

    Wilharm, A.; Hurschler, Ch.; Dermitas, T.; Bohnsack, M.

    2013-01-01

    Pressure-sensitive K-Scan 4000 sensors (Tekscan, USA) provide new possibilities for the dynamic measurement of force and pressure in biomechanical investigations. We examined the sensors to determine in particular whether they are also suitable for reliable measurements of retropatellar forces and pressures. Insertion approaches were also investigated, and a lateral parapatellar arthrotomy supplemented by parapatellar sutures proved to be the most reliable method. Ten human cadaver knees were tested in a knee-simulating machine at torques of 30 and 40 Nm. Each test cycle involved a dynamic extension from 120° flexion. All recorded parameters showed a decrease of 1-2% per measurement cycle. Although we supplemented the sensors with a Teflon film, the decrease, which was likely caused by shear force, was significant. We evaluated 12 cycles and observed a linear decrease in parameters of up to 17.2% (coefficient of regression 0.69–0.99). In our opinion, the linear decrease can be considered a systematic error and can therefore be quantified and accounted for in subsequent experiments. This will ensure reliable retropatellar use of Tekscan sensors and distinguish the effects of knee joint surgeries from sensor wear-related effects. PMID:24369018
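
    Treating the wear-related decrease as an approximately linear systematic error suggests a simple per-cycle correction; the sketch below fits the trend against cycle number and divides it out (the readings are illustrative, not the study's data).

      import numpy as np

      # Correct a sensor reading series for an approximately linear, wear-related
      # decrease over measurement cycles. The readings below are illustrative.
      cycles = np.arange(1, 13)
      readings = np.array([100.0, 98.6, 97.2, 95.9, 94.5, 93.3,
                           91.9, 90.6, 89.4, 88.1, 86.9, 85.6])

      # Fit a linear trend (reading vs. cycle) and express it as a relative decay.
      slope, intercept = np.polyfit(cycles, readings, deg=1)
      trend = intercept + slope * cycles
      decay = trend / trend[0]                 # fraction of first-cycle response

      corrected = readings / decay             # remove the systematic drift
      print("per-cycle loss: {:.1f}%".format(100 * -slope / intercept))
      print("corrected readings:", np.round(corrected, 1))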

  17. Incorporating the gas analyzer response time in gas exchange computations.

    PubMed

    Mitchell, R R

    1979-11-01

    A simple method for including the gas analyzer response time in the breath-by-breath computation of gas exchange rates is described. The method uses a difference equation form of a model for the gas analyzer in the computation of oxygen uptake and carbon dioxide production and avoids a numerical differentiation required to correct the gas fraction wave forms. The effect of not accounting for analyzer response time is shown to be a 20% underestimation in gas exchange rate. The present method accurately measures gas exchange rate, is relatively insensitive to measurement errors in the analyzer time constant, and does not significantly increase the computation time.
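
    A common first-order analyzer model leads to the kind of difference equation described; the sketch below assumes a single time constant tau (value assumed) and shows how the measured gas fraction lags and under-represents rapid changes, which is what the correction must account for. This is an illustration, not the paper's exact algorithm.

      import numpy as np

      # First-order gas-analyzer model in difference-equation form:
      #   F_meas[n] = F_meas[n-1] + (dt / tau) * (F_true[n] - F_meas[n-1])
      # Including this dynamic in the breath-by-breath computation avoids having
      # to numerically differentiate the measured gas-fraction waveform.
      dt = 0.01          # s, sampling interval (assumed)
      tau = 0.25         # s, analyzer time constant (assumed)

      t = np.arange(0, 2, dt)
      f_true = np.where((t % 1.0) < 0.4, 0.16, 0.04)   # toy square-wave gas swing

      f_meas = np.zeros_like(f_true)
      f_meas[0] = f_true[0]
      for n in range(1, len(t)):
          f_meas[n] = f_meas[n - 1] + (dt / tau) * (f_true[n] - f_meas[n - 1])

      # The lagged signal under-represents rapid changes; any gas-exchange
      # computation that ignores tau inherits this distortion.
      print(f"peak true fraction          : {f_true.max():.3f}")
      print(f"peak measured, second breath: {f_meas[t >= 1.0].max():.3f}")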

  18. 7 CFR 276.2 - State agency liabilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...., errors by the personnel of issuance offices in the counting of coupon books); (iv) Coupons lost in... household's account, benefits drawn from an EBT account after the household has reported that the EBT card...

  19. 5 CFR 1605.21 - Plan-paid breakage and other corrections.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... participant's account in the wrong investment fund(s). (3) A participant will not be entitled to breakage.... 1605.21 Section 1605.21 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD CORRECTION... investment gains or losses the account have received had the error not occurred, the account will be credited...

  20. 5 CFR 1605.21 - Plan-paid breakage and other corrections.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... participant's account in the wrong investment fund(s). (3) A participant will not be entitled to breakage.... 1605.21 Section 1605.21 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD CORRECTION... investment gains or losses the account have received had the error not occurred, the account will be credited...

  1. Design and simulation of sensor networks for tracking Wifi users in outdoor urban environments

    NASA Astrophysics Data System (ADS)

    Thron, Christopher; Tran, Khoi; Smith, Douglas; Benincasa, Daniel

    2017-05-01

    We present a proof-of-concept investigation into the use of sensor networks for tracking WiFi users in outdoor urban environments. Sensors are fixed, and are capable of measuring signal power from users' WiFi devices. We derive a maximum likelihood estimate for user location based on instantaneous sensor power measurements. The algorithm takes into account the effects of power control, and is self-calibrating in that the signal power model used by the location algorithm is adjusted and improved as part of the operation of the network. Simulation results to verify the system's performance are presented. The simulation scenario is based on a 1.5 km² area of lower Manhattan. The self-calibration mechanism was verified for initial rms (root mean square) errors of up to 12 dB in the channel power estimates: rms errors were reduced by over 60% in 300 track-hours, in systems with limited power control. Under typical operating conditions with (without) power control, location rms errors are about 8.5 (5) meters with 90% accuracy within 9 (13) meters, for both pedestrian and vehicular users. The distance error distributions for smaller distances (<30 m) are well-approximated by an exponential distribution, while the distributions for large distance errors have fat tails. The issue of optimal sensor placement in the sensor network is also addressed. We specify a linear programming algorithm for determining sensor placement for networks with a reduced number of sensors. In our test case, the algorithm produces a network with 18.5% fewer sensors and comparable location-estimation accuracy. Finally, we discuss future research directions for improving the accuracy and capabilities of sensor network systems in urban environments.
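
    A minimal version of the location estimate is a grid search over candidate positions under a log-distance path-loss model with Gaussian noise in dB, for which the maximum-likelihood estimate minimizes the sum of squared dB residuals; the sensor layout, transmit power, path-loss exponent, and noise level below are assumptions for illustration, and power control is ignored.

      import numpy as np

      # Maximum-likelihood user localization from sensor power measurements under
      # a log-distance path-loss model with i.i.d. Gaussian noise in dB. With that
      # noise model the ML estimate minimizes the sum of squared dB residuals.
      rng = np.random.default_rng(3)

      sensors = np.array([[0, 0], [400, 0], [0, 400], [400, 400], [200, 200]], float)
      p_tx = -30.0      # dBm at 1 m reference distance (assumed)
      n_exp = 3.0       # path-loss exponent for urban propagation (assumed)
      sigma = 6.0       # dB shadowing / measurement noise (assumed)

      def mean_power(user, sensors):
          d = np.linalg.norm(sensors - user, axis=1).clip(min=1.0)
          return p_tx - 10.0 * n_exp * np.log10(d)

      # Simulate one user and the powers seen at the sensors.
      user_true = np.array([260.0, 140.0])
      p_obs = mean_power(user_true, sensors) + rng.normal(0, sigma, len(sensors))

      # Grid search for the ML position estimate.
      best, best_cost = None, np.inf
      for x in np.arange(0, 401, 2.0):
          for y in np.arange(0, 401, 2.0):
              resid = p_obs - mean_power(np.array([x, y]), sensors)
              cost = np.sum(resid ** 2)
              if cost < best_cost:
                  best, best_cost = (x, y), cost

      print(f"true position {user_true}, ML estimate {best}, "
            f"error {np.linalg.norm(np.array(best) - user_true):.1f} m")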

  2. Improved accuracy of ultrasound-guided therapies using electromagnetic tracking: in-vivo speed of sound measurements

    NASA Astrophysics Data System (ADS)

    Samboju, Vishal; Adams, Matthew; Salgaonkar, Vasant; Diederich, Chris J.; Cunha, J. Adam M.

    2017-02-01

    The speed of sound (SOS) for ultrasound devices used for imaging soft tissue is often calibrated to water (1540 m/s), despite in-vivo soft tissue SOS varying from 1450 to 1613 m/s. Images acquired with 1540 m/s and used in conjunction with stereotactic external coordinate systems can thus result in displacement errors of several millimeters. Ultrasound imaging systems are routinely used to guide interventional thermal ablation and cryoablation devices, or radiation sources for brachytherapy. Brachytherapy uses small radioactive pellets, inserted interstitially with needles under ultrasound guidance, to eradicate cancerous tissue. Since the radiation dose diminishes with distance from the pellet as 1/r², imaging uncertainty of a few millimeters can result in significant erroneous dose delivery. Likewise, modeling of power deposition and thermal dose accumulation from ablative sources is also prone to errors due to placement offsets from SOS errors. This work presents a method of mitigating needle placement error due to SOS variances without the need of ionizing radiation. We demonstrate the effects of changes in dosimetry in a prostate brachytherapy environment due to patient-specific SOS variances and the ability to mitigate dose delivery uncertainty. Electromagnetic (EM) sensors embedded in the brachytherapy ultrasound system provide information regarding the 3D position and orientation of the ultrasound array. Algorithms using data from these two modalities are used to correct B-mode images to account for SOS errors. While ultrasound localization resulted in >3 mm displacements, EM resolution was verified to <1 mm precision using custom-built phantoms with various SOS, showing 1% accuracy in SOS measurement.
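
    The axial rescaling implied by a speed-of-sound mismatch illustrates the core of the image correction: the scanner converts echo time to depth assuming 1540 m/s, so a target at displayed depth d actually sits at roughly d·(c_true/1540); the tissue speed below is an assumed example value, not a measured one.

      # Axial displacement caused by assuming 1540 m/s when the tissue speed of
      # sound differs. The scanner maps echo time t to depth via d = c_assumed*t/2,
      # so the corrected depth is d * (c_true / c_assumed). Example values only.
      c_assumed = 1540.0   # m/s, scanner calibration
      c_true = 1460.0      # m/s, assumed patient-specific speed of sound

      for displayed_depth_mm in (20.0, 40.0, 60.0):
          corrected = displayed_depth_mm * c_true / c_assumed
          error_mm = displayed_depth_mm - corrected
          print(f"displayed {displayed_depth_mm:4.1f} mm -> corrected "
                f"{corrected:5.1f} mm (offset {error_mm:+.1f} mm)")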

  3. Improved estimation of heavy rainfall by weather radar after reflectivity correction and accounting for raindrop size distribution variability

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2015-04-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error and only 30% of the precipitation observed by rain gauges was estimated. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in The Netherlands. In general weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z-R) and radar reflectivity-specific attenuation (Z-k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the disdrometer information, the best results were obtained in case no differentiation between precipitation type (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates further improve and closely resemble observations in case one differentiates between precipitation type. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable if single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far away from the main rain event, which was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need to be cautious using locally obtained disdrometer measurements.
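
    For reference, the sketch below shows how a power-law reflectivity-rain-rate relationship, Z = a·R^b, is inverted to obtain rain rate from measured reflectivity; the coefficients used are common Marshall-Palmer-like defaults, not the DSD-derived values from this study.

      import numpy as np

      # Invert a power-law Z-R relationship, Z = a * R**b, to estimate rain rate
      # from measured reflectivity. Coefficients a, b depend on the drop size
      # distribution; the values below are common defaults, not this study's.
      a, b = 200.0, 1.6     # Marshall-Palmer-like coefficients (Z in mm^6 m^-3)

      def rain_rate(dbz):
          """Rain rate in mm/h from reflectivity in dBZ."""
          z_linear = 10.0 ** (np.asarray(dbz) / 10.0)   # mm^6 m^-3
          return (z_linear / a) ** (1.0 / b)

      for dbz in (20, 30, 40, 50):
          print(f"{dbz} dBZ -> {rain_rate(dbz):6.1f} mm/h")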

  4. The impact of reflectivity correction and accounting for raindrop size distribution variability to improve precipitation estimation by weather radar for an extreme low-land mesoscale convective system

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-11-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error and only 30% of the precipitation observed by rain gauges was estimated. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in The Netherlands. In general weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z - R) and radar reflectivity-specific attenuation (Z - k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the disdrometer information, the best results were obtained in case no differentiation between precipitation type (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates further improve and closely resemble observations in case one differentiates between precipitation type. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable if single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far away from the main rain event, which was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need to be cautious using locally obtained disdrometer measurements.

  5. Error correction and statistical analyses for intra-host comparisons of feline immunodeficiency virus diversity from high-throughput sequencing data.

    PubMed

    Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary

    2015-06-30

    Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies - and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual and single infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or infection status by tissue interaction. All together, our results demonstrated a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase in G to A substitutions, but found no evidence for this host defense strategy. Our error correction approach for minor allele frequencies (more sensitive and computationally efficient than other algorithms) and our statistical treatment of variation (ANOVA) were critical for effective use of high-throughput sequencing data in understanding viral diversity. We found that co-infection with PLV shifts FIV diversity from bone marrow to lymph node and spleen.

  6. A path reconstruction method integrating dead-reckoning and position fixes applied to humpback whales.

    PubMed

    Wensveen, Paul J; Thomas, Len; Miller, Patrick J O

    2015-01-01

    Detailed information about animal location and movement is often crucial in studies of natural behaviour and how animals respond to anthropogenic activities. Dead-reckoning can be used to infer such detailed information, but without additional positional data this method results in uncertainty that grows with time. Combining dead-reckoning with new Fastloc-GPS technology should provide good opportunities for reconstructing georeferenced fine-scale tracks, and should be particularly useful for marine animals that spend most of their time under water. We developed a computationally efficient, Bayesian state-space modelling technique to estimate humpback whale locations through time, integrating dead-reckoning using on-animal sensors with measurements of whale locations using on-animal Fastloc-GPS and visual observations. Positional observation models were based upon error measurements made during calibrations. High-resolution 3-dimensional movement tracks were produced for 13 whales using a simple process model in which errors caused by water current movements, non-location sensor errors, and other dead-reckoning errors were accumulated into a combined error term. Positional uncertainty quantified by the track reconstruction model was much greater for tracks with visual positions and few or no GPS positions, indicating a strong benefit to using Fastloc-GPS for track reconstruction. Compared to tracks derived only from position fixes, the inclusion of dead-reckoning data greatly improved the level of detail in the reconstructed tracks of humpback whales. Using cross-validation, a clear improvement in the predictability of out-of-set Fastloc-GPS data was observed compared to more conventional track reconstruction methods. Fastloc-GPS observation errors during calibrations were found to vary by number of GPS satellites received and by orthogonal dimension analysed; visual observation errors varied most by distance to the whale. By systematically accounting for the observation errors in the position fixes, our model provides a quantitative estimate of location uncertainty that can be appropriately incorporated into analyses of animal movement. This generic method has potential application for a wide range of marine animal species and data recording systems.
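
    A much simpler stand-in for the Bayesian state-space model is "forced" dead-reckoning, in which the closing error accumulated between successive position fixes is redistributed linearly along the intervening track; the sketch below illustrates the idea in two dimensions with simulated data and is not the authors' method.

      import numpy as np

      # "Forced" dead-reckoning: integrate speed/heading between position fixes and
      # spread the closing error at each fix linearly back over the segment. This is
      # a simple stand-in for the Bayesian state-space model, for illustration only.
      rng = np.random.default_rng(4)

      dt, n = 1.0, 600                                    # 1 Hz, 10-minute toy track
      heading = np.cumsum(rng.normal(0, 0.02, n))         # rad
      speed = 1.5 + 0.2 * rng.normal(0, 1, n)             # m/s
      vel = np.column_stack([speed * np.cos(heading), speed * np.sin(heading)])

      true_track = np.cumsum(vel * dt, axis=0)
      # Dead-reckoned track drifts because of sensor and current errors (simulated).
      dr_track = np.cumsum((vel + rng.normal(0, 0.1, vel.shape) + [0.05, -0.02]) * dt,
                           axis=0)

      fix_idx = np.arange(0, n, 120)                      # sparse GPS fixes
      fixes = true_track[fix_idx] + rng.normal(0, 5.0, (len(fix_idx), 2))

      corrected = dr_track.copy()
      for i0, i1 in zip(fix_idx[:-1], fix_idx[1:]):
          err0 = fixes[fix_idx == i0][0] - corrected[i0]
          err1 = fixes[fix_idx == i1][0] - corrected[i1]
          w = np.linspace(0, 1, i1 - i0 + 1)[:, None]
          corrected[i0:i1 + 1] += (1 - w) * err0 + w * err1

      rms = lambda a: np.sqrt(np.mean(np.sum((a - true_track) ** 2, axis=1)))
      print(f"RMS error: dead-reckoning only {rms(dr_track):.1f} m, "
            f"fix-corrected {rms(corrected):.1f} m")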

  7. Evaluating measurement models in clinical research: covariance structure analysis of latent variable models of self-conception.

    PubMed

    Hoyle, R H

    1991-02-01

    Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.

  8. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment.

    PubMed

    O'Brien, Katie M; Upson, Kristen; Cook, Nancy R; Weinberg, Clarice R

    2016-02-01

    Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. We compared adjustment methods, including novel approaches, using simulated case-control data. Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and possessed 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals.
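
    One plausible implementation of the two ingredients named here, covariate-adjusted standardization plus creatinine as a regression covariate, is sketched below; the covariates, simulated data, and model form are placeholders, and this is not the authors' code.

      import numpy as np
      import statsmodels.api as sm

      # Sketch: covariate-adjusted standardization of a urinary biomarker plus
      # creatinine as a covariate in the outcome model. One plausible reading of
      # the approach described; covariates, data, and model form are placeholders.
      rng = np.random.default_rng(5)
      n = 1000

      age = rng.uniform(20, 70, n)
      bmi = rng.normal(27, 4, n)
      log_creat = 0.01 * age - 0.02 * bmi + rng.normal(0, 0.3, n)
      log_biomarker = rng.normal(0, 1, n) + log_creat        # dilution-dependent
      case = rng.binomial(1, 0.3, n)

      # Step 1: predict creatinine from covariates that drive urine dilution.
      X_creat = sm.add_constant(np.column_stack([age, bmi]))
      creat_fit = sm.OLS(log_creat, X_creat).fit()
      log_creat_pred = creat_fit.predict(X_creat)

      # Step 2: standardize the biomarker by the observed/predicted creatinine
      # ratio (a subtraction on the log scale).
      log_biomarker_std = log_biomarker - (log_creat - log_creat_pred)

      # Step 3: outcome model with the standardized biomarker and creatinine itself.
      X_out = sm.add_constant(np.column_stack([log_biomarker_std, log_creat, age, bmi]))
      outcome_fit = sm.GLM(case, X_out, family=sm.families.Binomial()).fit()
      print(outcome_fit.params)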

  9. The modulating effect of personality traits on neural error monitoring: evidence from event-related FMRI.

    PubMed

    Sosic-Vasic, Zrinka; Ulrich, Martin; Ruchsow, Martin; Vasic, Nenad; Grön, Georg

    2012-01-01

    The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between both scales observed on the behavioral level. Under the present statistical thresholds no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task-accomplishment striving, the correlation in the left IFG/aI possibly reflects an inter-individually different involvement whenever task-set related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task-set, expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. The present results illustrate that underlying personality traits should be taken into account when predicting individual responses to errors, and also lend external validity to the personality trait approach, suggesting that personality constructs reflect more than mere descriptive taxonomies.

  10. Elevation Change of the Southern Greenland Ice Sheet from Satellite Radar Altimeter Data

    NASA Technical Reports Server (NTRS)

    Haines, Bruce J.

    1999-01-01

    Long-term changes in the thickness of the polar ice sheets are important indicators of climate change. Understanding the contributions to the global water mass balance from the accumulation or ablation of grounded ice in Greenland and Antarctica is considered crucial for determining the source of the approximately 2 mm/yr sea-level rise over the last century. Though the Antarctic ice sheet is much larger than its northern counterpart, the Greenland ice sheet is more likely to undergo dramatic changes in response to a warming trend. This can be attributed to the warmer Greenland climate, as well as a potential for amplification of a global warming trend in the polar regions of the Northern Hemisphere. In collaboration with Drs. Curt Davis and Craig Kluever of the University of Missouri, we are using data from satellite radar altimeters to measure changes in the elevation of the Southern Greenland ice sheet from 1978 to the present. Difficulties with systematic altimeter measurement errors, particularly in intersatellite comparisons, beset earlier studies of the Greenland ice sheet thickness. We use altimeter data collected contemporaneously over the global ocean to establish a reference for correcting ice-sheet data. In addition, the waveform data from the ice-sheet radar returns are reprocessed to better determine the range from the satellite to the ice surface. At JPL, we are focusing our efforts principally on the reduction of orbit errors and range biases in the measurement systems on the various altimeter missions. Our approach emphasizes global characterization and reduction of the long-period orbit errors and range biases using altimeter data from NASA's Ocean Pathfinder program. Along-track sea-height residuals are sequentially filtered and backwards smoothed, and the radial orbit errors are modeled as sinusoids with a wavelength equal to one revolution of the satellite. The amplitudes of the sinusoids are treated as exponentially-correlated noise processes with a time-constant of six days. Measurement errors (e.g., altimeter range bias) are simultaneously recovered as constant parameters. The corrections derived from the global ocean analysis are then applied over the Greenland ice sheet. The orbit error and measurement bias corrections for different missions are developed in a single framework to enable robust linkage of ice-sheet measurements from 1978 to the present. In 1998, we completed our re-evaluation of the 1978 Seasat and 1985-1989 Geosat Exact Repeat Mission data. The estimated ice-thickness change over Southern Greenland (south of 72N and above 2000 m) from 1978 to 1988 shows large regional variations (+/-18 cm/yr), but yields an overall rate of +1.5 +/- 0.5 cm/yr (one standard error). Accounting for systematic errors, the estimate may not be significantly different from the null growth rate. The average elevation change from 1978 to 1988 is too small to assess whether the Greenland ice sheet is undergoing a long-term change.
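
    The once-per-revolution orbit-error model can be illustrated with an ordinary least-squares fit of a 1-cycle-per-revolution sinusoid plus a constant range bias to sea-height residuals; the sequential filter/backward smoother and the exponentially correlated amplitudes are not reproduced here, and the data are simulated.

      import numpy as np

      # Fit a once-per-revolution (1-cpr) sinusoid plus a constant range bias to
      # along-track sea-height residuals. This static least-squares version only
      # illustrates the error model; the sequential filter/backward smoother with
      # exponentially correlated amplitudes used in the study is not reproduced.
      rng = np.random.default_rng(6)

      t_rev = 6040.0                       # s, approximate orbital period (assumed)
      t = np.linspace(0, 3 * t_rev, 2000)  # three revolutions of residuals
      omega = 2 * np.pi / t_rev

      a_true, b_true, bias_true = 0.30, -0.15, 0.05        # metres
      resid = (a_true * np.sin(omega * t) + b_true * np.cos(omega * t)
               + bias_true + rng.normal(0, 0.08, t.size))

      design = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
      (a_hat, b_hat, bias_hat), *_ = np.linalg.lstsq(design, resid, rcond=None)
      print(f"recovered 1-cpr amplitudes: a = {a_hat:.3f} m, b = {b_hat:.3f} m")
      print(f"recovered range bias: {bias_hat:.3f} m")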

  11. Study on analysis from sources of error for Airborne LIDAR

    NASA Astrophysics Data System (ADS)

    Ren, H. C.; Yan, Q.; Liu, Z. J.; Zuo, Z. Q.; Xu, Q. Q.; Li, F. F.; Song, C.

    2016-11-01

    With the advancement of aerial photogrammetry, Airborne LIDAR provides a new technical means of obtaining geo-spatial information at high spatial and temporal resolution, with unique advantages and broad application prospects. Airborne LIDAR is increasingly becoming a new kind of Earth-observation technology: a laser scanner mounted on an aviation platform emits and receives laser pulses to obtain high-precision, high-density three-dimensional point-cloud coordinates and intensity information. In this paper, we briefly describe airborne laser radar systems, analyse the main error sources in Airborne LIDAR data in detail, and put forward corresponding methods to avoid or eliminate them. Taking practical engineering applications into account, recommendations are developed for these designs; this has both theoretical and practical significance for Airborne LIDAR data processing.

  12. Analyzing average and conditional effects with multigroup multilevel structural equation models

    PubMed Central

    Mayer, Axel; Nagengast, Benjamin; Fletcher, John; Steyer, Rolf

    2014-01-01

    Conventionally, multilevel analysis of covariance (ML-ANCOVA) has been the recommended approach for analyzing treatment effects in quasi-experimental multilevel designs with treatment application at the cluster-level. In this paper, we introduce the generalized ML-ANCOVA with linear effect functions that identifies average and conditional treatment effects in the presence of treatment-covariate interactions. We show how the generalized ML-ANCOVA model can be estimated with multigroup multilevel structural equation models that offer considerable advantages compared to traditional ML-ANCOVA. The proposed model takes into account measurement error in the covariates, sampling error in contextual covariates, treatment-covariate interactions, and stochastic predictors. We illustrate the implementation of ML-ANCOVA with an example from educational effectiveness research where we estimate average and conditional effects of early transition to secondary schooling on reading comprehension. PMID:24795668

  13. The Strategies to Homogenize PET/CT Metrics: The Case of Onco-Haematological Clinical Trials

    PubMed Central

    Chauvie, Stephane; Bergesio, Fabrizio

    2016-01-01

    Positron emission tomography (PET) has long been a widely used tool in oncology for staging lymphomas. Recently, several large clinical trials demonstrated its utility in therapy management during treatment, paving the way to personalized medicine. In doing so, the traditional way of reporting PET based on the extent of disease has been complemented by a discrete scale that takes into account tumour metabolism. However, due to several technical, physical and biological limitations in the use of PET uptake as a biomarker, stringent rules have been used in clinical trials to reduce the errors in its evaluation. Within this manuscript we briefly describe the evolution of PET reporting, examine the main errors in uptake measurement, and analyse which strategies the clinical trials applied to reduce them. PMID:28536393

  14. Comparing Errors in Medicaid Reporting across Surveys: Evidence to Date

    PubMed Central

    Call, Kathleen T; Davern, Michael E; Klerman, Jacob A; Lynch, Victoria

    2013-01-01

    Objective To synthesize evidence on the accuracy of Medicaid reporting across state and federal surveys. Data Sources All available validation studies. Study Design Compare results from existing research to understand variation in reporting across surveys. Data Collection Methods Synthesize all available studies validating survey reports of Medicaid coverage. Principal Findings Across all surveys, reporting some type of insurance coverage is better than reporting Medicaid specifically. Therefore, estimates of uninsurance are less biased than estimates of specific sources of coverage. The CPS stands out as being particularly inaccurate. Conclusions Measuring health insurance coverage is prone to some level of error, yet survey overstatements of uninsurance are modest in most surveys. Accounting for all forms of bias is complex. Researchers should consider adjusting estimates of Medicaid and uninsurance in surveys prone to high levels of misreporting. PMID:22816493

  15. Performance evaluation of wireless communications through capsule endoscope.

    PubMed

    Takizawa, Kenichi; Aoyagi, Takahiro; Hamaguchi, Kiyoshi; Kohno, Ryuji

    2009-01-01

    This paper presents a performance evaluation of wireless communications applicable to a capsule endoscope. A numerical model describing the received signal strength (RSS) radiated from a capsule-sized signal generator is derived from measurements using a liquid phantom with equivalent electrical constants. By introducing this model and taking into account the directional pattern of the capsule and the propagation distance between the implanted capsule and the on-body antenna, a cumulative distribution function (CDF) of the received SNR is evaluated. Then, simulation results related to the error ratio in the wireless channel are obtained. These results show that frequencies of 611 MHz or lower would be useful for capsule endoscope applications from the viewpoint of error-rate performance. Further, we show that the use of antenna diversity brings additional gain to this application.

  16. Accuracy of linear drilling in temporal bone using drill press system for minimally invasive cochlear implantation

    PubMed Central

    Balachandran, Ramya; Labadie, Robert F.

    2015-01-01

    Purpose A minimally invasive approach for cochlear implantation involves drilling a narrow linear path through the temporal bone from the skull surface directly to the cochlea for insertion of the electrode array without the need for an invasive mastoidectomy. Potential drill positioning errors must be accounted for to predict the effectiveness and safety of the procedure. The drilling accuracy of a system used for this procedure was evaluated in bone surrogate material under a range of clinically relevant parameters. Additional experiments were performed to isolate the error at various points along the path to better understand why deflections occur. Methods An experimental setup to precisely position the drill press over a target was used. Custom bone surrogate test blocks were manufactured to resemble the mastoid region of the temporal bone. The drilling error was measured by creating divots in plastic sheets before and after drilling and using a microscope to localize the divots. Results The drilling error was within the tolerance needed to avoid vital structures and ensure accurate placement of the electrode; however, some parameter sets yielded errors that may impact the effectiveness of the procedure when combined with other error sources. The error increases when the lateral stage of the path terminates in an air cell and when the guide bushings are positioned further from the skull surface. At contact points due to air cells along the trajectory, higher errors were found for impact angles of 45° and higher as well as longer cantilevered drill lengths. Conclusion The results of these experiments can be used to define more accurate and safe drill trajectories for this minimally invasive surgical procedure. PMID:26183149

  17. Accuracy of linear drilling in temporal bone using drill press system for minimally invasive cochlear implantation.

    PubMed

    Dillon, Neal P; Balachandran, Ramya; Labadie, Robert F

    2016-03-01

    A minimally invasive approach for cochlear implantation involves drilling a narrow linear path through the temporal bone from the skull surface directly to the cochlea for insertion of the electrode array without the need for an invasive mastoidectomy. Potential drill positioning errors must be accounted for to predict the effectiveness and safety of the procedure. The drilling accuracy of a system used for this procedure was evaluated in bone surrogate material under a range of clinically relevant parameters. Additional experiments were performed to isolate the error at various points along the path to better understand why deflections occur. An experimental setup to precisely position the drill press over a target was used. Custom bone surrogate test blocks were manufactured to resemble the mastoid region of the temporal bone. The drilling error was measured by creating divots in plastic sheets before and after drilling and using a microscope to localize the divots. The drilling error was within the tolerance needed to avoid vital structures and ensure accurate placement of the electrode; however, some parameter sets yielded errors that may impact the effectiveness of the procedure when combined with other error sources. The error increases when the lateral stage of the path terminates in an air cell and when the guide bushings are positioned further from the skull surface. At contact points due to air cells along the trajectory, higher errors were found for impact angles of 45° and higher as well as longer cantilevered drill lengths. The results of these experiments can be used to define more accurate and safe drill trajectories for this minimally invasive surgical procedure.

  18. Improving the accuracy of SO2 column densities and emission rates obtained from upward-looking UV-spectroscopic measurements of volcanic plumes by taking realistic radiative transfer into account

    USGS Publications Warehouse

    Kern, Christoph; Deutschmann, Tim; Werner, Cynthia; Sutton, A. Jeff; Elias, Tamar; Kelly, Peter J.

    2012-01-01

    Sulfur dioxide (SO2) is monitored using ultraviolet (UV) absorption spectroscopy at numerous volcanoes around the world due to its importance as a measure of volcanic activity and a tracer for other gaseous species. Recent studies have shown that failure to take realistic radiative transfer into account during the spectral retrieval of the collected data often leads to large errors in the calculated emission rates. Here, the framework for a new evaluation method which couples a radiative transfer model to the spectral retrieval is described. In it, absorption spectra are simulated, and atmospheric parameters are iteratively updated in the model until a best match to the measurement data is achieved. The evaluation algorithm is applied to two example Differential Optical Absorption Spectroscopy (DOAS) measurements conducted at Kilauea volcano (Hawaii). The resulting emission rates were 20 and 90% higher than those obtained with a conventional DOAS retrieval performed between 305 and 315 nm, respectively, depending on the different SO2 and aerosol loads present in the volcanic plume. The internal consistency of the method was validated by measuring and modeling SO2 absorption features in a separate wavelength region around 375 nm and comparing the results. Although additional information about the measurement geometry and atmospheric conditions is needed in addition to the acquired spectral data, this method for the first time provides a means of taking realistic three-dimensional radiative transfer into account when analyzing UV-spectral absorption measurements of volcanic SO2 plumes.

  19. Prevalence of teen driver errors leading to serious motor vehicle crashes.

    PubMed

    Curry, Allison E; Hafetz, Jessica; Kallan, Michael J; Winston, Flaura K; Durbin, Dennis R

    2011-07-01

    Motor vehicle crashes are the leading cause of adolescent deaths. Programs and policies should target the most common and modifiable reasons for crashes. We estimated the frequency of critical reasons for crashes involving teen drivers, and examined in more depth specific teen driver errors. The National Highway Traffic Safety Administration's (NHTSA) National Motor Vehicle Crash Causation Survey collected data at the scene of a nationally representative sample of 5470 serious crashes between 7/05 and 12/07. NHTSA researchers assigned a single driver, vehicle, or environmental factor as the critical reason for the event immediately leading to each crash. We analyzed crashes involving 15-18 year old drivers. 822 teen drivers were involved in 795 serious crashes, representing 335,667 teens in 325,291 crashes. Driver error was by far the most common reason for crashes (95.6%), as opposed to vehicle or environmental factors. Among crashes with a driver error, a teen made the error 79.3% of the time (75.8% of all teen-involved crashes). Recognition errors (e.g., inadequate surveillance, distraction) accounted for 46.3% of all teen errors, followed by decision errors (e.g., following too closely, too fast for conditions) (40.1%) and performance errors (e.g., loss of control) (8.0%). Inadequate surveillance, driving too fast for conditions, and distracted driving together accounted for almost half of all crashes. Aggressive driving behavior, drowsy driving, and physical impairments were less commonly cited as critical reasons. Males and females had similar proportions of broadly classified errors, although females were specifically more likely to make inadequate surveillance errors. Our findings support prioritization of interventions targeting driver distraction and surveillance and hazard awareness training. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Toward refined estimates of ambient PM2.5 exposure: Evaluation of a physical outdoor-to-indoor transport model

    PubMed Central

    Hodas, Natasha; Meng, Qingyu; Lunden, Melissa M.; Turpin, Barbara J.

    2014-01-01

    Because people spend the majority of their time indoors, the variable efficiency with which ambient PM2.5 penetrates and persists indoors is a source of error in epidemiologic studies that use PM2.5 concentrations measured at central-site monitors as surrogates for ambient PM2.5 exposure. To reduce this error, practical methods to model indoor concentrations of ambient PM2.5 are needed. Toward this goal, we evaluated and refined an outdoor-to-indoor transport model using measured indoor and outdoor PM2.5 species concentrations and air exchange rates from the Relationships of Indoor, Outdoor, and Personal Air Study. Herein, we present model evaluation results, discuss what data are most critical to prediction of residential exposures at the individual-subject and population levels, and make recommendations for the application of the model in epidemiologic studies. This paper demonstrates that not accounting for certain human activities (air conditioning and heating use, opening windows) leads to bias in predicted residential PM2.5 exposures at the individual-subject level, but not the population level. The analyses presented also provide quantitative evidence that shifts in the gas-particle partitioning of ambient organics with outdoor-to-indoor transport contribute significantly to variability in indoor ambient organic carbon concentrations and suggest that methods to account for these shifts will further improve the accuracy of outdoor-to-indoor transport models. PMID:25798047
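
    The physical core of such outdoor-to-indoor transport models is often summarized by the steady-state infiltration factor F_inf = P·a/(a + k), which gives the indoor concentration of ambient PM2.5 as F_inf·C_outdoor; the penetration efficiency, air-exchange rate, and deposition rate below are assumed example values, not RIOPA estimates.

      # Steady-state infiltration factor for ambient PM2.5 transported indoors:
      #   F_inf = P * a / (a + k)
      # P: penetration efficiency, a: air-exchange rate (1/h), k: deposition (1/h).
      # The parameter values are assumed examples, not RIOPA estimates.
      def infiltration_factor(p: float, a: float, k: float) -> float:
          return p * a / (a + k)

      c_outdoor = 12.0   # ug/m3 ambient PM2.5 at the central-site monitor

      scenarios = {
          "windows open (high a)": dict(p=1.0, a=2.0, k=0.2),
          "closed, no AC":         dict(p=0.8, a=0.5, k=0.2),
          "closed, AC running":    dict(p=0.6, a=0.3, k=0.4),
      }
      for name, pars in scenarios.items():
          f_inf = infiltration_factor(**pars)
          print(f"{name:25s} F_inf = {f_inf:.2f}, "
                f"indoor ambient PM2.5 = {f_inf * c_outdoor:.1f} ug/m3")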
