Science.gov

Sample records for 10-fold cross-validation accuracy

  1. The Cross-Validational Accuracy of Sample Regressions.

    ERIC Educational Resources Information Center

    Rozeboom, William W.

    1981-01-01

    Browne's definitive but complex formulas for the cross-validational accuracy of an OLS-estimated regression equation in the random-effects sampling model are here reworked to achieve greater perspicuity and extended to include the fixed-effects sampling model. (Author)

  2. Using cross-validation to evaluate predictive accuracy of survival risk classifiers based on high-dimensional data.

    PubMed

    Simon, Richard M; Subramanian, Jyothi; Li, Ming-Chung; Menezes, Supriya

    2011-05-01

    Developments in whole genome biotechnology have stimulated statistical focus on prediction methods. We review here methodology for classifying patients into survival risk groups and for using cross-validation to evaluate such classifications. Measures of discrimination for survival risk models include separation of survival curves, time-dependent ROC curves and Harrell's concordance index. For high-dimensional data applications, however, computing these measures as re-substitution statistics on the same data used for model development results in highly biased estimates. Most developments in methodology for survival risk modeling with high-dimensional data have utilized separate test data sets for model evaluation. Cross-validation has sometimes been used for optimization of tuning parameters. In many applications, however, the data available are too limited for effective division into training and test sets and consequently authors have often either reported re-substitution statistics or analyzed their data using binary classification methods in order to utilize familiar cross-validation. In this article we have tried to indicate how to utilize cross-validation for the evaluation of survival risk models; specifically how to compute cross-validated estimates of survival distributions for predicted risk groups and how to compute cross-validated time-dependent ROC curves. We have also discussed evaluation of the statistical significance of a survival risk model and evaluation of whether high-dimensional genomic data adds predictive accuracy to a model based on standard covariates alone.
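
    To make the idea concrete: a minimal sketch of cross-validated risk grouping, pooling out-of-fold risk scores before splitting patients into groups. A Cox model stands in for the abstract's generic risk classifier; the column names and the lifelines/scikit-learn dependencies are assumptions of this sketch, not part of the original work.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import KFold
    from lifelines import CoxPHFitter, KaplanMeierFitter

    def cross_validated_risk_groups(df, duration_col="time", event_col="event"):
        """Pool out-of-fold risk scores, then form risk groups.

        Grouping on held-out predictions avoids the re-substitution bias
        the abstract warns about.
        """
        risk = pd.Series(index=df.index, dtype=float)
        for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(df):
            cph = CoxPHFitter()
            cph.fit(df.iloc[train_idx], duration_col=duration_col, event_col=event_col)
            risk.iloc[test_idx] = np.asarray(
                cph.predict_partial_hazard(df.iloc[test_idx])
            ).ravel()
        high = risk > risk.median()
        # Cross-validated survival curves for the predicted risk groups.
        curves = {}
        for name, mask in [("low risk", ~high), ("high risk", high)]:
            km = KaplanMeierFitter()
            km.fit(df.loc[mask, duration_col], df.loc[mask, event_col], label=name)
            curves[name] = km
        return curves
    ```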

  3. How Nonrecidivism Affects Predictive Accuracy: Evidence from a Cross-Validation of the Ontario Domestic Assault Risk Assessment (ODARA)

    ERIC Educational Resources Information Center

    Hilton, N. Zoe; Harris, Grant T.

    2009-01-01

    Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…

  4. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French space agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and it still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits will be discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  5. Cross-Validation.

    ERIC Educational Resources Information Center

    Langmuir, Charles R.

    1954-01-01

    Cross-validation in relation to choosing the best tests and selecting the best items in tests is discussed. Cross-validation demonstrates whether a decision derived from one set of data is truly effective when the decision is applied to another independent, but relevant, sample of people. Cross-validation is particularly important after…

  6. Accuracy of Population Validity and Cross-Validity Estimation: An Empirical Comparison of Formula-Based, Traditional Empirical, and Equal Weights Procedures.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.

    1999-01-01

    Performed an empirical Monte Carlo study using predictor and criterion data from 84,808 U.S. Air Force enlistees. Compared formula-based, traditional empirical, and equal-weights procedures. Discusses issues for basic research on validation and cross-validation. (SLD)

  7. Cross-Validation, Shrinkage, and Multiple Regression.

    ERIC Educational Resources Information Center

    Hynes, Kevin

    One aspect of multiple regression--the shrinkage of the multiple correlation coefficient on cross-validation--is reviewed. The paper consists of four sections. In section one, the distinction between a fixed and a random multiple regression model is made explicit. In section two, the cross-validation paradigm and an explanation for the occurrence…

  8. Cross-Validation of the Risk Matrix 2000 Sexual and Violent Scales

    ERIC Educational Resources Information Center

    Craig, Leam A.; Beech, Anthony; Browne, Kevin D.

    2006-01-01

    The predictive accuracy of the newly developed actuarial risk measures Risk Matrix 2000 Sexual/Violence (RMS, RMV) was cross-validated and compared with that of two risk assessment measures (SVR-20 and Static-99) in a sample of sexual (n = 85) and nonsexual violent (n = 46) offenders. The sexual offense reconviction rate for the sex offender group was 18%…

  9. Cross validation in LASSO and its acceleration

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Kabashima, Yoshiyuki

    2016-05-01

    We investigate leave-one-out cross validation (CV) as a means of determining the weight of the penalty term in the least absolute shrinkage and selection operator (LASSO). First, on the basis of the message passing algorithm and a perturbative discussion assuming that the number of observations is sufficiently large, we provide simple formulas for approximately assessing two types of CV errors, which enable us to significantly reduce the necessary computational cost. These formulas also provide a simple connection between the CV errors and the residual sums of squares between the reconstructed and the given measurements. Second, on the basis of this finding, we analytically evaluate the CV errors when the design matrix is given as a simple random matrix in the large size limit by using the replica method. Finally, these results are compared with those of numerical simulations on finite-size systems and are confirmed to be correct. We also apply the simple formulas for the first type of CV error to an actual supernova dataset.
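
    For reference, the brute-force leave-one-out computation that the paper's approximate formulas are designed to avoid looks like the following sketch (scikit-learn assumed; the penalty grid is illustrative):

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import LeaveOneOut

    def loo_cv_error(X, y, alpha):
        """Leave-one-out CV error for the LASSO at one penalty value.

        Costs one refit per observation; the paper's perturbative
        formulas approximate this from a single fit.
        """
        errs = []
        for train, test in LeaveOneOut().split(X):
            model = Lasso(alpha=alpha).fit(X[train], y[train])
            errs.append((y[test] - model.predict(X[test])) ** 2)
        return np.mean(errs)

    # Choose the penalty weight minimizing the LOO CV error over a grid:
    # alphas = np.logspace(-3, 1, 20)
    # best = min(alphas, key=lambda a: loo_cv_error(X, y, a))
    ```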

  10. Cross-Validation Without Doing Cross-Validation in Genome-Enabled Prediction

    PubMed Central

    Gianola, Daniel; Schön, Chris-Carolin

    2016-01-01

    Cross-validation of methods is an essential component of genome-enabled prediction of complex traits. We develop formulae for computing the predictions that would be obtained when one or several cases are removed in the training process, to become members of testing sets, but by running the model using all observations only once. Prediction methods to which the developments apply include least squares, best linear unbiased prediction (BLUP) of markers, genomic BLUP, reproducing kernel Hilbert spaces regression with single or multiple kernel matrices, and any member of a suite of linear regression methods known as the “Bayesian alphabet.” The approach used for Bayesian models is based on importance sampling of posterior draws. Proof of concept is provided by applying the formulae to a wheat data set representing 599 inbred lines genotyped for 1279 markers, with grain yield as the target trait. The data set was used to evaluate predictive mean-squared error, the impact of alternative layouts on maximum likelihood estimates of regularization parameters, model complexity, and residual degrees of freedom stemming from various strengths of regularization, as well as two forms of importance sampling. Our results will facilitate carrying out extensive cross-validation without model retraining for most machines employed in genome-assisted prediction of quantitative traits. PMID:27489209
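
    For plain least squares, the flavor of identity the authors generalize is classical: the leave-one-out residual equals the full-fit residual divided by one minus the leverage, so a single fit yields all hold-out predictions. A minimal numpy sketch:

    ```python
    import numpy as np

    def loo_residuals_ols(X, y):
        """Leave-one-out residuals for least squares from a single fit.

        Uses e_loo_i = e_i / (1 - h_ii), where h_ii is the i-th diagonal
        element of the hat matrix, so no case is ever removed and refit.
        """
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ beta                              # full-data residuals
        XtX_inv = np.linalg.inv(X.T @ X)
        h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)   # leverages h_ii
        return e / (1.0 - h)

    # Cross-validated predictive mean-squared error from one model run:
    # press = np.mean(loo_residuals_ols(X, y) ** 2)
    ```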

  11. A cross-validation package driving Netica with python

    USGS Publications Warehouse

    Fienen, Michael N.; Plant, Nathaniel G.

    2014-01-01

    Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross-validation is a technique to avoid the overfitting that results from overly complex BNs; overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented, due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source software, written in Python, that extends the Netica software package to perform cross-validation and to read, rebuild, and learn BNs from data. Insights gained from cross-validation, and its implications for prediction versus description, are illustrated with two examples: a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, and that overfitting incurs computational costs as well as reducing prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and their impact on performance metrics (we used skill).

  12. A K-fold Averaging Cross-validation Procedure

    PubMed Central

    Jung, Yoonsuh; Hu, Jianhua

    2015-01-01

    Cross-validation methods have been widely used to facilitate model estimation and variable selection. In this work, we suggest a new K-fold cross-validation procedure to select a candidate ‘optimal’ model from each hold-out fold and average the K candidate ‘optimal’ models to obtain the ultimate model. Due to the averaging effect, the variance of the proposed estimates can be significantly reduced. This new procedure results in more stable and efficient parameter estimation than the classical K-fold cross-validation procedure. In addition, we show the asymptotic equivalence between the proposed and classical cross-validation procedures in the linear regression setting. We also demonstrate the broad applicability of the proposed procedure via two examples of parameter sparsity regularization and quantile smoothing splines modeling. We illustrate the promise of the proposed method through simulations and a real data example.
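
    A sketch of the averaging idea as described, with the LASSO as an illustrative model family (scikit-learn assumed; the paper's setting is more general):

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import KFold

    def k_fold_averaged_lasso(X, y, alphas, k=5):
        """Average the K per-fold 'optimal' models instead of refitting once."""
        coefs, intercepts = [], []
        for train, test in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
            # Candidate 'optimal' model: best hold-out error on this fold.
            best = min(
                alphas,
                key=lambda a: np.mean(
                    (y[test] - Lasso(alpha=a).fit(X[train], y[train]).predict(X[test])) ** 2
                ),
            )
            m = Lasso(alpha=best).fit(X[train], y[train])
            coefs.append(m.coef_)
            intercepts.append(m.intercept_)
        # Averaging the K candidates reduces the variance of the estimates.
        return np.mean(coefs, axis=0), np.mean(intercepts)
    ```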

  13. A Cross-Validation of Paulson's Discriminant Function-Derived Scales for Identifying "At Risk" Child-Abusive Parents.

    ERIC Educational Resources Information Center

    Beal, Don; And Others

    1984-01-01

    When the six scales were cross-validated on an independent sample from the population of child-abusing parents, significant shrinkage in the accuracy of prediction was found. The use of the special subscales for identifying "at risk" parents in prenatal clinics, pediatric clinics, and mental health centers as originally suggested by Paulson and…

  14. Estimating the Coefficient of Cross-validity in Multiple Regression: A Comparison of Analytical and Empirical Methods.

    ERIC Educational Resources Information Center

    Kromrey, Jeffrey D.; Hines, Constance V.

    1996-01-01

    The accuracy of three analytical formulas for shrinkage estimation and four empirical techniques was investigated in a Monte Carlo study of the coefficient of cross-validity in multiple regression. Substantial statistical bias was evident for all techniques except the formula of M. W. Browne (1975) and multicross-validation. (SLD)

  15. A Cross-Validation Study of the Posttraumatic Growth Inventory

    ERIC Educational Resources Information Center

    Sheikh, Alia I.; Marotta, Sylvia A.

    2005-01-01

    This article is a cross-validation of R. G. Tedeschi and L. G. Calhoun's (1996) original study of the development of the Posttraumatic Growth Inventory (PTGI). It describes several psychometric properties of scores on the PTGI in a sample of middle- to old-aged adults with a history of cardiovascular disease. The results did not support the…

  16. The Cross Validation of the Attitudes toward Mainstreaming Scale (ATMS).

    ERIC Educational Resources Information Center

    Berryman, Joan D.; Neal, W. R. Jr.

    1980-01-01

    Reliability and factorial validity of the Attitudes Toward Mainstreaming Scale was supported in a cross-validation study with teachers. Three factors emerged: learning capability, general mainstreaming, and traditional limiting disabilities. Factor intercorrelations varied from .42 to .55; correlations between total scores and individual factors…

  17. Comprehensive Assessment of Emotional Disturbance: A Cross-Validation Approach

    ERIC Educational Resources Information Center

    Fisher, Emily S.; Doyon, Katie E.; Saldana, Enrique; Allen, Megan Redding

    2007-01-01

    Assessing a student for emotional disturbance is a serious and complex task given the stigma of the label and the ambiguities of the federal definition. One way that school psychologists can be more confident in their assessment results is to cross validate data from different sources using the RIOT approach (Review, Interview, Observe, Test).…

  18. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    SciTech Connect

    Pražnikar, Jure; Turk, Dušan

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.

  19. The RCRAS and legal insanity: a cross-validation study.

    PubMed

    Rogers, R; Seman, W; Wasyliw, O E

    1983-07-01

    Examined the RCRAS as an empirically based approach to insanity evaluations. Previous research has been encouraging with regard to the RCRAS's interrater reliability and construct validity. The present study, with a larger database (N = 111), sought to cross-validate these findings. Results from five forensic centers established satisfactory reliability for the RCRAS (mean kappa r = .80 for decision variables for criminal responsibility) and differentiating patterns for four of the five scales between sane and insane patient-defendants. Results further suggested that the RCRAS is generalizable across age, sex, criminal behavior, and location of the forensic evaluation. These findings are discussed with respect to the potential clinical utility of the RCRAS.

  20. Cross-validated detection of crack initiation in aerospace materials

    NASA Astrophysics Data System (ADS)

    Vanniamparambil, Prashanth A.; Cuadra, Jefferson; Guclu, Utku; Bartoli, Ivan; Kontsos, Antonios

    2014-03-01

    A cross-validated nondestructive evaluation approach was employed to detect in situ the onset of damage in an aluminum alloy compact tension specimen. The approach consisted primarily of acoustic emission, used in coordination with infrared thermography and digital image correlation. Tensile loads were applied and the specimen was continuously monitored using the nondestructive approach. Crack initiation was witnessed visually and was confirmed by the characteristic load drop accompanying the ductile fracture process. The full-field deformation map provided by the nondestructive approach validated the formation of a pronounced plasticity zone near the crack tip. At the time of crack initiation, a burst in the temperature field ahead of the crack tip as well as a sudden increase in the acoustic recordings were observed. Although such experiments have been attempted and reported before in the literature, the presented approach provides for the first time a cross-validated nondestructive dataset that can be used for quantitative analyses of the crack initiation information content. It further allows future development of automated procedures for real-time identification of damage precursors, including the rarely explored crack incubation stage in fatigue conditions.

  21. Cross-validating the Berlin Affective Word List.

    PubMed

    Võ, Melissa L H; Jacobs, Arthur M; Conrad, Markus

    2006-11-01

    We introduce the Berlin Affective Word List (BAWL) in order to provide researchers with a German database containing both emotional valence and imageability ratings for more than 2,200 German words. The BAWL was cross-validated using a forced choice valence decision task in which two distinct valence categories (negative or positive) had to be assigned to a highly controlled selection of 360 words according to varying emotional content (negative, neutral, or positive). The reaction time (RT) results corroborated the valence categories: Words that had been rated as "neutral" in the norms yielded maximum RTs. The BAWL is intended to help researchers create stimulus materials for a wide range of experiments dealing with the emotional processing of words. PMID:17393831

  22. Cross-validating a bidimensional mathematics anxiety scale.

    PubMed

    Bai, Haiyan

    2011-03-01

    The psychometric properties of a 14-item bidimensional Mathematics Anxiety Scale-Revised (MAS-R) were empirically cross-validated with two independent samples consisting of 647 secondary school students. An exploratory factor analysis on the scale yielded strong construct validity with a clear two-factor structure. The results from a confirmatory factor analysis indicated an excellent model fit (χ² = 98.32, df = 62; normed fit index = .92, comparative fit index = .97; root mean square error of approximation = .04). The internal consistency (.85), test-retest reliability (.71), interfactor correlation (.26, p < .001), and positive discrimination power indicated that MAS-R is a psychometrically reliable and valid instrument for measuring mathematics anxiety. Math anxiety, as measured by MAS-R, correlated negatively with student achievement scores (r = -.38), suggesting that MAS-R may be a useful tool for classroom teachers and other educational personnel tasked with identifying students at risk of reduced math achievement because of anxiety.

  23. Rule-Out and Rule-In scales for the M test for malingering: a cross-validation.

    PubMed

    Smith, G P; Borum, R; Schinka, J A

    1993-01-01

    Previous research found the M test to have limited utility for the screening of malingering. Subsequently, Rogers et al. attempted to improve the test's discriminative ability by developing an alternative scoring procedure: Rule-In and Rule-Out scales. These scales showed promising results as a brief screen for malingering, with hit rates as high as 95 percent. The present study cross-validated their proposed decision rules but found lower rates of classification accuracy. The most conservative decision rule (i.e., to maximize detection of malingerers) identified only 72.7 percent of the malingerers, with a false positive rate of 50.8 percent.

  24. Cross-validation pitfalls when selecting and assessing regression and classification models

    PubMed Central

    2014-01-01

    Background: We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing, which enables routine use of previously infeasible approaches. Methods: We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning, we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. Results: We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. Conclusions: We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error. PMID:24678909
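
    The two procedures the paper distinguishes map onto a few lines of scikit-learn: an inner grid-search CV for tuning nested inside an outer CV for assessment, repeated over random splits. A sketch; the model and grid below are placeholders, not the paper's QSAR setup:

    ```python
    import numpy as np
    from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
    from sklearn.svm import SVR

    def repeated_nested_cv(X, y, n_repeats=5):
        """Repeated nested CV: inner folds tune, outer folds assess.

        Repeating over different random partitions exposes the
        split-to-split variance the paper says must be accounted for.
        """
        scores = []
        for seed in range(n_repeats):
            inner = KFold(n_splits=5, shuffle=True, random_state=seed)
            outer = KFold(n_splits=5, shuffle=True, random_state=seed + 100)
            tuned = GridSearchCV(
                SVR(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=inner
            )
            scores.append(cross_val_score(tuned, X, y, cv=outer))
        return np.mean(scores), np.std(scores)
    ```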

  25. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…

  26. Splenectomy Causes 10-Fold Increased Risk of Portal Venous System Thrombosis in Liver Cirrhosis Patients

    PubMed Central

    Qi, Xingshun; Han, Guohong; Ye, Chun; Zhang, Yongguo; Dai, Junna; Peng, Ying; Deng, Han; Li, Jing; Hou, Feifei; Ning, Zheng; Zhao, Jiancheng; Zhang, Xintong; Wang, Ran; Guo, Xiaozhong

    2016-01-01

    Background: Portal venous system thrombosis (PVST) is a life-threatening complication of liver cirrhosis. We conducted a retrospective study to comprehensively analyze the prevalence and risk factors of PVST in liver cirrhosis. Material/Methods: All cirrhotic patients without malignancy admitted between June 2012 and December 2013 were eligible if they underwent contrast-enhanced CT or MRI scans. Independent predictors of PVST in liver cirrhosis were calculated in multivariate analyses. Subgroup analyses were performed according to the severity of PVST (any PVST, main portal vein [MPV] thrombosis >50%, and clinically significant PVST) and splenectomy. Odds ratios (ORs) and 95% confidence intervals (CIs) were reported. Results: Overall, 113 cirrhotic patients were enrolled. The prevalence of PVST was 16.8% (19/113). Splenectomy (any PVST: OR=11.494, 95%CI=2.152–61.395; MPV thrombosis >50%: OR=29.987, 95%CI=3.247–276.949; clinically significant PVST: OR=40.415, 95%CI=3.895–419.295) and higher hemoglobin (any PVST: OR=0.974, 95%CI=0.953–0.996; MPV thrombosis >50%: OR=0.936, 95%CI=0.895–0.980; clinically significant PVST: OR=0.935, 95%CI=0.891–0.982) were the independent predictors of PVST. The prevalence of PVST was 13.3% (14/105) after excluding splenectomy. Higher hemoglobin was the only independent predictor of MPV thrombosis >50% (OR=0.952, 95%CI=0.909–0.997). No independent predictors of any PVST or clinically significant PVST were identified in multivariate analyses. Additionally, PVST patients who underwent splenectomy had a significantly higher proportion of clinically significant PVST but a lower MELD score than those who did not undergo splenectomy. In all analyses, the in-hospital mortality was not significantly different between cirrhotic patients with and without PVST. Conclusions: Splenectomy may increase the risk of PVST in liver cirrhosis at least 10-fold, independent of the severity of liver dysfunction. PMID:27432511

  27. Cross-validation of resting metabolic rate prediction equations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Background: Knowledge of the resting metabolic rate (RMR) is necessary for determining individual total energy requirements. Measurement of RMR is time consuming and requires specialized equipment. Prediction equations provide an easy method to estimate RMR; however, the accuracy of these equations...

  28. Comparison of cross-validation and bootstrap aggregating for building a seasonal streamflow forecast model

    NASA Astrophysics Data System (ADS)

    Schick, Simon; Rössler, Ole; Weingartner, Rolf

    2016-10-01

    Based on a hindcast experiment for the period 1982-2013 in 66 sub-catchments of the Swiss Rhine, the present study compares two approaches of building a regression model for seasonal streamflow forecasting. The first approach selects a single "best guess" model, which is tested by leave-one-out cross-validation. The second approach implements the idea of bootstrap aggregating, where bootstrap replicates are employed to select several models, and out-of-bag predictions provide model testing. The target value is mean streamflow for durations of 30, 60 and 90 days, starting with the 1st and 16th day of every month. Compared to the best guess model, bootstrap aggregating reduces the mean squared error of the streamflow forecast by seven percent on average. Thus, if resampling is anyway part of the model building procedure, bootstrap aggregating seems to be a useful strategy in statistical seasonal streamflow forecasting. Since the improved accuracy comes at the cost of a less interpretable model, the approach might be best suited for pure prediction tasks, e.g. as in operational applications.
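
    A sketch of the two strategies being compared, with ordinary linear regression standing in for the forecast model (scikit-learn/numpy assumed; the actual study uses catchment predictors):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def best_guess_loocv_mse(X, y):
        """Approach 1: a single model, tested by leave-one-out CV."""
        scores = cross_val_score(
            LinearRegression(), X, y, cv=LeaveOneOut(), scoring="neg_mean_squared_error"
        )
        return -scores.mean()

    def bagged_oob_mse(X, y, n_boot=200, seed=0):
        """Approach 2: bootstrap aggregating, tested on out-of-bag cases."""
        rng = np.random.default_rng(seed)
        n = len(y)
        preds, counts = np.zeros(n), np.zeros(n)
        for _ in range(n_boot):
            bag = rng.integers(0, n, size=n)       # bootstrap replicate
            oob = np.setdiff1d(np.arange(n), bag)  # out-of-bag cases
            model = LinearRegression().fit(X[bag], y[bag])
            preds[oob] += model.predict(X[oob])
            counts[oob] += 1
        mask = counts > 0
        return np.mean((y[mask] - preds[mask] / counts[mask]) ** 2)
    ```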

  29. Cross-validation of a Shortened Battery for the Assessment of Dysexecutive Disorders in Alzheimer Disease.

    PubMed

    Godefroy, Olivier; Martinaud, Olivier; Verny, Marc; Mosca, Chrystèle; Lenoir, Hermine; Bretault, Eric; Devendeville, Agnès; Diouf, Momar; Pere, Jean-Jacques; Bakchine, Serge; Delabrousse-Mayoux, Jean-Philippe; Roussel, Martine

    2016-01-01

    The frequency of executive disorders in mild-to-moderate Alzheimer disease (AD) has been demonstrated by the application of a comprehensive battery. The present study analyzed data from 2 recent multicenter studies based on the same executive battery. The objective was to derive a shortened battery by using the GREFEX population as a training dataset and by cross-validating the results in the REFLEX population. A total of 102 AD patients from the GREFEX study (MMSE=23.2±2.9) and 72 patients from the REFLEX study (MMSE=20.8±3.5) were included. Tests were selected and receiver operating characteristic curves were generated relative to the performance of 780 controls from the GREFEX study. Stepwise logistic regression identified 3 cognitive tests (Six Elements Task, categorical fluency and Trail Making Test B error) and behavioral disorders globally referred to as global hypoactivity (all P=0.0001). This shortened battery was as accurate as the entire GREFEX battery in diagnosing dysexecutive disorders in both the training and the validation groups. A bootstrap procedure confirmed the stability of the AUC. A shortened battery based on 3 cognitive tests and 3 behavioral domains thus provides high diagnostic accuracy for executive disorders in mild-to-moderate AD.

  30. Development and cross-validation of prediction equations for estimating resting energy expenditure in severely obese Caucasian children and adolescents.

    PubMed

    Lazzer, Stefano; Agosti, Fiorenza; De Col, Alessandra; Sartorio, Alessandro

    2006-11-01

    The objectives of the present study were to develop and cross-validate new equations for predicting resting energy expenditure (REE) in severely obese children and adolescents, and to determine the accuracy of the new equations using the Bland-Altman method. The subjects of the study were 574 obese Caucasian children and adolescents (mean BMI z-score 3.3). REE was determined by indirect calorimetry and body composition by bioelectrical impedance analysis. Equations were derived by stepwise multiple regression analysis using a calibration cohort of 287 subjects, and the equations were cross-validated in the remaining 287 subjects. Two new specific equations based on anthropometric parameters were generated as follows: (1) REE = (Sex x 892.68) - (Age x 115.93) + (Weight x 54.96) + (Stature x 1816.23) + 1484.50 (R² 0.66; se 1028.97 kJ); (2) REE = (Sex x 909.12) - (Age x 107.48) + (fat-free mass x 68.39) + (fat mass x 55.19) + 3631.23 (R² 0.66; se 1034.28 kJ). In the cross-validation group, mean predicted REE values were not significantly different from mean measured REE for all children and adolescents, as well as for boys and for girls (difference <2%), and the limits of agreement (±2 sd) were +2.06 and -1.77 MJ/d (NS). The new prediction equations allow an accurate estimation of REE in groups of severely obese children and adolescents. These equations might be useful for health care professionals and researchers when estimating REE in severely obese children and adolescents. PMID:17092390
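
    The Bland-Altman accuracy check used above reduces to the mean bias and its ±2 sd limits of agreement; a minimal numpy sketch:

    ```python
    import numpy as np

    def bland_altman_limits(measured, predicted):
        """Mean bias and ±2 sd limits of agreement between two methods."""
        diff = np.asarray(predicted) - np.asarray(measured)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 2 * sd, bias + 2 * sd)

    # Limits like the +2.06 / -1.77 MJ/d reported in the abstract come from
    # bland_altman_limits(ree_measured, ree_predicted).
    ```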

  31. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J.; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm's burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
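
    A sketch of the evaluation pattern described: filter per-class outliers by z-score, then compare 10-fold CV accuracy with and without them (scikit-learn assumed; the classifier choice is illustrative, not specified by the paper):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def remove_outliers_per_class(X, y, z_thresh=3.0):
        """Drop samples whose features lie far from their class mean."""
        keep = np.ones(len(y), dtype=bool)
        for c in np.unique(y):
            idx = np.where(y == c)[0]
            z = np.abs((X[idx] - X[idx].mean(axis=0)) / (X[idx].std(axis=0) + 1e-12))
            keep[idx[(z > z_thresh).any(axis=1)]] = False
        return X[keep], y[keep]

    def cv_accuracy(X, y):
        return cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=10).mean()

    # acc_raw = cv_accuracy(X, y)
    # acc_clean = cv_accuracy(*remove_outliers_per_class(X, y))
    ```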

  32. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging.

    PubMed

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J; Lu, Yang; Sellke, Eric W; Fan, Wensheng; DiMaio, J Michael; Thatcher, Jeffrey E

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm's burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.

  33. Airborne environmental endotoxin: a cross-validation of sampling and analysis techniques.

    PubMed Central

    Walters, M; Milton, D; Larsson, L; Ford, T

    1994-01-01

    A standard method for measurement of airborne environmental endotoxin was developed and field tested in a fiberglass insulation-manufacturing facility. This method involved sampling with a capillary-pore membrane filter, extraction in buffer using a sonication bath, and analysis by the kinetic-Limulus assay with resistant-parallel-line estimation (KLARE). Cross-validation of the extraction and assay method was performed by comparison with methanolysis of samples followed by 3-hydroxy fatty acid (3-OHFA) analysis by gas chromatography-mass spectrometry. Direct methanolysis of filter samples and methanolysis of buffer extracts of the filters yielded similar 3-OHFA content (P = 0.72); the average difference was 2.1%. Analysis of buffer extracts for endotoxin content by the KLARE method and by gas chromatography-mass spectrometry for 3-OHFA content produced similar results (P = 0.23); the average difference was 0.88%. The source of endotoxin was gram-negative bacteria growing in recycled washwater used to clean the insulation-manufacturing equipment. The endotoxin and bacteria become airborne during spray cleaning operations. The types of 3-OHFAs in bacteria cultured from the washwater, present in the washwater and in the air, were similar. Virtually all of the bacteria cultured from air and water were gram negative, composed mostly of two species, Deleya aesta and Acinetobacter johnsonii. Airborne countable bacteria correlated well with endotoxin (r² = 0.64). Replicate sampling showed that results with the standard sampling, extraction, and Limulus assay by the KLARE method were highly reproducible (95% confidence interval for endotoxin measurement ±0.28 log10). These results demonstrate the accuracy, precision, and sensitivity of the standard procedure proposed for airborne environmental endotoxin. PMID:8161191

  34. Cross-validation of component models: a critical look at current methods.

    PubMed

    Bro, R; Kjeldahl, K; Smilde, A K; Kiers, H A L

    2008-03-01

    In regression, cross-validation is an effective and popular approach that is used to decide, for example, the number of underlying features, and to estimate the average prediction error. The basic principle of cross-validation is to leave out part of the data, build a model, and then predict the left-out samples. While such an approach can also be envisioned for component models such as principal component analysis (PCA), most current implementations do not comply with the essential requirement that the predictions should be independent of the entity being predicted. Further, these methods have not been properly reviewed in the literature. In this paper, we review the most commonly used generic PCA cross-validation schemes and assess how well they work in various scenarios.

  35. The Employability of Psychologists in Academic Settings: A Cross-Validation.

    ERIC Educational Resources Information Center

    Quereshi, M. Y.

    1983-01-01

    Analyzed the curriculum vitae (CV) of 117 applicants for the position of assistant professor of psychology to yield four cross-validated factors. Comparisons of the results with those of four years ago indicated considerable stability of the factors. Scholarly publications remain an important factor. (JAC)

  36. A Cross-Validation Study of Police Recruit Performance as Predicted by the IPI and MMPI.

    ERIC Educational Resources Information Center

    Shusman, Elizabeth J.; And Others

    Validation and cross-validation studies were conducted using the Minnesota Multiphasic Personality Inventory (MMPI) and Inwald Personality Inventory (IPI) to predict job performance for 698 urban male police officers who completed a six-month training academy. Job performance criteria evaluated included absence, lateness, derelictions, negative…

  37. Cross-Validating Chinese Language Mental Health Recovery Measures in Hong Kong

    ERIC Educational Resources Information Center

    Bola, John; Chan, Tiffany Hill Ching; Chen, Eric HY; Ng, Roger

    2016-01-01

    Objectives: Promoting recovery in mental health services is hampered by a shortage of reliable and valid measures, particularly in Hong Kong. We seek to cross validate two Chinese language measures of recovery and one of recovery-promoting environments. Method: A cross-sectional survey of people recovering from early episode psychosis (n = 121)…

  38. Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2009-01-01

    In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…

  39. A New Symptom Model for Autism Cross-Validated in an Independent Sample

    ERIC Educational Resources Information Center

    Boomsma, A.; Van Lang, N. D. J.; De Jonge, M. V.; De Bildt, A. A.; Van Engeland, H.; Minderaa, R. B.

    2008-01-01

    Background: Results from several studies indicated that a symptom model other than the DSM triad might better describe symptom domains of autism. The present study focused on a) investigating the stability of a new symptom model for autism by cross-validating it in an independent sample and b) examining the invariance of the model regarding three…

  40. Cross-Validation of FITNESSGRAM® Health-Related Fitness Standards in Hungarian Youth

    ERIC Educational Resources Information Center

    Laurson, Kelly R.; Saint-Maurice, Pedro F.; Karsai, István; Csányi, Tamás

    2015-01-01

    Purpose: The purpose of this study was to cross-validate FITNESSGRAM® aerobic and body composition standards in a representative sample of Hungarian youth. Method: A nationally representative sample (N = 405) of Hungarian adolescents from the Hungarian National Youth Fitness Study (ages 12-18.9 years) participated in an aerobic capacity assessment…

  41. Validity Evidence in Scale Development: The Application of Cross Validation and Classification-Sequencing Validation

    ERIC Educational Resources Information Center

    Acar, Tülin

    2014-01-01

    In the literature, it has been observed that many enhanced criteria are limited by factor analysis techniques. Besides examinations of statistical structure and/or psychological structure, validity studies such as cross validation and classification-sequencing studies should be performed frequently. The purpose of this study is to examine cross…

  42. Learning Disabilities Found in Association with French Immersion Programming: A Cross Validation.

    ERIC Educational Resources Information Center

    Trites, R. L.; Price, M. A.

    In the first study of this series, it was found that children who have difficulty in primary French immersion are distinct from children having a primary reading disability, minimal brain dysfunction, hyperactivity or primary emotional disturbance. The present study was undertaken in order to cross-validate the findings of the first study, to…

  43. Reliable Digit Span: A Systematic Review and Cross-Validation Study

    ERIC Educational Resources Information Center

    Schroeder, Ryan W.; Twumasi-Ankrah, Philip; Baade, Lyle E.; Marshall, Paul S.

    2012-01-01

    Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these…

  44. Validity and Cross-Validity of Metric and Nonmetric Multiple Regression.

    ERIC Educational Resources Information Center

    MacCallum, Robert C.; And Others

    1979-01-01

    Questions are raised concerning differences between traditional metric multiple regression, which assumes all variables to be measured on interval scales, and nonmetric multiple regression. The ordinal model is generally superior in fitting derivation samples but the metric technique fits better than the nonmetric in cross-validation samples.…

  45. Cross-Validation of easyCBM Reading Cut Scores in Oregon: 2009-2010. Technical Report #1108

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    This technical report presents results from a cross-validation study designed to identify optimal cut scores when using easyCBM[R] reading tests in Oregon. The cross-validation study analyzes data from the 2009-2010 academic year for easyCBM[R] reading measures. A sample of approximately 2,000 students per grade, randomly split into two groups of…

  46. Methodology Review: Estimation of Population Validity and Cross-Validity, and the Use of Equal Weights in Prediction.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.

    1997-01-01

    This review finds that formula-based procedures can be used in place of empirical validation for estimating population validity or in place of empirical cross-validation for estimating population cross-validity. Discusses conditions under which the equal weights procedure is a viable alternative. (SLD)

  47. Reliable Digit Span: a systematic review and cross-validation study.

    PubMed

    Schroeder, Ryan W; Twumasi-Ankrah, Philip; Baade, Lyle E; Marshall, Paul S

    2012-03-01

    Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these limitations, thus increasing the measure's clinical utility. Sensitivity and specificity rates were calculated for the ≤6 and ≤7 cutoffs when data were globally combined and divided by clinical groups. The cross-validation of specific diagnostic groups was consistent with the data reported in the literature. Overall, caution should be used when utilizing the ≤7 cutoff in all clinical groups and when utilizing the ≤6 cutoff in the following groups: cerebrovascular accident, severe memory disorders, mental retardation, borderline intellectual functioning, and English as a second language. Additional limitations and cautions are provided.
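
    Sensitivity and specificity at a score cutoff, as computed for the ≤6 and ≤7 rules above, take only a few lines; a numpy sketch (variable names are hypothetical, with 1 marking invalid performance):

    ```python
    import numpy as np

    def sens_spec_at_cutoff(scores, invalid, cutoff):
        """Sensitivity/specificity of the rule 'flag if score <= cutoff'."""
        scores = np.asarray(scores)
        invalid = np.asarray(invalid, dtype=bool)
        flagged = scores <= cutoff
        sensitivity = (flagged & invalid).sum() / invalid.sum()
        specificity = (~flagged & ~invalid).sum() / (~invalid).sum()
        return sensitivity, specificity

    # for cutoff in (6, 7):
    #     print(cutoff, sens_spec_at_cutoff(rds_scores, invalid_flags, cutoff))
    ```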

  48. Remote sensing and GIS-based landslide hazard analysis and cross-validation using multivariate logistic regression model on three test areas in Malaysia

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet

    2010-05-01

    This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis in the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map landcover and vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, landcover, rainfall precipitation, and normalized difference vegetation index (NDVI), were extracted from the spatial database and the logistic regression coefficient of each factor was computed. The landslide hazard was then analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also using the logistic regression coefficients calculated from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification, the results of the analyses were compared with the field-verified landslide locations. Among the three cases applying the logistic regression coefficients in the same study area, the case of Selangor based on the Selangor coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross-application of logistic regression coefficients to the other two areas, the case of Selangor based on the Cameron coefficients showed the highest prediction accuracy (90%), whereas the case of Penang based on the Selangor coefficients showed the lowest accuracy (79%). Qualitatively, the cross…
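
    The cross-application scheme amounts to fitting the regression in one area and scoring every area with it, giving the nine hazard-map combinations; a minimal sketch (scikit-learn; the per-area feature arrays are hypothetical):

    ```python
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def cross_area_validation(areas):
        """Fit on each area, evaluate on every area (including itself)."""
        results = {}
        for train_name, (X_tr, y_tr) in areas.items():
            model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
            for test_name, (X_te, y_te) in areas.items():
                results[(train_name, test_name)] = roc_auc_score(
                    y_te, model.predict_proba(X_te)[:, 1]
                )
        return results

    # areas = {"Penang": (Xp, yp), "Cameron": (Xc, yc), "Selangor": (Xs, ys)}
    # yields the 3 same-area and 6 cross-area figures discussed above
    ```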

  49. Burn injury diagnostic imaging device's accuracy improved by outlier detection and removal

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Lu, Yang; Squiers, John J.; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffery E.

    2015-05-01

    Multispectral imaging (MSI) was implemented to develop a burn diagnostic device that will assist burn surgeons in planning and performing burn debridement surgery by classifying burn tissue. In order to build a burn classification model, training data that accurately represents the burn tissue is needed. Acquiring accurate training data is difficult, in part because the labeling of raw MSI data to the appropriate tissue classes is prone to errors. We hypothesized that these difficulties could be surmounted by removing outliers from the training dataset, leading to an improvement in the classification accuracy. A swine burn model was developed to build an initial MSI training database and study an algorithm's ability to classify clinically important tissues present in a burn injury. Once the ground-truth database was generated from the swine images, we then developed a multi-stage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data from wavelength space, and test accuracy was improved from 63% to 76%. Establishing this simple method of conditioning for the training data improved the accuracy of the algorithm to match the current standard of care in burn injury assessment. Given that there are few burn surgeons and burn care facilities in the United States, this technology is expected to improve the standard of burn care for burn patients with less access to specialized facilities.

  50. The German Bight: Preparing for Sentinel-3 with a Cross Validation of SAR and PLRM CryoSat-2 Altimeter Data

    NASA Astrophysics Data System (ADS)

    Fenoglio-Marc, L.; Buchhaupt, C.; Dinardo, S.; Scharroo, R.; Benveniste, J.; Becker, M.

    2015-12-01

    As preparatory work for Sentinel-3, we retrieve three geophysical parameters from CryoSat-2 data in our validation region in the North Sea: sea surface height (SSH), significant wave height (SWH) and wind speed at 10 m height (U10). The CryoSat-2 SAR echoes are processed with coherent and incoherent processing to generate SAR and PLRM data, respectively. We derive precision and accuracy at 1 Hz in the open ocean, at distances larger than 10 kilometres from the coast. A cross-validation of the SAR and PLRM altimeter data is performed to investigate the differences between the products. Look-up tables (LUT) are applied in both schemes to correct for approximations made in the two retracking procedures. Additionally, a numerical retracker is used in PLRM. The results are validated against in-situ and model data. The analysis is performed for a period of four years, from July 2010 to May 2014. The regional cross-validation analysis confirms the good consistency between PLRM and SAR data. Using LUT, the agreement for sea wave heights increases by 10%.

  51. Cross-validation analysis of bias models in Bayesian multi-model projections of climate

    NASA Astrophysics Data System (ADS)

    Huttunen, J. M. J.; Räisänen, J.; Nissinen, A.; Lipponen, A.; Kolehmainen, V.

    2016-05-01

    Climate change projections are commonly based on multi-model ensembles of climate simulations. In this paper we consider the choice of bias models in Bayesian multimodel predictions. Buser et al. (Clim Res 44(2-3):227-241, 2010a) introduced a hybrid bias model which combines commonly used constant bias and constant relation bias assumptions. The hybrid model includes a weighting parameter which balances these bias models. In this study, we use a cross-validation approach to study which bias model or bias parameter leads to, in a specific sense, optimal climate change projections. The analysis is carried out for summer and winter season means of 2 m-temperatures spatially averaged over the IPCC SREX regions, using 19 model runs from the CMIP5 data set. The cross-validation approach is applied to calculate optimal bias parameters (in the specific sense) for projecting the temperature change from the control period (1961-2005) to the scenario period (2046-2090). The results are compared to the results of the Buser et al. (Clim Res 44(2-3):227-241, 2010a) method which includes the bias parameter as one of the unknown parameters to be estimated from the data.

  52. Variational cross-validation of slow dynamical modes in molecular kinetics

    PubMed Central

    Pande, Vijay S.

    2015-01-01

    Markov state models are a widely used method for approximating the eigenspectrum of the molecular dynamics propagator, yielding insight into the long-timescale statistical kinetics and slow dynamical modes of biomolecular systems. However, the lack of a unified theoretical framework for choosing between alternative models has hampered progress, especially for non-experts applying these methods to novel biological systems. Here, we consider cross-validation with a new objective function for estimators of these slow dynamical modes, a generalized matrix Rayleigh quotient (GMRQ), which measures the ability of a rank-m projection operator to capture the slow subspace of the system. It is shown that a variational theorem bounds the GMRQ from above by the sum of the first m eigenvalues of the system's propagator, but that this bound can be violated when the requisite matrix elements are estimated subject to statistical uncertainty. This overfitting can be detected and avoided through cross-validation. These results make it possible to construct Markov state models for protein dynamics in a way that appropriately captures the tradeoff between systematic and statistical errors. PMID:25833563

  53. Cross Validation for Selection of Cortical Interaction Models From Scalp EEG or MEG

    PubMed Central

    Cheung, Bing Leung Patrick; Nowak, Robert; Lee, Hyong Chol; van Drongelen, Wim; Van Veen, Barry D.

    2012-01-01

    A cross-validation (CV) method based on a state-space framework is introduced for comparing the fidelity of different cortical interaction models to the measured scalp electroencephalogram (EEG) or magnetoencephalography (MEG) data being modeled. A state equation models the cortical interaction dynamics and an observation equation represents the scalp measurement of cortical activity and noise. The measured data are partitioned into training and test sets. The training set is used to estimate model parameters and the model quality is evaluated by computing test data innovations for the estimated model. Two CV metrics, normalized mean square error and log-likelihood, are estimated by averaging over different training/test partitions of the data. The effectiveness of this method of model selection is illustrated by comparing two linear modeling methods and two nonlinear modeling methods on simulated EEG data derived using both known dynamic systems and measured electrocorticography data from an epilepsy patient. PMID:22084038

  54. Error criteria for cross validation in the context of chaotic time series prediction.

    PubMed

    Lim, Teck Por; Puthusserypady, Sadasivan

    2006-03-01

    The prediction of a chaotic time series over a long horizon is commonly done by iterating one-step-ahead prediction. Prediction can be implemented using machine learning methods, such as radial basis function networks. Typically, cross validation is used to select prediction models based on mean squared error. The bias-variance dilemma dictates that there is an inevitable tradeoff between bias and variance. However, invariants of chaotic systems are unchanged by linear transformations; thus, the bias component may be irrelevant to model selection in the context of chaotic time series prediction. Hence, the use of error variance for model selection, instead of mean squared error, is examined. Clipping is introduced, as a simple way to stabilize iterated predictions. It is shown that using the error variance for model selection, in combination with clipping, may result in better models.
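
    A sketch of the two devices proposed above, under simplifying assumptions (generic fit/predict callables rather than a radial basis function network; numpy only): select models by the variance of CV errors, and clip iterated predictions to the observed range.

    ```python
    import numpy as np

    def cv_error_variance(fit, predict, x, y, k=10, seed=0):
        """K-fold CV scored by error variance, ignoring the bias component."""
        folds = np.array_split(np.random.default_rng(seed).permutation(len(y)), k)
        errs = []
        for f in folds:
            train = np.setdiff1d(np.arange(len(y)), f)
            params = fit(x[train], y[train])
            errs.extend(y[f] - predict(params, x[f]))
        return np.var(errs)  # variance only, per the selection criterion above

    def iterate_with_clipping(step, x0, horizon, lo, hi):
        """Iterated one-step-ahead prediction, clipped to the training range."""
        xs = [x0]
        for _ in range(horizon):
            xs.append(np.clip(step(xs[-1]), lo, hi))  # clipping stabilizes iteration
        return np.array(xs[1:])
    ```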

  55. A cross-validation of two differing measures of hypnotic depth.

    PubMed

    Pekala, Ronald J; Maurer, Ronald L

    2013-01-01

    Several sets of regression analyses were completed, attempting to predict 2 measures of hypnotic depth: the self-reported hypnotic depth score and hypnoidal state score from variables of the Phenomenology of Consciousness Inventory: Hypnotic Assessment Procedure (PCI-HAP). When attempting to predict self-reported hypnotic depth, an R of .78 with Study 1 participants shrank to an r of .72 with Study 2 participants, suggesting mild shrinkage for this more attributional measure of hypnotic depth. Attempting to predict hypnoidal state (an estimate of trance) using the same procedure, yielded an R of .56, that upon cross-validation shrank to an r of .48. These and other results suggest that, although there is some variance in common, the self-reported hypnotic depth score appears to be tapping a different construct from the hypnoidal state score.

  16. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.

  17. Fit-for-purpose bioanalytical cross-validation for LC-MS/MS assays in clinical studies.

    PubMed

    Xu, Xiaohui; Ji, Qin C; Jemal, Mohammed; Gleason, Carol; Shen, Jim X; Stouffer, Bruce; Arnold, Mark E

    2013-01-01

    The paradigm shift of globalized research and conducting clinical studies at different geographic locations worldwide to access broader patient populations has resulted in an increased need to correlate bioanalytical results generated in multiple laboratories, often across national borders. Cross-validations of bioanalytical methods are often implemented to demonstrate the equivalency of the bioanalytical results. Regulatory agencies, such as the US FDA and European Medicines Agency, have included the requirement of cross-validations in their respective bioanalytical validation guidance and guidelines. While those documents provide high-level expectations, the detailed implementation is at the discretion of each individual organization. At Bristol-Myers Squibb, we practice a fit-for-purpose approach for conducting cross-validations for small-molecule bioanalytical methods using LC-MS/MS. A step-by-step proposal on the overall strategy, procedures and technical details for conducting a successful cross-validation is presented herein. A case study utilizing the proposed cross-validation approach to rule out method variability as the potential cause for high variance observed in PK studies is also presented. PMID:23256474

  18. Cross-validation of the factor structure of the Aberrant Behavior Checklist for persons with mental retardation.

    PubMed

    Bihm, E M; Poindexter, A R

    1991-09-01

    The original factor structure of the Aberrant Behavior Checklist was cross-validated with an American sample of 470 persons with moderate to profound mental retardation, including nonambulatory individuals. The results of the factor analysis with varimax rotation essentially replicated previous findings, suggesting that the original five factors (Irritability, Lethargy, Stereotypic Behavior, Hyperactivity, and Inappropriate Speech) could be cross-validated by factor loadings of individual items. The original five scales continue to show high internal consistency. These factors are easily interpretable and should continue to provide valuable research and clinical information.

  19. Cross-validation and hypothesis testing in neuroimaging: An irenic comment on the exchange between Friston and Lindquist et al.

    PubMed

    Reiss, Philip T

    2015-08-01

    The "ten ironic rules for statistical reviewers" presented by Friston (2012) prompted a rebuttal by Lindquist et al. (2013), which was followed by a rejoinder by Friston (2013). A key issue left unresolved in this discussion is the use of cross-validation to test the significance of predictive analyses. This note discusses the role that cross-validation-based and related hypothesis tests have come to play in modern data analyses, in neuroimaging and other fields. It is shown that such tests need not be suboptimal and can fill otherwise-unmet inferential needs.

  20. A leave-one-out cross-validation SAS macro for the identification of markers associated with survival.

    PubMed

    Rushing, Christel; Bulusu, Anuradha; Hurwitz, Herbert I; Nixon, Andrew B; Pang, Herbert

    2015-02-01

    A proper internal validation is necessary for the development of a reliable and reproducible prognostic model for external validation. Variable selection is an important step for building prognostic models. However, not many existing approaches couple the ability to specify the number of covariates in the model with a cross-validation algorithm. We describe a user-friendly SAS macro that implements a score selection method and a leave-one-out cross-validation approach. We discuss the method and applications behind this algorithm, as well as details of the SAS macro.
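
    The macro itself is SAS-specific, but the leave-one-out logic it implements can be sketched in Python; here a logistic model with per-fold marker selection stands in for the macro's Cox-based score selection, and the data are random placeholders:

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut
        from sklearn.pipeline import make_pipeline

        X = np.random.rand(40, 200)            # toy marker matrix (40 patients)
        y = np.random.randint(0, 2, 40)        # toy binary outcome
        hits = 0
        for tr, te in LeaveOneOut().split(X):
            # marker selection is redone inside each fold so the held-out
            # patient never influences which markers are chosen
            model = make_pipeline(SelectKBest(f_classif, k=5),
                                  LogisticRegression(max_iter=1000))
            model.fit(X[tr], y[tr])
            hits += int(model.predict(X[te])[0] == y[te][0])
        print("LOOCV accuracy:", hits / len(y))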

  1. Cross-validation of recent and longstanding resting metabolic rate prediction equations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Resting metabolic rate (RMR) measurement is time consuming and requires specialized equipment. Prediction equations provide an easy method to estimate RMR; however, their accuracy likely varies across individuals. Understanding the factors that influence predicted RMR accuracy at the individual lev...

  2. Cross-validation of the PAI Negative Distortion Scale for feigned mental disorders: a research report.

    PubMed

    Rogers, Richard; Gillard, Nathan D; Wooley, Chelsea N; Kelsey, Katherine R

    2013-02-01

    A major strength of the Personality Assessment Inventory (PAI) is its systematic assessment of response styles, including feigned mental disorders. Recently, Mogge, Lepage, Bell, and Ragatz developed and provided the initial validation for the Negative Distortion Scale (NDS). Using rare symptoms as its detection strategy for feigning, the usefulness of NDS was examined via a known-groups comparison. The current study sought to cross-validate the NDS by implementing a between-subjects simulation design. Simulators were asked to feign total disability in an effort to secure unwarranted compensation from their insurance company. Even in an inpatient sample with severe Axis I disorders and concomitant impairment, the NDS proved effective as a rare-symptom strategy with low levels of item endorsement that remained mostly stable across genders. For construct validity, the NDS was moderately correlated with the Structured Interview of Reported Symptoms-Second Edition and other PAI feigning scales. For discriminant validity, it yielded a very large effect size (d = 1.81), surpassing the standard PAI feigning indicators. Utility estimates appeared to be promising for both ruling-out (low probability of feigning) and ruling-in (high probability of feigning) determinations at different base rates. Like earlier research, the data supported the creation of well-defined groups with indeterminate scores (i.e., the cut score ± 1 SEM) removed to avoid high rates of misclassifications for this narrow band.

  3. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    PubMed

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), Bayesian information criterion (BIC) and extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
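
    MCP is not available in scikit-learn, so the sketch below uses an L1 penalty as a convex stand-in while keeping the paper's selection criterion: choose the tuning parameter that maximizes cross-validated AUC. The synthetic data are placeholders for the microarray sets:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GridSearchCV

        X, y = make_classification(n_samples=200, n_features=500,
                                   n_informative=10, random_state=0)
        # grid search over the penalty strength, scored by CV-AUC
        search = GridSearchCV(
            LogisticRegression(penalty="l1", solver="liblinear"),
            param_grid={"C": np.logspace(-2, 2, 9)},
            scoring="roc_auc", cv=5)
        search.fit(X, y)
        print(search.best_params_, search.best_score_)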

  4. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    SciTech Connect

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards; New, Joshua Ryan

    2013-01-01

    Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on computational and memory requirements for working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
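
    The paper's multi-level circulant approximation is beyond a short sketch, but the exact leave-one-out shortcut for kernel ridge regression (the O(n^3) baseline it accelerates) fits in a few lines; K is a precomputed kernel Gram matrix and the names are illustrative:

        import numpy as np

        def loocv_mse_krr(K, y, lam):
            # Exact leave-one-out residuals for kernel ridge regression via
            # the hat-matrix shortcut: e_i = (y_i - yhat_i) / (1 - H_ii),
            # with H = K (K + lam I)^(-1). Still O(n^3), unlike the paper's
            # O(n log n) circulant approximation.
            n = len(y)
            H = K @ np.linalg.inv(K + lam * np.eye(n))
            resid = (y - H @ y) / (1.0 - np.diag(H))
            return np.mean(resid ** 2)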

  5. Sound quality indicators for urban places in Paris cross-validated by Milan data.

    PubMed

    Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre

    2015-10-01

    A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross-validation of the quality models extracted from the Paris data was carried out by conducting the same survey in Milan. The proposed general sound quality model is correlated with the real perceived sound quality (72%). Another model without visual amenity and familiarity is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed with Kohonen's Artificial Neural Network algorithm, and seven class-specific models were developed. These specific models attribute more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments as assessed by Italian people.

  6. Enhancement of light propagation depth in skin: cross-validation of mathematical modeling methods.

    PubMed

    Kwon, Kiwoon; Son, Taeyoon; Lee, Kyoung-Joung; Jung, Byungjo

    2009-07-01

    Various techniques to enhance light propagation in skin have been studied in low-level laser therapy. In this study, three mathematical modeling methods for five selected techniques were implemented so that we could understand the mechanisms that enhance light propagation in skin. The five techniques included the increasing of the power and diameter of a laser beam, the application of a hyperosmotic chemical agent (HCA), and the whole and partial compression of the skin surface. The photon density profile of the five techniques was solved with three mathematical modeling methods: the finite element method (FEM), the Monte Carlo method (MCM), and the analytic solution method (ASM). We cross-validated the three mathematical modeling results by comparing photon density profiles and analyzing modeling error. The mathematical modeling results verified that the penetration depth of light can be enhanced if incident beam power and diameter, amount of HCA, or whole and partial skin compression is increased. In this study, light with wavelengths of 377 nm, 577 nm, and 633 nm was used.

  7. Automatic extraction of mutations from Medline and cross-validation with OMIM.

    PubMed

    Rebholz-Schuhmann, Dietrich; Marcel, Stephane; Albert, Sylvie; Tolle, Ralf; Casari, Georg; Kirsch, Harald

    2004-01-01

    Mutations help us to understand the molecular origins of diseases. Researchers, therefore, both publish and seek disease-relevant mutations in public databases and in scientific literature, e.g. Medline. The retrieval tends to be time-consuming and incomplete. Automated screening of the literature is more efficient. We developed extraction methods (called MEMA) that scan Medline abstracts for mutations. MEMA identified 24,351 singleton mutations in conjunction with a HUGO gene name out of 16,728 abstracts. From a sample of 100 abstracts we estimated the recall for the identification of mutation-gene pairs to be 35% at a precision of 93%. Recall for the mutation detection alone was >67% with a precision rate of >96%. This shows that our system produces reliable data. The subset consisting of protein sequence mutations (PSMs) from MEMA was compared to the entries in OMIM (20,503 entries versus 6699, respectively). We found 1826 PSM-gene pairs to be common to both datasets (cross-validated). This is 27% of all PSM-gene pairs in OMIM and 91% of those pairs from OMIM which co-occur in at least one Medline abstract. We conclude that Medline covers a large portion of the mutations known to OMIM. Another large portion could be artificially produced mutations from mutagenesis experiments. Access to the database of extracted mutation-gene pairs is available through the web pages of the EBI (refer to http://www.ebi.ac.uk/rebholz/index.html).

  8. Cross-Validation of the Factor Structure of the Aberrant Behavior Checklist for Persons with Mental Retardation.

    ERIC Educational Resources Information Center

    Bihm, Elson M.; Poindexter, Ann R.

    1991-01-01

    The original factor structure of the Aberrant Behavior Checklist was cross-validated with a U.S. sample of 470 persons with moderate to profound mental retardation (27 percent nonambulatory). Results replicated previous findings, suggesting that the original five factors (irritability, lethargy, stereotypic behavior, hyperactivity, and…

  9. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

  10. A Large-Scale Empirical Evaluation of Cross-Validation and External Test Set Validation in (Q)SAR.

    PubMed

    Gütlein, Martin; Helma, Christoph; Karwath, Andreas; Kramer, Stefan

    2013-06-01

    (Q)SAR model validation is essential to ensure the quality of inferred models and to indicate future model predictivity on unseen compounds. Proper validation is also one of the requirements of regulatory authorities in order to accept the (Q)SAR model, and to approve its use in real world scenarios as an alternative testing method. However, at the same time, the question of how to validate a (Q)SAR model, in particular whether to employ variants of cross-validation or external test set validation, is still under discussion. In this paper, we empirically compare k-fold cross-validation with external test set validation. To this end we introduce a workflow that realistically simulates the common problem setting of building predictive models for relatively small datasets. The workflow allows the built and validated models to be applied to large amounts of unseen data, and the performance of the different validation approaches to be compared. The experimental results indicate that cross-validation produces more performant (Q)SAR models than external test set validation and reduces the variance of the results, while at the same time underestimating the performance on unseen compounds. The experimental results reported in this paper suggest that, contrary to current conception in the community, cross-validation may play a significant role in evaluating the predictivity of (Q)SAR models.
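
    A toy analogue of the comparison in Python, with scikit-learn and synthetic data standing in for the paper's compound datasets: the same estimator is scored once by k-fold cross-validation and once on an external holdout split.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score, train_test_split

        X, y = make_classification(n_samples=300, n_features=50, random_state=1)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=1)

        clf = RandomForestClassifier(random_state=1)
        cv_est = cross_val_score(clf, X_tr, y_tr, cv=10).mean()  # all training data
        ext_est = clf.fit(X_tr, y_tr).score(X_te, y_te)          # one external split
        print(f"10-fold CV: {cv_est:.3f}  external test: {ext_est:.3f}")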

  11. Cross-Validational Studies of the Personality Correlates of the A-B Therapist "Type" Distinction Among Professionals and Nonprofessionals

    ERIC Educational Resources Information Center

    Berzins, Juris I.; And Others

    1972-01-01

    Research with the "A-B therapist type" variable has included many analogue studies in which A and B undergraduates have been assumed to be personologically similar to A and B professionals. The present study cross-validated the personality correlates of A-B status across five new samples. (Author)

  12. Short Forms of the Wechsler Memory Scale--Revised: Cross- Validation and Derivation of a Two-Subtest Form.

    ERIC Educational Resources Information Center

    van den Broek, Anneke; Golden, Charles J.; Loonstra, Ann; Ghinglia, Katheryne; Goldstein, Diane

    1998-01-01

    Indicated excellent cross-validations with correlation of 0.99 for past formulas (J. L. Woodard and B. N. Axelrod, 1995; B. N. Axelrod et al, 1996) for estimating the Wechsler Memory Scale- Revised General Memory and Delayed Recall Indexes. Over 85% of the estimated scores were within 10 points of actual scores. Age, education, diagnosis, and IQ…

  13. Cross-validation of a composite pain scale for preschool children within 24 hours of surgery.

    PubMed

    Suraseranivongse, S; Santawat, U; Kraiprasit, K; Petcharatana, S; Prakkamodom, S; Muntraporn, N

    2001-09-01

    This study was designed to cross-validate a composite measure of the pain scales CHEOPS (Children's Hospital of Eastern Ontario Pain Scale), OPS (Objective Pain Scale, simplified for parent use by replacing blood pressure measurement with observation of body language or posture), TPPPS (Toddler Preschool Postoperative Pain Scale) and FLACC (Face, Legs, Activity, Cry, Consolability) in 167 Thai children aged 1-5.5 yr. The pain scales were translated and tested for content, construct and concurrent validity, including inter-rater and intra-rater reliabilities. Discriminative validity in immediate and persistent pain for the age groups < or =3 and >3 yr were also studied. The children's behaviour was videotaped before and after surgery, before analgesia had been given in the post-anaesthesia care unit (PACU), and on the ward. Four observers then rated pain behaviour from rearranged videotapes. The decision to treat pain was based on routine practice and was made by a researcher unaware of the rating procedure. All tools had acceptable content validity and excellent inter-rater and intra-rater reliabilities (intraclass correlation >0.9 and >0.8 respectively). Construct validity was determined by the ability to differentiate the group with no pain before surgery and a high pain level after surgery, before analgesia (P<0.001). The positive correlations among all scales in the PACU and on the ward (r=0.621-0.827, P<0.0001) supported concurrent validity. Use of the kappa statistic indicated that CHEOPS yielded the best agreement with the routine decision to treat pain. The younger and older age groups both yielded very good agreement in the PACU but only moderate agreement on the ward. On the basis of data from this study, we recommend CHEOPS as a valid, reliable and practical tool. PMID:11517123

  14. The generalized cross-validation method applied to geophysical linear traveltime tomography

    NASA Astrophysics Data System (ADS)

    Bassrei, A.; Oliveira, N. P.

    2009-12-01

    The oil industry is the major user of applied geophysics methods for subsurface imaging. Among the different methods, the so-called seismic (or exploration seismology) methods are the most important. Tomography was originally developed for medical imaging and was introduced in exploration seismology in the 1980s. There are two main classes of geophysical tomography: those that use only the traveltimes between sources and receivers, which is a kinematic approach, and those that use the wave amplitude itself, which is a dynamic approach. Tomography is a kind of inverse problem, and since inverse problems are usually ill-posed, it is necessary to use some method to reduce their deficiencies. These difficulties of the inverse procedure are associated with the fact that the involved matrix is ill-conditioned. To compensate for this shortcoming, it is appropriate to use some technique of regularization. In this work we make use of regularization with derivative matrices, also called smoothing. There is a crucial problem in regularization, which is the selection of the regularization parameter lambda. We use generalized cross-validation (GCV) as a tool for the selection of lambda. GCV chooses the regularization parameter associated with the best average prediction for all possible omissions of one datum, corresponding to the minimizer of the GCV function. GCV is used for an application in traveltime tomography, where the objective is to obtain the 2-D velocity distribution from the measured values of the traveltimes between sources and receivers. We present results with synthetic data, using a geological model that simulates different features, like a fault and a reservoir. The results using GCV are very good, including those contaminated with noise, and also using different regularization orders, attesting the feasibility of this technique.
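
    A compact numpy sketch of the GCV function for Tikhonov-regularized inversion; here G would be the ray-path (tomography) matrix, d the observed traveltimes, and L a derivative (smoothing) matrix. All names are illustrative, and the dense linear algebra is only suitable for small problems:

        import numpy as np

        def gcv_curve(G, d, lams, L=None):
            # GCV(lam) = n ||(I - A(lam)) d||^2 / [tr(I - A(lam))]^2, with the
            # influence matrix A(lam) = G (G'G + lam^2 L'L)^(-1) G'.
            n, p = G.shape
            L = np.eye(p) if L is None else L
            scores = []
            for lam in lams:
                A = G @ np.linalg.solve(G.T @ G + lam**2 * (L.T @ L), G.T)
                r = d - A @ d
                scores.append(n * (r @ r) / np.trace(np.eye(n) - A) ** 2)
            return np.array(scores)   # the chosen lambda minimizes this curve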

  15. Accuracy estimation for supervised learning algorithms

    SciTech Connect

    Glover, C.W.; Oblow, E.M.; Rao, N.S.V.

    1997-04-01

    This paper illustrates the relative merits of three methods - k-fold cross-validation, error bounds, and the incremental halting test - for estimating the accuracy of a supervised learning algorithm. For each of the three methods we point out the problem it addresses and some of the important assumptions it is based on, and illustrate it through an example. Finally, we discuss the relative advantages and disadvantages of each method.

  16. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  17. Improving the prediction accuracy of residue solvent accessibility and real-value backbone torsion angles of proteins by guided-learning through a two-layer neural network.

    PubMed

    Faraggi, Eshel; Xue, Bin; Zhou, Yaoqi

    2009-03-01

    This article attempts to increase the prediction accuracy of residue solvent accessibility and real-value backbone torsion angles of proteins through improved learning. Most methods developed for improving the backpropagation algorithm of artificial neural networks are limited to small neural networks. Here, we introduce a guided-learning method suitable for networks of any size. The method employs a part of the weights for guiding and the other part for training and optimization. We demonstrate this technique by predicting residue solvent accessibility and real-value backbone torsion angles of proteins. In this application, the guiding factor is designed to satisfy the intuitive condition that for most residues, the contribution of a residue to the structural properties of another residue is smaller for greater separation in the protein-sequence distance between the two residues. We show that the guided-learning method yields a 2-4% reduction in 10-fold cross-validated mean absolute errors (MAE) for predicting residue solvent accessibility and backbone torsion angles, regardless of the size of the database, the number of hidden layers and the size of input windows. This, together with the introduction of a two-layer neural network with a bipolar activation function, leads to a new method that has a MAE of 0.11 for residue solvent accessibility, 36 degrees for psi, and 22 degrees for phi. The method is available as the Real-SPINE 3.0 server at http://sparks.informatics.iupui.edu.

  18. Long-term Cross-validation of Everolimus Therapeutic Drug Monitoring Assays: The Zortracker Study

    PubMed Central

    Schniedewind, B; Niederlechner, S; Galinkin, JL; Johnson-Davis, KL; Christians, U; Meyer, EJ

    2015-01-01

    Background This ongoing academic collaboration was initiated to provide support for setting up, validating, and maintaining everolimus therapeutic drug monitoring (TDM) assays and to study long-term inter-laboratory performance. Methods This study was based on EDTA whole blood samples collected from transplant patients treated with everolimus in a prospective clinical trial. Samples were handled under controlled conditions during collection and storage, and were shipped on dry ice to minimize freeze-thaw cycles. For more than 1.5 years, participating laboratories received a set of 3 blinded samples on a monthly basis. Among others, these samples included individual patient samples, patient sample pools to assess long-term performance, and patient sample pools enriched with isolated everolimus metabolites. Results The results between LC-MS/MS and the everolimus Quantitative Microsphere System (QMS, Thermo Fisher) assay were comparable. The monthly inter-laboratory variability (CV%) for cross-validation samples ranged from 6.5–23.2% (average of 14.8%) for LC-MS/MS and 4.2–26.4% (average of 11.1%) for laboratories using the QMS assay. A blinded long-term pool sample was sent to the laboratories for 13 months. The result was 5.31 ± 0.86 ng/mL (range 2.9–7.8 ng/mL) for the LC-MS/MS laboratories and 5.20 ± 0.54 ng/mL (range 4.0–6.8 ng/mL) for the QMS laboratories. Conclusions Enrichment of patient sample pools with 5–25 ng/mL of purified everolimus metabolites (46-hydroxy everolimus and 39-O-desmethyl everolimus) did not affect the results of either the LC-MS/MS or QMS assays. Both LC-MS/MS and QMS assays gave similar results and showed similar performance, albeit with a trend towards higher inter-laboratory variability among laboratories using LC-MS/MS than the QMS assay. PMID:25970506

  19. Calibration and Cross-Validation of the ActiGraph wGT3X+ Accelerometer for the Estimation of Physical Activity Intensity in Children with Intellectual Disabilities

    PubMed Central

    McGarty, Arlene M.; Penpraze, Victoria; Melville, Craig A.

    2016-01-01

    Background Valid objective measurement is integral to increasing our understanding of physical activity and sedentary behaviours. However, no population-specific cut points have been calibrated for children with intellectual disabilities. Therefore, this study aimed to calibrate and cross-validate the first population-specific accelerometer intensity cut points for children with intellectual disabilities. Methods Fifty children with intellectual disabilities were randomly assigned to the calibration (n = 36; boys = 28, 9.53±1.08yrs) or cross-validation (n = 14; boys = 9, 9.57±1.16yrs) group. Participants completed a semi-structured school-based activity session, which included various activities ranging from sedentary to vigorous intensity. Direct observation (SOFIT tool) was used to calibrate the ActiGraph wGT3X+, which participants wore on the right hip. Receiver Operating Characteristic curve analyses determined the optimal cut points for sedentary, moderate, and vigorous intensity activity for the vertical axis and vector magnitude. Classification agreement was investigated using sensitivity, specificity, total agreement, and Cohen’s kappa scores against the criterion measure of SOFIT. Results The optimal (AUC = .87−.94) vertical axis cut points (cpm) were ≤507 (sedentary), 1008−2300 (moderate), and ≥2301 (vigorous), which demonstrated high sensitivity (81−88%) and specificity (81−85%). The optimal (AUC = .86−.92) vector magnitude cut points (cpm) of ≤1863 (sedentary), 2610−4214 (moderate), and ≥4215 (vigorous) demonstrated comparable, albeit marginally lower, accuracy than the vertical axis cut points (sensitivity = 80−86%; specificity = 77−82%). Classification agreement ranged from moderate to almost perfect (κ = .51−.85) with high sensitivity and specificity, and confirmed the trend that accuracy increased with intensity, and vertical axis cut points provide higher classification agreement than vector magnitude cut points
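
    The core of the calibration step, one activity boundary at a time, can be sketched with scikit-learn's ROC utilities; counts holds accelerometer counts per minute and is_active the observation-derived criterion labels (both names are placeholders for the study's SOFIT data). This is a simplified single-threshold version of the sedentary/moderate/vigorous calibration:

        import numpy as np
        from sklearn.metrics import roc_curve

        def optimal_cut_point(counts, is_active):
            # Choose the counts-per-minute threshold maximizing the Youden
            # index (sensitivity + specificity - 1) against the criterion.
            fpr, tpr, thresholds = roc_curve(is_active, counts)
            return thresholds[np.argmax(tpr - fpr)]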

  20. Knowledge discovery by accuracy maximization.

    PubMed

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-04-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning.

  1. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821

  2. Cross-validation of a mass spectrometric-based method for the therapeutic drug monitoring of irinotecan: implementation of matrix-assisted laser desorption/ionization mass spectrometry in pharmacokinetic measurements.

    PubMed

    Calandra, Eleonora; Posocco, Bianca; Crotti, Sara; Marangon, Elena; Giodini, Luciana; Nitti, Donato; Toffoli, Giuseppe; Traldi, Pietro; Agostini, Marco

    2016-07-01

    Irinotecan is a widely used antineoplastic drug, mostly employed for the treatment of colorectal cancer. This drug is a feasible candidate for therapeutic drug monitoring due to the presence of wide inter-individual variability in its pharmacokinetic and pharmacodynamic parameters. In order to determine the drug concentration during the administration protocol, we developed a quantitative MALDI-MS method using CHCA as the MALDI matrix. Here, we demonstrate that MALDI-TOF can be applied in a routine setting for therapeutic drug monitoring in humans, offering quick and accurate results. To reach this aim, we cross-validated, according to FDA and EMA guidelines, the MALDI-TOF method against a standard LC-MS/MS method, applying it to the quantification of 108 patients' plasma samples from a clinical trial. Standard curves for irinotecan were linear (R² ≥ 0.9842) over the concentration range between 300 and 10,000 ng/mL and showed good back-calculated accuracy and precision. Intra- and inter-day precision and accuracy, determined on three quality control levels, were always <12.8% and between 90.1% and 106.9%, respectively. The cross-validation procedure showed good reproducibility between the two methods, with percentage differences within 20% for more than 70% of the clinical samples analysed. PMID:27235158

  3. Quantification of rainfall prediction uncertainties using a cross-validation based technique. Methodology description and experimental validation.

    NASA Astrophysics Data System (ADS)

    Fraga, Ignacio; Cea, Luis; Puertas, Jerónimo; Salsón, Santiago; Petazzi, Alberto

    2016-04-01

    In this paper we present a new methodology to compute rainfall fields, including the quantification of prediction uncertainties, using raingauge network data. The proposed methodology comprises two steps. First, the ordinary kriging technique is used to determine the estimated rainfall depth at every point of the study area. Then multiple equiprobable error fields, which comprise both interpolation and measuring uncertainties, are added to the kriged field, resulting in multiple rainfall predictions. To compute these error fields, the standard deviation of the kriging estimation is first determined following the cross-validation based procedure described in Delrieu et al. (2014). Then, the standard deviation field is sampled using non-conditioned Gaussian random fields. The proposed methodology was applied to study 7 rain events in a 60x60 km area of the west coast of Galicia, in the northwest of Spain. Due to its location at the junction between tropical and polar regions, the study area suffers from frequent intense rainfalls characterized by great variability in both space and time. Rainfall data from the tipping bucket raingauge network operated by MeteoGalicia were used to estimate the rainfall fields using the proposed methodology. The obtained predictions were then validated using rainfall data from 3 additional rain gauges installed within the CAPRI project (Probabilistic flood prediction with high resolution hydrologic models from radar rainfall estimates, funded by the Spanish Ministry of Economy and Competitiveness. Reference CGL2013-46245-R.). Results show that both the mean hyetographs and the peak intensities are correctly predicted. The computed hyetographs present a good fit to the experimental data and most of the measured values fall within the 95% confidence intervals. Also, most of the experimental values outside the confidence bounds correspond to time periods of low rainfall depths, where the inaccuracy of the measuring devices
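
    A simplified numpy sketch of the second step, assuming the kriged field and its cross-validation standard deviation field are already gridded; for brevity the noise here is spatially independent, whereas the paper samples spatially correlated (non-conditioned) Gaussian random fields:

        import numpy as np

        def equiprobable_fields(kriged, sigma, n_real, seed=None):
            # Add n_real Gaussian error fields, scaled cell-by-cell by the
            # cross-validation standard deviation field, to the kriged
            # estimate, yielding an ensemble of rainfall predictions.
            rng = np.random.default_rng(seed)
            noise = rng.standard_normal((n_real,) + kriged.shape)
            return kriged[None, ...] + sigma[None, ...] * noise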

  4. Cross-validation of species distribution models: removing spatial sorting bias and calibration with a null model.

    PubMed

    Hijmans, Robert J

    2012-03-01

    Species distribution models are usually evaluated with cross-validation. In this procedure evaluation statistics are computed from model predictions for sites of presence and absence that were not used to train (fit) the model. Using data for 226 species, from six regions, and two species distribution modeling algorithms (Bioclim and MaxEnt), I show that this procedure is highly sensitive to "spatial sorting bias": the difference between the geographic distance from testing-presence to training-presence sites and the geographic distance from testing-absence (or testing-background) to training-presence sites. I propose the use of pairwise distance sampling to remove this bias, and the use of a null model that only considers the geographic distance to training sites to calibrate cross-validation results for remaining bias. Model evaluation results (AUC) were strongly inflated: the null model performed better than MaxEnt for 45% and better than Bioclim for 67% of the species. Spatial sorting bias and area under the receiver-operator curve (AUC) values increased when using partitioned presence data and random-absence data instead of independently obtained presence-absence testing data from systematic surveys. Pairwise distance sampling removed spatial sorting bias, yielding null models with an AUC close to 0.5, such that AUC was the same as null model calibrated AUC (cAUC). This adjustment strongly decreased AUC values and changed the ranking among species. Cross-validation results for different species are only comparable after removal of spatial sorting bias and/or calibration with an appropriate null model.
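
    The geographic null model can be sketched as follows, with presence coordinates given as (x, y) arrays; the calibrated value is then cAUC = AUC(model) + 0.5 - AUC(null). Function and variable names are illustrative, not the paper's code:

        import numpy as np
        from scipy.spatial.distance import cdist
        from sklearn.metrics import roc_auc_score

        def null_model_auc(train_pres, test_pres, test_abs):
            # Score each test site by proximity to the nearest training
            # presence; an informative-looking AUC obtained from geography
            # alone signals spatial sorting bias in the evaluation data.
            d_pres = cdist(test_pres, train_pres).min(axis=1)
            d_abs = cdist(test_abs, train_pres).min(axis=1)
            y = np.r_[np.ones(len(d_pres)), np.zeros(len(d_abs))]
            return roc_auc_score(y, -np.r_[d_pres, d_abs])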

  5. HR-MAS NMR Tissue Metabolomic Signatures Cross-Validated by Mass Spectrometry Distinguish Bladder Cancer from Benign Disease

    PubMed Central

    Tripathi, Pratima; Somashekar, Bagganahalli S; Ponnusamy, M.; Gursky, Amy; Dailey, Stephen; Kunju, Priya; Lee, Cheryl T.; Chinnaiyan, Arul M.; Rajendiran, Thekkelnaycke M.; Ramamoorthy, Ayyalusamy

    2013-01-01

    Effective diagnosis and surveillance of bladder cancer (BCa) is currently challenged by detection methods of poor sensitivity, particularly for low-grade tumors, resulting in unnecessary invasive procedures and economic burden. We performed HR-MAS NMR-based global metabolomic profiling, applied unsupervised principal component analysis (PCA) and hierarchical clustering to the NMR dataset of bladder-derived tissues, and identified metabolic signatures that differentiate BCa from benign disease. A partial least-squares discriminant analysis (PLS-DA) model (leave-one-out cross-validation) was used as a diagnostic model to distinguish benign and BCa tissues. Receiver operating characteristic curves generated either from PC1 loadings of the PCA or from predicted Y-values resulted in an area under the curve of 0.97. Relative quantification of more than fifteen tissue metabolites derived from HR-MAS NMR showed significant differences (P < 0.001) between benign and BCa samples. Noticeably, striking metabolic signatures were observed even for early stage BCa tissues (Ta-T1), demonstrating the sensitivity in detecting BCa. With the goal of cross-validating the metabolic signatures derived from HR-MAS NMR, we utilized the same tissue samples to analyze eight metabolites through gas chromatography-mass spectrometry (GC-MS)-targeted analysis, which undoubtedly complements the HR-MAS NMR-derived metabolomic information. Cross-validation through GC-MS clearly demonstrates the utility of the straightforward, non-destructive and rapid HR-MAS NMR technique for clinical diagnosis of BCa with even greater sensitivity. In addition to its utility as a diagnostic tool, these studies will lead to a better understanding of aberrant metabolic pathways in cancer as well as the design and implementation of personalized cancer therapy through metabolic modulation. PMID:23731241

  6. Bayesian cross-validation for model evaluation and selection with application to the North American Breeding Bird Survey

    USGS Publications Warehouse

    Link, William; Sauer, John

    2016-01-01

    The analysis of ecological data has changed in two important ways over the last fifteen years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in the context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion (BPIC) and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion (WAIC). We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using 3 large data sets from the North American Breeding Bird Survey.
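
    Given pointwise log-likelihoods evaluated over posterior (MCMC) draws, the WAIC mentioned above reduces to a few lines of numpy. This is a sketch of the standard formula, not the authors' code; in practice the lppd term should use a log-sum-exp for numerical stability:

        import numpy as np

        def waic(loglik):
            # loglik: array of shape (n_draws, n_obs) holding the pointwise
            # log-likelihood of each observation under each posterior draw.
            lppd = np.sum(np.log(np.mean(np.exp(loglik), axis=0)))
            p_waic = np.sum(np.var(loglik, axis=0, ddof=1))  # effective params
            return -2 * (lppd - p_waic)   # deviance scale; lower is better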

  7. Cross-validation of the reduced form of the Food Craving Questionnaire-Trait using confirmatory factor analysis

    PubMed Central

    Iani, Luca; Barbaranelli, Claudio; Lombardo, Caterina

    2015-01-01

    Objective: The Food Craving Questionnaire-Trait (FCQ-T) is commonly used to assess habitual food cravings among individuals. Previous studies have shown that a brief version of this instrument (FCQ-T-r) has good reliability and validity. This article is the first to use confirmatory factor analysis to examine the psychometric properties of the FCQ-T-r in a cross-validation study. Method: Habitual food cravings, as well as emotion regulation strategies, affective states, and disordered eating behaviors, were investigated in two independent samples of non-clinical adult volunteers (Sample 1: N = 368; Sample 2: N = 246). Confirmatory factor analyses were conducted to simultaneously test model fit statistics and the dimensionality of the instrument. FCQ-T-r reliability was assessed by computing the composite reliability coefficient. Results: The analysis supported the unidimensional structure of the scale and fit indices were acceptable for both samples. The FCQ-T-r showed excellent reliability and moderate to high correlations with negative affect and disordered eating. Conclusion: Our results indicate that FCQ-T-r scores can be reliably used to assess habitual cravings in an Italian non-clinical sample of adults. The robustness of these results is tested by cross-validation of the model using two independent samples. Further research is required to expand on these findings, particularly in children and adolescents. PMID:25918510

  8. Bayesian cross-validation for model evaluation and selection, with application to the North American Breeding Bird Survey

    USGS Publications Warehouse

    Link, William; Sauer, John R.

    2016-01-01

    The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in the context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion (BPIC) and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion (WAIC). We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.

  9. Radioactive quality evaluation and cross validation of data from the HJ-1A/B satellites' CCD sensors.

    PubMed

    Zhang, Xin; Zhao, Xiang; Liu, Guodong; Kang, Qian; Wu, Donghai

    2013-01-01

    Data from multiple sensors are frequently used in Earth science to gain a more complete understanding of spatial information changes. Higher quality and mutual consistency are prerequisites when multiple sensors are jointly used. The HJ-1A/B satellites were successfully launched on 6 September 2008. There are four charge-coupled device (CCD) sensors with uniform spatial resolutions and spectral ranges onboard the HJ-1A/B satellites. Whether these data remain consistent is a major issue that must be addressed before they are used. This research aims to evaluate the data consistency and radioactive quality of the four CCDs. First, images of urban, desert, lake and ocean scenes are chosen as the objects of evaluation. Second, objective evaluation variables, such as the mean, variance and angular second moment, are used to characterize image performance. Finally, a cross-validation method is used to assess the correlation of the data from the four HJ-1A/B CCDs with that gathered from the moderate resolution imaging spectroradiometer (MODIS). The results show that the image quality of the HJ-1A/B CCDs is stable, and the digital number distribution of the CCD data is relatively low. In the cross-validation with MODIS, the root mean square errors of bands 1, 2 and 3 range from 0.055 to 0.065, and for band 4 it is 0.101. The data from the HJ-1A/B CCDs show good mutual consistency.

  10. Specific binding of gibberellic acid by cytokinin-specific binding proteins: a new aspect of plant hormone-binding proteins with the PR-10 fold.

    PubMed

    Ruszkowski, Milosz; Sliwiak, Joanna; Ciesielska, Agnieszka; Barciszewski, Jakub; Sikorski, Michal; Jaskolski, Mariusz

    2014-07-01

    Pathogenesis-related proteins of class 10 (PR-10) are a family of plant proteins with the same fold, characterized by a large hydrophobic cavity that allows them to bind various ligands, such as phytohormones. A subfamily with only ~20% sequence identity but with a conserved canonical PR-10 fold has previously been recognized as Cytokinin-Specific Binding Proteins (CSBPs), although structurally the binding mode of trans-zeatin (a cytokinin phytohormone) was found to be quite diversified. Here, it is shown that two CSBP orthologues from Medicago truncatula and Vigna radiata bind gibberellic acid (GA3), which is an entirely different phytohormone, in a conserved and highly specific manner. In both cases a single GA3 molecule is found in the internal cavity of the protein. The structural data derived from high-resolution crystal structures are corroborated by isothermal titration calorimetry (ITC), which reveals a much stronger interaction with GA3 than with trans-zeatin and a pH dependence of the binding profile. In conclusion, it is postulated that the CSBP subfamily of plant PR-10 proteins should be more properly linked with general phytohormone-binding properties and termed phytohormone-binding proteins (PhBP).

  11. The copper-mobilizing-potential of dissolved organic matter in soils varies 10-fold depending on soil incubation and extraction procedures.

    PubMed

    Amery, Fien; Degryse, Fien; Degeling, Wim; Smolders, Erik; Merckx, Roel

    2007-04-01

    Copper is mobilized in soil by dissolved organic matter (DOM), but the role of DOM quality in this process is unclear. A one-step resin-exchange method was developed to measure the Cu-Mobilizing-Potential (CuMP) of DOM at pCu 11.3 and pH 7.0, representing background values. The CuMP of DOM was measured in soil solutions of 13 uncontaminated soils with different DOM extraction methods. The CuMP, expressed per unit dissolved organic carbon (DOC), varied 10-fold and followed the order water extracts > 0.01 M CaCl2 extracts > pore water. Soil solutions, obtained from soils that were stored air-dry for a long time or were subjected to drying-wetting cycles, had elevated DOC concentration, but the DOM had a low CuMP. Prolonged soil incubations decreased the DOC concentration and increased the CuMP, suggesting that most of the initially elevated DOM is less humified and has lower Cu affinity than DOM remaining after incubation. A significant positive correlation between the specific UV-absorption of DOM (indicating aromaticity) and CuMP was found for all DOM samples (R² = 0.58). It is concluded that the DOC concentration in soil is an insufficient predictor of Cu mobilization and that DOM samples isolated from air-dried soils are distinct from those of soils kept moist. PMID:17438775

  12. Simulating California Reservoir Operation Using the Classification and Regression Tree Algorithm Combined with a Shuffled Cross-Validation Scheme

    NASA Astrophysics Data System (ADS)

    Yang, T.; Gao, X.; Sorooshian, S.; Li, X.

    2015-12-01

    The controlled outflows from a reservoir or dam are highly dependent on the decisions made by the reservoir operators, rather than on a natural hydrological process. Differences exist between the natural upstream inflows to reservoirs and the controlled outflows from reservoirs that supply the downstream users. With decision makers' awareness of changing climate, reservoir management requires adaptable means of incorporating more information into decision making, such as the consideration of policy and regulation, environmental constraints, dry/wet conditions, etc. In this paper, a reservoir outflow simulation model is presented, which incorporates a well-developed data-mining model (Classification and Regression Tree, CART) to predict the complicated human-controlled reservoir outflows and extract the reservoir operation patterns. A shuffled cross-validation approach is further implemented to improve the model's predictive performance. An application study of 9 major reservoirs in California is carried out, and the simulated results from different decision-tree approaches, including the original CART and random forest, are compared with observations. The statistical measurements show that CART combined with the shuffled cross-validation scheme gives a better predictive performance than the other two methods, especially in simulating the peak flows. The results for simulated controlled outflow, storage changes and storage trajectories also show that the proposed model is able to consistently and reasonably predict the operators' reservoir operation decisions. In addition, we found that the operations of Trinity Lake, Oroville Lake and Shasta Lake are greatly influenced by policy and regulation, while low-elevation reservoirs are more sensitive to inflow amount than others.
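
    The modeling core, CART under a shuffled k-fold scheme, can be sketched with scikit-learn; the predictors and outflow series below are random placeholders for the reservoir records used in the study. Shuffling before splitting mixes wet and dry periods across folds instead of partitioning the record chronologically:

        import numpy as np
        from sklearn.model_selection import KFold, cross_val_score
        from sklearn.tree import DecisionTreeRegressor

        X = np.random.rand(500, 4)   # stand-ins for inflow, storage, month, ...
        y = np.random.rand(500)      # stand-in controlled outflow
        cart = DecisionTreeRegressor(max_depth=6, random_state=0)
        cv = KFold(n_splits=10, shuffle=True, random_state=0)
        print(cross_val_score(cart, X, y, cv=cv, scoring="r2").mean())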

  13. Cross-validation of spaceborne radar and ground polarimetric radar observations

    NASA Astrophysics Data System (ADS)

    Bolen, Steven Matthew

    There is great potential for spaceborne weather radar to make significant observations of the precipitating medium on global scales. The Tropical Rainfall Measuring Mission (TRMM) is the first mission dedicated to measuring rainfall in the tropics from space using radar. The Precipitation Radar (PR) is one of several instruments aboard the TRMM satellite, which operates in a nearly circular orbit at 350 km altitude and 35-degree inclination. The PR is a single-frequency Ku-band instrument designed to yield information about the vertical storm structure so as to gain insight into the intensity and distribution of rainfall. Attenuation effects on PR measurements, however, can be significant, as high as 10-15 dB. This can seriously impair the accuracy of rain-rate retrieval algorithms derived from PR returns. Direct inter-comparison of meteorological measurements between space and ground radar observations can be used to evaluate spaceborne processing algorithms. Though conceptually straightforward, this can be a challenging task. Differences in viewing aspects between space and earth point observations, propagation frequencies, resolution volume size and time synchronization mismatch between measurements can contribute to direct point-by-point inter-comparison errors. The problem is further complicated by spatial geometric distortions induced into the space-based observations by the movements and attitude perturbations of the spacecraft itself. A method is developed to align space and ground radar observations so that a point-by-point inter-comparison of measurements can be made. Ground-based polarimetric observations are used to estimate the attenuation of PR signal returns along individual PR beams, and a technique is formulated to determine the true PR return from GR measurements via theoretical modeling of specific attenuation (k) at the PR wavelength with ground-based S-band radar observations. The statistical behavior of the parameters

  14. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross-validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
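
    The evaluation protocol, 10-fold cross-validated accuracy for several of the tested classifier families, can be sketched with scikit-learn; the spectra and tissue labels below are random placeholders for the histopathology-labeled MSI database:

        import numpy as np
        from sklearn.discriminant_analysis import (
            LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.tree import DecisionTreeClassifier

        X = np.random.rand(600, 8)         # stand-in for MSI pixel spectra
        y = np.random.randint(0, 6, 600)   # six tissue classes
        for name, clf in [("KNN", KNeighborsClassifier()),
                          ("DT", DecisionTreeClassifier()),
                          ("LDA", LinearDiscriminantAnalysis()),
                          ("QDA", QuadraticDiscriminantAnalysis())]:
            acc = cross_val_score(clf, X, y, cv=10).mean()
            print(f"{name}: {acc:.3f}")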

  15. How to avoid mismodelling in GLM-based fMRI data analysis: cross-validated Bayesian model selection.

    PubMed

    Soch, Joram; Haynes, John-Dylan; Allefeld, Carsten

    2016-11-01

    Voxel-wise general linear models (GLMs) are a standard approach for analyzing functional magnetic resonance imaging (fMRI) data. An advantage of GLMs is that they are flexible and can be adapted to the requirements of many different data sets. However, the specification of first-level GLMs leaves the researcher with many degrees of freedom, which is problematic given recent efforts to ensure robust and reproducible fMRI data analysis. Formal model comparisons that allow a systematic assessment of GLMs are only rarely performed. On the one hand, overly simple models may underfit the data and leave real effects undiscovered. On the other hand, overly complex models may overfit the data and also reduce statistical power. Here we present a systematic approach, termed cross-validated Bayesian model selection (cvBMS), that allows one to decide which GLM best describes a given fMRI data set. Importantly, our approach allows for non-nested model comparison, i.e. comparing more than two models that do not differ merely by the addition of one or more regressors. It also allows for spatially heterogeneous modelling, i.e. using different models for different parts of the brain. We validate our method using simulated data and demonstrate potential applications to empirical data. The increased use of model comparison and model selection should increase the reliability of GLM results and the reproducibility of fMRI studies.
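
    A minimal sketch of the cross-validation idea for a single voxel, assuming leave-one-run-out folds and ordinary least squares. The published cvBMS method computes a cross-validated Bayesian log model evidence voxel-wise; the plain out-of-sample log-likelihood below only approximates that idea in spirit, and the design matrices are hypothetical.

        import numpy as np

        def cv_log_likelihood(y_runs, X_runs):
            """Leave-one-run-out predictive log-likelihood of one GLM."""
            total = 0.0
            for k in range(len(y_runs)):
                X_tr = np.vstack([X for i, X in enumerate(X_runs) if i != k])
                y_tr = np.concatenate([y for i, y in enumerate(y_runs) if i != k])
                beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
                sigma2 = np.mean((y_tr - X_tr @ beta) ** 2)
                y_hat = X_runs[k] @ beta
                total += -0.5 * np.sum((y_runs[k] - y_hat) ** 2 / sigma2
                                       + np.log(2 * np.pi * sigma2))
            return total

        # Hypothetical two-regressor design over four runs; the GLM with the
        # higher held-out log-likelihood would be selected for this voxel.
        rng = np.random.default_rng(0)
        X_runs = [np.column_stack([np.ones(50), rng.random(50)]) for _ in range(4)]
        y_runs = [X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, 50) for X in X_runs]
        print(cv_log_likelihood(y_runs, X_runs))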

  16. Simulating California reservoir operation using the classification and regression-tree algorithm combined with a shuffled cross-validation scheme

    NASA Astrophysics Data System (ADS)

    Yang, Tiantian; Gao, Xiaogang; Sorooshian, Soroosh; Li, Xin

    2016-03-01

    The controlled outflows from a reservoir or dam depend strongly on the decisions made by the reservoir operators rather than on a natural hydrological process. Differences exist between the natural upstream inflows to reservoirs and the controlled outflows from reservoirs that supply downstream users. With decision makers' awareness of a changing climate, reservoir management requires adaptable means of incorporating more information into decision making, such as water delivery requirements, environmental constraints, and dry/wet conditions. In this paper, a robust reservoir outflow simulation model is presented, which incorporates one of the well-developed data-mining models, the Classification and Regression Tree (CART), to predict the complicated human-controlled reservoir outflows and extract the reservoir operation patterns. A shuffled cross-validation approach is further implemented to improve CART's predictive performance. An application study of nine major reservoirs in California is carried out. Results produced by the enhanced CART, the original CART, and random forest are compared with observations. The statistical measures show that the enhanced CART and random forest outperform the CART control run in general, and the enhanced CART algorithm gives a better predictive performance than random forest in simulating the peak flows. The results also show that the proposed model is able to consistently and reasonably predict the expert release decisions. Experiments indicate that the release operation at Lake Oroville is strongly dominated by the State Water Project (SWP) allocation amount and that reservoirs at low elevation are more sensitive to inflow amount than others.
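
    A minimal sketch of the shuffled scheme, with scikit-learn's DecisionTreeRegressor standing in for CART; the predictors below (inflow, storage, month) are hypothetical placeholders for the study's operational inputs.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import KFold, cross_val_score
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        X = rng.random((1000, 3))             # e.g. inflow, storage, month index
        y = X @ np.array([0.6, 0.3, 0.1]) + 0.05 * rng.standard_normal(1000)

        # Shuffling breaks the serial ordering of the historical record before
        # folding, so each training fold samples wet and dry periods alike.
        shuffled = KFold(n_splits=10, shuffle=True, random_state=0)
        for name, model in [("CART", DecisionTreeRegressor(max_depth=6)),
                            ("random forest", RandomForestRegressor(n_estimators=200))]:
            r2 = cross_val_score(model, X, y, cv=shuffled, scoring="r2")
            print(f"{name}: mean CV r2 = {r2.mean():.3f}")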

  17. Cross-Validation of a Recently Published Equation Predicting Energy Expenditure to Run or Walk a Mile in Normal-Weight and Overweight Adults

    ERIC Educational Resources Information Center

    Morris, Cody E.; Owens, Scott G.; Waddell, Dwight E.; Bass, Martha A.; Bentley, John P.; Loftin, Mark

    2014-01-01

    An equation published by Loftin, Waddell, Robinson, and Owens (2010) was cross-validated using ten normal-weight walkers, ten overweight walkers, and ten distance runners. Energy expenditure was measured at preferred walking pace (normal-weight and overweight walkers) or preferred running pace (distance runners) for 5 min and corrected to a mile. Energy…

  18. Cross-validation of generalised body composition equations with diverse young men and women: the Training Intervention and Genetics of Exercise Response (TIGER) Study

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Generalised skinfold equations developed in the 1970s are commonly used to estimate laboratory-measured percentage fat (BF%). The equations were developed on predominately white individuals using Siri's two-component percentage fat equation (BF%-GEN). We cross-validated the Jackson-Pollock (JP) gene...

  19. A Test and Cross-Validation of the Revised Two-Factor Study Process Questionnaire Factor Structure among Western University Students

    ERIC Educational Resources Information Center

    Immekus, Jason C.; Imbrie, P. K.

    2010-01-01

    The Revised Two-Factor Study Process Questionnaire (R-SPQ-2F) is a measure of university students' approach to learning. Original evaluation of the scale's psychometric properties was based on a sample of Hong Kong university students' scores. The purpose of this study was to test and cross-validate the R-SPQ-2F factor structure, based on separate…

  20. Methane Cross-Validation Between Spaceborne Solar Occultation Observations from ACE-FTS, Spaceborne Nadir Sounding from GOSAT, and Ground-Based Solar Absorption Measurements at a High Arctic Site.

    NASA Astrophysics Data System (ADS)

    Holl, G.; Walker, K. A.; Conway, S. A.; Saitoh, N.; Boone, C. D.; Strong, K.; Drummond, J. R.

    2014-12-01

    We present cross-validation of remote sensing observations of methane profiles in the Canadian High Arctic. Methane is the third most important greenhouse gas on Earth, and second only to carbon dioxide in its contribution to anthropogenic global warming. Accurate and precise observations of methane are essential to understand quantitatively its role in the climate system and in global change. The Arctic is a particular region of concern, as melting permafrost and disappearing sea ice might lead to accelerated release of methane into the atmosphere. Global observations require spaceborne instruments, in particular in the Arctic, where surface measurements are sparse and expensive to perform. Satellite-based remote sensing retrieval is an underconstrained problem, and specific validation under Arctic conditions is required. Here, we show a cross-validation between two spaceborne instruments and ground-based measurements, all Fourier Transform Spectrometers (FTS). We consider the Canadian SCISAT ACE-FTS, a solar occultation spectrometer operating since 2004, and the Japanese GOSAT TANSO-FTS, a nadir-pointing FTS operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker Fourier Transform Infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) at Eureka, Nunavut (80°N, 86°W) since 2006. Measurements are collocated considering temporal, spatial, and geophysical criteria and regridded to a common vertical grid. We perform smoothing on the higher-resolution instrument results to account for different vertical resolutions. Then, profiles of differences for each pair of instruments are examined. Any bias between instruments, or any accuracy that is worse than expected, needs to be understood prior to using the data. The results of the study will serve as a guideline on how to use the vertically resolved methane products from ACE and

  1. Predicting Chinese Children and Youth's Energy Expenditure Using ActiGraph Accelerometers: A Calibration and Cross-Validation Study

    ERIC Educational Resources Information Center

    Zhu, Zheng; Chen, Peijie; Zhuang, Jie

    2013-01-01

    Purpose: The purpose of this study was to develop and cross-validate an equation based on ActiGraph accelerometer GT3X output to predict children and youth's energy expenditure (EE) of physical activity (PA). Method: Participants were 367 Chinese children and youth (179 boys and 188 girls, aged 9 to 17 years old) who wore 1 ActiGraph GT3X…

  2. The effects of relatedness and GxE interaction on prediction accuracies in genomic selection: a study in cassava

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Prior to implementation of genomic selection, an evaluation of the potential accuracy of prediction can be obtained by cross validation. In this procedure, a population with both phenotypes and genotypes is split into training and validation sets. The prediction model is fitted using the training se...

  3. Cross validation of geotechnical and geophysical site characterization methods: near surface data from selected accelerometric stations in Crete (Greece)

    NASA Astrophysics Data System (ADS)

    Loupasakis, C.; Tsangaratos, P.; Rozos, D.; Rondoyianni, Th.; Vafidis, A.; Kritikakis, G.; Steiakakis, M.; Agioutantis, Z.; Savvaidis, A.; Soupios, P.; Papadopoulos, I.; Papadopoulos, N.; Sarris, A.; Mangriotis, M.-D.; Dikmen, U.

    2015-06-01

    The specification of near-surface ground conditions is highly important for the design of civil constructions. These conditions primarily determine the ability of the foundation formations to bear loads, the stress-strain relations and corresponding settlements, as well as the soil amplification and corresponding peak ground motion in the case of dynamic loading. The static and dynamic geotechnical parameters, as well as the ground type/soil category, can be determined by combining geotechnical and geophysical methods, such as engineering geological surface mapping, geotechnical drilling, in situ and laboratory testing, and geophysical investigations. The above-mentioned methods were combined, through the Thalis "Geo-Characterization" project, for site characterization at selected sites of the Hellenic Accelerometric Network (HAN) on the island of Crete. The combination of geotechnical and geophysical methods at thirteen (13) sites provided sufficient information about their limitations, setting minimum test requirements in relation to the type of geological formation. The reduced accuracy of surface mapping in urban sites, the uncertainties introduced by geophysical surveys at sites with complex geology, and the strictly 1D data provided by geotechnical boreholes are some of the factors affecting the appropriate sequence and quantity of the investigation methods. This study presents the gradual improvement in the accuracy of site characterization data through characteristic examples from the thirteen sites. The selected examples adequately demonstrate the capabilities, limitations, and appropriate ordering of the investigation methods.

  4. A fast cross-validation method for alignment of electron tomography images based on Beer-Lambert law.

    PubMed

    Yan, Rui; Edwards, Thomas J; Pankratz, Logan M; Kuhn, Richard J; Lanman, Jason K; Liu, Jun; Jiang, Wen

    2015-11-01

    In electron tomography, accurate alignment of tilt series is an essential step in attaining high-resolution 3D reconstructions. Nevertheless, quantitative assessment of alignment quality has remained a challenging issue, even though many alignment methods have been reported. Here, we report a fast and accurate method, tomoAlignEval, based on the Beer-Lambert law, for the evaluation of alignment quality. Our method is able to globally estimate the alignment accuracy by measuring the goodness of log-linear relationship of the beam intensity attenuations at different tilt angles. Extensive tests with experimental data demonstrated its robust performance with stained and cryo samples. Our method is not only significantly faster but also more sensitive than measurements of tomogram resolution using Fourier shell correlation method (FSCe/o). From these tests, we also conclude that while current alignment methods are sufficiently accurate for stained samples, inaccurate alignments remain a major limitation for high resolution cryo-electron tomography. PMID:26455556
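
    A schematic reading of the idea in code, not the tomoAlignEval implementation: under the Beer-Lambert law, the mean beam intensity should fall off exponentially with the thickness traversed, i.e. with 1/cos(tilt), so the goodness of a log-linear fit can score alignment quality. All numbers below are synthetic.

        import numpy as np

        tilts_deg = np.linspace(-60, 60, 41)
        rng = np.random.default_rng(0)
        # Simulated mean intensities for a well-aligned series, with noise.
        intensity = 900 * np.exp(-0.8 / np.cos(np.radians(tilts_deg)))
        intensity *= rng.normal(1.0, 0.01, tilts_deg.size)

        x = 1.0 / np.cos(np.radians(tilts_deg))   # relative path length
        y = np.log(intensity)
        slope, intercept = np.polyfit(x, y, 1)
        r = np.corrcoef(x, y)[0, 1]
        print(f"goodness of log-linear fit: r^2 = {r**2:.4f}")  # near 1 if aligned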

  6. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes ('building', 'hedge and bush', 'grass', 'road and parking lot', 'tree', 'wall and car port') had to be derived. Two classification methods were applied ('Decision Tree' and 'Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index), both of which are derived from the images. The assessment of thematic accuracy applied a stratified sampling design and was based on accuracy measures such as user's and producer's accuracy and the kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed, and recommendations are derived from the experience gained. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross-validation was tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally well for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
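
    The accuracy measures named above follow directly from a confusion matrix; a minimal sketch with an illustrative 3-class matrix (not the paper's data).

        import numpy as np

        cm = np.array([[50,  2,  3],      # rows: reference class
                       [ 4, 40,  6],      # columns: mapped class
                       [ 1,  5, 39]])

        total = cm.sum()
        overall = np.trace(cm) / total
        producers = np.diag(cm) / cm.sum(axis=1)   # omission side, per reference class
        users = np.diag(cm) / cm.sum(axis=0)       # commission side, per mapped class
        expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
        kappa = (overall - expected) / (1 - expected)
        print(round(overall, 3), round(kappa, 3))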

  7. The predictive accuracy of intertemporal-choice models.

    PubMed

    Arfer, Kodi B; Luhmann, Christian C

    2015-05-01

    How do people choose between a smaller reward available sooner and a larger reward available later? Past research has evaluated models of intertemporal choice by measuring goodness of fit or identifying which decision-making anomalies they can accommodate. An alternative criterion for model quality, which is partly antithetical to these standard criteria, is predictive accuracy. We used cross-validation to examine how well 10 models of intertemporal choice could predict behaviour in a 100-trial binary-decision task. Many models achieved the apparent ceiling of 85% accuracy, even with smaller training sets. When noise was added to the training set, however, a simple logistic-regression model we call the difference model performed particularly well. In many situations, between-model differences in predictive accuracy may be small, contrary to long-standing controversy over the modelling question in research on intertemporal choice, but the simplicity and robustness of the difference model recommend it to future use.
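
    A minimal sketch of a "difference model" of this kind: logistic regression on reward and delay differences, scored by cross-validated accuracy. The exact predictors used by the authors are not reproduced; the synthetic chooser and all parameters below are illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n = 100                                # one subject's 100 binary trials
        amount_diff = rng.uniform(0, 50, n)    # larger-later minus smaller-sooner
        delay_diff = rng.uniform(1, 180, n)    # delay gap in days
        X = np.column_stack([amount_diff, delay_diff])

        # Synthetic chooser: prefers larger-later unless the delay gap is costly.
        p = 1 / (1 + np.exp(-(0.15 * amount_diff - 0.03 * delay_diff)))
        y = rng.random(n) < p

        acc = cross_val_score(LogisticRegression(), X, y, cv=10)
        print(f"cross-validated choice-prediction accuracy: {acc.mean():.2f}")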

  8. Cross-validation of the osmotic pressure based on Pitzer model with air humidity osmometry at high concentration of ammonium sulfate solutions.

    PubMed

    Wang, Xiao-Lan; Zhan, Ting-Ting; Zhan, Xian-Cheng; Tan, Xiao-Ying; Qu, Xiao-You; Wang, Xin-Yue; Li, Cheng-Rong

    2014-01-01

    The osmotic pressure of ammonium sulfate solutions has been measured by well-established freezing point osmometry in dilute solutions and, as we recently reported, by air humidity osmometry over a much wider range of concentrations. Air humidity osmometry cross-validated the theoretical calculations of osmotic pressure based on the Pitzer model at high concentrations by two one-sided tests (TOST) of equivalence with multiple-testing corrections, where no other experimental method could serve as a reference for comparison. Although stricter equivalence criteria were established between the measurements of freezing point osmometry and the calculations based on the Pitzer model at low concentrations, air humidity osmometry is the only currently available osmometry applicable to high concentrations and serves as an economical addition to standard osmometry.
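
    A minimal sketch of a paired TOST equivalence test of the kind invoked above. The equivalence margin delta is a placeholder; the paper's margins and multiple-testing corrections are not reproduced.

        import numpy as np
        from scipy import stats

        def tost_paired(measured, calculated, delta):
            """Two one-sided tests: equivalent if max(p_lower, p_upper) < alpha."""
            d = np.asarray(measured) - np.asarray(calculated)
            n = len(d)
            se = d.std(ddof=1) / np.sqrt(n)
            t_lower = (d.mean() + delta) / se            # H0: mean diff <= -delta
            t_upper = (d.mean() - delta) / se            # H0: mean diff >= +delta
            p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
            p_upper = stats.t.cdf(t_upper, df=n - 1)
            return max(p_lower, p_upper)

        rng = np.random.default_rng(0)
        measured = rng.normal(100.0, 2.0, 30)            # e.g. measured pressures
        calculated = measured + rng.normal(0, 0.5, 30)   # e.g. Pitzer-model values
        print(tost_paired(measured, calculated, delta=1.5))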

  9. Body fat measurement by bioelectrical impedance and air displacement plethysmography: a cross-validation study to design bioelectrical impedance equations in Mexican adults

    PubMed Central

    Macias, Nayeli; Alemán-Mateo, Heliodoro; Esparza-Romero, Julián; Valencia, Mauro E

    2007-01-01

    Background The study of body composition in specific populations by techniques such as bio-impedance analysis (BIA) requires validation based on standard reference methods. The aim of this study was to develop and cross-validate a predictive equation for bioelectrical impedance using air displacement plethysmography (ADP) as standard method to measure body composition in Mexican adult men and women. Methods This study included 155 male and female subjects from northern Mexico, 20–50 years of age, from low, middle, and upper income levels. Body composition was measured by ADP. Body weight (BW, kg) and height (Ht, cm) were obtained by standard anthropometric techniques. Resistance, R (ohms) and reactance, Xc (ohms) were also measured. A random-split method was used to obtain two samples: one was used to derive the equation by the "all possible regressions" procedure and was cross-validated in the other sample to test predicted versus measured values of fat-free mass (FFM). Results and Discussion The final model was: FFM (kg) = 0.7374 * (Ht^2/R) + 0.1763 * (BW) - 0.1773 * (Age) + 0.1198 * (Xc) - 2.4658. R^2 was 0.97; the square root of the mean square error (SRMSE) was 1.99 kg, and the pure error (PE) was 2.96. There was no difference between FFM predicted by the new equation (48.57 ± 10.9 kg) and that measured by ADP (48.43 ± 11.3 kg). The new equation did not differ from the line of identity, had a high R^2 and a low SRMSE, and showed no significant bias (0.87 ± 2.84 kg). Conclusion The new bioelectrical impedance equation based on the two-compartment model (2C) was accurate, precise, and free of bias. This equation can be used to assess body composition and nutritional status in populations similar in anthropometric and physical characteristics to this sample. PMID:17697388
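
    The final model transcribed as a function, with units as stated in the abstract (height in cm, resistance and reactance in ohms, weight in kg, age in years); the example inputs are made up for illustration, not taken from the study.

        def ffm_bia(height_cm: float, resistance: float, weight_kg: float,
                    age_yr: float, reactance: float) -> float:
            """Fat-free mass (kg) from the published BIA equation."""
            return (0.7374 * height_cm**2 / resistance
                    + 0.1763 * weight_kg
                    - 0.1773 * age_yr
                    + 0.1198 * reactance
                    - 2.4658)

        print(ffm_bia(165, 520, 68, 35, 55))   # hypothetical subject, ~48.5 kg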

  10. SILAC-Pulse Proteolysis: A Mass Spectrometry-Based Method for Discovery and Cross-Validation in Proteome-Wide Studies of Ligand Binding

    NASA Astrophysics Data System (ADS)

    Adhikari, Jagat; Fitzgerald, Michael C.

    2014-12-01

    Reported here is the use of stable isotope labeling with amino acids in cell culture (SILAC) and pulse proteolysis (PP) for detection and quantitation of protein-ligand binding interactions on the proteomic scale. The incorporation of SILAC into PP enables the PP technique to be used for the unbiased detection and quantitation of protein-ligand binding interactions in complex biological mixtures (e.g., cell lysates) without the need for prefractionation. The SILAC-PP technique is demonstrated in two proof-of-principle experiments using proteins in a yeast cell lysate and two test ligands including a well-characterized drug, cyclosporine A (CsA), and a non-hydrolyzable adenosine triphosphate (ATP) analogue, adenylyl imidodiphosphate (AMP-PNP). The well-known tight-binding interaction between CsA and cyclophilin A was successfully detected and quantified in replicate analyses, and a total of 33 proteins from a yeast cell lysate were found to have AMP-PNP-induced stability changes. In control experiments, the method's false positive rate of protein target discovery was found to be in the range of 2.1% to 3.6%. SILAC-PP and the previously reported stability of protein from rates of oxidation (SPROX) technique both report on the same thermodynamic properties of proteins and protein-ligand complexes. However, they employ different probes and mass spectrometry-based readouts. This creates the opportunity to cross-validate SPROX results with SILAC-PP results, and vice-versa. As part of this work, the SILAC-PP results obtained here were cross-validated with previously reported SPROX results on the same model systems to help differentiate true positives from false positives in the two experiments.

  11. Accuracy of Genomic Selection in a Rice Synthetic Population Developed for Recurrent Selection Breeding.

    PubMed

    Grenier, Cécile; Cao, Tuong-Vi; Ospina, Yolima; Quintero, Constanza; Châtel, Marc Henri; Tohme, Joe; Courtois, Brigitte; Ahmadi, Nourollah

    2015-01-01

    Genomic selection (GS) is a promising strategy for enhancing genetic gain. We investigated the accuracy of genomic estimated breeding values (GEBV) in four inter-related synthetic populations that underwent several cycles of recurrent selection in an upland rice-breeding program. A total of 343 S2:4 lines extracted from those populations were phenotyped for flowering time, plant height, grain yield and panicle weight, and genotyped with an average density of one marker per 44.8 kb. The relative effect of the linkage disequilibrium (LD) and minor allele frequency (MAF) thresholds for selecting markers, the relative size of the training population (TP) and of the validation population (VP), the selected trait and the genomic prediction models (frequentist and Bayesian) on the accuracy of GEBVs was investigated in 540 cross-validation experiments with 100 replicates. The effect of kinship between the training and validation populations was tested in an additional set of 840 cross-validation experiments with a single genomic prediction model. LD was high (average r^2 = 0.59 at 25 kb) and decreased slowly, the distribution of allele frequencies at individual loci was markedly skewed toward unbalanced frequencies (MAF average value 15.2% and median 9.6%), and differentiation between the four synthetic populations was low (F_ST ≤ 0.06). The accuracy of GEBV across all cross-validation experiments ranged from 0.12 to 0.54 with an average of 0.30. Significant differences in accuracy were observed among the different levels of each factor investigated. Phenotypic traits had the biggest effect, and the size of the incidence matrix had the smallest. Significant first-degree interactions were observed for GEBV accuracy between traits and all the other factors studied, and between prediction models and LD, MAF and composition of the TP. The potential of GS to accelerate genetic gain and breeding options to increase the accuracy of predictions are discussed. PMID:26313446
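
    A minimal sketch of estimating GEBV accuracy by cross-validation. Ridge regression stands in for the frequentist prediction models; the marker matrix, effect sizes, and heritability below are simulated, not the study's data.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(0)
        n_lines, n_markers = 343, 2000
        M = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2
        effects = rng.normal(0, 0.05, n_markers)
        g = M @ effects                              # true breeding values
        y = g + rng.normal(0, g.std(), n_lines)      # phenotypes, h2 ~ 0.5

        abilities = []
        for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(M):
            gebv = Ridge(alpha=100.0).fit(M[train], y[train]).predict(M[test])
            abilities.append(np.corrcoef(gebv, y[test])[0, 1])
        print(f"mean predictive ability: {np.mean(abilities):.2f}")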

  13. Accuracy of Genomic Selection in a Rice Synthetic Population Developed for Recurrent Selection Breeding

    PubMed Central

    Ospina, Yolima; Quintero, Constanza; Châtel, Marc Henri; Tohme, Joe; Courtois, Brigitte

    2015-01-01

    Genomic selection (GS) is a promising strategy for enhancing genetic gain. We investigated the accuracy of genomic estimated breeding values (GEBV) in four inter-related synthetic populations that underwent several cycles of recurrent selection in an upland rice-breeding program. A total of 343 S2:4 lines extracted from those populations were phenotyped for flowering time, plant height, grain yield and panicle weight, and genotyped with an average density of one marker per 44.8 kb. The relative effect of the linkage disequilibrium (LD) and minor allele frequency (MAF) thresholds for selecting markers, the relative size of the training population (TP) and of the validation population (VP), the selected trait and the genomic prediction models (frequentist and Bayesian) on the accuracy of GEBVs was investigated in 540 cross-validation experiments with 100 replicates. The effect of kinship between the training and validation populations was tested in an additional set of 840 cross-validation experiments with a single genomic prediction model. LD was high (average r^2 = 0.59 at 25 kb) and decreased slowly, the distribution of allele frequencies at individual loci was markedly skewed toward unbalanced frequencies (MAF average value 15.2% and median 9.6%), and differentiation between the four synthetic populations was low (F_ST ≤ 0.06). The accuracy of GEBV across all cross-validation experiments ranged from 0.12 to 0.54 with an average of 0.30. Significant differences in accuracy were observed among the different levels of each factor investigated. Phenotypic traits had the biggest effect, and the size of the incidence matrix had the smallest. Significant first-degree interactions were observed for GEBV accuracy between traits and all the other factors studied, and between prediction models and LD, MAF and composition of the TP. The potential of GS to accelerate genetic gain and breeding options to increase the accuracy of predictions are discussed. PMID:26313446

  14. FDDS: A Cross Validation Study.

    ERIC Educational Resources Information Center

    Sawyer, Judy Parsons

    The Family Drawing Depression Scale (FDDS) was created by Wright and McIntyre to provide a clear and reliable scoring method for the Kinetic Family Drawing as a procedure for detecting depression. A study was conducted to confirm the value of the FDDS as a systematic tool for interpreting family drawings with populations of depressed individuals.…

  15. Sediment transport patterns in the San Francisco Bay Coastal System from cross-validation of bedform asymmetry and modeled residual flux

    USGS Publications Warehouse

    Barnard, Patrick L.; Erikson, Li H.; Elias, Edwin P.L.; Dartnell, Peter

    2013-01-01

    The morphology of ~ 45,000 bedforms from 13 multibeam bathymetry surveys was used as a proxy for identifying net bedload sediment transport directions and pathways throughout the San Francisco Bay estuary and adjacent outer coast. The spatially-averaged shape asymmetry of the bedforms reveals distinct pathways of ebb and flood transport. Additionally, the region-wide, ebb-oriented asymmetry of 5% suggests net seaward-directed transport within the estuarine-coastal system, with significant seaward asymmetry at the mouth of San Francisco Bay (11%), through the northern reaches of the Bay (7-8%), and among the largest bedforms (21% for λ > 50 m). This general indication for the net transport of sand to the open coast strongly suggests that anthropogenic removal of sediment from the estuary, particularly along clearly defined seaward transport pathways, will limit the supply of sand to chronically eroding, open-coast beaches. The bedform asymmetry measurements significantly agree (up to ~ 76%) with modeled annual residual transport directions derived from a hydrodynamically-calibrated numerical model, and the orientation of adjacent, flow-sculpted seafloor features such as mega-flute structures, providing a comprehensive validation of the technique. The methods described in this paper to determine well-defined, cross-validated sediment transport pathways can be applied to estuarine-coastal systems globally where bedforms are present. The results can inform and improve regional sediment management practices to more efficiently utilize often limited sediment resources and mitigate current and future sediment supply-related impacts.

  16. Sediment transport patterns in the San Francisco Bay Coastal System from cross-validation of bedform asymmetry and modeled residual flux

    USGS Publications Warehouse

    Barnard, Patrick L.; Erikson, Li H.; Elias, Edwin P.L.; Dartnell, Peter; Barnard, P.L.; Jaffee, B.E.; Schoellhamer, D.H.

    2013-01-01

    The morphology of ~ 45,000 bedforms from 13 multibeam bathymetry surveys was used as a proxy for identifying net bedload sediment transport directions and pathways throughout the San Francisco Bay estuary and adjacent outer coast. The spatially-averaged shape asymmetry of the bedforms reveals distinct pathways of ebb and flood transport. Additionally, the region-wide, ebb-oriented asymmetry of 5% suggests net seaward-directed transport within the estuarine-coastal system, with significant seaward asymmetry at the mouth of San Francisco Bay (11%), through the northern reaches of the Bay (7–8%), and among the largest bedforms (21% for λ > 50 m). This general indication for the net transport of sand to the open coast strongly suggests that anthropogenic removal of sediment from the estuary, particularly along clearly defined seaward transport pathways, will limit the supply of sand to chronically eroding, open-coast beaches. The bedform asymmetry measurements significantly agree (up to ~ 76%) with modeled annual residual transport directions derived from a hydrodynamically-calibrated numerical model, and the orientation of adjacent, flow-sculpted seafloor features such as mega-flute structures, providing a comprehensive validation of the technique. The methods described in this paper to determine well-defined, cross-validated sediment transport pathways can be applied to estuarine-coastal systems globally where bedforms are present. The results can inform and improve regional sediment management practices to more efficiently utilize often limited sediment resources and mitigate current and future sediment supply-related impacts.

  17. The joint WAIS-III and WMS-III factor structure: development and cross-validation of a six-factor model of cognitive functioning.

    PubMed

    Tulsky, David S; Price, Larry R

    2003-06-01

    During the standardization of the Wechsler Adult Intelligence Scale (3rd ed.; WAIS-III) and the Wechsler Memory Scale (3rd ed.; WMS-III) the participants in the normative study completed both scales. This "co-norming" methodology set the stage for full integration of the 2 tests and the development of an expanded structure of cognitive functioning. Until now, however, the WAIS-III and WMS-III had not been examined together in a factor analytic study. This article presents a series of confirmatory factor analyses to determine the joint WAIS-III and WMS-III factor structure. Using a structural equation modeling approach, a 6-factor model that included verbal, perceptual, processing speed, working memory, auditory memory, and visual memory constructs provided the best model fit to the data. Allowing select subtests to load simultaneously on 2 factors improved model fit and indicated that some subtests are multifaceted. The results were then replicated in a large cross-validation sample (N = 858).

  18. Partial cross-validation of the Wechsler Memory Scale-Revised (WMS-R) General Memory-Attention/Concentration Malingering Index in a nonlitigating sample.

    PubMed

    Hilsabeck, Robin C; Thompson, Matthew D; Irby, James W; Adams, Russell L; Scott, James G; Gouvier, Wm Drew

    2003-01-01

    The Wechsler Memory Scale-Revised (WMS-R) malingering indices proposed by Mittenberg, Azrin, Millsaps, and Heilbronner [Psychol Assess 5 (1993) 34.] were partially cross-validated in a sample of 200 nonlitigants. Nine diagnostic categories were examined, including participants with traumatic brain injury (TBI), brain tumor, stroke/vascular, senile dementia of the Alzheimer's type (SDAT), epilepsy, depression/anxiety, medical problems, and no diagnosis. Results showed that the discriminant function using WMS-R subtests misclassified only 6.5% of the sample as malingering, with significantly higher misclassification rates of SDAT and stroke/vascular groups. The General Memory Index-Attention/Concentration Index (GMI-ACI) difference score misclassified only 8.5% of the sample as malingering when a difference score of greater than 25 points was used as the cutoff criterion. No diagnostic group was significantly more likely to be misclassified. Results support the utility of the GMI-ACI difference score, as well as the WMS-R subtest discriminant function score, in detecting malingering.

  19. Cross-validation of the structure of a transiently formed and low populated FF domain folding intermediate determined by relaxation dispersion NMR and CS-Rosetta.

    PubMed

    Barette, Julia; Velyvis, Algirdas; Religa, Tomasz L; Korzhnev, Dmitry M; Kay, Lewis E

    2012-06-14

    We have recently reported the atomic resolution structure of a low-populated and transiently formed on-pathway folding intermediate of the FF domain from human HYPA/FBP11 [Korzhnev, D. M.; Religa, T. L.; Banachewicz, W.; Fersht, A. R.; Kay, L. E. Science 2010, 329, 1312-1316]. The structure was determined on the basis of backbone chemical shift and bond vector orientation restraints of the invisible intermediate state measured using relaxation dispersion nuclear magnetic resonance (NMR) spectroscopy that were subsequently input into the database structure determination program, CS-Rosetta. As a cross-validation of the structure so produced, we present here the solution structure of a mimic of the folding intermediate that is highly populated in solution, obtained from the wild-type domain by mutagenesis that destabilizes the native state. The relaxation dispersion/CS-Rosetta structures of the intermediate are within 2 Å of those of the mimic, with the nonnative interactions in the intermediate also observed in the mimic. This strongly confirms the structure of the FF domain folding intermediate, in particular, and validates the use of relaxation dispersion derived restraints in structural studies of invisible excited states, in general.

  20. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding

    PubMed Central

    2013-01-01

    Background In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. Results The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. Conclusions The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the
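
    The indirect estimate used by Methods 1 to 4 is simple enough to state in code; a minimal sketch assuming the heritability h2 is supplied externally (in practice it must itself be estimated, which is the crux of the comparison above).

        import numpy as np

        def predictive_accuracy(y_obs, y_pred, h2):
            """Indirect estimate: cross-validated predictive ability / sqrt(h2)."""
            ability = np.corrcoef(y_pred, y_obs)[0, 1]
            return ability / np.sqrt(h2)

        # e.g. a predictive ability of 0.45 with h2 = 0.64 gives accuracy ~ 0.56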

  1. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, G.G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
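
    A minimal sketch of the optimism the authors warn about: resubstitution accuracy versus cross-validated accuracy for a classification tree, on synthetic data standing in for the lichen survey records.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=300, n_features=12, random_state=0)
        tree = DecisionTreeClassifier(random_state=0).fit(X, y)

        resub = tree.score(X, y)   # evaluated on its own training data
        cv = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
        print(f"resubstitution: {resub:.2f}, 10-fold CV: {cv.mean():.2f}")
        # Resubstitution is typically near 1.0 for an unpruned tree; the
        # cross-validated estimate is far closer to true prediction accuracy.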

  2. Methane cross-validation between three Fourier Transform Spectrometers: SCISAT ACE-FTS, GOSAT TANSO-FTS, and ground-based FTS measurements in the Canadian high Arctic

    NASA Astrophysics Data System (ADS)

    Holl, G.; Walker, K. A.; Conway, S.; Saitoh, N.; Boone, C. D.; Strong, K.; Drummond, J. R.

    2015-12-01

    We present cross-validation of remote sensing measurements of methane profiles in the Canadian high Arctic. Accurate and precise measurements of methane are essential to understand quantitatively its role in the climate system and in global change. Here, we show a cross-validation between three datasets: two from spaceborne instruments and one from a ground-based instrument. All are Fourier Transform Spectrometers (FTSs). We consider the Canadian SCISAT Atmospheric Chemistry Experiment (ACE)-FTS, a solar occultation infrared spectrometer operating since 2004, and the thermal infrared band of the Japanese Greenhouse Gases Observing Satellite (GOSAT) Thermal And Near infrared Sensor for carbon Observation (TANSO)-FTS, a nadir/off-nadir scanning FTS instrument operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker 125HR Fourier Transform Infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) Ridge Lab at Eureka, Nunavut (80° N, 86° W) since 2006. For each pair of instruments, measurements are collocated within 500 km and 24 h. An additional criterion based on potential vorticity values was found not to significantly affect differences between measurements. Profiles are regridded to a common vertical grid for each comparison set. To account for differing vertical resolutions, ACE-FTS measurements are smoothed to the resolution of either PEARL-FTS or TANSO-FTS, and PEARL-FTS measurements are smoothed to the TANSO-FTS resolution. Differences for each pair are examined in terms of profile and partial columns. During the period considered, the number of collocations for each pair is large enough to obtain a good sample size (from several hundred to tens of thousands depending on pair and configuration). Considering full profiles, the degrees of freedom for signal (DOFS) are between 0.2 and 0.7 for TANSO-FTS and between 1.5 and 3

  3. Methane cross-validation between three Fourier transform spectrometers: SCISAT ACE-FTS, GOSAT TANSO-FTS, and ground-based FTS measurements in the Canadian high Arctic

    NASA Astrophysics Data System (ADS)

    Holl, Gerrit; Walker, Kaley A.; Conway, Stephanie; Saitoh, Naoko; Boone, Chris D.; Strong, Kimberly; Drummond, James R.

    2016-05-01

    We present cross-validation of remote sensing measurements of methane profiles in the Canadian high Arctic. Accurate and precise measurements of methane are essential to understand quantitatively its role in the climate system and in global change. Here, we show a cross-validation between three data sets: two from spaceborne instruments and one from a ground-based instrument. All are Fourier transform spectrometers (FTSs). We consider the Canadian SCISAT Atmospheric Chemistry Experiment (ACE)-FTS, a solar occultation infrared spectrometer operating since 2004, and the thermal infrared band of the Japanese Greenhouse Gases Observing Satellite (GOSAT) Thermal And Near infrared Sensor for carbon Observation (TANSO)-FTS, a nadir/off-nadir scanning FTS instrument operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker 125HR Fourier transform infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) Ridge Laboratory at Eureka, Nunavut (80° N, 86° W) since 2006. For each pair of instruments, measurements are collocated within 500 km and 24 h. An additional collocation criterion based on potential vorticity values was found not to significantly affect differences between measurements. Profiles are regridded to a common vertical grid for each comparison set. To account for differing vertical resolutions, ACE-FTS measurements are smoothed to the resolution of either PEARL-FTS or TANSO-FTS, and PEARL-FTS measurements are smoothed to the TANSO-FTS resolution. Differences for each pair are examined in terms of profile and partial columns. During the period considered, the number of collocations for each pair is large enough to obtain a good sample size (from several hundred to tens of thousands depending on pair and configuration). Considering full profiles, the degrees of freedom for signal (DOFS) are between 0.2 and 0.7 for TANSO-FTS and
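
    A minimal sketch of the spatial and temporal collocation criteria quoted above (within 500 km and 24 h), using a haversine distance on a spherical Earth; the record layout is hypothetical, and the additional potential-vorticity criterion is omitted.

        import numpy as np

        R_EARTH_KM = 6371.0

        def haversine_km(lat1, lon1, lat2, lon2):
            p1, p2 = np.radians(lat1), np.radians(lat2)
            dp = p2 - p1
            dl = np.radians(lon2 - lon1)
            a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
            return 2 * R_EARTH_KM * np.arcsin(np.sqrt(a))

        def collocated(obs_a, obs_b, max_km=500.0, max_hours=24.0):
            """obs_* are dicts with 'lat', 'lon', 'time' (hours since an epoch)."""
            close_in_space = haversine_km(obs_a["lat"], obs_a["lon"],
                                          obs_b["lat"], obs_b["lon"]) <= max_km
            close_in_time = abs(obs_a["time"] - obs_b["time"]) <= max_hours
            return close_in_space and close_in_time

        print(collocated({"lat": 80.0, "lon": -86.0, "time": 0.0},
                         {"lat": 78.5, "lon": -80.0, "time": 10.0}))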

  4. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with EURO-CORDEX experiment- and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a Europe-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data
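
    The 5-fold scheme with consecutive 6-year blocks can be written down directly; a minimal sketch assuming nothing beyond the 1979-2008 period quoted above. Folds are contiguous in time rather than shuffled, so each held-out period is a coherent climate era.

        import numpy as np

        years = np.arange(1979, 2009)          # 30 years
        folds = np.array_split(years, 5)       # five consecutive 6-year blocks

        for k, test_years in enumerate(folds):
            train_years = np.setdiff1d(years, test_years)
            print(f"fold {k}: test {test_years[0]}-{test_years[-1]}, "
                  f"train on the remaining {len(train_years)} years")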

  5. Duration of opioid antagonism by nalmefene and naloxone in the dog. A nonparametric pharmacodynamic comparison based on generalized cross-validated spline estimation.

    PubMed

    Wilhelm, J A; Veng-Pedersen, P; Zakszewski, T B; Osifchin, E; Waters, S J

    1995-10-01

    The opioid antagonist nalmefene was compared in its pharmacodynamic properties to the structurally similar antagonist naloxone in a 2 x 2 cross-over study with 8 dogs. Opioid-induced respiratory depression was produced for ca. 7 hours with a constant-rate intravenous infusion of 30 micrograms/kg/hr fentanyl and quantified using noninvasive transcutaneous pCO2 recordings. Upon reaching a pseudo-steady state of respiratory depression at 2 hours post fentanyl infusion initiation, the animals then received either nalmefene (12 micrograms/kg/hr) or naloxone (48 micrograms/kg/hr) for 30 minutes. The pharmacodynamic pCO2 responses produced by the combined agonist/antagonist regimen were fitted with a cubic spline function using a generalized cross-validation technique. Various quantities that describe the onset, duration and relative potency of each antagonist were determined directly from the estimated response curves in a model-independent, nonparametric way. The 2 antagonists were compared in terms of these quantities using a statistical model that considers carry-over effects typically arising from a possible development of tolerance. The results indicate that nalmefene: 1. is approximately 4-fold more potent than naloxone, 2. has an onset of reversal as rapid as naloxone, and 3. has a significantly longer (2-fold) pharmacodynamic duration of action than does naloxone. The mean time required for the agonist to regain 30% or 50% of its effect present at the start of the antagonist infusion was 66 and 112 minutes for nalmefene and 37 and 55 minutes for naloxone, respectively. Early, effective pharmacodynamic screening of new drug compounds is a valuable way of accelerating the drug discovery process and reducing escalating drug development costs. This study exemplifies a novel, endpoint oriented pharmacodynamic comparison procedure that can be done expeditiously before starting the time consuming development and validation of a drug level assay, and before engaging in
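
    A minimal sketch of fitting a response curve with a smoothing spline whose penalty is chosen by generalized cross-validation, analogous to the paper's nonparametric pCO2 fits. It assumes SciPy >= 1.10 (make_smoothing_spline selects the penalty by GCV when lam is not given); the data is synthetic, not the dogs' pCO2 records.

        import numpy as np
        from scipy.interpolate import make_smoothing_spline

        t = np.linspace(0, 7, 120)                           # hours
        rng = np.random.default_rng(0)
        pco2 = 45 + 8 * np.exp(-((t - 2.3) ** 2)) + rng.normal(0, 0.8, t.size)

        spline = make_smoothing_spline(t, pco2)              # lam=None -> GCV choice
        t_fine = np.linspace(0, 7, 1000)
        fit = spline(t_fine)
        # Quantities such as time of peak response can be read directly off
        # the estimated curve, in a model-independent way.
        print(f"fitted peak response at t = {t_fine[fit.argmax()]:.2f} h")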

  6. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while that of a useful subset is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a measure nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
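
    One simple reading of relative accuracy in code: the precision and recall of a query result against verified ground truth. This is a minimal sketch; the paper's own statistics-based metric is more involved, and the row IDs below are illustrative.

        def precision_recall(returned: set, correct: set):
            true_pos = len(returned & correct)
            precision = true_pos / len(returned) if returned else 0.0
            recall = true_pos / len(correct) if correct else 0.0
            return precision, recall

        returned = {101, 102, 103, 107}       # rows the query actually returned
        correct = {101, 103, 104, 107, 109}   # rows a clean database would return
        print(precision_recall(returned, correct))   # (0.75, 0.6)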

  7. Determination of snow avalanche return periods using a tree-ring based reconstruction in the French Alps: cross validation with the predictions of a statistical-dynamical model

    NASA Astrophysics Data System (ADS)

    Schläppy, Romain; Eckert, Nicolas; Jomelli, Vincent; Grancher, Delphine; Brunstein, Daniel; Stoffel, Markus; Naaim, Mohamed

    2013-04-01

    rare events, i.e. to the tail of the local runout distance distribution. Furthermore, a good agreement exists with the statistical-numerical model's prediction, i.e. a 10-40 m difference for return periods ranging between 10 and 300 years, which is rather small with regard to the uncertainty levels to be considered in avalanche probabilistic modeling and dendrochronological reconstructions. It is important to note that such a cross-validation on independent extreme predictions has never been undertaken before. This suggests that i) dendrochronological reconstruction can provide valuable information for anticipating future extreme avalanche events in the context of risk management, and, in turn, that ii) the statistical-numerical model, when properly calibrated, can be used with reasonable confidence to refine these predictions, with, for instance, evaluation of pressure and flow depth distributions at each position of the runout zone. A strong sensitivity to the determination of local avalanche and dendrological record frequencies is, however, highlighted, indicating that this is an essential step for an accurate probabilistic characterization of large-extent events.

  8. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue are related directly to the dramatic escalation in the developmen...

  9. Classification accuracy improvement

    NASA Technical Reports Server (NTRS)

    Kistler, R.; Kriegler, F. J.

    1977-01-01

    Improvements made in the processing system designed for MIDAS (prototype multivariate interactive digital analysis system) effect higher accuracy in pixel classification and significantly reduce processing time. The improved system realizes a cost-reduction factor of 20 or more.

  10. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  11. Accuracy Assessment of a Uav-Based Landslide Monitoring System

    NASA Astrophysics Data System (ADS)

    Peppa, M. V.; Mills, J. P.; Moore, P.; Miller, P. E.; Chambers, J. E.

    2016-06-01

    Landslides are hazardous events with often disastrous consequences. Monitoring landslides with observations of high spatio-temporal resolution can help mitigate such hazards. Mini unmanned aerial vehicles (UAVs) complemented by structure-from-motion (SfM) photogrammetry and modern per-pixel image matching algorithms can deliver a time-series of landslide elevation models in an automated and inexpensive way. This research investigates the potential of a mini UAV, equipped with a Panasonic Lumix DMC-LX5 compact camera, to provide surface deformations at acceptable levels of accuracy for landslide assessment. The study adopts a self-calibrating bundle adjustment-SfM pipeline using ground control points (GCPs). It evaluates misalignment biases and unresolved systematic errors that are transferred through the SfM process into the derived elevation models. To cross-validate the research outputs, results are compared to benchmark observations obtained by standard surveying techniques. The data is collected with 6 cm ground sample distance (GSD) and is shown to achieve planimetric and vertical accuracy of a few centimetres at independent check points (ICPs). The co-registration error of the generated elevation models is also examined in areas of stable terrain. Through this error assessment, the study estimates that the vertical sensitivity to real terrain change of the tested landslide is equal to 9 cm.

  12. SherLoc2: a high-accuracy hybrid method for predicting subcellular localization of proteins.

    PubMed

    Briesemeister, Sebastian; Blum, Torsten; Brady, Scott; Lam, Yin; Kohlbacher, Oliver; Shatkay, Hagit

    2009-11-01

    SherLoc2 is a comprehensive high-accuracy subcellular localization prediction system. It is applicable to animal, fungal, and plant proteins and covers all main eukaryotic subcellular locations. SherLoc2 integrates several sequence-based features as well as text-based features. In addition, we incorporate phylogenetic profiles and Gene Ontology (GO) terms derived from the protein sequence to considerably improve the prediction performance. SherLoc2 achieves an overall classification accuracy of up to 93% in 5-fold cross-validation. A novel feature, DiaLoc, allows users to manually provide their current background knowledge by describing a protein in a short abstract which is then used to improve the prediction. SherLoc2 is available both as a free Web service and as a stand-alone version at http://www-bs.informatik.uni-tuebingen.de/Services/SherLoc2.
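
    The headline figure above is an accuracy estimated by k-fold cross-validation. A minimal sketch of that protocol, using scikit-learn on synthetic data (the features and classifier are placeholders, not the SherLoc2 pipeline):

      # 5-fold cross-validated accuracy on a synthetic multi-class problem.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=500, n_features=40, n_informative=10,
                                 n_classes=3, random_state=0)
      cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
      scores = cross_val_score(SVC(kernel="linear"), X, y, cv=cv, scoring="accuracy")
      print(f"5-fold CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")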

  13. Atomic-accuracy models from 4.5-Å cryo-electron microscopy data with density-guided iterative local refinement.

    PubMed

    DiMaio, Frank; Song, Yifan; Li, Xueming; Brunner, Matthias J; Xu, Chunfu; Conticello, Vincent; Egelman, Edward; Marlovits, Thomas C; Cheng, Yifan; Baker, David

    2015-04-01

    We describe a general approach for refining protein structure models on the basis of cryo-electron microscopy maps with near-atomic resolution. The method integrates Monte Carlo sampling with local density-guided optimization, Rosetta all-atom refinement and real-space B-factor fitting. In tests on experimental maps of three different systems with 4.5-Å resolution or better, the method consistently produced models with atomic-level accuracy largely independently of starting-model quality, and it outperformed the molecular dynamics-based MDFF method. Cross-validated model quality statistics correlated with model accuracy over the three test systems.

  14. Predictive accuracy of bioelectrical impedance in estimating body composition of Native American women.

    PubMed

    Stolarczyk, L M; Heyward, V H; Hicks, V L; Baumgartner, R N

    1994-05-01

    The predictive accuracy of race-specific and fatness-specific bioelectrical impedance analysis (BIA) equations for estimating criterion fat-free mass (FFM) derived from two-component (FFM-2C) and multicomponent (FFM-MC) models was examined. Body density (Db) of Native American women (n = 151) aged 18-60 y was measured by hydrostatic weighing at residual volume. Total body bone ash was obtained by dual-energy x-ray absorptiometry. Cross-validation of the Rising (5), Segal (3), and Gray (4) equations against FFM-2C yielded high correlation coefficients (0.86-0.95) and acceptable SEEs (1.47-2.72 kg). Cross-validation of these equations against criterion FFM-MC, with Db adjusted for total body mineral, yielded similar correlation coefficients (0.82-0.94) and SEEs (1.69-2.80 kg). However, each BIA equation significantly overestimated FFM-MC. A new race-specific BIA equation based on an MC model was developed: FFM-MC = 0.001254(HT^2) - 0.04904(R) + 0.1555(WT) + 0.1417(Xc) - 0.0833(AGE) + 20.05 (R = 0.864, SEE = 2.63 kg). PMID:8172101
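
    For illustration, the reported equation can be wrapped directly in a function; the variable meanings and units (height in cm, resistance and reactance in ohms, weight in kg, age in years) are assumed from BIA convention rather than stated in the record:

      # The race-specific multicomponent-model BIA equation quoted above.
      def ffm_mc(ht_cm, r_ohm, wt_kg, xc_ohm, age_yr):
          """Estimated fat-free mass (kg); inputs follow assumed BIA units."""
          return (0.001254 * ht_cm**2 - 0.04904 * r_ohm + 0.1555 * wt_kg
                  + 0.1417 * xc_ohm - 0.0833 * age_yr + 20.05)

      print(round(ffm_mc(165, 520, 70, 60, 35), 1))  # illustrative inputs -> ~45.2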

  15. Near surface geotechnical and geophysical data cross validated for site characterization applications. The cases of selected accelerometric stations in Crete island (Greece)

    NASA Astrophysics Data System (ADS)

    Loupasakis, Constantinos; Tsangaratos, Paraskevas; Rozos, Dimitrios; Rondoyianni, Theodora; Vafidis, Antonis; Steiakakis, Emanouil; Agioutantis, Zacharias; Savvaidis, Alexandros; Soupios, Pantelis; Papadopoulos, Ioannis; Papadopoulos, Nikos; Sarris, Apostolos; Mangriotis, Maria-Dafni; Dikmen, Unal

    2015-04-01

    The near surface ground conditions are highly important for the design of civil constructions. These conditions determine primarily the ability of the foundation formations to bear loads, the stress-strain relations and the corresponding deformations, as well as the soil amplification and corresponding peak ground motion in case of dynamic loading. The static and dynamic geotechnical parameters as well as the ground-type/soil-category can be determined by combining geotechnical and geophysical methods, such as engineering geological surface mapping, geotechnical drilling, in situ and laboratory testing and geophysical investigations. The above-mentioned methods were combined for the site characterization in selected sites of the Hellenic Accelerometric Network (HAN) in the area of Crete Island. The combination of the geotechnical and geophysical methods in thirteen (13) sites provided sufficient information about their limitations, setting up the minimum test requirements in relation to the type of the geological formations. The reduced accuracy of the surface mapping in urban sites, the uncertainties introduced by the geophysical survey in sites with complex geology and the 1-D data provided by the geotechnical drills are some of the causes affecting the right order and the quantity of the necessary investigation methods. Through this study the gradual improvement in the accuracy of the site characterization data in relation to the applied investigation techniques is presented by providing characteristic examples from the total number of thirteen sites. As an example of the gradual improvement of the knowledge about the ground conditions, the case of the AGN1 strong motion station, located at Agios Nikolaos city (eastern Crete), is briefly presented. According to the medium scale geological map of IGME the station was supposed to be founded over limestone. The detailed geological mapping revealed that a few meters of loose alluvial deposits occupy the area, expected

  16. Direct spectral analysis of tea samples using 266 nm UV pulsed laser-induced breakdown spectroscopy and cross validation of LIBS results with ICP-MS.

    PubMed

    Gondal, M A; Habibullah, Y B; Baig, Umair; Oloore, L E

    2016-05-15

    Tea is one of the most common and popular beverages, spanning a vast array of cultures all over the world. The main nutritional benefits of drinking tea are its antioxidant properties, presumed protection against certain cancers, inhibition of inflammation and possible protective effects against diabetes. A laser-induced breakdown spectroscopy (LIBS) system was assembled as a powerful tool for qualitative and quantitative analysis of various brands of tea samples using a 266 nm pulsed UV laser. LIBS spectra for six brands of tea samples in the wavelength range of 200-900 nm were recorded and all elements present in our tea samples were identified. The major toxic elements detected in several brands of tea samples were bromine and chromium, along with minerals such as iron, calcium, potassium and silicon. The spectral assignment was conducted prior to the determination of the concentration of each element. For quantitative analysis, calibration curves were drawn for each element using standard samples prepared at known concentrations in the tea matrix. The plasma parameters (electron temperature and electron density) were also determined prior to the spectroscopic analysis of the tea samples. The concentrations of iron, chromium, potassium, bromine, copper, silicon and calcium detected in the tea samples were 378-656, 96-124, 1421-6785, 99-1476, 17-36, 2-11 and 92-130 mg L^-1, respectively. The limits of detection estimated for Fe, Cr, K, Br, Cu, Si and Ca in tea samples were 22, 12, 14, 11, 6, 1 and 12 mg L^-1, respectively. To further confirm the accuracy of our LIBS results, we determined the concentration of each element present in the tea samples using a standard analytical technique, ICP-MS. The concentrations detected with our LIBS system are in excellent agreement with the ICP-MS results. The system assembled for spectral analysis in this work could be highly applicable for testing the quality and purity of food and pharmaceutical products. PMID:26992530
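
    The calibration step above has a compact form: fit line intensity against the known standard concentrations, then invert the fit for an unknown sample (the numbers below are invented; real calibration must also handle matrix effects and self-absorption):

      # Linear calibration curve: intensity vs. concentration, then inversion.
      import numpy as np

      conc = np.array([50.0, 100.0, 200.0, 400.0])        # standards, mg/L
      intensity = np.array([120.0, 235.0, 480.0, 950.0])  # line intensity (a.u.)

      slope, intercept = np.polyfit(conc, intensity, 1)
      unknown_intensity = 600.0
      print(f"estimated conc: {(unknown_intensity - intercept) / slope:.0f} mg/L")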

  17. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  18. Using Genetic Distance to Infer the Accuracy of Genomic Prediction.

    PubMed

    Scutari, Marco; Mackay, Ian; Balding, David

    2016-09-01

    The prediction of phenotypic traits using high-density genomic data has many applications such as the selection of plants and animals of commercial interest; and it is expected to play an increasing role in medical diagnostics. Statistical models used for this task are usually tested using cross-validation, which implicitly assumes that new individuals (whose phenotypes we would like to predict) originate from the same population the genomic prediction model is trained on. In this paper we propose an approach based on clustering and resampling to investigate the effect of increasing genetic distance between training and target populations when predicting quantitative traits. This is important for plant and animal genetics, where genomic selection programs rely on the precision of predictions in future rounds of breeding. Therefore, estimating how quickly predictive accuracy decays is important in deciding which training population to use and how often the model has to be recalibrated. We find that the correlation between true and predicted values decays approximately linearly with respect to either FST or mean kinship between the training and the target populations. We illustrate this relationship using simulations and a collection of data sets from mice, wheat and human genetics. PMID:27589268
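
    The reported relationship lends itself to a one-line fit: regress cross-population prediction accuracy on genetic distance and read off the decay rate (the numbers below are synthetic, purely to illustrate the shape of the analysis):

      # Fit the approximately linear decay of accuracy with F_ST (toy data).
      import numpy as np

      fst = np.array([0.00, 0.02, 0.05, 0.10, 0.15, 0.20])
      acc = np.array([0.62, 0.58, 0.51, 0.42, 0.35, 0.27])  # r(true, predicted)
      slope, intercept = np.polyfit(fst, acc, 1)
      print(f"accuracy ~ {intercept:.2f} + ({slope:.2f}) * F_ST")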

  19. Feasibility and Diagnostic Accuracy of Ischemic Stroke Territory Recognition Based on Two-Dimensional Projections of Three-Dimensional Diffusion MRI Data

    PubMed Central

    Wrosch, Jana Katharina; Volbers, Bastian; Gölitz, Philipp; Gilbert, Daniel Frederic; Schwab, Stefan; Dörfler, Arnd; Kornhuber, Johannes; Groemer, Teja Wolfgang

    2015-01-01

    This study was conducted to assess the feasibility and diagnostic accuracy of brain artery territory recognition based on geoprojected two-dimensional maps of diffusion MRI data in stroke patients. In this retrospective study, multiplanar diffusion MRI data of ischemic stroke patients were used to create a two-dimensional map of the entire brain. To guarantee correct representation of the stroke, a computer-aided brain artery territory diagnosis was developed and tested for its diagnostic accuracy. The test recognized the stroke-affected brain artery territory based on the position of the stroke in the map. The performance of the test was evaluated by comparing it to the reference standard of each patient’s diagnosed stroke territory on record. This study was designed and conducted according to the Standards for Reporting of Diagnostic Accuracy (STARD). The statistical analysis included diagnostic accuracy parameters, cross-validation, and Youden Index optimization. After cross-validation on a cohort of 91 patients, the sensitivity of this territory diagnosis was 81% with a specificity of 87%. The projection of strokes onto a two-dimensional map is thus accurate in representing the affected stroke territory and can be used to provide a static and printable overview of the diffusion MRI data. The projected map is compatible with other two-dimensional data such as EEG and will serve as a useful visualization tool. PMID:26635717

  1. Summarising and validating test accuracy results across multiple studies for use in clinical practice.

    PubMed

    Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J

    2015-06-15

    Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV.
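
    The first suggestion, a prediction interval for accuracy in a new population, can be sketched on the logit scale with the usual random-effects formula (the pooled estimate, its standard error, the between-study variance and the study count below are invented inputs, not the paper's):

      # Approximate 95% prediction interval for sensitivity in a new setting.
      import numpy as np
      from scipy import stats

      mu, se = 1.40, 0.15    # pooled logit(sensitivity) and its standard error
      tau2, k = 0.20, 12     # between-study variance and number of studies

      half = stats.t.ppf(0.975, df=k - 2) * np.sqrt(tau2 + se**2)
      expit = lambda x: 1.0 / (1.0 + np.exp(-x))
      print(f"95% PI: {expit(mu - half):.2f} to {expit(mu + half):.2f}")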

  2. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions miss an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: (1) accuracy in communicating scientific information and (2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  3. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November 1980, dealing with Landsat classification accuracy assessment procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  4. Cross-validation and evaluation of the performance of methods for the elemental analysis of forensic glass by μ-XRF, ICP-MS, and LA-ICP-MS.

    PubMed

    Trejos, Tatiana; Koons, Robert; Becker, Stefan; Berman, Ted; Buscaglia, JoAnn; Duecking, Marc; Eckert-Lumsdon, Tiffany; Ernst, Troy; Hanlon, Christopher; Heydon, Alex; Mooney, Kim; Nelson, Randall; Olsson, Kristine; Palenik, Christopher; Pollock, Edward Chip; Rudell, David; Ryland, Scott; Tarifa, Anamary; Valadez, Melissa; Weis, Peter; Almirall, Jose

    2013-06-01

    Elemental analysis of glass was conducted by 16 forensic science laboratories, providing a direct comparison between three analytical methods [micro-x-ray fluorescence spectroscopy (μ-XRF), solution analysis using inductively coupled plasma mass spectrometry (ICP-MS), and laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS)]. Interlaboratory studies using glass standard reference materials and other glass samples were designed to (a) evaluate the analytical performance of different laboratories using the same method, (b) evaluate the analytical performance of the different methods, (c) evaluate the capabilities of the methods to correctly associate glass that originated from the same source and to correctly discriminate glass samples that do not share the same source, and (d) standardize the methods of analysis and interpretation of results. Reference materials NIST 612, NIST 1831, FGS 1, and FGS 2 were employed to cross-validate these sensitive techniques and to optimize and standardize the analytical protocols. The resulting figures of merit for the ICP-MS methods include repeatability better than 5% RSD, reproducibility between laboratories better than 10% RSD, bias better than 10%, and limits of detection between 0.03 and 9 μg g^-1 for the majority of the elements monitored. The figures of merit for the μ-XRF methods include repeatability better than 11% RSD, reproducibility between laboratories after normalization of the data better than 16% RSD, and limits of detection between 5.8 and 7,400 μg g^-1. The results from this study also compare the analytical performance of different forensic science laboratories conducting elemental analysis of glass evidence fragments using the three analytical methods.
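
    Two of the figures of merit quoted above reduce to short formulas; a sketch for a single element measured in replicate against a certified reference value (the numbers are made up for illustration):

      # Repeatability (%RSD) and bias (%) from replicate measurements.
      import numpy as np

      replicates = np.array([49.8, 50.6, 50.1, 49.5, 50.3])  # measured, ug/g
      certified = 50.0                                        # reference value
      rsd = 100 * replicates.std(ddof=1) / replicates.mean()
      bias = 100 * (replicates.mean() - certified) / certified
      print(f"repeatability: {rsd:.1f}% RSD, bias: {bias:+.1f}%")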

  5. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  6. Correlates of Near-Infrared Spectroscopy Brain–Computer Interface Accuracy in a Multi-Class Personalization Framework

    PubMed Central

    Weyand, Sabine; Chau, Tom

    2015-01-01

    Brain–computer interfaces (BCIs) provide individuals with a means of interacting with a computer using only neural activity. To date, the majority of near-infrared spectroscopy (NIRS) BCIs have used prescribed tasks to achieve binary control. The goals of this study were to evaluate the possibility of using a personalized approach to establish control of a two-, three-, four-, and five-class NIRS–BCI, and to explore how various user characteristics correlate with accuracy. Ten able-bodied participants were recruited for five data collection sessions. Participants performed six mental tasks and a personalized approach was used to select each individual’s best discriminating subset of tasks. The average offline cross-validation accuracies achieved were 78, 61, 47, and 37% for the two-, three-, four-, and five-class problems, respectively. Most notably, all participants exceeded an accuracy of 70% for the two-class problem, and two participants exceeded an accuracy of 70% for the three-class problem. Additionally, accuracy was found to be strongly positively correlated (Pearson's correlation) with perceived ease of session (ρ = 0.653), ease of concentration (ρ = 0.634), and enjoyment (ρ = 0.550), but strongly negatively correlated with verbal IQ (ρ = −0.749). PMID:26483657

  7. Revealing latent value of clinically acquired CTs of traumatic brain injury through multi-atlas segmentation in a retrospective study of 1,003 with external cross-validation

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Kelly, Patrick D.; Asman, Andrew J.; Kang, Hakmook; Patel, Mayur B.; Landman, Bennett A.

    2015-03-01

    Medical imaging plays a key role in guiding treatment of traumatic brain injury (TBI) and for diagnosing intracranial hemorrhage; most commonly rapid computed tomography (CT) imaging is performed. Outcomes for patients with TBI are variable and difficult to predict upon hospital admission. Quantitative outcome scales (e.g., the Marshall classification) have been proposed to grade TBI severity on CT, but such measures have had relatively low value in staging patients by prognosis. Herein, we examine a cohort of 1,003 subjects admitted for TBI and imaged clinically to identify potential prognostic metrics using a "big data" paradigm. For all patients, a brain scan was segmented with multi-atlas labeling, and intensity/volume/texture features were computed in a localized manner. In a 10-fold cross-validation approach, the explanatory value of the image-derived features is assessed for length of hospital stay (days), discharge disposition (five-point scale from death to return home), and the Rancho Los Amigos functional outcome score (Rancho Score). Image-derived features increased the predictive R2 to 0.38 (from 0.18) for length of stay, to 0.51 (from 0.4) for discharge disposition, and to 0.31 (from 0.16) for Rancho Score (over models consisting only of non-imaging admission metrics, but including positive/negative radiological CT findings). This study demonstrates that high volume retrospective analysis of clinical imaging data can reveal imaging signatures with prognostic value. These targets are suited for follow-up validation and represent targets for future feature selection efforts. Moreover, the increase in prognostic value would improve staging for intervention assessment and provide more reliable guidance for patients.
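
    The evaluation design reduces to comparing cross-validated R^2 with and without the image-derived features; a sketch on synthetic data (the feature sets and outcome are stand-ins, not the study's variables):

      # 10-fold CV R^2: admission metrics alone vs. admission + imaging.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import KFold, cross_val_score

      rng = np.random.default_rng(0)
      n = 1003
      admission = rng.normal(size=(n, 5))
      imaging = rng.normal(size=(n, 20))
      outcome = (admission @ rng.normal(size=5)
                 + 0.5 * (imaging @ rng.normal(size=20))
                 + rng.normal(size=n))

      cv = KFold(n_splits=10, shuffle=True, random_state=0)
      base = cross_val_score(LinearRegression(), admission, outcome, cv=cv, scoring="r2")
      full = cross_val_score(LinearRegression(), np.hstack([admission, imaging]),
                             outcome, cv=cv, scoring="r2")
      print(f"R2 admission-only: {base.mean():.2f}, with imaging: {full.mean():.2f}")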

  8. Accuracy of direct genomic values for functional traits in Brown Swiss cattle.

    PubMed

    Kramer, M; Erbe, M; Seefried, F R; Gredler, B; Bapst, B; Bieber, A; Simianer, H

    2014-03-01

    In this study, direct genomic values for the functional traits general temperament, milking temperament, aggressiveness, rank order in herd, milking speed, udder depth, position of labia, and days to first heat in Brown Swiss dairy cattle were estimated based on ~777,000 (777 K) single nucleotide polymorphism (SNP) information from 1,126 animals. Accuracy of direct genomic values was assessed by a 5-fold cross-validation with 10 replicates. Correlations between deregressed proofs and direct genomic values were 0.63 for general temperament, 0.73 for milking temperament, 0.69 for aggressiveness, 0.65 for rank order in herd, 0.69 for milking speed, 0.71 for udder depth, 0.66 for position of labia, and 0.74 for days to first heat. Using the information of ~54,000 (54K) SNP led to only marginal deviations in the observed accuracy. Trying to predict the 20% youngest bulls led to correlations of 0.55, 0.77, 0.73, 0.55, 0.64, 0.59, 0.67, and 0.77, respectively, for the traits listed above. Using a novel method to estimate the accuracy of a direct genomic value (defined as correlation between direct genomic value and true breeding value and accounting for the correlation between direct genomic values and conventional breeding values) revealed accuracies of 0.37, 0.20, 0.19, 0.27, 0.48, 0.45, 0.36, and 0.12, respectively, for the traits listed above. These values are much smaller but probably also more realistic than accuracies based on correlations, given the heritabilities and samples sizes in this study. Annotation of the largest estimated SNP effects revealed 2 candidate genes affecting the traits general temperament and days to first heat.

  9. When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?

    PubMed

    Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B; Hartz, Sarah M; Johnson, Eric O; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L

    2015-01-01

    Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen's kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation were conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
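
    The contrast the authors draw between concordance and a kappa-style score is easy to reproduce in miniature; the sketch below uses best-guess genotype calls (the published IQS works with genotype probabilities, so this shows only the chance-correction idea, with toy data):

      # Concordance rate vs. Cohen's kappa for imputed genotype calls.
      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      true_geno = np.array([0, 0, 1, 2, 1, 0, 2, 1, 0, 0])  # minor-allele counts
      imputed   = np.array([0, 0, 1, 2, 1, 0, 1, 1, 0, 0])  # best-guess calls
      print(f"concordance: {(true_geno == imputed).mean():.2f}")
      print(f"kappa:       {cohen_kappa_score(true_geno, imputed):.2f}")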

  10. Accuracy of prediction of genomic breeding values for residual feed intake and carcass and meat quality traits in Bos taurus, Bos indicus, and composite beef cattle.

    PubMed

    Bolormaa, S; Pryce, J E; Kemper, K; Savin, K; Hayes, B J; Barendse, W; Zhang, Y; Reich, C M; Mason, B A; Bunch, R J; Harrison, B E; Reverter, A; Herd, R M; Tier, B; Graser, H-U; Goddard, M E

    2013-07-01

    The aim of this study was to assess the accuracy of genomic predictions for 19 traits including feed efficiency, growth, and carcass and meat quality traits in beef cattle. The 10,181 cattle in our study had real or imputed genotypes for 729,068 SNP, although not all cattle were measured for all traits. Animals included Bos taurus, Brahman, composite, and crossbred animals. Genomic EBV (GEBV) were calculated using 2 methods of genomic prediction [BayesR and genomic BLUP (GBLUP)], either using a common training dataset for all breeds or using a training dataset comprising only animals of the same breed. Accuracies of GEBV were assessed using 5-fold cross-validation. The accuracy of genomic prediction varied by trait and by method. Traits with a large number of recorded and genotyped animals and with high heritability gave the greatest accuracy of GEBV. Using GBLUP, the average accuracy was 0.27 across traits and breeds, but the accuracies between breeds and between traits varied widely. When the training population was restricted to animals from the same breed as the validation population, GBLUP accuracies declined by an average of 0.04. The greatest decline in accuracy was found for the 4 composite breeds. The BayesR accuracies were greater by an average of 0.03 than GBLUP accuracies, particularly for traits with known mutations of moderate to large effect segregating. The accuracies of 0.43 to 0.48 for IGF-I traits were among the greatest in the study. Although accuracies are low compared with those observed in dairy cattle, genomic selection would still be beneficial for traits that are hard to improve by conventional selection, such as tenderness and residual feed intake. BayesR identified many of the same quantitative trait loci as a genomewide association study but appeared to map them more precisely. All traits appear to be highly polygenic, with thousands of SNP independently associated with each trait. PMID:23658330

  11. Accuracy comparison of spatial interpolation methods for estimation of air temperatures in South Korea

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Shim, K.; Jung, M.; Kim, S.

    2013-12-01

    Because of its complex terrain, micro- as well as meso-climate variability in Korea is extreme from location to location. In particular, air temperatures in agricultural fields are influenced by the topographic features of their surroundings, making accurate interpolation of regional meteorological data from point measurements difficult. This study was conducted to compare the accuracy of spatial interpolation methods for estimating air temperature over the rugged terrain of South Korea. Four spatial interpolation methods, Inverse Distance Weighting (IDW), Spline, Kriging and Cokriging, were tested by estimating monthly air temperatures at unobserved stations. Monthly measured data sets (minimum and maximum air temperature) from 456 automatic weather station (AWS) locations in South Korea were used to generate the gridded air temperature surfaces. Cross-validation showed that, for Kriging and Cokriging, the exponential theoretical model produced a lower root mean square error (RMSE) than the Gaussian model, and that Spline produced the lowest RMSE of the interpolation methods for both maximum and minimum air temperature estimation. In conclusion, Spline showed the best accuracy among the methods tested, but further experiments which reflect topographic effects such as the temperature lapse rate are necessary to improve the predictions.
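
    One step of such a comparison is easy to sketch: estimate held-out stations by inverse distance weighting and score the result by RMSE (the coordinates and temperatures below are synthetic placeholders for the 456 AWS records):

      # IDW estimates at held-out stations, scored by RMSE.
      import numpy as np

      def idw(xy_known, z_known, xy_query, power=2):
          d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
          w = 1.0 / np.maximum(d, 1e-9) ** power
          return (w * z_known).sum(axis=1) / w.sum(axis=1)

      rng = np.random.default_rng(1)
      xy = rng.uniform(0, 100, size=(456, 2))               # station coordinates
      temp = 15 + 0.1 * xy[:, 0] + rng.normal(0, 0.5, 456)  # monthly mean temp
      train, test = np.arange(400), np.arange(400, 456)
      est = idw(xy[train], temp[train], xy[test])
      print(f"hold-out RMSE: {np.sqrt(np.mean((est - temp[test])**2)):.2f} degC")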

  12. Accuracy in optical overlay metrology

    NASA Astrophysics Data System (ADS)

    Bringoltz, Barak; Marciano, Tal; Yaziv, Tal; DeLeeuw, Yaron; Klein, Dana; Feler, Yoel; Adam, Ido; Gurevich, Evgeni; Sella, Noga; Lindenfeld, Ze'ev; Leviant, Tom; Saltoun, Lilach; Ashwal, Eltsafon; Alumot, Dror; Lamhot, Yuval; Gao, Xindong; Manka, James; Chen, Bryan; Wagner, Mark

    2016-03-01

    In this paper we discuss the mechanism by which process variations determine the overlay accuracy of optical metrology. We start by focusing on scatterometry, and showing that the underlying physics of this mechanism involves interference effects between cavity modes that travel between the upper and lower gratings in the scatterometry target. A direct result is the behavior of accuracy as a function of wavelength, and the existence of relatively well defined spectral regimes in which the overlay accuracy and process robustness degrade ('resonant regimes'). These resonances are separated by wavelength regions in which the overlay accuracy is better and independent of wavelength (we term these 'flat regions'). The combination of flat and resonant regions forms a spectral signature which is unique to each overlay alignment and carries certain universal features with respect to different types of process variations. We term this signature the 'landscape', and discuss its universality. Next, we show how to characterize overlay performance with a finite set of metrics that are available on the fly, and that are derived from the angular behavior of the signal and the way it flags resonances. These metrics are used to guarantee the selection of accurate recipes and targets for the metrology tool, and for process control with the overlay tool. We end with comments on the similarity of imaging overlay to scatterometry overlay, and on the way that pupil overlay scatterometry and field overlay scatterometry differ from an accuracy perspective.

  13. Effect of predictor traits on accuracy of genomic breeding values for feed intake based on a limited cow reference population.

    PubMed

    Pszczola, M; Veerkamp, R F; de Haas, Y; Wall, E; Strabel, T; Calus, M P L

    2013-11-01

    The genomic breeding value accuracy of scarcely recorded traits is low because of the limited number of phenotypic observations. One solution to increase the breeding value accuracy is to use predictor traits. This study investigated the impact of recording additional phenotypic observations for predictor traits on reference and evaluated animals on the genomic breeding value accuracy for a scarcely recorded trait. The scarcely recorded trait was dry matter intake (DMI, n = 869) and the predictor traits were fat-protein-corrected milk (FPCM, n = 1520) and live weight (LW, n = 1309). All phenotyped animals were genotyped and originated from research farms in Ireland, the United Kingdom and the Netherlands. Multi-trait REML was used to simultaneously estimate variance components and breeding values for DMI using available predictors. In addition, analyses using only pedigree relationships were performed. Breeding value accuracy was assessed through cross-validation (CV) and prediction error variance (PEV). CV groups (n = 7) were defined by splitting animals across genetic lines and management groups within country. With no additional traits recorded for the evaluated animals, both CV- and PEV-based accuracies for DMI were substantially higher for genomic than for pedigree analyses (CV: max. 0.26 for pedigree and 0.33 for genomic analyses; PEV: max. 0.45 and 0.52, respectively). With additional traits available, the differences between pedigree and genomic accuracies diminished. With additional recording for FPCM, pedigree accuracies increased from 0.26 to 0.47 for CV and from 0.45 to 0.48 for PEV. Genomic accuracies increased from 0.33 to 0.50 for CV and from 0.52 to 0.53 for PEV. With additional recording for LW instead of FPCM, pedigree accuracies increased to 0.54 for CV and to 0.61 for PEV. Genomic accuracies increased to 0.57 for CV and to 0.60 for PEV. With both FPCM and LW available for evaluated animals, accuracy was highest (0.62 for CV and 0.61 for PEV in

  14. Orbit accuracy assessment for Seasat

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.

    1980-01-01

    Laser range measurements are used to determine the orbit of Seasat during the period from July 28, 1978, to Aug. 14, 1978, and the influence of the gravity field, atmospheric drag, and solar radiation pressure on the orbit accuracy is investigated. It is noted that for the orbits of three-day duration, little distinction can be made between the influence of different atmospheric models. It is found that the special Seasat gravity field PGS-S3 is most consistent with the data for three-day orbits, but an unmodeled systematic effect in radiation pressure is noted. For orbits of 18-day duration, little distinction can be made between the results derived from the PGS gravity fields. It is also found that the geomagnetic field is an influential factor in the atmospheric modeling during this time period. Seasat altimeter measurements are used to determine the accuracy of the altimeter measurement time tag and to evaluate the orbital accuracy.

  15. Data Accuracy in Citation Studies.

    ERIC Educational Resources Information Center

    Boyce, Bert R.; Banning, Carolyn Sue

    1979-01-01

    Four hundred eighty-seven citations of the 1976 issues of the Journal of the American Society for Information Science and the Personnel and Guidance Journal were checked for accuracy: total error was 13.6 percent and 10.7 percent, respectively. Error categories included incorrect author name, article/book title, journal title; wrong entry; and…

  16. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  17. Drawing accuracy measured using polygons

    NASA Astrophysics Data System (ADS)

    Carson, Linda; Millard, Matthew; Quehl, Nadine; Danckert, James

    2013-03-01

    The study of drawing, for its own sake and as a probe into human visual perception, generally depends on ratings by human critics and self-reported expertise of the drawers. To complement those approaches, we have developed a geometric approach to analyzing drawing accuracy, one whose measures are objective, continuous and performance-based. Drawing geometry is represented by polygons formed by landmark points found in the drawing. Drawing accuracy is assessed by comparing the geometric properties of polygons in the drawn image to the equivalent polygon in a ground truth photo. There are four distinct properties of a polygon: its size, its position, its orientation and the proportionality of its shape. We can decompose error into four components and investigate how each contributes to drawing performance. We applied a polygon-based accuracy analysis to a pilot data set of representational drawings and found that an expert drawer outperformed a novice on every dimension of polygon error. The results of the pilot data analysis correspond well with the apparent quality of the drawings, suggesting that the landmark and polygon analysis is a method worthy of further study. Applying this geometric analysis to a within-subjects comparison of accuracy in the positive and negative space suggests there is a trade-off on dimensions of error. The performance-based analysis of geometric deformations will allow the study of drawing accuracy at different levels of organization, in a systematic and quantitative manner. We briefly describe the method and its potential applications to research in drawing education and visual perception.
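
    The four-component decomposition can be sketched with ordinary linear algebra: centroid offset for position, norm ratio for size, a best-fit (orthogonal Procrustes) rotation for orientation, and the residual Procrustes disparity for shape (the landmark coordinates below are invented):

      # Decompose drawing error into position, size, orientation and shape.
      import numpy as np
      from scipy.spatial import procrustes

      def polygon_errors(truth, drawn):
          position = np.linalg.norm(drawn.mean(0) - truth.mean(0))
          t, d = truth - truth.mean(0), drawn - drawn.mean(0)
          size = np.linalg.norm(d) / np.linalg.norm(t)
          u, _, vt = np.linalg.svd(d.T @ t)       # best-fit rotation (ignores reflections)
          rot = vt.T @ u.T                        # rotates the drawing onto the truth
          orientation = np.degrees(np.arctan2(rot[1, 0], rot[0, 0]))
          _, _, shape = procrustes(truth, drawn)  # residual after removing the rest
          return position, size, orientation, shape

      truth = np.array([[0., 0.], [4., 0.], [4., 3.], [0., 3.]])
      drawn = np.array([[0.4, 0.1], [4.5, 0.5], [4.2, 3.6], [0.2, 3.1]])
      print(polygon_errors(truth, drawn))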

  18. MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS

    EPA Science Inventory

    Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...

  19. High accuracy flexural hinge development

    NASA Astrophysics Data System (ADS)

    Santos, I.; Ortiz de Zárate, I.; Migliorero, G.

    2005-07-01

    This document provides a synthesis of the technical results obtained in the frame of the HAFHA (High Accuracy Flexural Hinge Assembly) development performed by SENER (in charge of design, development, manufacturing and testing at component and mechanism levels) with EADS Astrium as subcontractor (in charge of making an inventory of candidate applications among existing and emerging projects, establishing the requirements and performing system-level testing) under ESA contract. The purpose of this project has been to develop a competitive technology for a flexural pivot, usable in highly accurate and dynamic pointing/scanning mechanisms. Compared with other solutions (e.g. magnetic or ball bearing technologies), flexural hinges are the appropriate technology for accurately guiding a mobile payload over a limited angular range around one rotation axis.

  1. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
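
    The benchmark test described above has a compact form: compute RMSE for the model and for a naive random-walk forecast, and take their ratio (one common variant of Theil's U); the series below is simulated, not one of the utilities' records:

      # Model forecast vs. random-walk benchmark: RMSE and Theil's U ratio.
      import numpy as np

      rng = np.random.default_rng(2)
      actual = 100 + np.cumsum(rng.normal(0, 2, 60))   # monthly consumption
      model_fc = actual + rng.normal(0, 1.5, 60)       # hypothetical model forecasts
      rw_fc = np.r_[actual[0], actual[:-1]]            # "no change" benchmark

      rmse = lambda f: np.sqrt(np.mean((f - actual) ** 2))
      print(f"model {rmse(model_fc):.2f}, RW {rmse(rw_fc):.2f}, "
            f"U {rmse(model_fc) / rmse(rw_fc):.2f}  (<1 beats the benchmark)")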

  2. Comparison of the accuracy of kriging and IDW interpolations in estimating groundwater arsenic concentrations in Texas.

    PubMed

    Gong, Gordon; Mattevada, Sravan; O'Bryant, Sid E

    2014-04-01

    Exposure to arsenic causes many diseases. Most Americans in rural areas use groundwater for drinking, which may contain arsenic above the currently allowable level, 10 µg/L. It is cost-effective to estimate groundwater arsenic levels based on data from wells with known arsenic concentrations. We compared the accuracy of several commonly used interpolation methods in estimating arsenic concentrations in >8000 wells in Texas by the leave-one-out cross-validation technique. The correlation coefficient between measured and estimated arsenic levels was greater with inverse distance weighted (IDW) than with kriging Gaussian, kriging spherical or cokriging interpolation when analyzing data from wells in the whole of Texas (p<0.0001). The correlation coefficient was significantly lower with cokriging than with any other method (p<0.006) for wells in Texas, east Texas or the Edwards aquifer. The correlation coefficient was significantly greater for wells in the southwestern Texas Panhandle than in east Texas, and was higher for wells in the Ogallala aquifer than in the Edwards aquifer (p<0.0001) regardless of interpolation method. In regression analysis, the best models are those in which well depth and/or elevation were entered as covariates, regardless of area/aquifer or interpolation method, and models with IDW are better than those with kriging in any area/aquifer. In conclusion, the accuracy of estimating groundwater arsenic levels depends on both the interpolation method and the wells' geographic distributions and characteristics in Texas. Taking well depth and elevation into regression analysis as covariates significantly increases the accuracy of estimating groundwater arsenic levels in Texas, with IDW in particular.
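
    The leave-one-out protocol itself is a short loop: drop each well, estimate it from the rest, then correlate measured with estimated values (IDW shown; the well data are synthetic placeholders):

      # Leave-one-out cross-validation of IDW estimates at each well.
      import numpy as np

      def idw_at(xy, z, q, power=2):
          d = np.linalg.norm(xy - q, axis=1)
          w = 1.0 / np.maximum(d, 1e-9) ** power
          return (w * z).sum() / w.sum()

      rng = np.random.default_rng(3)
      xy = rng.uniform(0, 500, size=(200, 2))                  # well locations
      arsenic = np.abs(5 + 0.02 * xy[:, 0] + rng.normal(0, 2, 200))
      loo = np.array([idw_at(np.delete(xy, i, 0), np.delete(arsenic, i), xy[i])
                      for i in range(len(xy))])
      print(f"LOOCV r: {np.corrcoef(arsenic, loo)[0, 1]:.2f}")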

  3. Interpersonal Deception: V. Accuracy in Deception Detection.

    ERIC Educational Resources Information Center

    Burgoon, Judee K.; And Others

    1994-01-01

    Investigates the influence of several factors on accuracy in detecting truth and deceit. Found that accuracy was much higher on truth than deception, novices were more accurate than experts, accuracy depended on type of deception and whether suspicion was present or absent, suspicion impaired accuracy for experts, and questions strategy…

  4. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  5. Determining gas-meter accuracy

    SciTech Connect

    Valenti, M.

    1997-03-01

    This article describes how engineers at the Metering Research Facility are helping natural-gas companies improve pipeline efficiency by evaluating and refining the instruments used for measuring and setting prices. Accurate metering of natural gas is more important than ever as deregulation subjects pipeline companies to competition. To help improve that accuracy, the Gas Research Institute (GRI) in Chicago has sponsored the Metering Research Facility (MRF) at the Southwest Research Institute (SWRI) in San Antonio, Tex. The MRF evaluates and improves the performance of orifice, turbine, diaphragm, and ultrasonic meters as well as the gas-sampling methods that pipeline companies use to measure the flow of gas and determine its price.

  6. High accuracy time transfer synchronization

    NASA Technical Reports Server (NTRS)

    Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.

    1995-01-01

    In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL) designed spread spectrum modem, built by Allen Osborne Associates (AOA). A Motorola six channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.

  7. High-accuracy EUV reflectometer

    NASA Astrophysics Data System (ADS)

    Hinze, U.; Fokoua, M.; Chichkov, B.

    2007-03-01

    Developers and users of EUV optics need precise tools for the characterization of their products. Often a measurement accuracy of 0.1% or better is desired to detect and study slow-acting aging effects or degradation by organic contaminants. To achieve a measurement accuracy of 0.1%, an EUV source is required which provides excellent long-time stability, namely power stability, spatial stability and spectral stability. Naturally, it should be free of debris. An EUV source particularly suitable for this task is an advanced electron-based EUV tube. This EUV source provides an output of up to 300 μW at 13.5 nm. Reflectometers benefit from the excellent long-time stability of this tool. We design and set up different reflectometers using EUV tubes for the precise characterisation of EUV optics, such as debris samples, filters, multilayer mirrors, grazing incidence optics, collectors and masks. Reflectivity measurements from grazing incidence to near normal incidence as well as transmission studies were realised at a precision down to 0.1%. The reflectometers are computer-controlled and allow all important parameters to be varied and scanned online. The concept of a sample reflectometer is discussed and results are presented. The devices can be purchased from the Laser Zentrum Hannover e.V.

  8. Genomic Prediction in Pea: Effect of Marker Density and Training Population Size and Composition on Prediction Accuracy

    PubMed Central

    Tayeh, Nadim; Klein, Anthony; Le Paslier, Marie-Christine; Jacquin, Françoise; Houtin, Hervé; Rond, Céline; Chabert-Martinello, Marianne; Magnin-Robert, Jean-Bernard; Marget, Pascal; Aubert, Grégoire; Burstin, Judith

    2015-01-01

    Pea is an important food and feed crop and a valuable component of low-input farming systems. Improving resistance to biotic and abiotic stresses is a major breeding target to enhance yield potential and regularity. Genomic selection (GS) has lately emerged as a promising technique to increase the accuracy and gain of marker-based selection. It uses genome-wide molecular marker data to predict the breeding values of candidate lines to selection. A collection of 339 genetic resource accessions (CRB339) was subjected to high-density genotyping using the GenoPea 13.2K SNP Array. Genomic prediction accuracy was evaluated for thousand seed weight (TSW), the number of seeds per plant (NSeed), and the date of flowering (BegFlo). Mean cross-environment prediction accuracies reached 0.83 for TSW, 0.68 for NSeed, and 0.65 for BegFlo. For each trait, the statistical method, the marker density, and/or the training population size and composition used for prediction were varied to investigate their effects on prediction accuracy: the effect was large for the size and composition of the training population but limited for the statistical method and marker density. Maximizing the relatedness between individuals in the training and test sets, through the CDmean-based method, significantly improved prediction accuracies. A cross-population cross-validation experiment was further conducted using the CRB339 collection as a training population set and nine recombinant inbred lines populations as test set. Prediction quality was high with mean Q2 of 0.44 for TSW and 0.59 for BegFlo. Results are discussed in the light of current efforts to develop GS strategies in pea. PMID:26635819

  10. Cochrane diagnostic test accuracy reviews.

    PubMed

    Leeflang, Mariska M G; Deeks, Jonathan J; Takwoingi, Yemisi; Macaskill, Petra

    2013-10-07

    In 1996, shortly after the founding of The Cochrane Collaboration, leading figures in test evaluation research established a Methods Group to focus on the relatively new and rapidly evolving methods for the systematic review of studies of diagnostic tests. Seven years later, the Collaboration decided it was time to develop a publication format and methodology for Diagnostic Test Accuracy (DTA) reviews, as well as the software needed to implement these reviews in The Cochrane Library. A meeting hosted by the German Cochrane Centre in 2004 brought together key methodologists in the area, many of whom became closely involved in the subsequent development of the methodological framework for DTA reviews. DTA reviews first appeared in The Cochrane Library in 2008 and are now an integral part of the work of the Collaboration.

  11. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without reliance on combusting naturally occurring materials, thereby improving analytical accuracy.

  12. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and luminescent emission from PSP is obtained based on studies of radiative energy transport in PSP and photodetector response to luminescence. This relation provides insights into the physical origins of various elemental error sources and allows estimation of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and the upper bounds of the elemental errors needed to meet a required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flows is given to illustrate uncertainty estimates in PSP measurements.
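
    As a generic illustration of the error-propagation idea, the total relative uncertainty contributed by independent elemental errors with sensitivity coefficients S_i is sigma_p = sqrt(sum_i (S_i*sigma_i)^2). A sketch with illustrative error names and magnitudes, not the paper's evaluated values:

    ```python
    # Sketch: first-order propagation of independent elemental errors into a
    # total pressure uncertainty. Entries map error source -> (S_i, sigma_i).
    import math

    elemental = {
        "photodetector noise": (1.0, 0.002),
        "illumination drift":  (1.0, 0.004),
        "paint thickness":     (0.5, 0.006),
        "temperature effect":  (2.0, 0.003),
    }
    sigma_p = math.sqrt(sum((S * s) ** 2 for S, s in elemental.values()))
    print(f"total relative pressure uncertainty: {sigma_p:.4f}")
    ```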

  13. Accuracy of genomic breeding values for meat tenderness in Polled Nellore cattle.

    PubMed

    Magnabosco, C U; Lopes, F B; Fragoso, R C; Eifert, E C; Valente, B D; Rosa, G J M; Sainz, R D

    2016-07-01

    Zebu (Bos indicus) cattle, mostly of the Nellore breed, comprise more than 80% of the beef cattle in Brazil, given their tolerance of the tropical climate and high resistance to ectoparasites. Despite their advantages for production in tropical environments, zebu cattle tend to produce tougher meat than Bos taurus breeds. Traditional genetic selection to improve meat tenderness is constrained by the difficulty and cost of phenotypic evaluation for meat quality. Therefore, genomic selection may be the best strategy to improve meat quality traits. This study was performed to compare the accuracies of different Bayesian regression models in predicting molecular breeding values for meat tenderness in Polled Nellore cattle. The data set was composed of Warner-Bratzler shear force (WBSF) of longissimus muscle from 205, 141, and 81 animals slaughtered in 2005, 2010, and 2012, respectively, which were selected and mated so as to create extreme segregation for WBSF. The animals were genotyped with either the Illumina BovineHD (HD; 777,000 SNP from 90 samples) chip or the GeneSeek Genomic Profiler (GGP Indicus HD; 77,000 SNP from 337 samples). The quality controls of SNP were Hardy-Weinberg proportion P-value ≥ 0.1%, minor allele frequency > 1%, and call rate > 90%. The FImpute program was used for imputation from the GGP Indicus HD chip to the HD chip. The effect of each SNP was estimated using ridge regression, least absolute shrinkage and selection operator (LASSO), Bayes A, Bayes B, and Bayes Cπ methods. Different numbers of SNP were used, with 1, 2, 3, 4, 5, 7, 10, 20, 40, 60, 80, or 100% of the markers preselected based on their significance test (P-value from genomewide association studies [GWAS]) or randomly sampled. The prediction accuracy was assessed by the correlation between genomic breeding value and the observed WBSF phenotype, using a leave-one-out cross-validation methodology. The prediction accuracies using all markers were all very similar for all models, ranging from 0

  15. Comparison of Genomic Selection Models to Predict Flowering Time and Spike Grain Number in Two Hexaploid Wheat Doubled Haploid Populations.

    PubMed

    Thavamanikumar, Saravanan; Dolferus, Rudy; Thumma, Bala R

    2015-10-01

    Genomic selection (GS) is becoming an important selection tool in crop breeding. In this study, we compared the ability of different GS models to predict time to young microspore (TYM), a flowering time-related trait, spike grain number under control conditions (SGNC) and spike grain number under osmotic stress conditions (SGNO) in two wheat biparental doubled haploid populations with unrelated parents. Prediction accuracies were compared using BayesB, Bayesian least absolute shrinkage and selection operator (Bayesian LASSO / BL), ridge regression best linear unbiased prediction (RR-BLUP), partial least square regression (PLS), and sparse partial least square regression (SPLS) models. Prediction accuracy was tested with 10-fold cross-validation within a population and with independent validation in which marker effects from one population were used to predict traits in the other population. High prediction accuracies were obtained for TYM (0.51-0.84), whereas moderate to low accuracies were observed for SGNC (0.10-0.42) and SGNO (0.27-0.46) using cross-validation. Prediction accuracies based on independent validation are generally lower than those based on cross-validation. BayesB and SPLS outperformed all other models in predicting TYM with both cross-validation and independent validation. Although the accuracies of all models are similar in predicting SGNC and SGNO with cross-validation, BayesB and SPLS had the highest accuracy in predicting SGNC with independent validation. In independent validation, accuracies of all the models increased by using only the QTL-linked markers. Results from this study indicate that BayesB and SPLS capture the linkage disequilibrium between markers and traits effectively leading to higher accuracies. Excluding markers from QTL studies reduces prediction accuracies. PMID:26206349
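
    A minimal sketch of the independent-validation scheme, in which marker effects estimated in one population are used to predict traits in another; ridge regression stands in for RR-BLUP and both populations are synthetic:

    ```python
    # Sketch: cross-population (independent) validation. Marker effects are
    # estimated in population A and applied to population B; accuracy is the
    # correlation between predicted and observed phenotypes in B.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(2)
    p = 1500
    beta = rng.normal(0, 0.05, size=p)               # shared marker effects

    def simulate(n, freq):                           # crude population difference
        X = rng.binomial(2, freq, size=(n, p)).astype(float)
        return X, X @ beta + rng.normal(0, 1.0, size=n)

    Xa, ya = simulate(180, 0.30)                     # training population
    Xb, yb = simulate(180, 0.40)                     # unrelated test population

    model = Ridge(alpha=50.0).fit(Xa, ya)
    r_indep = np.corrcoef(model.predict(Xb), yb)[0, 1]
    print(f"independent-validation accuracy: {r_indep:.2f}")
    ```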

  16. Issues of model accuracy and uncertainty evaluation in the context of multi-model analysis

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Foglia, L.; Mehl, S.; Burlando, P.

    2009-12-01

    Thorough consideration of alternative conceptual models is an important and often neglected step in the study of many natural systems, including groundwater systems. This means that many modelling efforts are less useful for system management than they could be, because they exclude alternatives considered important by some stakeholders, which makes them more vulnerable to criticism. Important steps include identifying reasonable alternative models and possibly using model discrimination criteria and associated model averaging to improve predictions and measures of prediction uncertainty. Here we use the computer code MMA (Multi-Model Analysis) to: (1) manage the model discrimination statistics produced by many alternative models, (2) manage predictions, and (3) calculate measures of prediction uncertainty. Steps (1) to (3) also assist in understanding the physical processes most important to model fit and to the predictions of interest. We focus on the ability of a groundwater model constructed using MODFLOW to predict heads and flows in the Maggia Valley, Southern Switzerland, where connections between groundwater, surface water and ecology are of interest. Sixty-four alternative models were designed deterministically and differ in how the river, recharge, bedrock topography, and hydraulic conductivity are characterized. None of the models correctly represents heads and flows in the Northern and Southern parts of the valley simultaneously. A cross-validation experiment was conducted to compare model discrimination results with the ability of the models to predict eight heads and three flows to the stream along three reaches midway along the valley where ecological consequences and, therefore, model accuracy are of great concern. Results suggest: (1) Model averaging appears to have improved prediction accuracy in the problem considered. (2) The most significant model improvements occurred with introduction of spatially distributed recharge and improved bedrock topography. (3) The
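
    A minimal sketch of information-criterion model averaging in the spirit of multi-model analysis, with weights w_k proportional to exp(-delta_k/2); the AIC values and per-model head predictions are illustrative placeholders, not MMA output:

    ```python
    # Sketch: model-averaged prediction and between-model uncertainty from
    # information-criterion weights over alternative calibrated models.
    import numpy as np

    aic = np.array([210.3, 212.1, 215.8, 224.0])    # one value per model
    preds = np.array([101.2, 99.8, 100.5, 103.1])   # predicted head (m) per model

    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    w /= w.sum()                                    # weights sum to 1

    avg = np.sum(w * preds)                         # model-averaged prediction
    spread = np.sqrt(np.sum(w * (preds - avg) ** 2))  # between-model spread
    print(f"weights: {np.round(w, 3)}; prediction: {avg:.2f} +/- {spread:.2f} m")
    ```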

  17. Geostatistical radar-raingauge merging: A novel method for the quantification of rain estimation accuracy

    NASA Astrophysics Data System (ADS)

    Delrieu, Guy; Wijbrans, Annette; Boudevillain, Brice; Faure, Dominique; Bonnifait, Laurent; Kirstetter, Pierre-Emmanuel

    2014-09-01

    Compared to other estimation techniques, one advantage of geostatistical techniques is that they provide an index of the estimation accuracy of the variable of interest with the kriging estimation standard deviation (ESD). In the context of radar-raingauge quantitative precipitation estimation (QPE), we address in this article the question of how the kriging ESD can be transformed into a local spread of error by using the dependency of radar errors on the rain amount analyzed in previous work. The proposed approach is implemented for the most significant rain events observed in 2008 in the Cévennes-Vivarais region, France, by considering both the kriging with external drift (KED) and the ordinary kriging (OK) methods. A two-step procedure is implemented for estimating the rain estimation accuracy: (i) first, kriging normalized ESDs are computed by using normalized variograms (sill equal to 1) to account for the observation system configuration and the spatial structure of the variable of interest (rainfall amount, residuals to the drift); (ii) based on the assumption of a linear relationship between the standard deviation and the mean of the variable of interest, a denormalization of the kriging ESDs is performed globally for a given rain event by using a cross-validation procedure. Despite the fact that the KED normalized ESDs are usually greater than the OK ones (due to an additional constraint in the kriging system and a weaker spatial structure of the residuals to the drift), the KED denormalized ESDs are generally smaller than the OK ones, a result consistent with the better performance observed for the KED technique. The evolution of the mean and the standard deviation of the rainfall-scaled ESDs over a range of spatial (5-300 km²) and temporal (1-6 h) scales demonstrates that there is clear added value of the radar with respect to the raingauge network for the shortest scales, which are those of interest for flash-flood prediction in the considered region.

  18. High accuracy broadband infrared spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Venkataramanan

    Mueller matrix spectroscopy, or spectropolarimetry, combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies on infrared polarization properties of materials covering a broad spectral range have been scarce due to the lack of available instrumentation. This dissertation aims to fill the gap by the design, development, calibration and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3-12 μm waveband and offers better overall accuracy compared to previous generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter, and numerical simulations are conducted to study the effectiveness of the proposed technique. Insights into the geometrical structure of the polarimetric measurement matrix are provided to aid further research towards global optimization of Mueller matrix polarimeters. A high performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on Penicillin and pine pollen are also presented.

  19. ACCURACY OF CO2 SENSORS

    SciTech Connect

    Fisk, William J.; Faulkner, David; Sullivan, Douglas P.

    2008-10-01

    Are the carbon dioxide (CO2) sensors in your demand controlled ventilation systems sufficiently accurate? The data from these sensors are used to automatically modulate minimum rates of outdoor air ventilation. The goal is to keep ventilation rates at or above design requirements while adjusting the ventilation rate with changes in occupancy in order to save energy. Studies of energy savings from demand controlled ventilation and of the relationship of indoor CO2 concentrations with health and work performance provide a strong rationale for use of indoor CO2 data to control minimum ventilation rates [1-7]. However, this strategy will only be effective if, in practice, the CO2 sensors have a reasonable accuracy. The objective of this study was, therefore, to determine whether CO2 sensor performance, in practice, is generally acceptable or problematic. This article provides a summary of study methods and findings; additional details are available in a paper in the proceedings of the ASHRAE IAQ 2007 Conference [8].

  20. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  1. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that classification and of the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  2. Phase segmentation of X-ray computer tomography rock images using machine learning techniques: an accuracy and performance study

    NASA Astrophysics Data System (ADS)

    Chauhan, Swarup; Rühaak, Wolfram; Anbergen, Hauke; Kabdenov, Alen; Freise, Marcus; Wille, Thorsten; Sass, Ingo

    2016-07-01

    The performance and accuracy of machine learning techniques to segment rock grains, matrix and pore voxels from a 3-D volume of X-ray tomographic (XCT) grayscale rock images were evaluated. The segmentation and classification capability of unsupervised (k-means, fuzzy c-means, self-organized maps), supervised (artificial neural networks, least-squares support vector machines) and ensemble classifiers (bagging and boosting) were tested using XCT images of andesite volcanic rock, Berea sandstone, Rotliegend sandstone and a synthetic sample. The averaged porosity obtained for andesite (15.8 ± 2.5 %), Berea sandstone (16.3 ± 2.6 %), Rotliegend sandstone (13.4 ± 7.4 %) and the synthetic sample (48.3 ± 13.3 %) is in very good agreement with the respective laboratory measurement data and varies by a factor of 0.2. The k-means algorithm is the fastest of all machine learning algorithms, whereas a least-squares support vector machine is the most computationally expensive. The metrics entropy, purity, root mean square error, receiver operating characteristic curve and 10-fold cross-validation were used to determine the accuracy of the unsupervised, supervised and ensemble classifier techniques. In general, the accuracy was found to be largely affected by the feature vector selection scheme. As there is always a trade-off between performance and accuracy, it is difficult to isolate one particular machine learning algorithm which is best suited for the complex phase segmentation problem. Therefore, our investigation provides parameters that can help in selecting the appropriate machine learning techniques for phase segmentation.
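
    A minimal sketch of the unsupervised route on a synthetic grayscale slice: k-means clusters voxels into three phases, and porosity is read off as the pore-voxel fraction:

    ```python
    # Sketch: k-means segmentation of a grayscale XCT slice into pore, matrix
    # and grain classes; the image is synthetic, with three gray-value
    # populations plus noise.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    true = rng.choice([0, 1, 2], size=(128, 128), p=[0.15, 0.55, 0.30])
    img = np.take([40.0, 120.0, 200.0], true) + rng.normal(0, 10, true.shape)

    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(img.reshape(-1, 1))
    labels = km.labels_.reshape(img.shape)

    pore_label = np.argmin(km.cluster_centers_.ravel())  # darkest class = pores
    porosity = (labels == pore_label).mean()
    print(f"estimated porosity: {porosity:.1%}")
    ```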

  3. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit, using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended mission elliptical orbit. Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling; implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating the use of analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of FDF ephemeris files to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  4. Accuracy of TCP performance models

    NASA Astrophysics Data System (ADS)

    Schwefel, Hans Peter; Jobmann, Manfred; Hoellisch, Daniel; Heyman, Daniel P.

    2001-07-01

    Despite the fact that most of today's Internet traffic is transmitted via the TCP protocol, the performance behavior of networks with TCP traffic is still not well understood. Recent research activities have led to a number of performance models for TCP traffic, but the degree of accuracy of these models in realistic scenarios is still questionable. This paper provides a comparison of the results (in terms of average throughput per connection) of three different 'analytic' TCP models: I. the throughput formula in [Padhye et al. 98], II. the modified Engset model of [Heyman et al. 97], and III. the analytic TCP queueing model of [Schwefel 01], which is a packet based extension of (II). Results for all three models are computed for a scenario of N identical TCP sources that transmit data in individual TCP connections of stochastically varying size. The results for the average throughput per connection in the analytic models are compared with simulations of detailed TCP behavior. All of the analytic models are expected to show deficiencies in certain scenarios, since they neglect highly influential parameters of the actual system: the approaches of Models (I) and (II) only indirectly consider queueing in bottleneck routers, and in certain scenarios those models are not able to adequately describe the impact of buffer space, either qualitatively or quantitatively. Furthermore, (II) is insensitive to the actual distribution of the connection sizes. As a consequence, its predictions would also be insensitive to so-called long-range dependent (LRD) properties in the traffic that are caused by heavy-tailed connection size distributions. The simulation results show that such properties cannot be neglected for certain network topologies: LRD properties can even have a counter-intuitive impact on the average goodput, namely the goodput can be higher for small buffer sizes.
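
    For reference, the throughput formula of model (I), Padhye et al. (1998), is commonly written as B(p) = min(Wmax/RTT, 1/(RTT*sqrt(2bp/3) + T0*min(1, 3*sqrt(3bp/8))*p*(1 + 32p^2))). A sketch with illustrative parameter values:

    ```python
    # Sketch: steady-state TCP throughput approximation of Padhye et al. (1998),
    # in packets per second; RTT, timeout T0, delayed-ACK factor b and maximum
    # window Wmax are illustrative values.
    import math

    def tcp_throughput(p, rtt=0.1, t0=0.4, b=2, wmax=64):
        """Throughput (packets/s) at loss rate p; rtt and t0 in seconds."""
        if p <= 0:
            return wmax / rtt                 # no loss: window-limited
        denom = rtt * math.sqrt(2 * b * p / 3) \
              + t0 * min(1.0, 3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p**2)
        return min(wmax / rtt, 1.0 / denom)

    for p in (0.001, 0.01, 0.05):
        print(f"p={p:.3f}: {tcp_throughput(p):7.1f} packets/s")
    ```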

  5. Accuracy in determining voice source parameters

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2014-11-01

    The paper addresses the accuracy of an approximate solution to the inverse problem of retrieving the shape of a voice source from a speech signal for a known signal-to-noise ratio (SNR). It is shown that if the source is found as a function of time with the A.N. Tikhonov regularization method, the accuracy of the found approximation is worse than the accuracy of speech signal recording by an order of magnitude. In contrast, adequate parameterization of the source ensures approximate solution accuracy comparable with the accuracy of the problem data. A corresponding algorithm is considered. On the basis of linear (in terms of data errors) estimates of approximate parametric solution accuracy, parametric models with the best accuracy can be chosen. This comparison has been carried out for the known voice source models, i.e., model [17] and the LF model [18]. The advantages of the latter are shown. Thus, for SNR = 40 dB, the relative accuracy of an approximate solution found with this algorithm is about 1% for the LF model and about 2% for model [17] as compared to an accuracy of 7-8% in the regularization method. The role of accuracy estimates found in speaker identification problems is discussed.

  6. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    PubMed

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or no) and peer performance anchor (95%, 55%, or no). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of anchoring effect, but if anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  7. Determination of fetal state from cardiotocogram using LS-SVM with particle swarm optimization and binary decision tree.

    PubMed

    Yılmaz, Ersen; Kılıkçıer, Cağlar

    2013-01-01

    We use a least squares support vector machine (LS-SVM) utilizing a binary decision tree for classification of cardiotocograms to determine the fetal state. The parameters of the LS-SVM are optimized by particle swarm optimization. The robustness of the method is examined by running 10-fold cross-validation. The performance of the method is evaluated in terms of overall classification accuracy. Additionally, receiver operating characteristic analysis and cobweb representation are presented in order to analyze and visualize the performance of the method. Experimental results demonstrate that the proposed method achieves a remarkable classification accuracy rate of 91.62%.
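
    A minimal sketch of the evaluation protocol, assuming synthetic stand-ins for the cardiotocogram features; an RBF-kernel SVM with fixed hyperparameters replaces the paper's PSO-tuned LS-SVM and binary decision tree:

    ```python
    # Sketch: overall classification accuracy from 10-fold cross-validation.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic 3-class data standing in for cardiotocogram feature vectors
    X, y = make_classification(n_samples=600, n_features=21, n_informative=10,
                               n_classes=3, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(f"mean accuracy: {scores.mean():.1%} (+/- {scores.std():.1%})")
    ```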

  8. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-01-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  9. Accuracy in prescriptions compounded by pharmacy students.

    PubMed

    Shrewsbury, R P; Deloatch, K H

    1998-01-01

    Most compounded prescriptions are not analyzed to determine the accuracy of the employed instruments and procedures. The assumption is that the compounded prescription will be within +/- 5% of the labeled claim. Two classes of School of Pharmacy students who received repeated instruction and supervision on proper compounding techniques and procedures were assessed to determine their accuracy in compounding a diphenhydramine hydrochloride prescription. After two attempts, only 62% to 68% of the students could compound the prescription within +/- 5% of the labeled claim, but 84% to 96% could attain an accuracy of +/- 10%. The results suggest that an accuracy of +/- 10% of the labeled claim is the least variation a pharmacist can expect when extemporaneously compounding prescriptions.

  10. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  11. Measures of Diagnostic Accuracy: Basic Definitions

    PubMed Central

    Šimundić, Ana-Maria

    2009-01-01

    Diagnostic accuracy relates to the ability of a test to discriminate between the target condition and health. This discriminative potential can be quantified by measures of diagnostic accuracy such as sensitivity and specificity, predictive values, likelihood ratios, the area under the ROC curve, Youden's index and the diagnostic odds ratio. Different measures of diagnostic accuracy relate to different aspects of the diagnostic procedure: while some measures are used to assess the discriminative property of the test, others are used to assess its predictive ability. Measures of diagnostic accuracy are not fixed indicators of test performance; some are very sensitive to the disease prevalence, while others are sensitive to the spectrum and definition of the disease. Furthermore, measures of diagnostic accuracy are extremely sensitive to the design of the study. Studies not meeting strict methodological standards usually over- or under-estimate the indicators of test performance and limit the applicability of the results of the study. The STARD initiative was a very important step toward improving the quality of reporting of studies of diagnostic accuracy. The STARD statement should be included in the Instructions to Authors by scientific journals, and authors should be encouraged to use the checklist whenever reporting their studies on diagnostic accuracy. Such efforts could make a substantial difference in the quality of reporting of studies of diagnostic accuracy and serve to provide the best possible evidence for patient care. This brief review outlines some basic definitions and characteristics of the measures of diagnostic accuracy.
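
    The measures listed are all simple functions of a 2x2 table; a sketch with illustrative counts:

    ```python
    # Sketch: basic measures of diagnostic accuracy from a 2x2 table.
    # Note that PPV and NPV depend on disease prevalence in the study sample.
    tp, fp, fn, tn = 90, 20, 10, 180

    sens = tp / (tp + fn)                  # sensitivity
    spec = tn / (tn + fp)                  # specificity
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    lr_pos = sens / (1 - spec)             # positive likelihood ratio
    lr_neg = (1 - sens) / spec             # negative likelihood ratio
    youden = sens + spec - 1               # Youden's index
    dor = lr_pos / lr_neg                  # diagnostic odds ratio

    print(f"Se={sens:.2f} Sp={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f} "
          f"LR+={lr_pos:.1f} LR-={lr_neg:.2f} J={youden:.2f} DOR={dor:.1f}")
    ```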

  12. Variable selection procedures before partial least squares regression enhance the accuracy of milk fatty acid composition predicted by mid-infrared spectroscopy.

    PubMed

    Gottardo, P; Penasa, M; Lopez-Villalobos, N; De Marchi, M

    2016-10-01

    Mid-infrared spectroscopy is a high-throughput technique that allows the prediction of milk quality traits on a large scale. The accuracy of prediction achievable using partial least squares (PLS) regression is usually high for fatty acids (FA) that are more abundant in milk, whereas it decreases for FA that are present in low concentrations. Two variable selection methods, uninformative variable elimination or a genetic algorithm combined with PLS regression, were used in the present study to investigate their effect on the accuracy of prediction equations for milk FA profile expressed either as a concentration on total identified FA or as a concentration in milk. For FA expressed on total identified FA, the coefficient of determination of cross-validation from PLS alone was low (0.25) for the prediction of polyunsaturated FA and medium (0.70) for saturated FA. The coefficient of determination increased to 0.54 and 0.95 for polyunsaturated and saturated FA, respectively, when FA were expressed on a milk basis and using PLS alone. Both algorithms applied before PLS regression improved the accuracy of prediction for FA, especially for FA that are usually difficult to predict; for example, the improvement with respect to PLS regression alone ranged from 9 to 80%. In general, FA were better predicted when their concentrations were expressed on a milk basis. These results might favor the use of prediction equations in the dairy industry for genetic purposes and payment systems. PMID:27522434
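
    A minimal sketch of variable selection before PLS regression, with a simple correlation filter standing in for the paper's uninformative variable elimination and genetic algorithm, and synthetic spectra in place of mid-infrared data:

    ```python
    # Sketch: compare cross-validated R2 of PLS on the full set of spectral
    # variables versus a filtered subset. Data are synthetic: the response
    # depends on a small band of variables, mimicking informative wavelengths.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(4)
    n, w = 200, 400                                # samples x variables
    X = rng.normal(size=(n, w))
    y = X[:, 50:60].sum(axis=1) + rng.normal(0, 2.0, n)

    def r2_cv(Xs):
        pred = cross_val_predict(PLSRegression(n_components=5), Xs, y, cv=10)
        return 1 - np.sum((y - pred.ravel())**2) / np.sum((y - y.mean())**2)

    corr = np.abs(np.corrcoef(X.T, y)[-1, :-1])    # |r| of each variable with y
    keep = corr > np.quantile(corr, 0.90)          # retain top 10% of variables
    print(f"R2cv full: {r2_cv(X):.2f}; after selection: {r2_cv(X[:, keep]):.2f}")
    ```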

  13. Canopy Temperature and Vegetation Indices from High-Throughput Phenotyping Improve Accuracy of Pedigree and Genomic Selection for Grain Yield in Wheat

    PubMed Central

    Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi

    2016-01-01

    Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362

  14. A Function Accounting for Training Set Size and Marker Density to Model the Average Accuracy of Genomic Prediction

    PubMed Central

    Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner

    2013-01-01

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design, based on training set size, reliability of phenotypes, and the number of independent chromosome segments (M_e). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5,698 Holstein Friesian bulls genotyped with 50K SNPs and 1,332 Brown Swiss bulls genotyped with 50K SNPs and imputed to ∼600K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on an assumption about the maximum achievable accuracy. The proportion of genetic variance captured by the complete SNP sets was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific, and which was found to be reached with ∼20,000 SNPs in the Brown Swiss population studied. PMID:24339895
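
    For reference, the deterministic equation of Daetwyler et al. (2010) that the study modifies is commonly written as below (N_P: training set size; h^2: reliability of phenotypes; M_e: number of independent chromosome segments); the study's empirically fitted weighting factor w is not reproduced here, since its exact form is not recoverable from this abstract:

    ```latex
    % Expected accuracy of genomic breeding values, Daetwyler et al. (2010)
    r = \sqrt{\frac{N_P \, h^2}{N_P \, h^2 + M_e}}
    ```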

  15. Accuracy of Carbohydrate Counting in Adults.

    PubMed

    Meade, Lisa T; Rushton, Wanda E

    2016-07-01

    In Brief: This study investigates carbohydrate counting accuracy in patients using insulin through a multiple daily injection regimen or continuous subcutaneous insulin infusion. The average accuracy test score for all patients was 59%. The carbohydrate test in this study can be used to emphasize the importance of carbohydrate counting to patients and to provide ongoing education. PMID:27621531

  16. Scientific Sources' Perception of Network News Accuracy.

    ERIC Educational Resources Information Center

    Moore, Barbara; Singletary, Michael

    Recent polls seem to indicate that many Americans rely on television as a credible and primary source of news. To test the accuracy of this news, a study examined three networks' newscasts of science news, the attitudes of the science sources toward reporting in their field, and the factors related to accuracy. The Vanderbilt News Archives Index…

  17. Accuracy of Parent Identification of Stuttering Occurrence

    ERIC Educational Resources Information Center

    Einarsdottir, Johanna; Ingham, Roger

    2009-01-01

    Background: Clinicians rely on parents to provide information regarding the onset and development of stuttering in their own children. The accuracy and reliability of their judgments of stuttering are therefore important and not well researched. Aim: To investigate the accuracy of parent judgements of stuttering in their own children's speech…

  18. Accuracy assessment of GPS satellite orbits

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    GPS orbit accuracy is examined using several evaluation procedures. The existence of unmodeled effects that correlate with the eclipsing of the sun is shown. The ability to obtain geodetic results that show an accuracy of 1-2 parts in 10^8 or better has not diminished.

  19. Stereotype Accuracy: Toward Appreciating Group Differences.

    ERIC Educational Resources Information Center

    Lee, Yueh-Ting, Ed.; And Others

    The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…

  20. Towards Arbitrary Accuracy Inviscid Surface Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Hixon, Ray

    2002-01-01

    Inviscid nonlinear surface boundary conditions are currently limited to third order accuracy in time for non-moving surfaces and actually reduce to first order in time when the surfaces move. For steady-state calculations it may be possible to achieve higher accuracy in space, but high accuracy in time is required for efficient simulation of multiscale unsteady phenomena. A surprisingly simple technique is shown here that can be used to correct the normal pressure derivatives of the flow at a surface on a Cartesian grid so that arbitrarily high order time accuracy is achieved in idealized cases. This work demonstrates that nonlinear high order time accuracy at a solid surface is possible and desirable, but it also shows that the current practice of only correcting the pressure is inadequate.

  1. Accuracy and precision of manual baseline determination.

    PubMed

    Jirasek, A; Schulze, G; Yu, M M L; Blades, M W; Turner, R F B

    2004-12-01

    Vibrational spectra often require baseline removal before further data analysis can be performed. Manual (i.e., user) baseline determination and removal is a common technique used to perform this operation. Currently, little data exists that details the accuracy and precision that can be expected with manual baseline removal techniques. This study addresses this lack of data. One hundred spectra of varying signal-to-noise ratio (SNR), signal-to-baseline ratio (SBR), baseline slope, and spectral congestion were constructed, and baselines were subtracted by 16 volunteers who were categorized as being either experienced or inexperienced in baseline determination. In total, 285 baseline determinations were performed. The general level of accuracy and precision that can be expected for manually determined baselines from spectra of varying SNR, SBR, baseline slope, and spectral congestion is established. Furthermore, the effects of user experience on the accuracy and precision of baseline determination are estimated. The interactions between the above factors in affecting the accuracy and precision of baseline determination are highlighted. Where possible, the functional relationships between accuracy, precision, and the given spectral characteristic are detailed. The results provide users of manual baseline determination with useful guidelines for establishing limits of accuracy and precision, as well as highlighting conditions that confound its accuracy and precision.

  2. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones if existent or relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both the Dice coefficient and the Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only can the measurement of individual users change, but the ranking of users' segmentation skills may also require reordering.
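
    A minimal sketch of the standard Dice and Jaccard measures, together with a zone-weighted Dice variant in the spirit of the proposal; the masks and zone weights are illustrative assumptions, not the paper's definition of the anatomy-aware extensions:

    ```python
    # Sketch: Dice and Jaccard for binary masks, plus a weighted Dice in which
    # voxels inside critical anatomical zones count more. Synthetic masks.
    import numpy as np

    rng = np.random.default_rng(5)
    gt = rng.random((64, 64)) > 0.5            # ground-truth mask
    seg = gt ^ (rng.random((64, 64)) > 0.9)    # segmentation, ~10% voxels flipped
    wmap = np.where(rng.random((64, 64)) > 0.8, 3.0, 1.0)  # zones weighted 3x

    def dice(a, b, w=None):
        w = np.ones(a.shape) if w is None else w
        inter = (w * (a & b)).sum()
        return 2 * inter / ((w * a).sum() + (w * b).sum())

    def jaccard(a, b, w=None):
        d = dice(a, b, w)
        return d / (2 - d)                     # J = D / (2 - D)

    print(f"Dice={dice(gt, seg):.3f}  weighted Dice={dice(gt, seg, wmap):.3f}  "
          f"Jaccard={jaccard(gt, seg):.3f}")
    ```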

  3. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  4. Discrimination in measures of knowledge monitoring accuracy

    PubMed Central

    Was, Christopher A.

    2014-01-01

    Knowledge monitoring predicts academic outcomes in many contexts. However, measures of knowledge monitoring accuracy are often incomplete. In the current study, a measure of students' ability to discriminate known from unknown information as a component of knowledge monitoring was considered. Undergraduate students' knowledge monitoring accuracy was assessed and used to predict final exam scores in a specific course. It was found that gamma, the measure commonly used for knowledge monitoring accuracy, accounted for a small but significant amount of variance in academic performance, whereas the discrimination and bias indexes combined to account for a greater amount of variance in academic performance. PMID:25339979
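
    For reference, a sketch of the Goodman-Kruskal gamma over (judgment, outcome) pairs, G = (C - D)/(C + D) with C concordant and D discordant pairs; the data are illustrative:

    ```python
    # Sketch: Goodman-Kruskal gamma for knowledge monitoring accuracy,
    # computed over all pairs of items; ties contribute to neither count.
    from itertools import combinations

    judgments = [80, 60, 90, 40, 70, 30, 85, 50]   # predicted knowing (0-100)
    outcomes = [1, 0, 1, 0, 1, 0, 1, 1]            # actual test result (0/1)

    conc = disc = 0
    for i, j in combinations(range(len(outcomes)), 2):
        s = (judgments[i] - judgments[j]) * (outcomes[i] - outcomes[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    gamma = (conc - disc) / (conc + disc)
    print(f"gamma = {gamma:+.2f}")
    ```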

  5. Accuracy and consistency of modern elastomeric pumps.

    PubMed

    Weisman, Robyn S; Missair, Andres; Pham, Phung; Gutierrez, Juan F; Gebhard, Ralf E

    2014-01-01

    Continuous peripheral nerve blockade has become a popular method of achieving postoperative analgesia for many surgical procedures. The safety and reliability of infusion pumps are dependent on their flow rate accuracy and consistency. Knowledge of pump rate profiles can help physicians determine which infusion pump is best suited for their clinical applications and specific patient population. Several studies have investigated the accuracy of portable infusion pumps. Using methodology similar to that used by Ilfeld et al., we investigated the accuracy and consistency of several current elastomeric pumps. PMID:25140510

  6. High accuracy calibration of the fiber spectroradiometer

    NASA Astrophysics Data System (ADS)

    Wu, Zhifeng; Dai, Caihong; Wang, Yanfei; Chen, Binhua

    2014-11-01

    Compared to large scanning spectroradiometers, the compact and convenient fiber spectroradiometer is widely used in many fields, such as remote sensing, aerospace monitoring, and solar irradiance measurement. High accuracy calibration should be performed before use, which involves the wavelength accuracy, the background environment noise, the nonlinear effect, the bandwidth, the stray light, and other factors. A wavelength lamp and a tungsten lamp are frequently used to calibrate the fiber spectroradiometer. The wavelength difference can be easily reduced through software or calculation. However, the nonlinear effect and the bandwidth can significantly affect the measurement accuracy.

  7. Measuring the Accuracy of Diagnostic Systems.

    ERIC Educational Resources Information Center

    Swets, John A.

    1988-01-01

    Discusses the relative operating characteristic analysis of signal detection theory as a measure of diagnostic accuracy. Reports representative values of this measure in several fields. Compares how problems in these fields are handled. (CW)

  8. Empathic Embarrassment Accuracy in Autism Spectrum Disorder.

    PubMed

    Adler, Noga; Dvash, Jonathan; Shamay-Tsoory, Simone G

    2015-06-01

    Empathic accuracy refers to the ability of perceivers to accurately share the emotions of protagonists. Using a novel task assessing embarrassment, the current study sought to compare levels of empathic embarrassment accuracy among individuals with autism spectrum disorders (ASD) with those of matched controls. To assess empathic embarrassment accuracy, we compared the level of embarrassment experienced by protagonists to the embarrassment felt by participants while watching the protagonists. The results show that while the embarrassment ratings of participants and protagonists were highly matched among controls, individuals with ASD failed to exhibit this matching effect. Furthermore, individuals with ASD rated their embarrassment higher than controls when viewing themselves and protagonists on film, but not while performing the task itself. These findings suggest that individuals with ASD tend to have higher ratings of empathic embarrassment, perhaps due to difficulties in emotion regulation that may account for their impaired empathic accuracy and aberrant social behavior. PMID:25732043

  9. Critical thinking and accuracy of nurses' diagnoses.

    PubMed

    Lunney, Margaret

    2003-01-01

    Interpretations of patient data are complex and diverse, contributing to a risk of low accuracy nursing diagnoses. This risk is confirmed in research findings that accuracy of nurses' diagnoses varied widely from high to low. Highly accurate diagnoses are essential, however, to guide nursing interventions for the achievement of positive health outcomes. Development of critical thinking abilities is likely to improve accuracy of nurses' diagnoses. New views of critical thinking serve as a basis for critical thinking in nursing. Seven cognitive skills and ten habits of mind are identified as dimensions of critical thinking for use in the diagnostic process. Application of the cognitive skills of critical thinking illustrates the importance of using critical thinking for accuracy of nurses' diagnoses. Ten strategies are proposed for self-development of critical thinking abilities.

  10. Optimal design of robot accuracy compensators

    SciTech Connect

    Zhuang, H.; Roth, Z.S. . Robotics Center and Electrical Engineering Dept.); Hamano, Fumio . Dept. of Electrical Engineering)

    1993-12-01

    The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that actual kinematic parameters of a robot be previously identified. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
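
    A minimal sketch of the damped least-squares correction step, delta_q = J^T (J J^T + lambda^2 I)^(-1) delta_x, which stays well-behaved near singular configurations; the Jacobian, pose error, and damping factor below are illustrative:

    ```python
    # Sketch: DLS computation of additive joint-command corrections from an
    # end-effector pose error; illustrative 2x3 Jacobian, nearly singular.
    import numpy as np

    J = np.array([[0.8, 0.1, 0.0],
                  [0.0, 0.0, 0.02]])
    dx = np.array([0.010, 0.005])   # pose error (m) from identified kinematics
    lam = 0.05                      # damping factor

    dq = J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(J.shape[0]), dx)
    print("joint corrections:", np.round(dq, 5))
    ```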

  11. Sun-pointing programs and their accuracy

    SciTech Connect

    Zimmerman, J.C.

    1981-05-01

    Several sun-pointing programs and their accuracy are described. FORTRAN program listings are given. Program descriptions are given for both Hewlett-Packard (HP-67) and Texas Instruments (TI-59) hand-held calculators.

  12. Accuracy potentials for large space antenna structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1980-01-01

    The relationships among materials selection, truss design, and manufacturing techniques in the interest of surface accuracies for large space antennas are discussed. Among the antenna configurations considered are: tetrahedral truss, pretensioned truss, and geodesic dome and radial rib structures. Comparisons are made of the accuracy achievable by truss and dome structure types for a wide variety of diameters, focal lengths, and wavelength of radiated signal, taking into account such deforming influences as solar heating-caused thermal transients and thermal gradients.

  13. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number

  14. Cross-Validation of the Self-Motivation Inventory.

    ERIC Educational Resources Information Center

    Heiby, Elaine M.; And Others

    Because the literature suggests that aerobic exercise is associated with physical health and psychological well-being, there is a concern with discovering how to improve adherence to such exercise. There is growing evidence that self-motivation, as measured by the Dishman Self-Motivation Inventory (SMI), is a predictor of adherence to regular…

  15. An L1 smoothing spline algorithm with cross validation

    NASA Astrophysics Data System (ADS)

    Bosworth, Ken W.; Lall, Upmanu

    1993-08-01

    We propose an algorithm for the computation of L1 (LAD) smoothing splines in the spaces W_M(D). We assume one is given data of the form y_i = f(t_i) + ε_i, i = 1, ..., N, with {t_i}_{i=1}^N ⊂ D, where the ε_i are errors with E(ε_i) = 0 and f is assumed to be in W_M. The LAD smoothing spline, for fixed smoothing parameter λ ≥ 0, is defined as the solution, s_λ, of the optimization problem: minimize over g the objective (1/N) Σ_{i=1}^N |y_i − g(t_i)| + λ J_M(g), where J_M(g) is the seminorm consisting of the sum of the squared L2 norms of the Mth partial derivatives of g. Such an LAD smoothing spline, s_λ, would be expected to give robust smoothed estimates of f in situations where the ε_i are from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing s_λ is given which is based on considering a sequence of quadratic programming (QP) problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily if a good initial point is available. The "data driven" selection of the smoothing parameter is achieved by minimizing a cross-validation score CV(λ). The combined LAD-CV smoothing spline algorithm is a continuation scheme in λ ↘ 0 applied to the above QPs parametrized in λ, with the optimal smoothing parameter taken to be that value of λ at which the CV(λ) score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.
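
    A minimal numerical sketch of the penalized-LAD idea above: a uniform one-dimensional grid, an iteratively reweighted least-squares (IRLS) approximation standing in for the authors' quadratic-programming continuation scheme, and a K-fold CV(λ) score with the same "stop when CV(λ) first increases" rule. All names and data are illustrative, not the paper's implementation.

        import numpy as np

        def lad_smooth(y, lam, mask=None, n_iter=60, eps=1e-6):
            # IRLS approximation to: (1/N) sum_i |y_i - g_i| + lam * sum (second diff of g)^2
            # mask: 1 = point enters the fidelity term, 0 = held out (used for CV)
            n = len(y)
            D = np.diff(np.eye(n), n=2, axis=0)          # second-difference operator
            P = D.T @ D                                  # roughness penalty matrix
            m = np.ones(n) if mask is None else np.asarray(mask, float)
            g = np.full(n, y.mean())
            for _ in range(n_iter):
                w = m / np.maximum(np.abs(y - g), eps)   # reweighting of the L1 loss
                g = np.linalg.solve(np.diag(w) + lam * P, w * y)
            return g

        def cv_score(y, lam, k=5, seed=0):
            folds = np.random.default_rng(seed).permutation(len(y)) % k
            err = 0.0
            for f in range(k):
                g = lad_smooth(y, lam, mask=(folds != f))
                err += np.abs(y[folds == f] - g[folds == f]).sum()
            return err / len(y)

        # continuation in decreasing lambda: stop when the CV score first increases
        rng = np.random.default_rng(1)
        y = np.sin(np.linspace(0, 3 * np.pi, 200)) + rng.laplace(0, 0.3, 200)  # heavy-tailed noise
        lams, scores = np.logspace(2, -4, 25), []
        for lam in lams:
            scores.append(cv_score(y, lam))
            if len(scores) > 1 and scores[-1] > scores[-2]:
                break
        lam_opt = lams[int(np.argmin(scores))]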

  16. Evaluation and cross-validation of Environmental Models

    NASA Astrophysics Data System (ADS)

    Lemaire, Joseph

    Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a Commission of professional experts appointed by an established International Union or Association (e.g. IAGA for Geomagnetism and Aeronomy, . . . ) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given, indicating that different values for the Earth radius have been employed in different data processing laboratories, institutes or agencies to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to the other, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model which had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more of them to be uncovered by careful, independent examination and benchmarking. A case in point is the metre prototype, the standard unit of length, which was adopted on 20 May 1875 during the Diplomatic Conference of the Metre and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent wild, uncontrolled dissemination of pseudo Environmental Models and Standards. Indeed, unless validation tests have been performed, there is no guaranty, a priori, that all models on the market place have been built consistently within the same system of units, or that they are based on identical definitions of the coordinate systems, etc. Therefore, preliminary analyses should be carried out under the control and authority of an established international professional Organization or Association, before any final political decision is made by ISO to select specific Environmental Models, like, for example, IGRF and DGRF. Of course, Commissions responsible for checking the consistency of definitions, methods and algorithms for data processing might consider delegating specific tasks (e.g. bench-marking the technical tools, the calibration procedures, the methods of data analysis, and the software algorithms employed in building the different types of models, as well as their usage) to private, intergovernmental or international organizations/agencies (e.g. NASA, ESA, AGU, EGU, COSPAR, . . . ); eventually, the latter should report conclusions to the Commission members appointed by IAGA or any established authority like IUGG.

  17. [Cross validity of the UCLA Loneliness Scale factorization].

    PubMed

    Borges, Africa; Prieto, Pedro; Ricchetti, Giacinto; Hernández-Jorge, Carmen; Rodríguez-Naveiras, Elena

    2008-11-01

    Loneliness is an unpleasant experience that takes place when a person's network of social relationships is significantly deficient in quality and quantity, and it is associated with negative feelings. Loneliness is a fundamental construct that provides information about several psychological processes, especially in the clinical setting. It is well known that this construct is related to isolation and emotional loneliness. One of the most well-known psychometric instruments to measure loneliness is the revised UCLA Loneliness Scale, which has been factorized in several populations. A controversial issue related to the UCLA Loneliness Scale is its factor structure, because the test was first created based on a unidimensional structure; however, subsequent research has proved that its structure may be bipolar or even multidimensional. In the present work, the UCLA Loneliness Scale was completed by two populations: Spanish and Italian undergraduate university students. Results show a multifactorial structure in both samples. This research presents a theoretically and analytically coherent bifactorial structure. PMID:18940104

  18. Improving protein-protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Chen, Xing; Yan, Gui-Ying; Hu, Ji-Pu

    2016-10-01

    Predicting protein-protein interactions (PPIs) is a challenging task and essential to construct the protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPIs detection from protein sequences. The major improvements include: (1) protein sequences are represented using the Bi-gram Probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) to reduce the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments were executed on the yeast and Helicobacter pylori datasets, achieving very high accuracies of 94.57% and 90.57%, respectively. Experimental results are significantly better than those of previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can serve as an automatic decision support tool for future
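
    A rough sketch of the feature pipeline described above, assuming the common bi-gram PSSM formulation B[m, n] = Σ_i P[i, m]·P[i+1, n] (a 400-dimensional vector per sequence); scikit-learn ships no relevance vector machine, so an SVM stands in for the RVM step, and the PSSMs and labels below are random placeholders.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def bigp_features(pssm):
            # pssm: (L, 20) position-specific scoring matrix for one sequence;
            # bi-gram matrix B[m, n] = sum_i pssm[i, m] * pssm[i+1, n], flattened
            return (pssm[:-1].T @ pssm[1:]).ravel()      # 20 x 20 -> 400-d vector

        rng = np.random.default_rng(0)
        pssms = [rng.random((int(rng.integers(50, 300)), 20)) for _ in range(200)]  # placeholders
        X = np.array([bigp_features(p) for p in pssms])
        y = rng.integers(0, 2, len(X))                   # placeholder interaction labels

        clf = make_pipeline(StandardScaler(), PCA(n_components=50), SVC())  # SVC in lieu of RVM
        print(cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())  # 5-fold CV accuracy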

  20. Effect of flying altitude, scanning angle and scanning mode on the accuracy of ALS based forest inventory

    NASA Astrophysics Data System (ADS)

    Keränen, Juha; Maltamo, Matti; Packalen, Petteri

    2016-10-01

    Airborne laser scanning (ALS) is a widely used technology in the mapping of the environment and forests. Data acquisition costs and the accuracy of the forest inventory are closely dependent on some extrinsic parameters of the ALS survey. These parameters were assessed in numerous studies about a decade ago, but ALS devices have developed since then, and it is possible that previous findings do not hold true with newer technology. That is why the effect of flying altitude (2000, 2500 or 3000 m), scanning angle (±15° and ±20° off nadir) and scanning mode (single and multiple pulses in air) on the area-based approach was studied here, using a Leica ALS70 HA laser scanner. The study was conducted in a managed pine-dominated forest area in Finland, where eight separate discrete-return ALS datasets were acquired. The comparison of datasets was based on the bootstrap approach with 5-fold cross-validation. Results indicated that the narrower scanning angle (±15°, i.e. 30° in total) led to slightly more accurate estimates of plot volume (RMSE%: 21-24 vs. 22.5-25) and mean height (RMSE%: 8.5-11 vs. 9-12). We also tested the use case where the models are constructed using one dataset and then applied to other datasets gathered with different parameters. The most accurate models were identified using the bootstrap approach and applied to the different datasets with and without refitting. The bias increased without refitting the models (bias%: volume 0 ± 10, mean height 0 ± 3), but in most cases the results did not differ much in terms of RMSE%. This confirms previous observations that models should only be used for datasets collected under similar data acquisition conditions. We also calculated the proportions of echoes as a function of height for different echo categories. This indicated that the accuracy of the inventory is affected more by the height distribution than by the proportions of echo categories.
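
    A minimal sketch of the figures of merit quoted above, with relative RMSE (RMSE%) and relative bias (bias%) computed from 5-fold cross-validated predictions; the plot metrics and volumes are placeholders, the paper's additional bootstrapping step is omitted, and the sign convention for bias varies between studies.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_predict

        def rmse_pct(obs, pred):
            # relative RMSE: 100 * RMSE / mean observed value
            return 100 * np.sqrt(np.mean((obs - pred) ** 2)) / np.mean(obs)

        def bias_pct(obs, pred):
            # relative bias: 100 * mean(pred - obs) / mean observed value
            return 100 * np.mean(pred - obs) / np.mean(obs)

        rng = np.random.default_rng(0)
        X = rng.random((100, 3))                      # placeholder ALS height metrics per plot
        vol = 200 * X[:, 0] + 30 * rng.random(100)    # placeholder plot volumes (m3/ha)
        pred = cross_val_predict(LinearRegression(), X, vol, cv=5)
        print(rmse_pct(vol, pred), bias_pct(vol, pred))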

  1. Effect of species rarity on the accuracy of species distribution models for reptiles and amphibians in southern California

    USGS Publications Warehouse

    Franklin, J.; Wejnert, K.E.; Hathaway, S.A.; Rochester, C.J.; Fisher, R.N.

    2009-01-01

    Aim: Several studies have found that more accurate predictive models of species' occurrences can be developed for rarer species; however, one recent study found the relationship between range size and model performance to be an artefact of sample prevalence, that is, the proportion of presence versus absence observations in the data used to train the model. We examined the effect of model type, species rarity class, species' survey frequency, detectability and manipulated sample prevalence on the accuracy of distribution models developed for 30 reptile and amphibian species. Location: Coastal southern California, USA. Methods: Classification trees, generalized additive models and generalized linear models were developed using species presence and absence data from 420 locations. Model performance was measured using sensitivity, specificity and the area under the curve (AUC) of the receiver-operating characteristic (ROC) plot based on twofold cross-validation, or on bootstrapping. Predictors included climate, terrain, soil and vegetation variables. Species were assigned to rarity classes by experts. The data were sampled to generate subsets with varying ratios of presences and absences to test for the effect of sample prevalence. Join count statistics were used to characterize spatial dependence in the prediction errors. Results: Species in classes with higher rarity were more accurately predicted than common species, and this effect was independent of sample prevalence. Although positive spatial autocorrelation remained in the prediction errors, it was weaker than was observed in the species occurrence data. The differences in accuracy among model types were slight. Main conclusions: Using a variety of modelling methods, more accurate species distribution models were developed for rarer than for more common species. This was presumably because it is difficult to discriminate suitable from unsuitable habitat for habitat generalists, and not as an artefact of the
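
    A sketch of the sample-prevalence manipulation described above: resample presences and absences to a target ratio, then score a GLM-type model with a twofold cross-validated AUC. The predictors, occurrences and resampling rule are placeholders, not the study's data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def resample_prevalence(X, y, prevalence, n, seed=0):
            # draw n records with the requested fraction of presences (y == 1)
            rng = np.random.default_rng(seed)
            pres = rng.choice(np.flatnonzero(y == 1), int(round(n * prevalence)), replace=True)
            absn = rng.choice(np.flatnonzero(y == 0), n - len(pres), replace=True)
            idx = np.concatenate([pres, absn])
            return X[idx], y[idx]

        rng = np.random.default_rng(1)
        X = rng.random((420, 6))                                  # placeholder climate/terrain/soil predictors
        y = (X[:, 0] + 0.3 * rng.random(420) > 0.9).astype(int)   # placeholder species occurrences

        for prev in (0.1, 0.3, 0.5):
            Xs, ys = resample_prevalence(X, y, prev, 300)
            auc = cross_val_score(LogisticRegression(max_iter=1000), Xs, ys,
                                  cv=2, scoring="roc_auc").mean()
            print(f"prevalence {prev:.1f}: twofold CV AUC = {auc:.2f}")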

  2. A strategy for multivariate calibration based on modified single-index signal regression: Capturing explicit non-linearity and improving prediction accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyu; Li, Qingbo; Zhang, Guangjun

    2013-11-01

    In this paper, a modified single-index signal regression (mSISR) method is proposed to construct a nonlinear and practical model with high accuracy. The mSISR method defines the optimal penalty tuning parameter in P-spline signal regression (PSR) as the initial tuning parameter and chooses the number of cycles based on minimizing the root mean squared error of cross-validation (RMSECV). mSISR is superior to single-index signal regression (SISR) in terms of accuracy, computation time and convergence, and it can characterize the non-linearity between spectra and responses in a more precise manner than SISR. Two spectral data sets from basic research experiments, including plant chlorophyll nondestructive measurement and human blood glucose noninvasive measurement, are employed to illustrate the advantages of mSISR. The results indicate that the mSISR method (i) obtains a smooth and helpful regression coefficient vector, (ii) explicitly exhibits the type and amount of the non-linearity, (iii) can take advantage of nonlinear features of the signals to improve prediction performance and (iv) has distinct adaptability for complex spectral models by comparison with other calibration methods. It is validated that mSISR is a promising nonlinear modeling strategy for multivariate calibration.

  3. Towards Experimental Accuracy from the First Principles

    NASA Astrophysics Data System (ADS)

    Polyansky, O. L.; Lodi, L.; Tennyson, J.; Zobov, N. F.

    2013-06-01

    Producing ab initio ro-vibrational energy levels of small, gas-phase molecules with an accuracy of 0.10 cm^{-1} would constitute a significant step forward in theoretical spectroscopy and would place calculated line positions considerably closer to typical experimental accuracy. Such an accuracy has recently been achieved for the H_3^+ molecular ion for line positions up to 17 000 cm^{-1}. However, since H_3^+ is a two-electron system, the electronic structure methods used in that study are not applicable to larger molecules. A major breakthrough was reported in ref., where an accuracy of 0.10 cm^{-1} was achieved ab initio for seven water isotopologues. Calculated vibrational and rotational energy levels up to 15 000 cm^{-1} and J=25 resulted in a standard deviation of 0.08 cm^{-1} with respect to accurate reference data. As far as line intensities are concerned, we have already achieved for water a typical accuracy of 1%, which surpasses average experimental accuracy. Our results are being actively extended along two major directions. First, there are clear indications that our results for water can be improved to an accuracy of the order of 0.01 cm^{-1} by further, detailed ab initio studies. Such a level of accuracy would already be competitive with experimental results in some situations. A second, major direction of study is the extension of such 0.1 cm^{-1} accuracy to molecules containing more electrons, or more than one non-hydrogen atom, or both. As examples of such developments we will present new results for CO, HCN and H_2S, as well as preliminary results for NH_3 and CH_4. O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky and A.G. Csaszar, Phil. Trans. Royal Soc. London A, 370, 5014-5027 (2012). O.L. Polyansky, R.I. Ovsyannikov, A.A. Kyuberis, L. Lodi, J. Tennyson and N.F. Zobov, J. Phys. Chem. A, (in press). L. Lodi, J. Tennyson and O.L. Polyansky, J. Chem. Phys. 135, 034113 (2011).

  4. Accuracy of polyp localization at colonoscopy

    PubMed Central

    O’Connor, Sam A.; Hewett, David G.; Watson, Marcus O.; Kendall, Bradley J.; Hourigan, Luke F.; Holtmann, Gerald

    2016-01-01

    Background and study aims: Accurate documentation of lesion localization at the time of colonoscopic polypectomy is important for future surveillance, management of complications such as delayed bleeding, and for guiding surgical resection. We aimed to assess the accuracy of endoscopic localization of polyps during colonoscopy and examine variables that may influence this accuracy. Patients and methods: We conducted a prospective observational study in consecutive patients presenting for elective, outpatient colonoscopy. All procedures were performed by Australian certified colonoscopists. The endoscopic location of each polyp was reported by the colonoscopist at the time of resection and prospectively recorded. Magnetic endoscope imaging was used to determine polyp location, and colonoscopists were blinded to this image. Three experienced colonoscopists, blinded to the endoscopist's assessment of polyp location, independently scored the magnetic endoscope images to obtain a reference standard for polyp location (Cronbach alpha 0.98). The accuracy of colonoscopist polyp localization using this reference standard was assessed, and colonoscopist, procedural and patient variables affecting accuracy were evaluated. Results: A total of 155 patients were enrolled and 282 polyps were resected in 95 patients by 14 colonoscopists. The overall accuracy of polyp localization was 85% (95% confidence interval, CI: 60-96%). Accuracy varied significantly (P < 0.001) by colonic segment: caecum 100%, ascending 77% (CI: 65-90%), transverse 84% (CI: 75-92%), descending 56% (CI: 32-81%), sigmoid 88% (CI: 79-97%), rectum 96% (CI: 90-101%). There were significant differences in accuracy between colonoscopists (P < 0.001), and colonoscopist experience was a significant independent predictor of accuracy (OR 3.5, P = 0.028) after adjustment for patient and procedural variables. Conclusions: Accuracy of

  5. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor-quality training data (e.g., sparse or unrepresentative data) is widely suspected to be one cause of the disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  6. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multiplate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  7. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^{-15} for periods of 30-100 days.

  8. A multiscale decomposition approach to detect abnormal vasculature in the optic disc.

    PubMed

    Agurto, Carla; Yu, Honggang; Murray, Victor; Pattichis, Marios S; Nemeth, Sheila; Barriga, Simon; Soliz, Peter

    2015-07-01

    This paper presents a multiscale method to detect neovascularization in the optic disc (NVD) using fundus images. Our method is applied to a manually selected region of interest (ROI) containing the optic disc. All the vessels in the ROI are segmented by adaptively combining contrast enhancement methods with a vessel segmentation technique. Textural features are extracted using multiscale amplitude-modulation frequency-modulation, morphological granulometry, and fractal dimension. A linear SVM is used to perform the classification, which is tested by means of 10-fold cross-validation. The performance is evaluated on 300 images, achieving an AUC of 0.93 with a maximum accuracy of 88%. PMID:25698545
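
    A minimal sketch of the evaluation protocol above (a linear SVM scored by 10-fold cross-validation for AUC and accuracy), with random placeholder vectors standing in for the AM-FM, granulometry and fractal-dimension features.

        import numpy as np
        from sklearn.model_selection import cross_validate
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        X = rng.random((300, 48))        # placeholder texture features, one row per image ROI
        y = rng.integers(0, 2, 300)      # 1 = neovascularization present (placeholder labels)

        clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000))
        scores = cross_validate(clf, X, y, cv=10, scoring=("roc_auc", "accuracy"))
        print(scores["test_roc_auc"].mean(), scores["test_accuracy"].mean())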

  10. Decreased interoceptive accuracy following social exclusion.

    PubMed

    Durlik, Caroline; Tsakiris, Manos

    2015-04-01

    The need for social affiliation is one of the most important and fundamental human needs. Unsurprisingly, humans display strong negative reactions to social exclusion. In the present study, we investigated the effect of social exclusion on interoceptive accuracy, that is, accuracy in detecting signals arising inside the body, measured with a heartbeat perception task. We manipulated exclusion using Cyberball, a widely used paradigm of a virtual ball-tossing game, with half of the participants being included during the game and the other half being ostracized. Our results indicated that heartbeat perception accuracy decreased in the excluded, but not in the included, participants. We discuss these results in the context of the overlap between social and physical pain, as well as in relation to internally versus externally oriented attention. PMID:25701592
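
    Heartbeat perception accuracy in such tasks is commonly quantified with a Schandry-style counting score; the abstract does not give the paper's exact formula, so the function below is an assumption for illustration.

        def heartbeat_accuracy(recorded, counted):
            # Schandry-type score, averaged over counting intervals:
            # 1 - |recorded - counted| / recorded; 1.0 = perfect accuracy
            return sum(1 - abs(r - c) / r for r, c in zip(recorded, counted)) / len(recorded)

        print(heartbeat_accuracy([32, 45, 60], [30, 40, 49]))   # ~0.88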

  11. Social class, contextualism, and empathic accuracy.

    PubMed

    Kraus, Michael W; Côté, Stéphane; Keltner, Dacher

    2010-11-01

    Recent research suggests that lower-class individuals favor explanations of personal and political outcomes that are oriented to features of the external environment. We extended this work by testing the hypothesis that, as a result, individuals of a lower social class are more empathically accurate in judging the emotions of other people. In three studies, lower-class individuals (compared with upper-class individuals) received higher scores on a test of empathic accuracy (Study 1), judged the emotions of an interaction partner more accurately (Study 2), and made more accurate inferences about emotion from static images of muscle movements in the eyes (Study 3). Moreover, the association between social class and empathic accuracy was explained by the tendency for lower-class individuals to explain social events in terms of features of the external environment. The implications of class-based patterns in empathic accuracy for well-being and relationship outcomes are discussed. PMID:20974714

  12. Size-Dependent Accuracy of Nanoscale Thermometers.

    PubMed

    Alicki, Robert; Leitner, David M

    2015-07-23

    The accuracy of two classes of nanoscale thermometers is estimated in terms of size and system-dependent properties using the spin-boson model. We consider solid state thermometers, where the energy splitting is tuned by thermal properties of the material, and fluorescent organic thermometers, in which the fluorescence intensity depends on the thermal population of conformational states of the thermometer. The results of the theoretical model compare well with the accuracy reported for several nanothermometers that have been used to measure local temperature inside living cells.

  13. Predictive accuracy in the neuroprediction of rearrest

    PubMed Central

    Aharoni, Eyal; Mallett, Joshua; Vincent, Gina M.; Harenski, Carla L.; Calhoun, Vince D.; Sinnott-Armstrong, Walter; Gazzaniga, Michael S.; Kiehl, Kent A.

    2014-01-01

    A recently published study by the present authors (Aharoni et al., 2013) reported evidence that functional changes in the anterior cingulate cortex (ACC) within a sample of 96 criminal offenders who were engaged in a Go/No-Go impulse control task significantly predicted their rearrest following release from prison. In an extended analysis, we use discrimination and calibration techniques to test the accuracy of these predictions relative to more traditional models and their ability to generalize to new observations in both full and reduced models. Modest to strong discrimination and calibration accuracy were found, providing additional support for the utility of neurobiological measures in predicting rearrest. PMID:24720689

  14. The accuracy of Halley's cometary orbits

    NASA Astrophysics Data System (ADS)

    Hughes, D. W.

    The accuracy of a scientific computation depends mainly on the data fed in and the analysis method used. This statement is certainly true of Edmond Halley's cometary orbit work. Of the 420 comets that had been seen before Halley's era of orbital calculation (1695-1702), only 24, according to him, had been observed well enough for their orbits to be calculated. Two questions are considered in this paper: do all the orbits listed by Halley have the same accuracy? And how accurate was Halley's method of calculation?

  15. Field Accuracy Test of RPAS Photogrammetry

    NASA Astrophysics Data System (ADS)

    Barry, P.; Coakley, R.

    2013-08-01

    Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data, to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt that it was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions, within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 ha site. We flew a UAV over the test area in a "lawnmower track" pattern with 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and inputted these into our photo modelling software, Agisoft PhotoScan. The remaining GPS-coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy we achieved throughout the 45 check points was 95% reliably within 41 mm horizontally and 68 mm vertically, with an 11.7 mm ground sample distance taken from a flight altitude above ground level of 90 m. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 ha. This finding has shown
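
    A sketch of how a "95% reliably within X mm" figure can be derived from check-point residuals, assuming it is the 95th percentile of the horizontal and vertical differences between photogrammetric and RTK GPS coordinates (the abstract does not state the exact statistic):

        import numpy as np

        def accuracy_95(photo_xyz, gps_xyz):
            # photo_xyz, gps_xyz: (n_checkpoints, 3) arrays of XYZ coordinates in metres
            d = photo_xyz - gps_xyz
            horiz = np.hypot(d[:, 0], d[:, 1])     # horizontal error per check point
            vert = np.abs(d[:, 2])                 # vertical error per check point
            return np.percentile(horiz, 95), np.percentile(vert, 95)

        rng = np.random.default_rng(0)
        gps = rng.random((45, 3)) * 100                             # surveyed check points
        photo = gps + rng.normal(0, [0.015, 0.015, 0.03], (45, 3))  # simulated residuals
        h95, v95 = accuracy_95(photo, gps)
        print(f"95% horizontal: {h95 * 1000:.0f} mm, vertical: {v95 * 1000:.0f} mm")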

  16. Phase space correlation to improve detection accuracy.

    PubMed

    Carroll, T L; Rachford, F J

    2009-09-01

    The standard method used for detecting signals in radar or sonar is cross correlation. The accuracy of the detection with cross correlation is limited by the bandwidth of the signals. We show that by calculating the cross correlation based on points that are nearby in phase space rather than points that are simultaneous in time, the detection accuracy is improved. The phase space correlation technique works for some standard radar signals, but it is especially well suited to chaotic signals because trajectories that are adjacent in phase space move apart from each other at an exponential rate.
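
    A rough sketch of the idea, assuming a standard time-delay embedding and correlation over phase-space nearest neighbours instead of simultaneous samples; the embedding parameters and test signal are illustrative, not the authors' exact formulation.

        import numpy as np
        from scipy.spatial import cKDTree

        def delay_embed(x, dim=3, tau=5):
            # rows are [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

        def phase_space_correlation(ref, sig, dim=3, tau=5):
            # pair each reference point with its nearest phase-space neighbour in the
            # received signal, then correlate the paired (scalar) samples
            R, S = delay_embed(ref, dim, tau), delay_embed(sig, dim, tau)
            _, idx = cKDTree(S).query(R)
            return np.corrcoef(R[:, 0], S[idx, 0])[0, 1]

        t = np.linspace(0, 40 * np.pi, 4000)
        ref = np.sin(t) * np.sin(0.31 * t)     # broadband-ish reference signal
        sig = ref + 0.2 * np.random.default_rng(0).normal(size=len(t))
        print(phase_space_correlation(ref, sig))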

  17. Final Technical Report: Increasing Prediction Accuracy.

    SciTech Connect

    King, Bruce Hardison; Hansen, Clifford; Stein, Joshua

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  18. Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy

    ERIC Educational Resources Information Center

    Maris, Gunter; van der Maas, Han

    2012-01-01

    Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…

  19. Metrical Patterns of Words and Production Accuracy.

    ERIC Educational Resources Information Center

    Schwartz, Richard G.; Goffman, Lisa

    1995-01-01

    This study examined the influence of metrical patterns (syllable stress and serial position) of words on the production accuracy of 20 children (ages 22 months to 28 months). Among results were that one-fourth of the initial unstressed syllables were omitted and that consonant omissions, though few, tended to occur in the initial position.…

  20. The Accuracy of Academic Gender Stereotypes.

    ERIC Educational Resources Information Center

    Beyer, Sylvia

    1999-01-01

    Assessed the accuracy of academic gender stereotypes by asking 265 college students to estimate the percentage of male and female students and their grade point averages (GPAs) and comparing these to the actual percentage of male and female students and GPAs. Results show the inaccuracies of academic gender stereotypes. (SLD)

  1. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported into inspection software. The datasets were aligned to the reference dataset by a repeated best-fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, and horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423

  2. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy) and to provide a value of the precision, by means of a confidence interval of the specific measurement. PMID:27044032

  3. 47 CFR 65.306 - Calculation accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 47 (Telecommunication), Vol. 3, revised as of 2010-10-01. FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), COMMON CARRIER SERVICES (CONTINUED), INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES, Exchange Carriers, § 65.306 Calculation...

  4. Accuracy of Information Processing under Focused Attention.

    ERIC Educational Resources Information Center

    Bastick, Tony

    This paper reports the results of an experiment on the accuracy of information processing during attention focused arousal under two conditions: single estimation and double estimation. The attention of 187 college students was focused by a task requiring high level competition for a monetary prize ($10) under severely limited time conditions. The…

  5. Accuracy Assessment for AG500, Electromagnetic Articulograph

    ERIC Educational Resources Information Center

    Yunusova, Yana; Green, Jordan R.; Mefferd, Antje

    2009-01-01

    Purpose: The goal of this article was to evaluate the accuracy and reliability of the AG500 (Carstens Medizinelectronik, Lenglern, Germany), an electromagnetic device developed recently to register articulatory movements in three dimensions. This technology seems to have unprecedented capabilities to provide rich information about time-varying…

  6. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h of data are required for high-precision results; however, real-life situations do not always let us collect 24 h of data. Thus repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently some research groups have turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now, mostly regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of a year, covering two years in succession, are used in the analysis. Our major conclusion is that a reformulation of GPS positioning accuracy is necessary when taking the seasonal effects into account, and the typical one-term accuracy formulation is expanded to a two-term one.

  7. Accuracy investigation of phthalate metabolite standards.

    PubMed

    Langlois, Éric; Leblanc, Alain; Simard, Yves; Thellen, Claude

    2012-05-01

    Phthalates are ubiquitous compounds whose metabolites are usually determined in urine for biomonitoring studies. Following suspect and unexplained results from our laboratory in an external quality-assessment scheme, we investigated the accuracy of all phthalate metabolite standards in our possession by comparing them with those of several suppliers. Our findings suggest that commercial phthalate metabolite certified solutions are not always accurate and that lot-to-lot discrepancies significantly affect the accuracy of the results obtained with several of these standards. These observations indicate that the reliability of the results obtained from different lots of standards is not equal, which reduces the possibility of intra-laboratory and inter-laboratory comparisons of results. However, agreements of accuracy have been observed for a majority of neat standards obtained from different suppliers, which indicates that a solution to this issue is available. Data accuracy of phthalate metabolites should be of concern for laboratories performing phthalate metabolite analysis because of the standards used. The results of our investigation are presented from the perspective that laboratories performing phthalate metabolite analysis can obtain accurate and comparable results in the future. Our findings will contribute to improving the quality of future phthalate metabolite analyses and will affect the interpretation of past results.

  8. Accuracy of References in Five Entomology Journals.

    ERIC Educational Resources Information Center

    Kristof, Cynthia

    In this paper, the bibliographical references in five core entomology journals are examined for citation accuracy in order to determine if the error rates are similar. Every reference printed in each journal's first issue of 1992 was examined, and these were compared to the original (cited) publications, if possible, in order to determine the…

  9. Task Speed and Accuracy Decrease When Multitasking

    ERIC Educational Resources Information Center

    Lin, Lin; Cockerham, Deborah; Chang, Zhengsi; Natividad, Gloria

    2016-01-01

    As new technologies increase the opportunities for multitasking, the need to understand human capacities for multitasking continues to grow stronger. Is multitasking helping us to be more efficient? This study investigated the multitasking abilities of 168 participants, ages 6-72, by measuring their task accuracy and completion time when they…

  10. High Accuracy Transistor Compact Model Calibrations

    SciTech Connect

    Hembree, Charles E.; Mar, Alan; Robertson, Perry J.

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy in the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold, and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  11. Adult Metacomprehension: Judgment Processes and Accuracy Constraints

    ERIC Educational Resources Information Center

    Zhao, Qin; Linderholm, Tracy

    2008-01-01

    The objective of this paper is to review and synthesize two interrelated topics in the adult metacomprehension literature: the bases of metacomprehension judgment and the constraints on metacomprehension accuracy. Our review shows that adult readers base their metacomprehension judgments on different types of information, including experiences…

  12. Observed Consultation: Confidence and Accuracy of Assessors

    ERIC Educational Resources Information Center

    Tweed, Mike; Ingham, Christopher

    2010-01-01

    Judgments made by the assessors observing consultations are widely used in the assessment of medical students. The aim of this research was to study judgment accuracy and confidence and the relationship between these. Assessors watched recordings of consultations, scoring the students on: a checklist of items; attributes of consultation; a…

  13. Accuracy Of Stereometry In Assessing Orthognathic Surgery

    NASA Astrophysics Data System (ADS)

    King, Geoffrey E.; Bays, R. A.

    1983-07-01

    An X-ray stereometric technique has been developed for the determination of 3-dimensional coordinates of spherical metallic markers previously implanted in monkey skulls. The accuracy of the technique is better than 0.5 mm, and it uses readily available demountable X-ray equipment. The technique is used to study the effects and stability of experimental orthognathic surgery.

  14. Proper installation ensures turbine meter accuracy

    SciTech Connect

    Peace, D.W.

    1995-07-01

    Turbine meters are widely used for natural gas measurement and provide high accuracy over large ranges of operation. However, as with many other types of flowmeters, consideration must be given to the design of the turbine meter and the installation piping practice to ensure high-accuracy measurement. National and international standards include guidelines for proper turbine meter installation piping and methods for evaluating the effects of flow disturbances on the design of those meters. Swirl or non-uniform velocity profiles, such as jetting, at the turbine meter inlet can cause undesirable accuracy performance changes. Sources of these types of flow disturbances can be from the installation piping configuration, an upstream regulator, a throttled valve, or a partial blockage upstream of the meter. Test results on the effects of swirl and jetting on different types of meter designs and sizes emphasize the need to consider good engineering design for turbine meters, including integral flow conditioning vanes and adequate installation piping practices for high accuracy measurement.

  15. Direct Behavior Rating: Considerations for Rater Accuracy

    ERIC Educational Resources Information Center

    Harrison, Sayward E.; Riley-Tillman, T. Chris; Chafouleas, Sandra M.

    2014-01-01

    Direct behavior rating (DBR) offers users a flexible, feasible method for the collection of behavioral data. Previous research has supported the validity of using DBR to rate three target behaviors: academic engagement, disruptive behavior, and compliance. However, the effect of the base rate of behavior on rater accuracy has not been established.…

  16. Highly Spinning Initial Data: Gauges and Accuracy

    NASA Astrophysics Data System (ADS)

    Zlochower, Yosef; Ruchlin, Ian; Healy, James; Lousto, Carlos

    2016-03-01

    We recently developed a code for solving the 3+1 system of constraints for highly-spinning black-hole binary initial data in the puncture formalism. Here we explore how different choices of gauge for the background metric improve both the efficiency and accuracy of the initial data solver and the subsequent fully nonlinear numerical evolutions of these data.

  17. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
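
    A minimal sketch of the classification error (contingency) matrix just described, with overall accuracy from the diagonal and commission/omission error rates from the rows and columns; the class names and counts are illustrative.

        import numpy as np

        # rows = interpretation (mapped class), columns = verification (reference data)
        classes = ["forest", "water", "urban"]
        M = np.array([[52,  3,  5],
                      [ 2, 30,  1],
                      [ 6,  2, 41]])

        overall_accuracy = np.trace(M) / M.sum()     # correct classifications lie on the diagonal
        commission = 1 - np.diag(M) / M.sum(axis=1)  # row errors: mapped as the class, but wrong
        omission = 1 - np.diag(M) / M.sum(axis=0)    # column errors: the class missed by the map
        for c, com, om in zip(classes, commission, omission):
            print(f"{c}: commission {com:.2f}, omission {om:.2f}")
        print(f"overall accuracy {overall_accuracy:.2f}")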

  18. The impact of accuracy motivation on interpretation, comparison, and correction processes: accuracy x knowledge accessibility effects.

    PubMed

    Stapel, D A; Koomen, W; Zeelenberg, M

    1998-04-01

    Four studies provide evidence for the notion that there may be boundaries to the extent to which accuracy motivation may help perceivers to escape the influence of fortuitously activated information. Specifically, although accuracy motivations may eliminate assimilative accessibility effects, they are less likely to eliminate contrastive accessibility effects. It was found that the occurrence of different types of contrast effects (comparison and correction) was not significantly affected by participants' accuracy motivations. Furthermore, it was found that the mechanisms instigated by accuracy motivations differ from those ignited by correction instructions: Accuracy motivations attenuate assimilation effects because perceivers add target interpretations to the one suggested by primed information. Conversely, it was found that correction instructions yield contrast and prompt respondents to remove the priming event's influence from their reaction to the target. PMID:9569650

  19. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  20. ICan: An Optimized Ion-Current-Based Quantification Procedure with Enhanced Quantitative Accuracy and Sensitivity in Biomarker Discovery

    PubMed Central

    2015-01-01

    The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated, new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and a lower false-positive rate for discovering altered proteins, than current popular pipelines. A spiked-in experiment was used to evaluate the ability of ICan to detect small changes. In this study E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by an IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP proteins (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined as significantly altered proteins, with cutoff thresholds of ≥1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan can be broadly applicable to reliable and sensitive proteomic surveys of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here, such as normalization, protein ratio determination, and statistical analyses, are also valuable for data analysis by isotope-labeling methods. PMID:25285707
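
    A sketch of the significance call used in the spike-in evaluation above: flag a protein as altered when its between-group ratio is at least 1.3-fold and a t-test gives p ≤ 0.05. The replicate counts and intensities are placeholders, and the paper's normalization and ratio-estimation steps are omitted.

        import numpy as np
        from scipy import stats

        def altered(group_a, group_b, fold=1.3, alpha=0.05):
            # group_a, group_b: per-replicate intensities of one protein in two conditions
            ratio = np.mean(group_b) / np.mean(group_a)
            p = stats.ttest_ind(group_a, group_b).pvalue
            return (max(ratio, 1 / ratio) >= fold) and (p <= alpha), ratio, p

        rng = np.random.default_rng(0)
        a = rng.normal(100, 5, 4)      # e.g. background replicates
        b = rng.normal(150, 5, 4)      # spiked at a nominal 1.5-fold
        print(altered(a, b))           # (True, ~1.5, small p) expected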

  1. Do saccharide doped PAGAT dosimeters increase accuracy?

    NASA Astrophysics Data System (ADS)

    Berndt, B.; Skyt, P. S.; Holloway, L.; Hill, R.; Sankar, A.; De Deene, Y.

    2015-01-01

    To improve the dosimetric accuracy of normoxic polyacrylamide gelatin (PAGAT) gel dosimeters, the addition of saccharides (glucose and sucrose) has been suggested. An increase in R2-response sensitivity upon irradiation will result in smaller uncertainties in the derived dose if all other uncertainties are conserved. However, temperature variations during the magnetic resonance scanning of polymer gels make one of the largest contributions to dosimetric uncertainty. The purpose of this project was to weigh the gain in dose sensitivity against the temperature sensitivity. The overall dose uncertainty of PAGAT gel dosimeters with different concentrations of saccharides (0, 10 and 20%) was investigated. For high concentrations of glucose or sucrose, a clear improvement of the dose sensitivity was observed. For doses up to 6 Gy, the overall dose uncertainty was reduced by up to 0.3 Gy for all saccharide-loaded gels compared to plain PAGAT gel. Higher concentrations of glucose and sucrose deteriorate the accuracy of PAGAT dosimeters for doses above 9 Gy.

  2. Improvement in Rayleigh Scattering Measurement Accuracy

    NASA Technical Reports Server (NTRS)

    Fagan, Amy F.; Clem, Michelle M.; Elam, Kristie A.

    2012-01-01

    Spectroscopic Rayleigh scattering is an established flow diagnostic that has the ability to provide simultaneous velocity, density, and temperature measurements. The Fabry-Perot interferometer or etalon is a commonly employed instrument for resolving the spectrum of molecular Rayleigh scattered light for the purpose of evaluating these flow properties. This paper investigates the use of an acousto-optic frequency shifting device to improve measurement accuracy in Rayleigh scattering experiments at the NASA Glenn Research Center. The frequency shifting device is used as a means of shifting the incident or reference laser frequency by 1100 MHz to avoid overlap of the Rayleigh and reference signal peaks in the interference pattern used to obtain the velocity, density, and temperature measurements, and also to calibrate the free spectral range of the Fabry-Perot etalon. The measurement accuracy improvement is evaluated by comparison of Rayleigh scattering measurements acquired with and without shifting of the reference signal frequency in a 10 mm diameter subsonic nozzle flow.

  3. Accuracy of NHANES periodontal examination protocols.

    PubMed

    Eke, P I; Thornton-Evans, G O; Wei, L; Borgnakke, W S; Dye, B A

    2010-11-01

    This study evaluates the accuracy of periodontitis prevalence determined by the National Health and Nutrition Examination Survey (NHANES) partial-mouth periodontal examination protocols. True periodontitis prevalence was determined in a new convenience sample of 454 adults ≥ 35 years old, by a full-mouth "gold standard" periodontal examination. This actual prevalence was compared with prevalence resulting from analysis of the data according to the protocols of NHANES III and NHANES 2001-2004, respectively. Both NHANES protocols substantially underestimated the prevalence of periodontitis by 50% or more, depending on the periodontitis case definition used, and thus performed below threshold levels for moderate-to-high levels of validity for surveillance. Adding measurements from lingual or interproximal sites to the NHANES 2001-2004 protocol did not improve the accuracy sufficiently to reach acceptable sensitivity thresholds. These findings suggest that NHANES protocols produce high levels of misclassification of periodontitis cases and thus have low validity for surveillance and research.

  4. Accuracy of forecasts in strategic intelligence

    PubMed Central

    Mandel, David R.; Barnes, Alan

    2014-01-01

    The accuracy of 1,514 strategic intelligence forecasts abstracted from intelligence reports was assessed. The results show that both discrimination and calibration of forecasts were very good. Discrimination was better for senior (versus junior) analysts and for easier (versus harder) forecasts. Miscalibration was mainly due to underconfidence, such that analysts assigned more uncertainty than needed given their high level of discrimination. Underconfidence was more pronounced for harder (versus easier) forecasts and for forecasts deemed more (versus less) important for policy decision making. Despite the observed underconfidence, there was a paucity of forecasts in the least informative 0.4–0.6 probability range. Recalibrating the forecasts substantially reduced underconfidence. The findings offer cause for tempered optimism about the accuracy of strategic intelligence forecasts and indicate that intelligence producers aim to promote informativeness while avoiding overstatement. PMID:25024176
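
    The calibration assessment described above can be sketched in a few lines: bin the stated probabilities and compare each bin's mean forecast with the observed event frequency. Observed frequencies more extreme than the forecasts indicate underconfidence. This is a generic illustration on synthetic data, not the authors' analysis.

```python
# Generic reliability-table check for probability forecasts: per bin, compare
# the mean stated probability with the observed event frequency.
import numpy as np

def calibration_table(p, outcome, n_bins=5):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)  # bin index per forecast
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            yield p[mask].mean(), outcome[mask].mean(), int(mask.sum())

rng = np.random.default_rng(1)
p = rng.uniform(0.0, 1.0, 2000)                    # stated probabilities
true_p = np.clip(0.5 + 1.2 * (p - 0.5), 0.0, 1.0)  # events more extreme than stated
outcome = (rng.uniform(size=2000) < true_p).astype(float)

for mean_p, freq, n in calibration_table(p, outcome):
    print(f"forecast ~ {mean_p:.2f}   observed {freq:.2f}   n = {n}")
```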

  5. Positional Accuracy Assessment of Googleearth in Riyadh

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

    Google Earth is a virtual globe, map and geographical information program operated by Google. It maps the Earth by superimposing images obtained from satellite imagery, aerial photography and GIS data over a 3D globe. With millions of users all around the globe, GoogleEarth® has become the ultimate source of spatial data and information for private and public decision-support systems, besides many types and forms of social interaction. Many users, mostly in developing countries, also use it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of GoogleEarth® imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates, respectively.
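
    A positional-accuracy figure of this kind is just the root-mean-square of check-point residuals. The sketch below computes horizontal and height RMSE against surveyed ground-control points; all coordinates are invented for illustration.

```python
# Horizontal (2D) and vertical RMSE of image-derived coordinates against
# surveyed ground-control points. All values below are made up.
import numpy as np

gcp = np.array([[100.0, 200.0, 15.0],
                [150.0, 260.0, 18.0],
                [90.0, 310.0, 12.0]])   # E, N, H (m), surveyed reference
img = np.array([[101.8, 201.2, 16.4],
                [148.5, 261.0, 19.2],
                [91.1, 308.3, 10.9]])   # same points read from imagery

d = img - gcp
rmse_h = np.sqrt(np.mean(d[:, 0] ** 2 + d[:, 1] ** 2))  # horizontal RMSE
rmse_v = np.sqrt(np.mean(d[:, 2] ** 2))                 # height RMSE
print(f"horizontal RMSE = {rmse_h:.2f} m, height RMSE = {rmse_v:.2f} m")
```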

  6. High Accuracy Fuel Flowmeter, Phase 1

    NASA Technical Reports Server (NTRS)

    Mayer, C.; Rose, L.; Chan, A.; Chin, B.; Gregory, W.

    1983-01-01

    Technology related to aircraft fuel mass-flowmeters was reviewed to determine what flowmeter types could provide 0.25%-of-point accuracy over a 50-to-one range in flowrates. Three types were selected and further analyzed to determine what problem areas prevented them from meeting the high accuracy requirement, and what the further development needs were for each. A dual-turbine volumetric flowmeter with densi-viscometer and microprocessor compensation was selected for its relative simplicity and fast response time. An angular momentum type with a motor-driven, spring-restrained turbine and viscosity shroud was selected for its direct mass-flow output. This concept also employed a turbine for fast response and a microcomputer for accurate viscosity compensation. The third concept employed a vortex precession volumetric flowmeter and was selected for its unobtrusive design. Like the turbine flowmeter, it uses a densi-viscometer and microprocessor for density correction and accurate viscosity compensation.

  7. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 ±6 percent for evergreen woodland to 81 ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.

  8. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limitations of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios for these factors are evaluated under normal power grid operating conditions. Based on the evaluation and simulation, the phase angle and frequency errors caused by each factor are calculated and discussed.
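
    One instrument-side error source, time synchronization, has a simple closed form worth keeping in mind: a timing error dt shifts the measured phase by 360·f·dt degrees. The sketch below (my illustration, not from the paper) evaluates a few representative timing errors at 60 Hz.

```python
# A time-sync error dt maps directly into a phase-angle error of 360*f*dt
# degrees; at 60 Hz, a 1-microsecond timing error alone costs ~0.0216 deg.
def phase_error_deg(f_hz: float, dt_s: float) -> float:
    """Phase-angle error (degrees) caused by a timing error dt_s."""
    return 360.0 * f_hz * dt_s

for dt in (1e-6, 10e-6, 26.5e-6):   # 26.5 us ~ the 1% TVE phase budget at 60 Hz
    print(f"dt = {dt * 1e6:5.1f} us -> {phase_error_deg(60.0, dt):.4f} deg")
```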

  9. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

    Genomic selection is focused on prediction of breeding values of selection candidates by means of high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configurations of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular, those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared for simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
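
    The prediction scheme described above is easy to emulate end to end on simulated data: estimate marker effects by ridge regression on a training set, predict breeding values for test individuals, and report accuracy as the correlation between predicted and true breeding values. The sketch below is a generic illustration under a made-up genetic architecture, not the authors' theoretical formula or proxy.

```python
# Ridge-regression genomic prediction on simulated genotypes: accuracy is
# corr(predicted, true breeding values) in the test set.
import numpy as np

rng = np.random.default_rng(42)
n_train, n_test, n_markers, n_qtl = 200, 100, 1000, 50

X = rng.binomial(2, 0.3, size=(n_train + n_test, n_markers)).astype(float)
beta = np.zeros(n_markers)
beta[rng.choice(n_markers, n_qtl, replace=False)] = rng.normal(0, 1, n_qtl)  # QTL effects
g = X @ beta                                        # true breeding values
y = g + rng.normal(0, g.std(), n_train + n_test)    # phenotypes, heritability ~ 0.5

Xtr, ytr = X[:n_train], y[:n_train]
lam = 100.0                                         # ridge penalty (tuning parameter)
beta_hat = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_markers), Xtr.T @ ytr)

g_hat = X[n_train:] @ beta_hat
accuracy = np.corrcoef(g_hat, g[n_train:])[0, 1]
print(f"accuracy (corr of predicted vs true breeding values) = {accuracy:.2f}")
```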

  10. Ground Truth Accuracy Tests of GPS Seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Oberlander, D. J.; Davis, J. L.; Baena, R.; Ekstrom, G.

    2005-12-01

    As the precision of GPS determinations of site position continues to improve, the detection of smaller and faster geophysical signals becomes possible. However, a lack of independent measurements of these signals often precludes an assessment of the accuracy of such GPS position determinations. This may be particularly true for high-rate GPS applications. We have built an apparatus to assess the accuracy of GPS position determinations for high-rate applications, in particular the application known as "GPS seismology." The apparatus consists of a bidirectional, single-axis positioning table coupled to a digitally controlled stepping motor. The motor, in turn, is connected to a Field Programmable Gate Array (FPGA) chip that synchronously sequences through real historical earthquake profiles stored in Erasable Programmable Read-Only Memories (EPROMs). A GPS antenna attached to this positioning table undergoes the simulated seismic motions of the Earth's surface while collecting high-rate GPS data. Analysis of the time-dependent position estimates can then be compared to the "ground truth," and the resultant GPS error spectrum can be measured. We have made extensive measurements with this system while inducing simulated seismic motions either in the horizontal plane or along the vertical axis. A second, stationary GPS antenna at a distance of several meters simultaneously collected high-rate (5 Hz) GPS data. We will present the calibration of this system, describe the GPS observations and data analysis, and assess the accuracy of GPS for high-rate geophysical applications and natural hazards mitigation.

  11. Solving Nonlinear Euler Equations with Arbitrary Accuracy

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2005-01-01

    A computer program that efficiently solves the time-dependent, nonlinear Euler equations in two dimensions to an arbitrarily high order of accuracy has been developed. The program implements a modified form of a prior arbitrary-accuracy simulation algorithm that is a member of the class of algorithms known in the art as modified expansion solution approximation (MESA) schemes. Whereas millions of lines of code were needed to implement the prior MESA algorithm, the present MESA algorithm can be implemented in one or a few pages of Fortran code, the exact amount depending on the specific application. The ability to solve the Euler equations to arbitrarily high accuracy is especially beneficial in simulations of aeroacoustic effects in settings in which fully nonlinear behavior is expected - for example, at stagnation points of fan blades, where linearizing assumptions break down. At these locations, it is necessary to solve the full nonlinear Euler equations, and inasmuch as the acoustical energy is 4 to 5 orders of magnitude below that of the mean flow, it is necessary to achieve an overall fractional error of less than 10^-6 in order to faithfully simulate entropy, vortical, and acoustical waves.
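
    A standard way to verify that a solver actually attains its design order of accuracy is a grid-refinement study: halve the step size and read the observed order off the error ratio as log2(e_coarse / e_fine). The sketch below demonstrates the idea on a trivially checkable problem (a second-order centered difference of sin x); it is a generic verification technique, not the MESA code.

```python
# Grid-refinement order check: for a p-th order scheme, halving h should
# divide the error by ~2^p, so log2(e_coarse / e_fine) estimates p.
import numpy as np

def deriv_error(n: int) -> float:
    x = np.linspace(0.0, np.pi, n)
    h = x[1] - x[0]
    d = (np.sin(x[2:]) - np.sin(x[:-2])) / (2 * h)   # 2nd-order centered scheme
    return np.max(np.abs(d - np.cos(x[1:-1])))       # exact derivative is cos(x)

e_coarse, e_fine = deriv_error(100), deriv_error(200)
print(f"observed order ~ {np.log2(e_coarse / e_fine):.2f}")   # ~2 for this scheme
```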

  12. Speed versus accuracy in collective decision making.

    PubMed

    Franks, Nigel R; Dornhaus, Anna; Fitzsimmons, Jon P; Stevens, Martin

    2003-12-01

    We demonstrate a speed versus accuracy trade-off in collective decision making. House-hunting ant colonies choose a new nest more quickly in harsh conditions than in benign ones and are less discriminating. The errors that occur in a harsh environment are errors of judgement not errors of omission because the colonies have discovered all of the alternative nests before they initiate an emigration. Leptothorax albipennis ants use quorum sensing in their house hunting. They only accept a nest, and begin rapidly recruiting members of their colony, when they find within it a sufficient number of their nest-mates. Here we show that these ants can lower their quorum thresholds between benign and harsh conditions to adjust their speed-accuracy trade-off. Indeed, in harsh conditions these ants rely much more on individual decision making than collective decision making. Our findings show that these ants actively choose to take their time over judgements and employ collective decision making in benign conditions when accuracy is more important than speed.

  13. Determination of GPS orbits to submeter accuracy

    NASA Technical Reports Server (NTRS)

    Bertiger, W. I.; Lichten, S. M.; Katsigris, E. C.

    1988-01-01

    Orbits for satellites of the Global Positioning System (GPS) were determined with submeter accuracy. Tests used to assess orbital accuracy include orbit comparisons from independent data sets, orbit prediction, ground baseline determination, and formal errors. One satellite tracked 8 hours each day shows rms error below 1 m even when predicted more than 3 days beyond a 1-week data arc. Differential tracking of the GPS satellites in high Earth orbit provides a powerful relative positioning capability, even when a relatively small continental U.S. fiducial tracking network is used with less than one-third of the full GPS constellation. To demonstrate this capability, baselines of up to 2000 km in North America were also determined with the GPS orbits. The 2000 km baselines show rms daily repeatability of 0.3 to 2 parts in 10^8 and agree with very long baseline interferometry (VLBI) solutions at the level of 1.5 parts in 10^8. This GPS demonstration provides an opportunity to test different techniques for high-accuracy orbit determination for high Earth orbiters. The best GPS orbit strategies included data arcs of at least 1 week, process noise models for tropospheric fluctuations, estimation of GPS solar pressure coefficients, and combined processing of GPS carrier phase and pseudorange data. For data arcs of 2 weeks, constrained process noise models for GPS dynamic parameters significantly improved the solutions.

  15. Piezoresistive position microsensors with ppm-accuracy

    NASA Astrophysics Data System (ADS)

    Stavrov, Vladimir; Shulev, Assen; Stavreva, Galina; Todorov, Vencislav

    2015-05-01

    In this article, the relation between position accuracy and the number of simultaneously measured values, such as coordinates, is analyzed. Based on this, a conceptual layout for MEMS devices (microsensors) for multidimensional position monitoring, comprising a single anchored and a single actuated part, has been developed. The two parts are connected by a plurality of micromechanical flexures, and each flexure includes position-detecting cantilevers. Microsensors having detecting cantilevers oriented in the X and Y directions have been designed and prototyped. Experimental results from the characterization of 1D, 2D and 3D position microsensors are reported as well. Exploiting different flexure layouts, a travel range between 50 μm and 1.8 mm and sensor sensitivities between 30 μV/μm and 5 mV/μm at 1 V DC supply voltage have been demonstrated. A method for accurate calculation of all three Cartesian coordinates, based on measurement of at least three microsensor signals, is also described. Analysis of the experimental results demonstrates the capability of position monitoring with ppm (part-per-million) accuracy. The technology for fabricating MEMS devices with sidewall-embedded piezoresistors removes restrictions that have hampered their usability for high-accuracy position sensing. The present study is also part of a common strategy for developing a novel MEMS-based platform for simultaneous accurate measurement of various physical values when they are transduced to a change of position.

  16. Ultrasonic flowmeters undergo accuracy, repeatability tests

    SciTech Connect

    Grimley, T.A.

    1996-12-23

    Two commercially available multipath ultrasonic flowmeters have undergone tests at the Gas Research Institute's metering research facility (MRF) at Southwest Research Institute in San Antonio. The tests were conducted in baseline and disturbed-flow installations to assess baseline accuracy and repeatability over a range of flowrates and pressures. Results show the test meters are capable of accuracies within a 1% tolerance and repeatability better than 0.25% when the flowrate is greater than about 5% of capacity. The data also indicate that pressure may have an effect on meter error. Results further suggest that both the magnitude and character of errors introduced by flow disturbances are a function of meter design. Shifts of up to 0.6% were measured for meters installed 10D from a tee (1D = 1 pipe diameter). Better characterization of the effects of flow disturbances on measurement accuracy is needed to define more accurately the upstream piping requirements necessary to achieve meter performance within a specified tolerance. The paper discusses reduced station costs, test methods, baseline tests, the effect of pressure, speed of sound, and disturbance tests.

  17. 100% Classification Accuracy Considered Harmful: The Normalized Information Transfer Factor Explains the Accuracy Paradox

    PubMed Central

    Valverde-Albacete, Francisco J.; Peláez-Moreno, Carmen

    2014-01-01

    The most widely used measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis in which every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to “cheat” using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind-reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers. PMID:24427282
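
    The paradox is easy to reproduce: on a 90/10 class split, a classifier that always outputs the majority class scores 90% accuracy while transferring zero bits of information from input to output. The sketch below makes this concrete with an empirical mutual-information calculation; it illustrates the general phenomenon, not the paper's EMA/NIT definitions.

```python
# Majority-class classifier: high accuracy, zero information transfer.
import numpy as np
from collections import Counter

y_true = np.array([0] * 90 + [1] * 10)   # 90/10 class imbalance
y_maj = np.zeros_like(y_true)            # always predict the majority class

def mutual_information(a, b):
    """Empirical I(A;B) in bits from joint counts."""
    n = len(a)
    joint = Counter(zip(a.tolist(), b.tolist()))
    pa, pb = Counter(a.tolist()), Counter(b.tolist())
    return sum((c / n) * np.log2((c / n) / ((pa[i] / n) * (pb[j] / n)))
               for (i, j), c in joint.items())

print("accuracy:", (y_maj == y_true).mean())                    # 0.9
print("information transfer (bits):", mutual_information(y_true, y_maj))  # 0.0
```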

  18. Intelligence: The Speed and Accuracy Tradeoff in High Aptitude Individuals.

    ERIC Educational Resources Information Center

    Lajoie, Suzanne P.; Shore, Bruce M.

    1986-01-01

    The relative contributions of mental speed and accuracy to Primary Mental Ability (PMA) IQ prediction were studied in 52 high ability grade 10 students. Both speed and accuracy independently predicted IQ, but not speed over and above accuracy. Accuracy was demonstrated to be universally advantageous in IQ performance, but speed varied according to…

  19. [True color accuracy in digital forensic photography].

    PubMed

    Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A

    2016-01-01

    Forensic photographs not only need to be unaltered and authentic and capture context-relevant images, along with certain minimum requirements for image sharpness and information density, but color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but as a discrete property of an image, color in digital photos is also to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool should be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation). PMID:27386623

  1. Improvement of focus accuracy on processed wafer

    NASA Astrophysics Data System (ADS)

    Higashibata, Satomi; Komine, Nobuhiro; Fukuhara, Kazuya; Koike, Takashi; Kato, Yoshimitsu; Hashimoto, Kohji

    2013-04-01

    As feature size shrinkage in semiconductor devices progresses, process fluctuations, especially in focus, strongly affect device performance. Because focus control is an ongoing challenge in optical lithography, various studies have sought to improve focus monitoring and control. Focus errors are due to wafers, exposure tools, reticles, QCs, and so on. Few studies have addressed minimizing the measurement errors of the auto focus (AF) sensors of exposure tools, especially when processed wafers are exposed. Among current focus measurement techniques, the phase shift grating (PSG) focus monitor [1] has already been proposed; its basic principle is that the intensity of the diffracted light of the mask pattern is made asymmetric by arranging a π/2 phase shift area on a reticle. The resist pattern exposed at a defocus position is shifted on the wafer, and the shifted pattern can easily be measured using an overlay inspection tool. However, it is difficult to measure the shifted pattern on a processed wafer because of interference from other patterns in the underlayer. In this paper, we therefore propose the "SEM-PSG" technique, in which the shift of the PSG resist mark is measured with a critical dimension scanning electron microscope (CD-SEM) to determine the focus error on the processed wafer. First, we evaluate the accuracy of the SEM-PSG technique. Second, by applying the SEM-PSG technique and feeding the results back to the exposure, we evaluate the focus accuracy on processed wafers. By applying SEM-PSG feedback, the focus accuracy on the processed wafer was improved from 40 to 29 nm in 3σ.

  2. Accuracy requirements. [for monitoring of climate changes

    NASA Technical Reports Server (NTRS)

    Delgenio, Anthony

    1993-01-01

    Satellite and surface measurements, if they are to serve as a climate monitoring system, must be accurate enough to permit detection of changes of climate parameters on decadal time scales. The accuracy requirements are difficult to define a priori since they depend on unknown future changes of climate forcings and feedbacks. As a framework for evaluation of candidate Climsat instruments and orbits, we estimate the accuracies that would be needed to measure changes expected over two decades based on theoretical considerations including GCM simulations and on observational evidence in cases where data are available for rates of change. One major climate forcing known with reasonable accuracy is that caused by the anthropogenic homogeneously mixed greenhouse gases (CO2, CFC's, CH4 and N2O). Their net forcing since the industrial revolution began is about 2 W/sq m and it is presently increasing at a rate of about 1 W/sq m per 20 years. Thus for a competing forcing or feedback to be important, it needs to be of the order of 0.25 W/sq m or larger on this time scale. The significance of most climate feedbacks depends on their sensitivity to temperature change. Therefore we begin with an estimate of decadal temperature change. Presented are the transient temperature trends simulated by the GISS GCM when subjected to various scenarios of trace gas concentration increases. Scenario B, which represents the most plausible near-term emission rates and includes intermittent forcing by volcanic aerosols, yields a global mean surface air temperature increase Delta Ts = 0.7 degrees C over the time period 1995-2015. This is consistent with the IPCC projection of about 0.3 degrees C/decade global warming (IPCC, 1990). Several of our estimates below are based on this assumed rate of warming.

  3. Accuracy Assessment of Altimeter Derived Geostrophic Velocities

    NASA Astrophysics Data System (ADS)

    Leben, R. R.; Powell, B. S.; Born, G. H.; Guinasso, N. L.

    2002-12-01

    Along track sea surface height anomaly gradients are proportional to cross track geostrophic velocity anomalies allowing satellite altimetry to provide much needed satellite observations of changes in the geostrophic component of surface ocean currents. Often, surface height gradients are computed from altimeter data archives that have been corrected to give the most accurate absolute sea level, a practice that may unnecessarily increase the error in the cross track velocity anomalies and thereby require excessive smoothing to mitigate noise. Because differentiation along track acts as a high-pass filter, many of the path length corrections applied to altimeter data for absolute height accuracy are unnecessary for the corresponding gradient calculations. We report on a study to investigate appropriate altimetric corrections and processing techniques for improving geostrophic velocity accuracy. Accuracy is assessed by comparing cross track current measurements from two moorings placed along the descending TOPEX/POSEIDON ground track number 52 in the Gulf of Mexico to the corresponding altimeter velocity estimates. The buoys are deployed and maintained by the Texas Automated Buoy System (TABS) under Interagency Contracts with Texas A&M University. The buoys telemeter observations in near real-time via satellite to the TABS station located at the Geochemical and Environmental Research Group (GERG) at Texas A&M. Buoy M is located in shelf waters of 57 m depth with a second, Buoy N, 38 km away on the shelf break at 105 m depth. Buoy N has been operational since the beginning of 2002 and has a current meter at 2m depth providing in situ measurements of surface velocities coincident with Jason and TOPEX/POSEIDON altimeter over flights. This allows one of the first detailed comparisons of shallow water near surface current meter time series to coincident altimetry.
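
    The relation in the first sentence is compact enough to state in code: the cross-track geostrophic velocity anomaly is v = (g / f) · dη/dx, where η is the along-track sea surface height anomaly and f is the Coriolis parameter at the latitude of the track. The sketch below applies it to a synthetic eddy sampled at an assumed ~7 km along-track spacing; all numbers are illustrative only.

```python
# Cross-track geostrophic velocity anomaly from an along-track SSH gradient.
import numpy as np

g = 9.81                                      # gravity, m/s^2
lat = 27.0                                    # Gulf of Mexico latitude, degrees
f = 2 * 7.2921e-5 * np.sin(np.radians(lat))   # Coriolis parameter, 1/s

x = np.arange(10) * 7000.0                    # along-track distance (m), assumed spacing
eta = 0.05 * np.sin(2 * np.pi * x / 60e3)     # SSH anomaly (m), synthetic eddy

v = (g / f) * np.gradient(eta, x)             # cross-track velocity anomaly (m/s)
print(np.round(v, 2))
```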

  4. Accuracy and reliability of China's energy statistics

    SciTech Connect

    Sinton, Jonathan E.

    2001-09-14

    Many observers have raised doubts about the accuracy and reliability of China's energy statistics, which show an unprecedented decline in recent years, while reported economic growth has remained strong. This paper explores the internal consistency of China's energy statistics from 1990 to 2000, coverage and reporting issues, and the state of the statistical reporting system. Available information suggests that, while energy statistics were probably relatively good in the early 1990s, their quality has declined since the mid-1990s. China's energy statistics should be treated as a starting point for analysis, and explicit judgments regarding ranges of uncertainty should accompany any conclusions.

  5. New analytical algorithm for overlay accuracy

    NASA Astrophysics Data System (ADS)

    Ham, Boo-Hyun; Yun, Sangho; Kwak, Min-Cheol; Ha, Soon Mok; Kim, Cheol-Hong; Nam, Suk-Woo

    2012-03-01

    The extension of optical lithography to 2X nm and beyond is often challenged by overlay control. With the overlay measurement error budget reduced to the sub-nm range, conventional Total Measurement Uncertainty (TMU) data is no longer sufficient, and there is no adequate criterion for overlay accuracy. In recent years, numerous authors have reported new methods for assessing the accuracy of overlay metrology: through focus and through color. Still, quantifying uncertainty in overlay measurement remains the most difficult task in overlay metrology. According to the ITRS roadmap, the total overlay budget gets tighter with each device node as design rules shrink. Conventionally, the total overlay budget is defined as the square root of the sum of squares of the following contributions: scanner overlay performance, wafer process, metrology, and mask registration. All components have so far kept pace with each device node, with new scanners, new metrology tools, and new mask e-beam writers delivered. In particular, scanner overlay performance decreased drastically from 9 nm at the 8x node to 2.5 nm at the 3x node, but appears to be reaching its limit beyond the 3x node. The wafer process overlay contribution to the total wafer overlay has therefore become more important; in fact, it decreased by 3 nm between the DRAM 8x node and the DRAM 3x node. In this paper, we develop an analytical algorithm for overlay accuracy and propose a concept for a non-destructive method. For an on-product layer, we discovered overlay inaccuracy and used the new technique to find the source of the overlay error.

  6. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J. Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity-modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  7. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    PubMed

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

    Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three types of typical normal distribution transformation methods termed the normal score, Johnson, and Box-Cox transformations were applied to compare the effects of spatial interpolation with normal distribution transformation data of benzo(b)fluoranthene in a large-scale coking plant-contaminated site in north China. Three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging has a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The area with fewer sampling points and that with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy for determination of remediation boundaries. PMID:26300353
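
    The normalization step compared above is straightforward to reproduce for the Box-Cox case: fit the transform parameter, then confirm that skewness and kurtosis collapse toward zero before any variogram fitting or kriging. A minimal sketch on synthetic, hot-spot-like lognormal data (using scipy's boxcox; not the authors' site data):

```python
# Box-Cox normalization of right-skewed concentration data, with a
# before/after check of skewness and kurtosis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
conc = rng.lognormal(mean=1.0, sigma=0.9, size=200)   # skewed, hot-spot-like data

z, lam = stats.boxcox(conc)                           # fitted transform parameter
print(f"lambda = {lam:.2f}")
print(f"skewness: {stats.skew(conc):.2f} -> {stats.skew(z):.2f}")
print(f"kurtosis: {stats.kurtosis(conc):.2f} -> {stats.kurtosis(z):.2f}")
```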

  8. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications, as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular, the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer using an additive print process with PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that, for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  9. Measuring the Accuracy of Diagnostic Systems

    NASA Astrophysics Data System (ADS)

    Swets, John A.

    1988-06-01

    Diagnostic systems of several kinds are used to distinguish between two classes of events, essentially "signals" and "noise." For them, analysis in terms of the "relative operating characteristic" of signal detection theory provides a precise and valid measure of diagnostic accuracy. It is the only measure available that is uninfluenced by decision biases and prior probabilities, and it places the performances of diverse systems on a common, easily interpreted scale. Representative values of this measure are reported here for systems in medical imaging, materials testing, weather forecasting, information retrieval, polygraph lie detection, and aptitude testing. Though the measure itself is sound, the values obtained from tests of diagnostic systems often require qualification because the test data on which they are based are of unsure quality. A common set of problems in testing is faced in all fields. How well these problems are handled, or can be handled in a given field, determines the degree of confidence that can be placed in a measured value of accuracy. Some fields fare much better than others.
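
    The ROC area has a convenient probabilistic reading: it equals the probability that a randomly chosen "signal" case receives a higher score than a randomly chosen "noise" case, independent of any decision threshold. A minimal sketch on synthetic scores (my illustration, not from the paper):

```python
# ROC area computed directly from its probability interpretation:
# P(score_signal > score_noise) over all signal/noise pairs.
import numpy as np

rng = np.random.default_rng(3)
noise = rng.normal(0.0, 1.0, 500)    # scores for "noise" events
signal = rng.normal(1.0, 1.0, 500)   # scores for "signal" events (d' = 1)

auc = np.mean(signal[:, None] > noise[None, :])  # pairwise comparison
print(f"ROC area = {auc:.3f}")                   # ~0.76 for d' = 1
```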

  10. Curation accuracy of model organism databases.

    PubMed

    Keseler, Ingrid M; Skrzypek, Marek; Weerasinghe, Deepika; Chen, Albert Y; Fulcher, Carol; Li, Gene-Wei; Lemmer, Kimberly C; Mladinich, Katherine M; Chow, Edmond D; Sherlock, Gavin; Karp, Peter D

    2014-01-01

    Manual extraction of information from the biomedical literature, or biocuration, is the central methodology used to construct many biological databases. For example, the UniProt protein database, the EcoCyc Escherichia coli database and the Candida Genome Database (CGD) are all based on biocuration. Biological databases are used extensively by life science researchers, as online encyclopedias, as aids in the interpretation of new experimental data and as gold standards for the development of new bioinformatics algorithms. Although manual curation has been assumed to be highly accurate, we are aware of only one previous study of biocuration accuracy. We assessed the accuracy of EcoCyc and CGD by manually selecting curated assertions within randomly chosen EcoCyc and CGD gene pages and by then validating that the data found in the referenced publications supported those assertions. A database assertion is considered to be in error if that assertion could not be found in the publication cited for that assertion. We identified 10 errors in the 633 facts that we validated across the two databases, for an overall error rate of 1.58%, and individual error rates of 1.82% for CGD and 1.40% for EcoCyc. These data suggest that manual curation of the experimental literature by Ph.D.-level scientists is highly accurate. Database URL: http://ecocyc.org/, http://www.candidagenome.org//
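
    A fact-level error rate like 10/633 comes with sampling uncertainty that is worth quoting alongside the point estimate. The sketch below computes a 95% Wilson score interval for that count; the interval is my addition, not a figure from the study.

```python
# 95% Wilson score confidence interval for an observed error proportion.
import math

def wilson_interval(k: int, n: int, z: float = 1.96):
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_interval(10, 633)
print(f"error rate = {10 / 633:.2%}, 95% CI ~ [{lo:.2%}, {hi:.2%}]")
```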

  11. Millimeter accuracy satellites for two color ranging

    NASA Technical Reports Server (NTRS)

    Degnan, John J.

    1993-01-01

    The principal technical challenge in designing a millimeter-accuracy satellite to support two-color observations at high altitudes is to provide high optical cross-section simultaneously with minimal pulse spreading. In order to address this issue, we provide a brief review of some fundamental properties of optical retroreflectors when used in spacecraft target arrays, develop a simple model for a spherical geodetic satellite, and use the model to determine some basic design criteria for a new generation of geodetic satellites capable of supporting millimeter-accuracy two-color laser ranging. We find that increasing the satellite diameter both provides a larger surface area for mounting additional cubes, leading to higher cross-sections, and makes the satellite surface a better match for the incoming planar phasefront of the laser beam. Restricting the retroreflector field of view (e.g. by recessing it in its holder) limits the target response to the fraction of the satellite surface that best matches the optical phasefront, thereby controlling the amount of pulse spreading. In surveying the arrays carried by existing satellites, we find that the European STARLETTE and ERS-1 satellites appear to be the best candidates for supporting near-term two-color experiments in space.

  12. Accuracy requirements in radiotherapy treatment planning.

    PubMed

    Buzdar, Saeed Ahmad; Afzal, Muhammad; Nazir, Aalia; Gadhi, Muhammad Asghar

    2013-06-01

    Radiation therapy attempts to deliver ionizing radiation to the tumour and can improve the survival chances and/or quality of life of patients. There are chances of errors and uncertainties in the entire process of radiotherapy that may affect the accuracy and precision of treatment management and decrease degree of conformation. All expected inaccuracies, like radiation dose determination, volume calculation, complete evaluation of the full extent of the tumour, biological behaviour of specific tumour types, organ motion during radiotherapy, imaging, biological/molecular uncertainties, sub-clinical diseases, microscopic spread of the disease, uncertainty in normal tissue responses and radiation morbidity need sound appreciation. Conformity can be increased by reduction of such inaccuracies. With the yearly increase in computing speed and advancement in other technologies the future will provide the opportunity to optimize a greater number of variables and reduce the errors in the treatment planning process. In multi-disciplined task of radiotherapy, efforts are needed to overcome the errors and uncertainty, not only by the physicists but also by radiologists, pathologists and oncologists to reduce molecular and biological uncertainties. The radiation therapy physics is advancing towards an optimal goal that is definitely to improve accuracy where necessary and to reduce uncertainty where possible.

  13. The Accuracy of WFPC2 Photometric Zeropoints

    NASA Astrophysics Data System (ADS)

    Heyer, I.; Richardson, M.; Whitmore, B.; Lubin, L.

    2004-07-01

    The accuracy of WFPC2 photometric zeropoints is examined using two methods. The first approach compares the zeropoints from five sources: Holtzman (1995), the HST Data Handbook (1995 and 2002 versions), and Dolphin (both 2000 and 2002 versions). We find the rms scatter between the different studies to be: 0.043 mag for F336W, 0.034 mag for F439W, 0.016 mag for F555W, and 0.018 mag for F814W. The second approach is a comparison of WFPC2 observations of NGC2419 with ground-based photometry from Stetson (from his website) and Saha et al. (private communication). The agreement between these comparisons is similar to the historical zeropoint comparisons. Hence we conclude that the true uncertainty of WFPC2 zeropoints is currently about 0.02-0.04 magnitudes, with some dependence on filter. The largest errors seen are 0.07 magnitudes. Since Poisson statistics would predict that 1% absolute accuracy should be attainable, we conclude that there are still systematic error sources which have not yet been identified.

  14. The Accuracy of WFPC2 Photometric Zeropoints

    NASA Astrophysics Data System (ADS)

    Heyer, I.; Richardson, M.; Whitmore, B.; Lubin, L.

    2002-12-01

    The accuracy of WFPC2 photometric zeropoints is examined using two methods. The first approach compares the zeropoints from five sources: Holtzman (1995), the HST Data Handbook (1995 and 2002 versions), and Dolphin (both 2000 and 2002 versions). We find the mean scatter between the different studies to be: 0.043 mag for F336W, 0.034 mag for F439W, 0.016 mag for F555W, and 0.018 mag for F814W. The second approach is a comparison of WFPC2 observations of NGC2419 with ground-based photometry from Stetson (from his website) and Saha et al. (private communication). The agreement between these comparisons is similar to the historical zeropoint comparisons. Hence we conclude that the true uncertainty of WFPC2 zeropoints is currently about 0.02-0.04 magnitudes. Since Poisson statistics would predict that 1% absolute accuracy should be attainable, we conclude that there are still systematic error sources which have not yet been identified.

  15. The Accuracy of WFPC2 Photometric Zeropoints

    NASA Astrophysics Data System (ADS)

    Heyer, I.; Richardson, M.; Whitmore, B. C.; Lubin, L. M.

    The accuracy of WFPC2 photometric zeropoints is examined using two methods. The first approach compares the zeropoints from five sources: Holtzman (1995), the HST Data Handbook (1995 and 2002 versions), and Dolphin (both 2000 and 2002 versions). We find the mean scatter between the different studies to be: 0.043 mag for F336W, 0.034 mag for F439W, 0.016 mag for F555W, and 0.018 mag for F814W. The second approach is a comparison of WFPC2 observations of NGC2419 with ground-based photometry from Stetson (from his website) and Saha et al. (private communication). The tentative agreement between these comparisons is similar to the historical zeropoint comparisons. Hence we conclude that the true uncertainty of WFPC2 zeropoints is currently about 0.02-0.03 magnitudes. Since Poisson statistics would predict that 1% absolute accuracy should be attainable, we conclude that there are still systematic error sources which have not yet been identified.

  16. High accuracy electronic material level sensor

    DOEpatents

    McEwan, Thomas E.

    1997-01-01

    The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference of the reflections at the launcher plate and at the material interface are used to determine the material level. Improved performance is obtained by the incorporation of: 1) a high accuracy time base that is referenced to a quartz crystal, 2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, 3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%.

  17. High accuracy electronic material level sensor

    DOEpatents

    McEwan, T.E.

    1997-03-11

    The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference of the reflections at the launcher plate and at the material interface are used to determine the material level. Improved performance is obtained by the incorporation of: (1) a high accuracy time base that is referenced to a quartz crystal, (2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, (3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%. 4 figs.
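
    The basic TDR arithmetic behind the sensor is simple: the round-trip delay between the launcher-plate reflection and the material-interface reflection gives the distance down to the surface, and the fill level is the guide length minus that distance. A hedged sketch of that arithmetic, with invented values and the pulse assumed to travel at roughly c in the air above the material:

```python
# Fill level from the round-trip delay between the launcher-plate and
# material-interface reflections (illustrative numbers only).
C = 299_792_458.0   # assumed pulse speed on the guide above the material (~c, m/s)

def fill_level_m(guide_len_m: float, dt_s: float) -> float:
    """Material level on a guide of known length, from round-trip delay dt_s."""
    distance_to_surface = C * dt_s / 2.0   # one-way distance from launcher plate
    return guide_len_m - distance_to_surface

print(f"{fill_level_m(2.0, 8.0e-9):.3f} m")   # 8 ns round trip -> ~0.80 m of fill
```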

  18. Accuracy of CNV Detection from GWAS Data

    PubMed Central

    Zhang, Dandan; Qian, Yudong; Akula, Nirmala; Alliey-Rodriguez, Ney; Tang, Jinsong; Gershon, Elliot S.; Liu, Chunyu

    2011-01-01

    Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites—Birdsuite, Partek, HelixTree, and PennCNV-Affy—in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two “gold standards,” the sequence data of Kidd et al., and aCGH data of Conrad et al. Algorithms for calling CNVs especially common ones need substantial improvement, and a “gold standard” for detection of CNVs remains to be established. PMID:21249187

  19. The neural bases of empathic accuracy

    PubMed Central

    Zaki, Jamil; Weber, Jochen; Bolger, Niall; Ochsner, Kevin

    2009-01-01

    Theories of empathy suggest that an accurate understanding of another's emotions should depend on affective, motor, and/or higher cognitive brain regions, but until recently no experimental method has been available to directly test these possibilities. Here, we present a functional imaging paradigm that allowed us to address this issue. We found that empathically accurate, as compared with inaccurate, judgments depended on (i) structures within the human mirror neuron system thought to be involved in shared sensorimotor representations, and (ii) regions implicated in mental state attribution, the superior temporal sulcus and medial prefrontal cortex. These data demonstrate that activity in these 2 sets of brain regions tracks with the accuracy of attributions made about another's internal emotional state. Taken together, these results provide both an experimental approach and theoretical insights for studying empathy and its dysfunction. PMID:19549849

  20. ACCURACY LIMITATIONS IN LONG TRACE PROFILOMETRY.

    SciTech Connect

    TAKACS,P.Z.; QIAN,S.

    2003-08-25

    As requirements for surface slope error quality of grazing incidence optics approach the 100 nanoradian level, it is necessary to improve the performance of the measuring instruments to achieve accurate and repeatable results at this level. We have identified a number of internal error sources in the Long Trace Profiler (LTP) that affect measurement quality at this level. The LTP is sensitive to phase shifts produced within the millimeter diameter of the pencil beam probe by optical path irregularities with scale lengths of a fraction of a millimeter. We examine the effects of mirror surface "macroroughness" and internal glass homogeneity on the accuracy of the LTP through experiment and theoretical modeling. We will place limits on the allowable surface "macroroughness" and glass homogeneity required to achieve accurate measurements in the nanoradian range.

  1. Guiding Center Equations of High Accuracy

    SciTech Connect

    R.B. White, G. Spizzo and M. Gobbin

    2013-03-29

    Guiding center simulations are an important means of predicting the effect of resistive and ideal magnetohydrodynamic instabilities on particle distributions in toroidal magnetically confined thermonuclear fusion research devices. Because saturated instabilities typically have amplitudes δB/B of a few times 10^-4, numerical accuracy is of concern in discovering the effect of mode particle resonances. We develop a means of following guiding center orbits which is greatly superior to the methods currently in use. In the presence of ripple or time dependent magnetic perturbations both energy and canonical momentum are conserved to better than one part in 10^14, and the relation between changes in canonical momentum and energy is also conserved to very high order.

  2. Accuracy of the Cloud Integrating Nephelometer

    NASA Technical Reports Server (NTRS)

    Gerber, Hermann E.

    2004-01-01

    Potential error sources for measurements with the Cloud Integrating Nephelometer (CIN) are discussed and analyzed, including systematic errors of the measurement approach, flow and particle-trajectory deviations at flight velocity, ice-crystal breakup on probe surfaces, and errors in calibration and developing scaling constants. It is concluded that errors are minimal, and that the accuracy of the CIN should be close to the systematic behavior of the CIN derived in Gerber et al. (2000). Absolute calibration of the CIN with a transmissometer operating co-located in a mountain-top cloud shows that the earlier scaling constant for the optical extinction coefficient obtained by other means is within 5% of the absolute calibration value, and that the CIN measurements on the Citation aircraft flights during the CRYSTAL-FACE study are accurate.

  3. Stereotype accuracy of ballet and modern dancers.

    PubMed

    Clabaugh, Alison; Morling, Beth

    2004-02-01

    The authors recorded preprofessional ballet and modern dancers' perceptions of the personality traits of each type of dancer and self-reports of their own standing, to test the accuracy of the group stereotypes. Participants accurately stereotyped ballet dancers as scoring higher than modern dancers on Fear of Negative Evaluation and Personal Need for Structure and accurately viewed the groups as equal on Fitness Esteem. Participants inaccurately stereotyped ballet dancers as lower on Body Esteem; the groups actually scored the same. Sensitivity correlations across traits indicated that dancers were accurate about the relative magnitudes of trait differences in the two types of dancers. A group of nondancers reported stereotypes that were usually in the right direction although of inaccurate magnitude, and nondancers were sensitive to the relative sizes of group differences across traits. PMID:14760963

  4. Quantum mechanical calculations to chemical accuracy

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.

    1991-01-01

    The accuracy of current molecular-structure calculations is illustrated with examples of quantum mechanical solutions for chemical problems. Two approaches are considered: (1) the coupled-cluster singles and doubles (CCSD) method with a perturbational estimate of the contribution of connected triple excitations, or CCSD(T); and (2) the multireference configuration-interaction (MRCI) approach to the correlation problem. The MRCI approach gains greater applicability by means of size-extensive modifications such as the averaged-coupled pair functional approach. The examples of solutions to chemical problems include those for C-H bond energies, the vibrational frequencies of O3, identifying the ground state of Al2 and Si2, and the Lewis-Rayleigh afterglow and the Hermann IR system of N2. Accurate molecular-wave functions can be derived from a combination of basis-set saturation studies and full configuration-interaction calculations.

  5. High current high accuracy IGBT pulse generator

    SciTech Connect

    Nesterov, V.V.; Donaldson, A.R.

    1995-05-01

    A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBT). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse to pulse power losses. The rack mounted pulse generator contains a 525 μF capacitor bank. It can deliver 500 A at 900 V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles.

  6. The empirical accuracy of uncertain inference models

    NASA Technical Reports Server (NTRS)

    Vaughan, David S.; Yadrick, Robert M.; Perrin, Bruce M.; Wise, Ben P.

    1987-01-01

    Uncertainty is a pervasive feature of the domains in which expert systems are designed to function. Research designed to test uncertain inference methods for accuracy and robustness, in accordance with standard engineering practice, is reviewed. Several studies were conducted to assess how well various methods perform on problems constructed so that correct answers are known, and to find out what underlying features of a problem cause strong or weak performance. For each method studied, situations were identified in which performance deteriorates dramatically. Over a broad range of problems, some well known methods do only about as well as a simple linear regression model, and often much worse than a simple independence probability model. The results indicate that some commercially available expert system shells should be used with caution, because the uncertain inference models that they implement can yield rather inaccurate results.

  7. Positional Accuracy of Gps Satellite Almanac

    NASA Astrophysics Data System (ADS)

    Ma, Lihua; Zhou, Shangli

    2014-12-01

    How to accelerate signal acquisition and shorten start-up time are key problems in the Global Positioning System (GPS). The GPS satellite almanac plays an important role during signal reception. Almanac accuracy directly affects the speed of GPS signal acquisition, the start time of the receiver, and even system performance to some extent. Using precise ephemeris products released by the International GNSS Service (IGS), the authors analyse the accuracy of the GPS satellite almanac from the first day to the third day of the 1805th GPS week (August 11 to 13, 2014 in the Gregorian calendar). The results show that the mean of the position errors in the three-dimensional coordinate system varies from about 1 kilometer to 3 kilometers, which can satisfy the needs of common users.

  8. Quantitative code accuracy evaluation of ISP33

    SciTech Connect

    Kalli, H.; Miwrrin, A.; Purhonen, H.

    1995-09-01

    Aiming at quantifying code accuracy, a methodology based on the Fast Fourier Transform has been developed at the University of Pisa, Italy. The paper deals with a short presentation of the methodology and its application to pre-test and post-test calculations submitted to the International Standard Problem ISP33. This was a double-blind natural circulation exercise with a stepwise reduced primary coolant inventory, performed in the PACTEL facility in Finland. PACTEL is a 1/305 volumetrically scaled, full-height simulator of the Russian type VVER-440 pressurized water reactor, with horizontal steam generators and loop seals in both cold and hot legs. Fifteen foreign organizations participated in ISP33, with 21 blind calculations and 20 post-test calculations; altogether 10 different thermal hydraulic codes and code versions were used. The results of the application of the methodology to nine selected measured quantities are summarized.

  9. Accuracy of lineaments mapping from space

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M.

    1989-01-01

    The use of Landsat and other space imaging systems for lineaments detection is analyzed in terms of their effectiveness in recognizing and mapping fractures and faults, and the results of several studies providing a quantitative assessment of lineaments mapping accuracies are discussed. The cases under investigation include a Landsat image of the surface overlying a part of the Anadarko Basin of Oklahoma, the Landsat images and selected radar imagery of major lineaments systems distributed over much of the Canadian Shield, and space imagery covering a part of the East African Rift in Kenya. It is demonstrated that space imagery can detect a significant portion of a region's fracture pattern; however, significant fractions of the faults and fractures recorded on a field-produced geological map are missing from the imagery, as is evident in the Kenya case.

  10. Positioning accuracy of the neurotron 1000

    SciTech Connect

    Cox, R.S.; Murphy, M.J.

    1995-12-31

    The Neurotron 1000 is a novel treatment machine under development for frameless stereotaxic radiosurgery that consists of a compact X-band accelerator mounted on a robotic arm. The therapy beam is guided to the lesion by an imaging system, which includes two diagnostic x-ray cameras that view the patient during treatment. Patient position and motion are measured by the imaging system and appropriate corrections are communicated in real time to the robotic arm for beam targeting and motion tracking. The three tests reported here measured the pointing accuracy of the therapy beam and the present capability of the imaging guidance system. The positioning and pointing test measured the ability of the robotic arm to direct the beam through a test isocenter from arbitrary arm positions. The test isocenter was marked by a small light-sensitive crystal and the beam axis was simulated by a laser.

  11. Combining Multiple Gyroscope Outputs for Increased Accuracy

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    2003-01-01

    A proposed method of processing the outputs of multiple gyroscopes to increase the accuracy of rate (that is, angular-velocity) readings has been developed theoretically and demonstrated by computer simulation. Although the method is applicable, in principle, to any gyroscopes, it is intended especially for application to gyroscopes that are parts of microelectromechanical systems (MEMS). The method is based on the concept that the collective performance of multiple, relatively inexpensive, nominally identical devices can be better than that of one of the devices considered by itself. The method would make it possible to synthesize the readings of a single, more accurate gyroscope (a virtual gyroscope) from the outputs of a large number of microscopic gyroscopes fabricated together on a single MEMS chip. The big advantage would be that the combination of the MEMS gyroscope array and the processing circuitry needed to implement the method would be smaller, lighter in weight, and less power-hungry, relative to a conventional gyroscope of equal accuracy. The method (see figure) is one of combining and filtering the digitized outputs of multiple gyroscopes to obtain minimum-variance estimates of rate. In the combining-and-filtering operations, measurement data from the gyroscopes would be weighted and smoothed with respect to each other according to the gain matrix of a minimum- variance filter. According to Kalman-filter theory, the gain matrix of the minimum-variance filter is uniquely specified by the filter covariance, which propagates according to a matrix Riccati equation. The present method incorporates an exact analytical solution of this equation.
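
    In the static limit, the minimum-variance combination described above reduces to inverse-variance weighting of the simultaneous readings. The sketch below shows only that simplified case (the full method propagates a matrix Riccati equation for the filter covariance); the names and numbers are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def fuse_gyros(readings, noise_vars):
            """Combine simultaneous rate readings from N gyros into one estimate.

            Static simplification of a minimum-variance filter: each gyro is
            weighted by its inverse noise variance, normalized to sum to one.
            """
            w = 1.0 / np.asarray(noise_vars)
            w /= w.sum()
            return float(np.dot(w, readings))

        rng = np.random.default_rng(0)
        true_rate = 0.5                                   # rad/s
        sigmas = np.full(100, 0.05)                       # 100 nominally identical MEMS gyros
        readings = true_rate + sigmas * rng.standard_normal(100)
        print(fuse_gyros(readings, sigmas**2))            # error variance shrinks roughly as 1/N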

  12. Improving the accuracy of death certification

    PubMed Central

    Myers, K A; Farquhar, D R

    1998-01-01

    BACKGROUND: Population-based mortality statistics are derived from the information recorded on death certificates. This information is used for many important purposes, such as the development of public health programs and the allocation of health care resources. Although most physicians are confronted with the task of completing death certificates, many do not receive adequate training in this skill. Resulting inaccuracies in information undermine the quality of the data derived from death certificates. METHODS: An educational intervention was designed and implemented to improve internal medicine residents' accuracy in death certificate completion. A total of 229 death certificates (146 completed before and 83 completed after the intervention) were audited for major and minor errors, and the rates of errors before and after the intervention were compared. RESULTS: Major errors were identified on 32.9% of the death certificates completed before the intervention, a rate comparable to previously reported rates for internal medicine services in teaching hospitals. Following the intervention the major error rate decreased to 15.7% (p = 0.01). The reduction in the major error rate was accounted for by significant reductions in the rate of listing of mechanism of death without a legitimate underlying cause of death (15.8% v. 4.8%) (p = 0.01) and the rate of improper sequencing of death certificate information (15.8% v. 6.0%) (p = 0.03). INTERPRETATION: Errors are common in the completion of death certificates in the inpatient teaching hospital setting. The accuracy of death certification can be improved with the implementation of a simple educational intervention. PMID:9614825

  13. Food Label Accuracy of Common Snack Foods

    PubMed Central

    Jumpertz, Reiner; Venti, Colleen A; Le, Duc Son; Michaels, Jennifer; Parrington, Shannon; Krakoff, Jonathan; Votruba, Susanne

    2012-01-01

    Nutrition labels have raised awareness of the energetic value of foods, and represent for many a pivotal guideline to regulate food intake. However, recent data have created doubts on label accuracy. Therefore we tested label accuracy for energy and macronutrient content of prepackaged energy-dense snack food products. We measured “true” caloric content of 24 popular snack food products in the U.S. and determined macronutrient content in 10 selected items. Bomb calorimetry and food factors were used to estimate energy content. Macronutrient content was determined according to Official Methods of Analysis. Calorimetric measurements were performed in our metabolic laboratory between April 20th and May 18th and macronutrient content was measured between September 28th and October 7th of 2010. Serving size, by weight, exceeded label statements by 1.2% [median] (25th percentile −1.4, 75th percentile 4.3, p=0.10). When differences in serving size were accounted for, metabolizable calories were 6.8 kcal (0.5, 23.5, p=0.0003) or 4.3% (0.2, 13.7, p=0.001) higher than the label statement. In a small convenience sample of the tested snack foods, carbohydrate content exceeded label statements by 7.7% (0.8, 16.7, p=0.01); however fat and protein content were not significantly different from label statements (−12.8% [−38.6, 9.6], p=0.23; 6.1% [−6.1, 17.5], p=0.32). Carbohydrate content explained 40% and serving size an additional 55% of the excess calories. Among a convenience sample of energy-dense snack foods, caloric content is higher than stated on the nutrition labels, but overall well within FDA limits. This discrepancy may be explained by inaccurate carbohydrate content and serving size. PMID:23505182

  14. Meteor orbit determination with improved accuracy

    NASA Astrophysics Data System (ADS)

    Dmitriev, Vasily; Lupovla, Valery; Gritsevich, Maria

    2015-08-01

    Modern observational techniques make it possible to retrieve a meteor's trajectory and velocity with high accuracy, and high quality observational data are accumulating rapidly every year. This creates new challenges for the problem of meteor orbit determination. Currently, the traditional technique, based on applying corrections to the zenith distance and apparent velocity using the well-known Schiaparelli formula, is widely used. An alternative approach relies on correcting the meteoroid trajectory by numerical integration of the equation of motion (Clark & Wiegert, 2011; Zuluaga et al., 2013). In our work we propose a technique for meteor orbit determination based on strict coordinate transformation and integration of the differential equation of motion, and we demonstrate its advantage over the traditional technique. We provide results of calculations by the different methods for real, recently observed fireballs, as well as for simulated cases with a priori known parameters. The simulated data were used to demonstrate the conditions under which application of the more complex technique is necessary. It was found that for several low velocity meteoroids the traditional technique may lead to a dramatic loss of orbit precision (above all due to errors in Ω, because this parameter has the highest potential accuracy). Our results are complemented by an analysis of the sources of perturbations, allowing us to indicate quantitatively which factors have to be considered in orbit determination. In addition, the developed method includes analysis of observational error propagation based on strict covariance transition, which is also presented. Acknowledgements: This work was carried out at MIIGAiK and supported by the Russian Science Foundation, project No. 14-22-00197. References: Clark, D. L., & Wiegert, P. A. (2011). A numerical comparison with the Ceplecha analytical meteoroid orbit determination method. Meteoritics & Planetary Science, 46(8), pp. 1217
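
    As a hedged sketch of the numerical-integration approach advocated above, the snippet below back-propagates a hypothetical meteoroid state under point-mass Earth gravity only; the authors' method also treats perturbations and covariance propagation, which are omitted here, and the initial state is invented for illustration.

        import numpy as np
        from scipy.integrate import solve_ivp

        MU_EARTH = 3.986004418e14  # gravitational parameter of Earth, m^3/s^2

        def two_body(t, y):
            """Equation of motion under a point-mass Earth (no perturbations)."""
            r = y[:3]
            a = -MU_EARTH * r / np.linalg.norm(r)**3
            return np.concatenate([y[3:], a])

        # Hypothetical pre-atmospheric state: geocentric position (m) and velocity (m/s)
        y0 = np.array([6471e3, 0.0, 0.0, 0.0, -18e3, 6e3])

        # Integrate backwards for one hour to carry the state away from Earth
        sol = solve_ivp(two_body, (0.0, -3600.0), y0, method="DOP853", rtol=1e-12, atol=1e-6)
        print(sol.y[:3, -1] / 1e3)  # geocentric position after 1 h of back-propagation, km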

  15. Improving Accuracy of Image Classification Using GIS

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Prasad, T. S.; Bala Manikavelu, P. M.; Vijayan, D.

    The remote sensing signal which reaches the sensor on-board the satellite is a complex aggregation of signals (in an agricultural field, for example) from soil (with all its variations such as colour, texture, particle size, clay content, organic and nutrient content, inorganic content, water content etc.), plant (height, architecture, leaf area index, mean canopy inclination etc.), canopy closure status and atmospheric effects, and from this we want to find, say, characteristics of vegetation. If the sensor on-board the satellite makes measurements in n bands (n of n*1 dimension) and the number of classes in an image is c (f of c*1 dimension), then, considering linear mixture modeling, the pixel classification problem can be written as n = M*f + ε, where M is the transformation matrix of (n*c) dimension and ε represents the error vector (noise). The problem is to estimate f by inverting the above equation, and the possible solutions to such a problem are many. Thus, recovering individual classes from satellite data is an ill-posed inverse problem for which a unique solution is not feasible, and this limits the obtainable classification accuracy. Maximum Likelihood (ML) is the constraint most often used in solving such a situation, which suffers from the handicaps of an assumed Gaussian distribution and the assumed randomness of pixels (in fact there is high auto-correlation among the pixels of a specific class, and further high auto-correlation among the pixels in sub-classes where homogeneity is high). Due to this, achieving very high accuracy in the classification of remote sensing images is not a straightforward proposition. With the availability of GIS for the area under study, (i) a priori probabilities for different classes can be assigned to the ML classifier in more realistic terms and (ii) the purity of training sets for different thematic classes can be better ascertained. To what extent this could improve the accuracy of classification in ML classifier
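
    A minimal sketch of inverting the mixture model n = M*f + ε, assuming hypothetical endmember signatures and a simple non-negativity constraint in place of the ML machinery discussed above; the matrix values are invented for illustration.

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical endmember signatures: 4 bands x 3 classes (columns of M)
        M = np.array([[0.10, 0.60, 0.30],
                      [0.15, 0.55, 0.45],
                      [0.40, 0.30, 0.60],
                      [0.55, 0.20, 0.70]])

        f_true = np.array([0.2, 0.5, 0.3])   # class fractions within one pixel
        n = M @ f_true + 0.005 * np.random.default_rng(1).standard_normal(4)

        f_hat, _ = nnls(M, n)                # least squares subject to f >= 0
        f_hat /= f_hat.sum()                 # renormalize fractions to sum to 1
        print(f_hat)                         # close to f_true for low noise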

  16. Insensitivity of the octahedral spherical hohlraum to power imbalance, pointing accuracy, and assemblage accuracy

    SciTech Connect

    Huo, Wen Yi; Zhao, Yiqing; Zheng, Wudi; Liu, Jie; Lan, Ke

    2014-11-15

    The random radiation asymmetry in the octahedral spherical hohlraum [K. Lan et al., Phys. Plasmas 21, 010704 (2014)] arising from the power imbalance, pointing accuracy of laser quads, and the assemblage accuracy of the capsule is investigated by using the 3-dimensional view factor model. From our study, for the spherical hohlraum, the random radiation asymmetry arising from the power imbalance of the laser quads is about half of that in the cylindrical hohlraum; the random asymmetry arising from the pointing error is about one order lower than that in the cylindrical hohlraum; and the random asymmetry arising from the assemblage error of the capsule is about one third of that in the cylindrical hohlraum. Moreover, the random radiation asymmetry in the spherical hohlraum is also less than that in the elliptical hohlraum. The results indicate that the spherical hohlraum is more insensitive to these random variations than the cylindrical and elliptical hohlraums. Hence, the spherical hohlraum can relax the requirements on the power imbalance and pointing accuracy of the laser facility and on the assemblage accuracy of the capsule.

  17. 40 CFR 86.1338-2007 - Emission measurement accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... concentration in the calibration curve for which an accuracy of ±2 percent of point has been demonstrated as... measurement must be made to ensure the accuracy of the calibration curve to within ±2 percent of...

  18. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Accuracy of advertising. 740.2 Section 740.2 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS ACCURACY OF ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes...

  19. [Accuracy of a pulse oximeter during hypoxia].

    PubMed

    Tachibana, C; Fukada, T; Hasegawa, R; Satoh, K; Furuya, Y; Ohe, Y

    1996-04-01

    The accuracy of the pulse oximeter was examined in hypoxic patients. We studied 11 cyanotic congenital heart disease patients during surgery, and compared the arterial oxygen saturation determined by simultaneous blood gas analysis (CIBA-CORNING 288 BLOOD GAS SYSTEM, SaO2) and by the pulse oximeter (DATEX SATELITE, with finger probe, SpO2). Ninety sets of data on SpO2 and SaO2 were obtained. The bias (SpO2-SaO2) was 1.7 +/- 6.9 (mean +/- SD) %. In cyanotic congenital heart disease patients, SpO2 values were significantly higher than SaO2. Although the reason is unknown, in constantly hypoxic patients, SpO2 values are possibly overestimated. In particular, pulse oximetry at low levels of saturation (SaO2 below 80%) was not as accurate as at higher saturation levels (SaO2 over 80%). There was a positive correlation between SpO2 and SaO2 (linear regression analysis yields the equation y = 0.68x + 26.0, r = 0.93). In conclusion, the pulse oximeter is useful to monitor oxygen saturation in constantly hypoxic patients, but the values thus obtained should be compared with directly measured values when hypoxemia is severe.
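
    The bias and regression statistics quoted above follow from simple computations on paired readings; the sketch below uses invented example values, not the study's data.

        import numpy as np

        # Hypothetical paired saturation readings in percent (not the study's raw data)
        sao2 = np.array([68.0, 72.5, 75.0, 80.0, 84.0, 88.5, 92.0])  # blood gas analysis
        spo2 = np.array([71.0, 75.5, 78.0, 81.5, 85.0, 89.0, 92.5])  # pulse oximeter

        bias = spo2 - sao2
        print(f"bias = {bias.mean():.1f} +/- {bias.std(ddof=1):.1f} %")  # mean +/- SD

        slope, intercept = np.polyfit(sao2, spo2, 1)   # SpO2 = slope * SaO2 + intercept
        r = np.corrcoef(sao2, spo2)[0, 1]
        print(f"y = {slope:.2f}x + {intercept:.1f}, r = {r:.2f}")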

  20. Accuracy of bottled drinking water label content.

    PubMed

    Khan, Nazeer B; Chohan, Arham N

    2010-07-01

    The purpose of the study was to compare the accuracy of the concentrations of fluoride (F), calcium (Ca), pH, and total dissolved solids (TDS) stated on the labels of the various brands of bottled drinking water available in Riyadh, Saudi Arabia. Twenty-one different brands of locally produced non-carbonated (still water) bottled drinking water were collected from the supermarkets of Riyadh. The concentrations of F, Ca, and TDS and the pH values were noted from the labels of the bottles. The samples were analyzed for these concentrations in the laboratory using an atomic absorption spectrophotometer. The mean levels of F, Ca, and pH were found to be 0.86 ppm, 38.47 ppm, and 7.5, respectively, significantly higher than the mean concentrations reported on the labels. In contrast, the mean TDS concentration was found to be 118.87 ppm, significantly lower than the mean reported on the labels. In tropical countries like Saudi Arabia, the appropriate level of F concentration in drinking water as recommended by the World Health Organization (WHO) should be 0.6-0.7 ppm. Since the level of F was found to be significantly higher than the WHO recommended level, children exposed to this level could develop objectionable fluorosis. The other findings, such as pH value and concentrations of Ca and TDS, were in the ranges recommended by the WHO and Saudi standard limits and therefore should have no obvious significant health implications.

  1. Accuracies of diagnostic methods for acute appendicitis.

    PubMed

    Park, Jong Seob; Jeong, Jin Ho; Lee, Jong In; Lee, Jong Hoon; Park, Jea Kun; Moon, Hyoun Jong

    2013-01-01

    The objectives were to evaluate the effectiveness of ultrasonography, computed tomography, and physical examination for diagnosing acute appendicitis by analyzing their accuracies and negative appendectomy rates in a clinical rather than research setting. A total of 2763 subjects were enrolled. Sensitivity, specificity, positive predictive value, negative predictive value, and negative appendectomy rate for ultrasonography, computed tomography, and physical examination were calculated. Confirmed positive acute appendicitis was defined based on pathologic findings, and confirmed negative acute appendicitis was defined by pathologic findings as well as by clinical follow-up. Sensitivity, specificity, positive predictive value, and negative predictive value for ultrasonography were 99.1, 91.7, 96.5, and 97.7 per cent, respectively; for computed tomography, 96.4, 95.4, 95.6, and 96.3 per cent, respectively; and for physical examination, 99.0, 76.1, 88.1, and 97.6 per cent, respectively. The negative appendectomy rate was 5.8 per cent (5.2% in the ultrasonography group, 4.3% in the computed tomography group, and 12.2% in the physical examination group). Ultrasonography/computed tomography should be performed routinely for diagnosis of acute appendicitis. However, in view of its advantages, ultrasonography should be performed first. Also, if the result of a physical examination is negative, imaging studies after physical examination can be unnecessary.
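
    The four reported measures follow directly from a 2x2 table of test results against confirmed diagnoses; a minimal sketch with hypothetical counts (not the study's data):

        def diagnostic_metrics(tp, fp, fn, tn):
            """Standard 2x2 diagnostic accuracy measures, returned as fractions."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv":         tp / (tp + fp),
                "npv":         tn / (tn + fn),
            }

        # Hypothetical counts for one imaging modality
        print(diagnostic_metrics(tp=540, fp=20, fn=5, tn=220))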

  2. High accuracy wall thickness loss monitoring

    NASA Astrophysics Data System (ADS)

    Gajdacsi, Attila; Cegla, Frederic

    2014-02-01

    Ultrasonic inspection of wall thickness in pipes is a standard technique applied widely in the petrochemical industry. The potential precision of repeat measurements with permanently installed ultrasonic sensors, however, significantly surpasses that of handheld sensors, as uncertainties associated with coupling fluids and positional offsets are eliminated. With permanently installed sensors the precise evaluation of very small wall loss rates becomes feasible in a matter of hours. The improved accuracy and speed of wall loss rate measurements can be used to evaluate and develop more effective mitigation strategies. This paper presents an overview of factors causing variability in the ultrasonic measurements, which are then systematically addressed, and an experimental setup with the best achievable stability based on these considerations is presented. In the experimental setup galvanic corrosion is used to induce predictable and very small wall thickness loss. Furthermore, it is shown that the experimental measurements can be used to assess the reduction in wall loss produced by the injection of corrosion inhibitor. The measurements show an estimated standard deviation of about 20 nm, which in turn allows us to evaluate the effect and behaviour of corrosion inhibitors within less than an hour.

  3. [History, accuracy and precision of SMBG devices].

    PubMed

    Dufaitre-Patouraux, L; Vague, P; Lassmann-Vague, V

    2003-04-01

    Self-monitoring of blood glucose started only fifty years ago. Until then, metabolic control was evaluated by means of qualitative urinary glucose measurements, often of poor reliability. Reagent strips were the first semi-quantitative tests for monitoring blood glucose, and in the late seventies meters were launched on the market. Initially such devices were intended for medical staff, but thanks to improvements in ease of use they became increasingly suitable for patients and are now a necessary tool for self-blood glucose monitoring. Advances in technology enabled photometric measurement and, more recently, electrochemical measurement. In the nineties, improvements were made mainly in meter miniaturisation, reduction of reaction and reading time, and simplification of blood sampling and capillary blood application. Although accuracy and precision were central concerns from the beginning of self-blood glucose monitoring, recommendations from diabetology societies only appeared in the late eighties. Now the French drug agency, AFSSAPS, requires that a meter be evaluated before any launch on the market. According to recent publications very few meters meet the reliability criteria set up by diabetology societies in the late nineties. Finally, because devices may be handled by numerous persons in hospitals, the use of meters as a possible source of nosocomial infections has recently been questioned and is subject to very strict guidelines published by AFSSAPS.

  4. Time and position accuracy using codeless GPS

    NASA Technical Reports Server (NTRS)

    Dunn, C. E.; Jefferson, D. C.; Lichten, S. M.; Thomas, J. B.; Vigue, Y.; Young, L. E.

    1994-01-01

    The Global Positioning System has allowed scientists and engineers to make measurements having accuracy far beyond the original 15 meter goal of the system. Using global networks of P-Code capable receivers and extensive post-processing, geodesists have achieved baseline precision of a few parts per billion, and clock offsets have been measured at the nanosecond level over intercontinental distances. A cloud hangs over this picture, however. The Department of Defense plans to encrypt the P-Code (called Anti-Spoofing, or AS) in the fall of 1993. After this event, geodetic and time measurements will have to be made using codeless GPS receivers. However, there appears to be a silver lining to the cloud. In response to the anticipated encryption of the P-Code, the geodetic and GPS receiver community has developed some remarkably effective means of coping with AS without classified information. We will discuss various codeless techniques currently available and the data noise resulting from each. We will review some geodetic results obtained using only codeless data, and discuss the implications for time measurements. Finally, we will present the status of GPS research at JPL in relation to codeless clock measurements.

  5. High accuracy in situ radiometric mapping.

    PubMed

    Tyler, Andrew N

    2004-01-01

    In situ and airborne gamma ray spectrometry have been shown to provide rapid and spatially representative estimates of environmental radioactivity across a range of landscapes. However, one of the principal limitations of this technique has been the influence of changes in the vertical distribution of the source (e.g. 137Cs) on the observed photon fluence resulting in a significant reduction in the accuracy of the in situ activity measurement. A flexible approach for single gamma photon emitting radionuclides is presented, which relies on the quantification of forward scattering (or valley region between the full energy peak and Compton edge) within the gamma ray spectrum to compensate for changes in the 137Cs vertical activity distribution. This novel in situ method lends itself to the mapping of activity concentrations in environments that exhibit systematic changes in the vertical activity distribution. The robustness of this approach has been demonstrated in a salt marsh environment on the Solway coast, SW Scotland, with both a 7.6 cm x 7.6 cm NaI(Tl) detector and a 35% n-type HPGe detector. Application to ploughed field environments has also been demonstrated using HPGe detector, including its application to the estimation of field moist bulk density and soil erosion measurement. Ongoing research work is also outlined.

  6. Surface accuracy analysis of large deployable antennas

    NASA Astrophysics Data System (ADS)

    Tang, Yaqiong; Li, Tuanjie; Wang, Zuowei; Deng, Hanqing

    2014-11-01

    This paper presents an analysis of the systematic surface figure error influenced by three factors: errors of faceted paraboloids, fabrication imperfection, and random thermal strains in orbit. Firstly, the computational formulas for the root-mean-square surface deviations caused by these factors are presented respectively. The stochastic finite element method is applied to derive the computational formulas for fabrication imperfection and random thermal strains, by which the sensitivity of surface accuracy to component imperfection can be revealed. Then the Monte Carlo simulation method is introduced to obtain the surface figure by sampling tests on the random errors. Finally, the analytical method is applied to the surface figure error of the AstroMesh deployable reflector. The results show that the deviations between the root-mean-square surface errors calculated by the proposed formulas, with far less computing time, and those from the Monte Carlo simulation method are less than 2%, which indicates that the proposed method is efficient and accurate enough to analyze the systematic surface figure error of a large deployable antenna. Moreover, further investigations of the relationship between the surface RMS deviation and antenna parameters, including aperture and the number of subdivisions, are presented at the end.

  7. The accuracy of a voice vote

    PubMed Central

    Titze, Ingo R.; Palaparthi, Anil

    2014-01-01

    The accuracy of a voice vote was addressed by systematically varying group size, individual voter loudness, and words that are typically used to express agreement or disagreement. Five judges rated the loudness of two competing groups in A-B comparison tasks. Acoustic analysis was performed to determine the sound energy level of each word uttered by each group. Results showed that individual voter differences in energy level can grossly alter group loudness and bias the vote. Unless some control is imposed on the sound level of individual voters, it is difficult to establish even a two-thirds majority, much less a simple majority. There is no symmetry in the bias created by unequal sound production of individuals. Soft voices do not bias the group loudness much, but loud voices do. The phonetic balance of the two words chosen (e.g., “yea” and “nay” as opposed to “aye” and “no”) seems to be less of an issue. PMID:24437776

  8. EGM improves speed, accuracy in gas measurement

    SciTech Connect

    Sqyres, M.

    1995-07-01

    The natural gas industry's adoption of electronic gas measurement (EGM) as a way to increase speed and accuracy in obtaining measurement data also has created a need for an electronic data management system. These systems, if not properly designed and implemented, can potentially render the entire process useless. Therefore, it is essential that the system add functionality that complements the power of the hardware. With proper implementation, such a system will not only facilitate operations in today's fast-paced, post FERC 636 environment, but also will establish a foundation for meeting tomorrow's measurement challenges. An effective EGM data editing software package can provide a suite of tools to provide accurate, timely data processing. This can be done in a structured, feature-rich, well-designed environment using a user-friendly, graphical user interface (GUI). The program can include functions to perform the following tasks: import data; recognize, review, and correct anomalies; report; export; and provide advanced ad hoc query capabilities. Other considerations can include the developer's commitment, resources, and long-term strategy, vis-a-vis EGM, as well as the industry's overall acceptance of the package.

  9. Dimensional accuracy of thermoformed polymethyl methacrylate.

    PubMed

    Jagger, R G

    1996-12-01

    Thermoforming of polymethyl methacrylate sheet is used to produce a number of different types of dental appliances. The purpose of this study was to determine the dimensional accuracy of thermoformed polymethyl methacrylate specimens. Five blanks of the acrylic resin were thermoformed on stone casts prepared from a silicone mold of a brass master die. The distances between index marks were measured both on the cast and on the thermoformed blanks with an optical comparator. Measurements on the blanks were made again 24 hours after processing and then 1 week, 1 month, and 3 months after immersion in water. Linear shrinkage of less than 1% (range 0.37% to 0.52%) was observed 24 hours after removal of the blanks from the cast. Immersion of the thermoformed specimens in water resulted in an increase in measured dimensions, but after 3 months' immersion these increases were still less than those of the cast (range 0.07% to 0.18%). It was concluded that it is possible to thermoform Perspex polymethyl methacrylate accurately.

  10. The Good Judge of Personality: Characteristics, Behaviors, and Observer Accuracy

    PubMed Central

    Letzring, Tera D.

    2008-01-01

    Personality characteristics and behaviors related to judgmental accuracy following unstructured interactions among previously unacquainted triads were examined. Judgmental accuracy was related to social skill, agreeableness, and adjustment. Accuracy of observers of the interactions was positively related to the number of good judges in the interaction, which implies that the personality and behaviors of the judge are important for creating a situation in which targets will reveal relevant personality cues. Furthermore, the finding that observer accuracy was positively related to the number of good judge partners suggests that judgmental accuracy is based on more than detection and utilization skills of the judge. PMID:19649134

  11. Ultrahigh accuracy imaging modality for super-localization microscopy.

    PubMed

    Chao, Jerry; Ram, Sripad; Ward, E Sally; Ober, Raimund J

    2013-04-01

    Super-localization microscopy encompasses techniques that depend on the accurate localization of individual molecules from generally low-light images. The obtainable localization accuracies, however, are ultimately limited by the image detector's pixelation and noise. We present the ultrahigh accuracy imaging modality (UAIM), which allows users to obtain accuracies approaching the accuracy that is achievable only in the absence of detector pixelation and noise, and which we found can experimentally provide a >200% accuracy improvement over conventional low-light imaging. PMID:23455923

  12. Online Medical Device Use Prediction: Assessment of Accuracy.

    PubMed

    Maktabi, Marianne; Neumuth, Thomas

    2016-01-01

    Cost-intensive units in the hospital such as the operating room require effective resource management to improve surgical workflow and patient care. To maximize efficiency, online management systems should accurately forecast the use of technical resources (medical instruments and devices). We compare several surgical activities, such as use of the coagulator, based on spectral analysis and application of a linear time-variant system to obtain future technical resource usage. In our study we examine the influence of the duration of usage and the total usage rate of the technical equipment on the prediction performance over several time intervals. A cross-validation was conducted with sixty-two neck dissections to evaluate the prediction performance. The performance of a use-state forecast does not change whether duration is considered or not, but decreases with lower total usage rates of the observed instruments. A minimum number of surgical workflow recordings (here: 62) and >5 minute time intervals for the use-state forecast are required for applying the described method in surgical practice. The work presented here might support the reduction of resource conflicts when resources are shared among different operating rooms. PMID:27577445

  13. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models is compared for seven events since 2006, based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy achieved during an event with modified, after-the-fact applications of the models provide improved methods for real-time forecasting of future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data in the models increases accuracy by approximately 15% for the events examined.

  14. Kinematics of a striking task: accuracy and speed-accuracy considerations.

    PubMed

    Parrington, Lucy; Ball, Kevin; MacMahon, Clare

    2015-01-01

    Handballing in Australian football (AF) is the most efficient passing method, yet little research exists examining the technical factors associated with accuracy. This study had three aims: (a) to explore the kinematic differences between accurate and inaccurate handballers, (b) to compare within-individual successful (hit target) and unsuccessful (missed target) handballs and (c) to assess handballing when both accuracy and speed of ball-travel were combined, using a novel approach utilising canonical correlation analysis. Three-dimensional data were collected on 18 elite AF players who performed handballs towards a target. More accurate handballers exhibited a significantly straighter hand-path, slower elbow angular velocity and smaller elbow range of motion (ROM) compared to the inaccurate group. Successful handballs displayed significantly larger trunk ROM, maximum trunk rotation velocity and step-angle and smaller elbow ROM in comparison to the unsuccessful handballs. The canonical model explained 73% of the variance shared between the variable sets, with a significant relationship found between hand-path, elbow ROM and maximum elbow angular velocity (predictors) and hand-speed and accuracy (dependent variables). Interestingly, not all parameters were the same across the analyses, with the technical differences between inaccurate and accurate handballers differing from those between successful and unsuccessful handballs in the within-individual analysis. PMID:25079111
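
    A minimal sketch of the canonical correlation step described above, assuming invented stand-ins for the kinematic predictors and the speed/accuracy outcomes rather than the study's measurements.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(2)
        n = 18  # players

        # Hypothetical predictors: hand-path straightness, elbow ROM, elbow angular velocity
        X = rng.standard_normal((n, 3))
        # Hypothetical outcomes: hand speed and accuracy score, partly driven by X
        Y = 0.7 * X[:, :2] + 0.3 * rng.standard_normal((n, 2))

        cca = CCA(n_components=1).fit(X, Y)
        u, v = cca.transform(X, Y)
        print(np.corrcoef(u[:, 0], v[:, 0])[0, 1])  # first canonical correlation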

  15. Navigation in Orthognathic Surgery: 3D Accuracy.

    PubMed

    Badiali, Giovanni; Roncari, Andrea; Bianchi, Alberto; Taddei, Fulvia; Marchetti, Claudio; Schileo, Enrico

    2015-10-01

    This article aims to determine the absolute accuracy of maxillary repositioning during orthognathic surgery according to simulation-guided navigation, that is, the combination of navigation and three-dimensional (3D) virtual surgery. We retrospectively studied 15 patients treated for asymmetric dentofacial deformities at the Oral and Maxillofacial Surgery Unit of the S.Orsola-Malpighi University Hospital in Bologna, Italy, from January 2010 to January 2012. Patients were scanned with cone-beam computed tomography before and after surgery. The virtual surgical simulation was created with dedicated software and loaded on a navigation system to improve intraoperative reproducibility of the preoperative planning. We analyzed the outcome following two protocols: (1) planning versus postoperative 3D surface analysis; (2) planning versus postoperative point-based analysis. For the 3D surface comparison, the mean Hausdorff distance was measured, and the median among cases was 0.99 mm. Median reproducibility < 1 mm was 61.88% and median reproducibility < 2 mm was 85.46%. For the point-based analysis, with sign, the median distance was 0.75 mm in the frontal axis, -0.05 mm in the caudal-cranial axis, and -0.35 mm in the lateral axis. In absolute value, the median distance was 1.19 mm in the frontal axis, 0.59 mm in the caudal-cranial axis, and 1.02 mm in the lateral axis. We suggest that simulation-guided navigation makes accurate postoperative outcomes possible for maxillary repositioning in orthognathic surgery, when compared with the computer-designed surgical plan, particularly for the vertical dimension, which is the most challenging to manage.
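
    A hedged sketch of the planning-versus-postoperative surface comparison, using a symmetric mean nearest-neighbour distance as a stand-in for the mean Hausdorff-type deviation reported above; the point clouds are synthetic.

        import numpy as np
        from scipy.spatial import cKDTree

        def mean_surface_distance(a, b):
            """Symmetric mean nearest-neighbour distance between two point clouds."""
            d_ab = cKDTree(b).query(a)[0]   # each point of a to its nearest point of b
            d_ba = cKDTree(a).query(b)[0]   # and vice versa
            return (d_ab.mean() + d_ba.mean()) / 2.0

        # Synthetic planned vs. achieved maxillary surface points (mm)
        planned = np.random.default_rng(3).uniform(0, 40, size=(500, 3))
        achieved = planned + np.random.default_rng(4).normal(0, 0.7, size=(500, 3))
        print(f"{mean_surface_distance(planned, achieved):.2f} mm")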

  16. Effect of atmospherics on beamforming accuracy

    NASA Technical Reports Server (NTRS)

    Alexander, Richard M.

    1990-01-01

    Two mathematical representations of noise due to atmospheric turbulence are presented. These representations are derived and used in computer simulations of the Bartlett Estimate implementation of beamforming. Beamforming is an array processing technique employing an array of acoustic sensors used to determine the bearing of an acoustic source. Atmospheric wind conditions introduce noise into the beamformer output. Consequently, the accuracy of the process is degraded and the bearing of the acoustic source is falsely indicated or impossible to determine. The two representations of noise presented here are intended to quantify the effects of mean wind passing over the array of sensors and to correct for these effects. The first noise model is an idealized case. The effect of the mean wind is incorporated as a change in the propagation velocity of the acoustic wave. This yields an effective phase shift applied to each term of the spatial correlation matrix in the Bartlett Estimate. The resultant error caused by this model can be corrected in closed form in the beamforming algorithm. The second noise model acts to change the true direction of propagation at the beginning of the beamforming process. A closed form correction for this model is not available. Efforts to derive effective means to reduce the contributions of the noise have not been successful. In either case, the maximum error introduced by the wind is a beam shift of approximately three degrees. That is, the bearing of the acoustic source is indicated at a point a few degrees from the true bearing location. These effects are not quite as pronounced as those seen in experimental results. Sidelobes are false indications of acoustic sources in the beamformer output away from the true bearing angle. The sidelobes that are observed in experimental results are not caused by these noise models. The effects of mean wind passing over the sensor array as modeled here do not alter the beamformer output as
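
    A minimal sketch of the Bartlett estimate used in these simulations, assuming an idealized line array and an invented source bearing; it omits the wind-noise models that are the subject of the abstract.

        import numpy as np

        def bartlett_spectrum(R, positions, wavelength, angles_deg):
            """Bartlett (conventional) beamformer power over candidate bearings:
            P(theta) = a(theta)^H R a(theta) for a line array."""
            k = 2 * np.pi / wavelength
            power = []
            for th in np.deg2rad(angles_deg):
                a = np.exp(1j * k * positions * np.sin(th))  # plane-wave steering vector
                power.append(np.real(a.conj() @ R @ a))
            return np.array(power)

        # Hypothetical 8-element array with half-wavelength spacing, source at 20 degrees
        wavelength = 1.0
        pos = 0.5 * wavelength * np.arange(8)
        a_src = np.exp(1j * 2 * np.pi / wavelength * pos * np.sin(np.deg2rad(20.0)))
        R = np.outer(a_src, a_src.conj()) + 0.1 * np.eye(8)  # signal plus sensor noise

        angles = np.arange(-90, 91)
        print(angles[np.argmax(bartlett_spectrum(R, pos, wavelength, angles))])  # ~20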

  17. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background: Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but has presently a limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results: Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and area under the ROC curve (Me = 0.90). However this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed
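
    A minimal sketch of the comparison protocol, assuming synthetic stand-in data for the 10 neuropsychological predictors; it scores a few of the named classifiers with 5-fold cross-validation as described.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Stand-in data: 10 "neuropsychological test" features, binary outcome
        X, y = make_classification(n_samples=400, n_features=10, random_state=0)

        classifiers = {
            "LDA": LinearDiscriminantAnalysis(),
            "Logistic": LogisticRegression(max_iter=1000),
            "SVM": SVC(),
            "RandomForest": RandomForestClassifier(random_state=0),
        }
        for name, clf in classifiers.items():
            scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
            print(f"{name:12s} accuracy: mean {scores.mean():.2f}, median {np.median(scores):.2f}")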

  18. Accuracy of quantitative visual soil assessment

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Maricke; Heuvelink, Gerard; Stoorvogel, Jetse; Wallinga, Jakob; de Boer, Imke; van Dam, Jos; van Essen, Everhard; Moolenaar, Simon; Verhoeven, Frank; Stoof, Cathelijne

    2016-04-01

    Visual soil assessment (VSA) is a method to assess soil quality visually when standing in the field. VSA is increasingly used by farmers, farm organisations and companies, because it is rapid and cost-effective, and because looking at soil provides understanding of soil functioning. Often VSA is regarded as subjective, so there is a need to verify it. Also, many VSAs have not been fine-tuned for contrasting soil types, which could lead to wrong interpretation of soil quality and soil functioning when contrasting sites are compared to each other. We wanted to assess the accuracy of VSA while taking soil type into account. The first objective was to test whether quantitative visual field observations, which form the basis of many VSAs, could be validated with standardized field or laboratory measurements. The second objective was to assess whether quantitative visual field observations are reproducible when used by observers with contrasting backgrounds. For the validation study, we made quantitative visual observations at 26 cattle farms. Farms were located on sand, clay and peat soils in the North Friesian Woodlands, the Netherlands. The quantitative visual observations evaluated were grass cover, number of biopores, number of roots, soil colour, soil structure, number of earthworms, number of gley mottles and soil compaction. Linear regression analysis showed that four out of eight quantitative visual observations could be well validated with standardized field or laboratory measurements. The following quantitative visual observations correlated well with standardized field or laboratory measurements: grass cover with classified images of surface cover; number of roots with root dry weight; amount of large structure elements with mean weight diameter; and soil colour with soil organic matter content. Correlation coefficients were greater than 0.3, of which half were significant. For the reproducibility study, a group of 9 soil scientists and 7

  19. Multisensor Arrays for Greater Reliability and Accuracy

    NASA Technical Reports Server (NTRS)

    Immer, Christopher; Eckhoff, Anthony; Lane, John; Perotti, Jose; Randazzo, John; Blalock, Norman; Ree, Jeff

    2004-01-01

    Arrays of multiple, nominally identical sensors with sensor-output-processing electronic hardware and software are being developed in order to obtain accuracy, reliability, and lifetime greater than those of single sensors. The conceptual basis of this development lies in the statistical behavior of multiple sensors and a multisensor-array (MSA) algorithm that exploits that behavior. In addition, advances in microelectromechanical systems (MEMS) and integrated circuits are exploited. A typical sensor unit according to this concept includes multiple MEMS sensors and sensor-readout circuitry fabricated together on a single chip and packaged compactly with a microprocessor that performs several functions, including execution of the MSA algorithm. In the MSA algorithm, the readings from all the sensors in an array at a given instant of time are compared and the reliability of each sensor is quantified. This comparison of readings and quantification of reliabilities involves the calculation of the ratio between every sensor reading and every other sensor reading, plus calculation of the sum of all such ratios. Then one output reading for the given instant of time is computed as a weighted average of the readings of all the sensors. In this computation, the weight for each sensor is the aforementioned value used to quantify its reliability. In an optional variant of the MSA algorithm that can be implemented easily, a running sum of the reliability value for each sensor at previous time steps as well as at the present time step is used as the weight of the sensor in calculating the weighted average at the present time step. In this variant, the weight of a sensor that continually fails gradually decreases, so that eventually, its influence over the output reading becomes minimal: In effect, the sensor system "learns" which sensors to trust and which not to trust. The MSA algorithm incorporates a criterion for deciding whether there remain enough sensor readings that
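
    The abstract specifies pairwise ratio sums and reliability-weighted averaging but not the exact weighting formula, so the sketch below is one plausible reading of the MSA algorithm, not the published implementation; the drifting-sensor example is invented.

        import numpy as np

        def msa_fuse(readings, prior_reliability=None, eps=1e-9):
            """Fuse same-instant readings from nominally identical sensors.

            Readings are assumed positive. A sensor whose row of pairwise ratios
            sums close to N agrees with the ensemble and earns high reliability;
            the output is the reliability-weighted average. Passing the running
            sum of past reliabilities implements the 'learning' variant."""
            x = np.asarray(readings, dtype=float)
            n = x.size
            ratio_sums = (x[:, None] / (x[None, :] + eps)).sum(axis=1)
            reliability = 1.0 / (eps + np.abs(ratio_sums - n))
            if prior_reliability is not None:
                reliability = reliability + prior_reliability
            weights = reliability / reliability.sum()
            return float(np.dot(weights, x)), reliability

        fused, rel = msa_fuse([1.01, 0.99, 1.00, 1.70])  # one sensor drifting high
        print(fused)  # ~1.04: far closer to 1.0 than the plain mean (1.175)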

  20. Resist development modeling for OPC accuracy improvement

    NASA Astrophysics Data System (ADS)

    Fan, Yongfa; Zavyalova, Lena; Zhang, Yunqiang; Zhang, Charlie; Lucas, Kevin; Falch, Brad; Croffie, Ebo; Li, Jianliang; Melvin, Lawrence; Ward, Brian

    2009-03-01

    in the same way that current model calibration is done. The method is validated with a rigorous lithography process simulation tool which is based on physical models to simulate and predict effects during the resist PEB and development process. Furthermore, an experimental lithographic process was modeled using this new methodology, showing significant improvement in modeling accuracy in comparison to a traditional model. Layout correction tests have shown that the new model form is equivalent to traditional model forms in terms of correction convergence and speed.

  1. Accuracy estimation of foamy virus genome copying

    PubMed Central

    Gärtner, Kathleen; Wiktorowicz, Tatiana; Park, Jeonghae; Mergia, Ayalew; Rethwilm, Axel; Scheller, Carsten

    2009-01-01

    Background: Foamy viruses (FVs) are the most genetically stable viruses of the retrovirus family. This is in contrast to the in vitro error rate found for recombinant FV reverse transcriptase (RT). To investigate the accuracy of FV genome copying in vivo we analyzed the occurrence of mutations in HEK 293T cell culture after a single round of reverse transcription using a replication-deficient vector system. Furthermore, the frequency of FV recombination by template switching (TS) and the cross-packaging ability of different FV strains were analyzed. Results: We initially sequenced 90,000 nucleotides and detected 39 mutations, corresponding to an in vivo error rate of approximately 4 × 10⁻⁴ per site per replication cycle. Surprisingly, all mutations were transitions from G to A, suggesting that APOBEC3 activity is the driving force for the majority of mutations detected in our experimental system. In line with this, we detected a late but significant APOBEC3G and 3F mRNA signal by quantitative PCR in the cells. We then analyzed 170,000 additional nucleotides from experiments in which we co-transfected the APOBEC3-interfering foamy viral bet gene and observed a significant 50% drop in G to A mutations, indicating that APOBEC activity indeed contributes substantially to the foamy viral replication error rate in vivo. However, even in the presence of Bet, 35 out of 37 substitutions were G to A, suggesting that residual APOBEC activity accounted for most of the observed mutations. If we subtract these APOBEC-like mutations from the total number of mutations, we calculate a maximal intrinsic in vivo error rate of 1.1 × 10⁻⁵ per site per replication. In addition to the point mutations, we detected one 49 bp deletion within the analyzed 260,000 nucleotides. Analysis of the recombination frequency of FV vector genomes revealed a 27% probability for a template switching (TS) event within a 1 kilobase (kb) region. This corresponds to a 98% probability that FVs undergo at least one
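
    The reported rates can be checked with a few lines of arithmetic. The genome length used below for the template-switching extrapolation is our assumption (FV genomes are roughly 13 kb, which reproduces the ~98% figure the abstract's truncated final sentence states); the abstract itself does not give the length.

      # Reproducing the abstract's arithmetic (genome length is an assumption).
      mutations, sites = 39, 90_000
      error_rate = mutations / sites                  # ~4.3e-4 per site per cycle
      print(f"in vivo error rate: {error_rate:.1e}")

      p_ts_per_kb = 0.27                              # template-switch probability per 1 kb
      genome_kb = 13                                  # assumed FV genome length
      p_at_least_one_ts = 1 - (1 - p_ts_per_kb) ** genome_kb
      print(f"P(>=1 template switch): {p_at_least_one_ts:.0%}")   # ~98%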

  2. Nationwide forestry applications program. Analysis of forest classification accuracy

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Mead, R. A.; Oderwald, R. G.; Heinen, J. (Principal Investigator)

    1981-01-01

    The development of LANDSAT classification accuracy assessment techniques and of a computerized system for assessing wildlife habitat from land cover maps is considered. A literature review on accuracy assessment techniques and an explanation of the techniques developed under both projects are included, along with listings of the computer programs. The presentations and discussions at the National Working Conference on LANDSAT Classification Accuracy are summarized. Two symposium papers which were published on the results of this project are appended.

  3. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    This report describes NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to have: Horizontal Accuracy 74 cm, Horizontal Precision 14 cm, Vertical Accuracy 6.6 cm, Vertical Precision 3 cm.

  4. Wavelength Calibration Accuracy for the STIS CCD and MAMA Modes

    NASA Astrophysics Data System (ADS)

    Pascucci, Ilaria; Hodge, Phil; Proffitt, Charles R.; Ayres, T.

    2011-03-01

    Two calibration programs were carried out to determine the accuracy of the wavelength solutions for the most used STIS CCD and MAMA modes after Servicing Mission 4. We report here on the analysis of this dataset and show that the STIS wavelength solution has not changed after SM4. We also show that a typical accuracy for the absolute wavelength zero-points is 0.1 pixels while the relative wavelength accuracy is 0.2 pixels.

  5. Thermocouple Calibration and Accuracy in a Materials Testing Laboratory

    NASA Technical Reports Server (NTRS)

    Lerch, B. A.; Nathal, M. V.; Keller, D. J.

    2002-01-01

    A consolidation of information has been provided that can be used to define procedures for enhancing and maintaining accuracy in temperature measurements in materials testing laboratories. These studies were restricted to type R and K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturer's tolerances, were all quantified in terms of statistical confidence intervals. Calibrating specific TCs can improve accuracy by as much as 6 C, a five-fold improvement over relying on manufacturer's tolerances. The results emphasize strict reliance on the defined testing protocol and on the need to establish recalibration frequencies in order to maintain these levels of accuracy.

  6. Sound source localization identification accuracy: Level and duration dependencies.

    PubMed

    Yost, William A

    2016-07-01

    Sound source localization accuracy for noises was measured for sources in the front azimuthal open field mainly as a function of overall noise level and duration. An identification procedure was used in which listeners identify which loudspeakers presented a sound. Noises were filtered and differed in bandwidth and center frequency. Sound source localization accuracy depended on the bandwidth of the stimuli, and for the narrow bandwidths, accuracy depended on the filter's center frequency. Sound source localization accuracy did not depend on overall level or duration. PMID:27475204

  7. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.

  8. Maximizing the quantitative accuracy and reproducibility of Förster resonance energy transfer measurement for screening by high throughput widefield microscopy.

    PubMed

    Schaufele, Fred

    2014-03-15

    Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal.
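
    The abstract does not spell out its correction techniques. As one hedged illustration of how a non-flat background can be removed before computing FRET ratios, the sketch below fits a second-order polynomial surface to cell-free pixels and subtracts it; the function, mask convention and demo values are ours, not the paper's.

      import numpy as np

      def subtract_background(img, bg_mask):
          """Fit a second-order polynomial surface to cell-free pixels
          (bg_mask True) and subtract it, so that ratio-based FRET values
          are not biased by an uneven background."""
          yy, xx = np.indices(img.shape)
          x, y = xx[bg_mask].astype(float), yy[bg_mask].astype(float)
          A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
          coef, *_ = np.linalg.lstsq(A, img[bg_mask], rcond=None)
          surface = (coef[0] + coef[1]*xx + coef[2]*yy
                     + coef[3]*xx**2 + coef[4]*xx*yy + coef[5]*yy**2)
          return img - surface

      # demo: a tilted synthetic background plus one bright "cell"
      img = np.fromfunction(lambda y, x: 50 + 0.1*x + 0.05*y, (64, 64))
      img[30:34, 30:34] += 5.0
      mask = np.ones(img.shape, bool)
      mask[28:36, 28:36] = False          # exclude the cell from the fit
      corrected = subtract_background(img, mask)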

  9. Maximizing the quantitative accuracy and reproducibility of Förster resonance energy transfer measurement for screening by high throughput widefield microscopy

    PubMed Central

    Schaufele, Fred

    2013-01-01

    Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839

  10. Modeling Individual Differences in Response Time and Accuracy in Numeracy

    PubMed Central

    Ratcliff, Roger; Thompson, Clarissa A.; McKoon, Gail

    2015-01-01

    In the study of numeracy, some hypotheses have been based on response time (RT) as a dependent variable and some on accuracy, and considerable controversy has arisen about the presence or absence of correlations between RT and accuracy, between RT or accuracy and individual differences like IQ and math ability, and between various numeracy tasks. In this article, we show that an integration of the two dependent variables is required, which we accomplish with a theory-based model of decision making. We report data from four tasks: numerosity discrimination, number discrimination, memory for two-digit numbers, and memory for three-digit numbers. Accuracy correlated across tasks, as did RTs. However, the negative correlations that might be expected between RT and accuracy were not obtained; if a subject was accurate, it did not mean that they were fast (and vice versa). When the diffusion decision-making model was applied to the data (Ratcliff, 1978), we found significant correlations across the tasks between the quality of the numeracy information (drift rate) driving the decision process and between the speed/accuracy criterion settings, suggesting that similar numeracy skills and similar speed-accuracy settings are involved in the four tasks. In the model, accuracy is related to drift rate and RT is related to speed-accuracy criteria, but drift rate and criteria are not related to each other across subjects. This provides a theoretical basis for understanding why negative correlations were not obtained between accuracy and RT. We also manipulated criteria by instructing subjects to maximize either speed or accuracy, but still found correlations between the criteria settings between and within tasks, suggesting that the settings may represent an individual trait that can be modulated but not equated across subjects. Our results demonstrate that a decision-making model may provide a way to reconcile inconsistent and sometimes contradictory results in numeracy
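
    A minimal simulation can make the article's dissociation concrete: in a two-boundary diffusion model, accuracy is driven mainly by drift rate while response time is driven mainly by the boundary (speed-accuracy) setting. The parameter values below are illustrative only, not the paper's fits.

      import numpy as np

      rng = np.random.default_rng(0)

      def diffusion_trial(drift, boundary, dt=0.001, sigma=1.0, t0=0.3):
          """Simulate one two-boundary diffusion decision (illustrative
          parameters). Evidence starts midway between boundaries 0 and
          `boundary` and accumulates with rate `drift`; crossing the upper
          boundary counts as a correct response."""
          x, t = boundary / 2.0, 0.0
          while 0.0 < x < boundary:
              x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return (x >= boundary), t + t0   # (correct?, RT incl. non-decision time)

      trials = [diffusion_trial(drift=1.5, boundary=1.2) for _ in range(2000)]
      acc = np.mean([c for c, _ in trials])
      mrt = np.mean([t for _, t in trials])
      print(f"accuracy={acc:.2f}, mean RT={mrt:.2f}s")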

  11. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  12. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  13. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  14. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  15. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  16. Students' Accuracy of Measurement Estimation: Context, Units, and Logical Thinking

    ERIC Educational Resources Information Center

    Jones, M. Gail; Gardner, Grant E.; Taylor, Amy R.; Forrester, Jennifer H.; Andre, Thomas

    2012-01-01

    This study examined students' accuracy of measurement estimation for linear distances, different units of measure, task context, and the relationship between accuracy estimation and logical thinking. Middle school students completed a series of tasks that included estimating the length of various objects in different contexts and completed a test…

  17. Developing a Weighted Measure of Speech Sound Accuracy

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2011-01-01

    Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…

  18. EFFECTS OF LANDSCAPE CHARACTERISTICS ON LAND-COVER CLASS ACCURACY

    EPA Science Inventory



    Utilizing land-cover data gathered as part of the National Land-Cover Data (NLCD) set accuracy assessment, several logistic regression models were formulated to analyze the effects of patch size and land-cover heterogeneity on classification accuracy. Specific land-cover ...

  19. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    ERIC Educational Resources Information Center

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  20. Parenting and Adolescents' Accuracy in Perceiving Parental Values.

    ERIC Educational Resources Information Center

    Knafo, Ariel; Schwartz, Shalom H.

    2003-01-01

    Examined potential predictors of Israeli adolescents' accuracy in perceiving parental values. Found that accuracy in perceiving parents' overall value system correlated positively with parents' actual and perceived value agreement and perceived parental warmth and responsiveness, but negatively with perceived value conflict, indifferent parenting,…

  1. 10 CFR 55.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 55.9 Section 55.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) OPERATORS' LICENSES General Provisions § 55.9 Completeness and accuracy of information. Information provided to the Commission by an applicant for a...

  2. 10 CFR 55.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 55.9 Section 55.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) OPERATORS' LICENSES General Provisions § 55.9 Completeness and accuracy of information. Information provided to the Commission by an applicant for a...

  3. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Emission measurement accuracy. 92.127... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... for calibrating the CO2 analyzer) with a concentration between the two lowest non-zero gas...

  4. 29 CFR 502.7 - Accuracy of information, statements, data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Accuracy of information, statements, data. 502.7 Section 502.7 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR... Accuracy of information, statements, data. Information, statements and data submitted in compliance...

  5. 29 CFR 501.8 - Accuracy of information, statements, data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Accuracy of information, statements, data. 501.8 Section... SECTION 218 OF THE IMMIGRATION AND NATIONALITY ACT General Provisions § 501.8 Accuracy of information, statements, data. Information, statements and data submitted in compliance with 8 U.S.C. 1188 or...

  6. Assessment Of Accuracies Of Remote-Sensing Maps

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1992-01-01

    The report describes a study of the accuracies of classifications of picture elements in a map derived by digital processing of Landsat multispectral-scanner imagery of the coastal plain of the Arctic National Wildlife Refuge. Accuracies of portions of the map were analyzed with the help of a statistical sampling procedure called "stratified plurality sampling", in which all picture elements in a given cluster are classified in the stratum to which the plurality of them belong.

  7. Prediction of Rate Constants for Catalytic Reactions with Chemical Accuracy.

    PubMed

    Catlow, C Richard A

    2016-08-01

    Ex machina: A computational method for predicting rate constants for reactions within microporous zeolite catalysts with chemical accuracy has recently been reported. A key feature of this method is a stepwise QM/MM approach that allows accuracy to be achieved while using realistic models with accessible computer resources.

  8. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Accuracy of advertising. 740.2 Section 740.2... ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and...

  9. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive (independent, well-defined test points must be collected), but quantitative analysis of relative positional error is feasible.

  10. ACCURACY OF INTERPERSONAL PERCEPTION--A FUNCTION OF SUPERORDINATE ROLE.

    ERIC Educational Resources Information Center

    BRUMBAUGH, ROBERT B.

    One aspect of the perceptual accuracy of student teachers and their supervisors in judging their interpersonal relations was explored. A field study of 40 student teachers and their public school supervising teachers explored the possibility of subordinate role being a correlate of the accuracy of their interpersonal perception. At the end of 6…

  11. A Probability Model of Accuracy in Deception Detection Experiments.

    ERIC Educational Resources Information Center

    Park, Hee Sun; Levine, Timothy R.

    2001-01-01

    Extends the recent work on the veracity effect in deception detection. Explains the probabilistic nature of a receiver's accuracy in detecting deception and analyzes a receiver's detection of deception in terms of set theory and conditional probability. Finds that accuracy is shown to be a function of the relevant conditional probability and the…
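
    The conditional-probability account described here can be illustrated with the law of total probability: overall detection accuracy is a base-rate-weighted mix of the hit rates for truths and for lies. The hit rates below are assumed for illustration, not taken from the article; with a truth bias, accuracy rises with the proportion of true messages judged.

      # Illustrative numbers only: judges with a truth bias detect truths
      # better than lies, so accuracy depends on the truth-lie base rate.
      p_say_truth_given_truth = 0.80   # assumed hit rate for truths
      p_say_lie_given_lie = 0.40       # assumed hit rate for lies

      for p_truth in (0.25, 0.50, 0.75):
          accuracy = (p_truth * p_say_truth_given_truth
                      + (1 - p_truth) * p_say_lie_given_lie)
          print(f"P(truth)={p_truth:.2f} -> accuracy={accuracy:.2f}")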

  12. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  13. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  14. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  15. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  16. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  17. 41 CFR 51-9.101-2 - Standards of accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Standards of accuracy. 51-9.101-2 Section 51-9.101-2 Public Contracts and Property Management Other Provisions Relating to... RULES 9.1-General Policy § 51-9.101-2 Standards of accuracy. The Executive Director shall ensure...

  18. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  19. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  20. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  1. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  2. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  3. Dissociating Appraisals of Accuracy and Recollection in Autobiographical Remembering

    ERIC Educational Resources Information Center

    Scoboria, Alan; Pascal, Lisa

    2016-01-01

    Recent studies of metamemory appraisals implicated in autobiographical remembering have established distinct roles for judgments of occurrence, recollection, and accuracy for past events. In studies involving everyday remembering, measures of recollection and accuracy correlate highly (>.85). Thus although their measures are structurally…

  4. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  5. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  6. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  7. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  8. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  9. Accuracy in Detecting Truths and Lies: Documenting the "Veracity Effect."

    ERIC Educational Resources Information Center

    Levine, Timothy R.; Park, Hee Sun; McCornack, Steven A.

    1999-01-01

    Conducts four studies on detecting truth and lies. Suggest that the single best predictor of detection accuracy may be the veracity of message being judged. Finds that truths are judged with substantially greater accuracy than lies. Findings suggest that there is a need for reassessment of many commonly held conclusions about deceptive…

  10. Interpretive Accuracy of Two MMPI Short Forms with Geriatric Patients.

    ERIC Educational Resources Information Center

    Newmark, Charles S.; And Others

    1982-01-01

    Assessed and compared the interpretive accuracy of the standard Minnesota Multiphasic Personality Inventory (MMPI) and two MMPI short forms with a sample of geriatric psychiatric inpatients. Psychiatric teams evaluated the accuracy of the interpretation. Standard form interpretations were rated significantly greater than the interpretations…

  11. 31 CFR 10.22 - Diligence as to accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 1 2014-07-01 2014-07-01 false Diligence as to accuracy. 10.22... § 10.22 Diligence as to accuracy. (a) In general. A practitioner must exercise due diligence— (1) In... to any matter administered by the Internal Revenue Service. (b) Reliance on others. Except...

  12. 31 CFR 10.22 - Diligence as to accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Diligence as to accuracy. 10.22... § 10.22 Diligence as to accuracy. (a) In general. A practitioner must exercise due diligence— (1) In... to any matter administered by the Internal Revenue Service. (b) Reliance on others. Except...

  13. 31 CFR 10.22 - Diligence as to accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance: Treasury 1 2011-07-01 2011-07-01 false Diligence as to accuracy. 10.22... § 10.22 Diligence as to accuracy. (a) In general. A practitioner must exercise due diligence— (1) In... to any matter administered by the Internal Revenue Service. (b) Reliance on others. Except...

  14. 31 CFR 10.22 - Diligence as to accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance: Treasury 1 2012-07-01 2012-07-01 false Diligence as to accuracy. 10.22... § 10.22 Diligence as to accuracy. (a) In general. A practitioner must exercise due diligence— (1) In... to any matter administered by the Internal Revenue Service. (b) Reliance on others. Except...

  15. 31 CFR 10.22 - Diligence as to accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance: Treasury 1 2013-07-01 2013-07-01 false Diligence as to accuracy. 10.22... § 10.22 Diligence as to accuracy. (a) In general. A practitioner must exercise due diligence— (1) In... to any matter administered by the Internal Revenue Service. (b) Reliance on others. Except...

  16. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  17. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  18. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  19. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  20. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  1. High accuracy absolute laser powermeter calibrated over the whole range

    SciTech Connect

    Miron, N.; Korony, G.; Velculescu, V.G.

    1994-12-31

    The main contribution of this laser powermeter is the capability of its detector to be electrically calibrated over the whole measuring range (0 to 100 W), with an accuracy better than 1%. This allows improved accuracy in determining the second-order polynomial coefficients describing the thermocouple's electric response.

  2. Task-Based Variability in Children's Singing Accuracy

    ERIC Educational Resources Information Center

    Nichols, Bryan E.

    2013-01-01

    The purpose of this study was to explore task-based variability in children's singing accuracy performance. The research questions were: Does children's singing accuracy vary based on the nature of the singing assessment employed? Is there a hierarchy of difficulty and discrimination ability among singing assessment tasks? What is the…

  3. Exploring a Three-Level Model of Calibration Accuracy

    ERIC Educational Resources Information Center

    Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.

    2014-01-01

    We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…

  4. 40 CFR 89.310 - Analyzer accuracy and specifications.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Test Equipment Provisions § 89.310 Analyzer accuracy and specifications. (a) Measurement accuracy... is defined as 2.5 times the standard deviation(s) of 10 repetitive responses to a given calibration or span gas. (3) Noise. The analyzer peak-to-peak response to zero and calibration or span gases...

  5. Developing a Weighted Measure of Speech Sound Accuracy

    PubMed Central

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2010-01-01

    Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method: Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results: Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of severity of a child's speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddlers' speech over time. Conclusion: Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech. PMID:20699344
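
    As a rough illustration of differential weighting, a weighted score can be computed as one minus the mean per-sound penalty. The error categories and weights below are invented for the sketch; the published WSSA defines its own weighting of error types.

      # Toy weighting scheme (illustrative, not the published WSSA weights).
      ERROR_WEIGHTS = {
          "correct": 0.0,        # target sound produced accurately
          "distortion": 0.25,    # near-miss production penalized lightly
          "substitution": 0.75,  # wrong phoneme penalized more heavily
          "omission": 1.0,       # missing sound penalized fully
      }

      def weighted_accuracy(transcribed_sounds):
          """Return a 0-1 score from a list of per-sound error categories."""
          penalties = [ERROR_WEIGHTS[s] for s in transcribed_sounds]
          return 1.0 - sum(penalties) / len(penalties)

      print(weighted_accuracy(["correct", "distortion", "substitution", "correct"]))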

  6. Accuracy of thick-target micro-PIXE analysis

    NASA Astrophysics Data System (ADS)

    Campbell, J. L.; Teesdale, W. J.; Wang, J.-X.

    1990-04-01

    The accuracy attainable in micro-PIXE analysis is assessed in terms of the X-ray production model and its assumptions, physical realities of the specimen, the necessary data base, and techniques of standardization. NTIS reference materials are analyzed to provide the experimental tests of accuracy.

  7. The Accuracy of Self-Reported High School Grades

    ERIC Educational Resources Information Center

    Jung, Steven M.; Moore, James C.

    1970-01-01

    In a study investigating the accuracy of self-reported grades, the length of time between testing and high school graduation was apparently the reason for a significant loss in accuracy of recalled grade reports among high school graduates and college applicants who had been out of school for more than one year. (IR)

  8. Contemporary flow meters: an assessment of their accuracy and reliability.

    PubMed

    Christmas, T J; Chapple, C R; Rickards, D; Milroy, E J; Turner-Warwick, R T

    1989-05-01

    The accuracy, reliability and cost effectiveness of 5 currently marketed flow meters have been assessed. The mechanics of each meter are briefly described in relation to its accuracy and robustness. The merits and faults of the meters are discussed, and the important features of flow measurements that need to be taken into account when making diagnostic interpretations are emphasised.

  9. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
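
    The accuracy definition quoted here (absolute mean plus k sigma of the measurement residuals, with k = 3 in quiet times and k = 2 in storms) is direct to compute; the residuals in the sketch below are simulated stand-ins, not flight data.

      import numpy as np

      def magnetometer_accuracy(residuals_nt, k_sigma):
          """Accuracy as defined above: absolute mean plus k sigma of the
          measurement-minus-truth residuals (in nanoteslas)."""
          r = np.asarray(residuals_nt, dtype=float)
          return abs(r.mean()) + k_sigma * r.std()

      rng = np.random.default_rng(1)
      residuals = rng.normal(loc=0.2, scale=0.4, size=10_000)  # toy residuals, nT
      print(f"quiet-time accuracy: {magnetometer_accuracy(residuals, 3):.2f} nT")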

  10. High accuracy autonomous navigation using the global positioning system (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.

  11. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers (e.g., spacecraft fields and misalignments) and from inside (e.g., zero offset and scale factor errors). Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.

  12. Accuracy testing of electric groundwater-level measurement tapes

    USGS Publications Warehouse

    Jelinski, Jim; Clayton, Christopher S.; Fulford, Janice M.

    2015-01-01

    The accuracy tests demonstrated that none of the electric-tape models tested consistently met the suggested USGS accuracy of ±0.01 ft. The test data show that the tape models in the study should give a water-level measurement that is accurate to roughly ±0.05 ft per 100 ft without additional calibration. To meet USGS accuracy guidelines, the electric-tape models tested will need to be individually calibrated. Specific conductance also plays a part in tape accuracy. The probes will not work in water with specific conductance values near zero, and the accuracy of one probe was unreliable in very high conductivity water (10,000 microsiemens per centimeter).

  13. Recognition accuracy by experienced men and women players of basketball.

    PubMed

    Millslagle, Duane G

    2002-08-01

    This study examined 30 experienced basketball players' recognition accuracy by sex, playing position (guard, forward, and center), and situations in the game of basketball. The study used a perceptual cognitive paradigm in which subjects viewed slides of structured and unstructured game situations and accurately recognized the presence or absence of the basketball. A significant difference in recognition accuracy by sex, players' position, and structure of the game situation was found. Male players' recognition accuracy was better than the female players'. The recognition accuracy of subjects who played guard was better than that of subjects who played forward or center. The players' recognition accuracy was more accurate when observing structured plays versus unstructured plays. The conclusion of this study suggested that experienced basketball players differ in their cognitive and visual searching processes by sex and player position within the sport of basketball.

  14. Accuracy of endoscopic ultrasonography for diagnosing ulcerative early gastric cancers.

    PubMed

    Park, Jin-Seok; Kim, Hyungkil; Bang, Byongwook; Kwon, Kyesook; Shin, Youngwoon

    2016-07-01

    Although endoscopic ultrasonography (EUS) is the first-choice imaging modality for predicting the invasion depth of early gastric cancer (EGC), the prediction accuracy of EUS is significantly decreased when EGC is combined with ulceration. The aim of the present study was to compare the accuracy of EUS and conventional endoscopy (CE) for determining the depth of EGC. In addition, the various clinicopathologic factors affecting the diagnostic accuracy of EUS, with a particular focus on endoscopic ulcer shapes, were evaluated. We retrospectively reviewed data from 236 consecutive patients with ulcerative EGC. All patients underwent EUS for estimating tumor invasion depth, followed by either curative surgery or endoscopic treatment. The diagnostic accuracy of EUS and CE was evaluated by comparison with the final histologic result of the resected specimen. The correlation between the accuracy of EUS and the characteristics of EGC (tumor size, histology, location in the stomach, tumor invasion depth, and endoscopic ulcer shapes) was analyzed. Endoscopic ulcer shapes were classified into 3 groups: definite ulcer, superficial ulcer, and ill-defined ulcer. The overall accuracy of EUS and CE for predicting the invasion depth in ulcerative EGC was 68.6% and 55.5%, respectively. Of the 236 patients, 36 were classified as definite ulcers, 98 as superficial ulcers, and 102 as ill-defined ulcers. In univariate analysis, EUS accuracy was associated with invasion depth (P = 0.023), tumor size (P = 0.034), and endoscopic ulcer shapes (P = 0.001). In multivariate analysis, there was a significant association between superficial ulcer in CE and EUS accuracy (odds ratio: 2.977; 95% confidence interval: 1.255-7.064; P = 0.013). The accuracy of EUS for determining tumor invasion depth in ulcerative EGC was superior to that of CE. In addition, ulcer shape was an important factor that affected EUS accuracy. PMID:27472672

  15. Geometric accuracy of three-dimensional molecular overlays.

    PubMed

    Chen, Qi; Higgs, Richard E; Vieth, Michal

    2006-01-01

    This study examines the dependence of molecular alignment accuracy on a variety of factors including the choice of molecular template, alignment method, conformational flexibility, and type of protein target. We used eight test systems for which X-ray data on 145 ligand-protein complexes were available. The use of X-ray structures allowed an unambiguous assignment of bioactive overlays for each compound set. The alignment accuracy depended on multiple factors and ranged from 6% for flexible overlays to 73% for X-ray rigid overlays, when the conformation of the template ligand came from X-ray structures. The dependence of the overlay accuracy on the choice of templates and molecules to be aligned was found to be the most significant factor in six and seven of the eight ligand-protein complex data sets, respectively. While finding little preference for the overlay method, we observed that the introduction of molecule flexibility resulted in a decrease of overlay accuracy in 50% of the cases. We derived rules to maximize the accuracy of alignment, leading to a more than 2-fold improvement in accuracy (from 19% to 48%). The rules also allowed the identification of compounds with a low (<5%) chance to be correctly aligned. Last, the accuracy of the alignment derived without any utilization of X-ray conformers varied from <1% for the human immunodeficiency virus data set to 53% for the trypsin data set. We found that the accuracy was directly proportional to the product of the overlay accuracy from the templates in their bioactive conformations and the chance of obtaining the correct bioactive conformation of the templates. This study generates a much needed benchmark for the expectations of molecular alignment accuracy and shows appropriate usages and best practices to maximize hypothesis generation success.
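
    The product relationship reported in the final sentences can be written out directly. In the sketch below, the 73% best-case rigid-overlay figure comes from the abstract; the conformer-search probability is a placeholder.

      # Expected alignment accuracy without X-ray data, per the reported
      # product rule (second factor is an illustrative placeholder).
      p_overlay_given_bioactive_template = 0.73   # best-case rigid overlays (abstract)
      p_correct_bioactive_conformation = 0.30     # assumed chance the search finds it

      expected_accuracy = (p_overlay_given_bioactive_template
                           * p_correct_bioactive_conformation)
      print(f"expected overlay accuracy without X-ray data: {expected_accuracy:.0%}")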

  16. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, Unmanned Aerial Vehicles (UAVs) are a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces Digital Surface Models (DSMs) with a similar accuracy. To evaluate the DSM accuracy on a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetry process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy was estimated by comparison with GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with a high vertical accuracy (< 10 cm). Georeferencing of the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank did not allow the accuracy to be improved when the spatial resolution of the images was decreased.
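
    A typical way to quantify the vertical accuracy reported here is to difference DSM elevations against GNSS check points and summarize bias, spread and RMSE. The sketch below assumes matched samples; the values are invented, not from the campaign.

      import numpy as np

      def vertical_accuracy(dsm_heights, gnss_heights):
          """Compare UAV-derived DSM elevations with GNSS check points.
          Inputs are matched elevation samples (metres) at the same
          locations; returns bias, standard deviation, and RMSE."""
          d = np.asarray(dsm_heights) - np.asarray(gnss_heights)
          return d.mean(), d.std(), np.sqrt((d ** 2).mean())

      # toy matched samples (metres); real surveys would use many check points
      bias, std, rmse = vertical_accuracy([2.31, 1.98, 2.75], [2.25, 1.95, 2.68])
      print(f"bias={bias:.3f} m, std={std:.3f} m, RMSE={rmse:.3f} m")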

  17. [Ovarian tumours--accuracy of frozen section diagnosis].

    PubMed

    Ivanov, S; Ivanov, S; Khadzhiolov, N

    2005-01-01

    In a retrospective study, 450 ovarian biopsy results from the period 1998 to 2004 were examined to evaluate the accuracy of frozen section diagnosis. In addition, we reviewed the literature for all previous studies in this field in order to compare the accuracy rates of different clinics throughout the world. The histopathological results of the frozen section diagnosis agreed with the diagnosis of the paraffin blocks in 90% of cases. The sensitivity rates for benign, malignant and borderline tumours were 96%, 84% and 60%, respectively. We had 10 (2.1%) false-positive (overdiagnosed) and 26 (5.2%) false-negative (underdiagnosed) results in frozen section examinations. Frozen section examination of mucinous tumours showed higher underdiagnosis (18%). The review of the literature showed no significant difference over time in the accuracy rates of frozen section diagnosis for benign and malignant ovarian tumours. We found low accuracy rates for borderline tumours, similar to most of the foreign publications. The accuracy of frozen section diagnosis is nevertheless improving with time. We conclude that the accuracy of frozen section diagnosis is sufficient for correctly distinguishing malignant and benign tumours; since accuracy rates for borderline ovarian tumours are low, care and attention to improvement are needed in this field.

  18. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    PubMed Central

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-01-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved about 8% by the proposed calibration method. The accuracy can be improved at least 20% when the position accuracy of the atomic gyro INS can reach a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has a great application potential in future atomic gyro INSs. PMID:27338408
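
    The paper's 51-state filter is far beyond a short example, but the mechanism such calibrations rely on, estimating slowly varying sensor errors with a Kalman filter, can be shown with a one-state toy that estimates a constant gyro bias from noisy observations. All values are illustrative; nothing here reproduces the paper's model.

      import numpy as np

      # One-state Kalman filter estimating a constant sensor bias (toy).
      rng = np.random.default_rng(2)
      true_bias = 0.05            # deg/h, unknown to the filter
      meas_noise_var = 0.2 ** 2   # variance of each noisy observation

      x_hat, p = 0.0, 1.0         # initial estimate and its variance
      for _ in range(200):
          z = true_bias + rng.normal(scale=0.2)   # noisy bias observation
          k = p / (p + meas_noise_var)            # Kalman gain
          x_hat += k * (z - x_hat)                # measurement update
          p *= (1 - k)                            # variance update (static state)

      print(f"estimated bias: {x_hat:.4f} deg/h (true {true_bias})")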

  19. The efficacy of bedside chest ultrasound: from accuracy to outcomes.

    PubMed

    Hew, Mark; Tay, Tunn Ren

    2016-09-01

    For many respiratory physicians, point-of-care chest ultrasound is now an integral part of clinical practice. The diagnostic accuracy of ultrasound to detect abnormalities of the pleura, the lung parenchyma and the thoracic musculoskeletal system is well described. However, the efficacy of a test extends beyond just diagnostic accuracy. The true value of a test depends on the degree to which diagnostic accuracy efficacy influences decision-making efficacy, and the subsequent extent to which this impacts health outcome efficacy. We therefore reviewed the demonstrable levels of test efficacy for bedside ultrasound of the pleura, lung parenchyma and thoracic musculoskeletal system. For bedside ultrasound of the pleura, there is evidence supporting diagnostic accuracy efficacy, decision-making efficacy and health outcome efficacy, predominantly in guiding pleural interventions. For the lung parenchyma, chest ultrasound has an impact on diagnostic accuracy and decision-making for patients presenting with acute respiratory failure or breathlessness, but there are no data as yet on actual health outcomes. For ultrasound of the thoracic musculoskeletal system, there is robust evidence only for diagnostic accuracy efficacy. We therefore outline avenues to further validate bedside chest ultrasound beyond diagnostic accuracy, with an emphasis on confirming enhanced health outcomes. PMID:27581823

  20. Dissociating appraisals of accuracy and recollection in autobiographical remembering.

    PubMed

    Scoboria, Alan; Pascal, Lisa

    2016-07-01

    Recent studies of metamemory appraisals implicated in autobiographical remembering have established distinct roles for judgments of occurrence, recollection, and accuracy for past events. In studies involving everyday remembering, measures of recollection and accuracy correlate highly (>.85). Thus, although their measures are structurally distinct, such high correspondence might suggest conceptual redundancy. This article examines whether recollection and accuracy dissociate when studying different types of autobiographical event representations. In Study 1, 278 participants described a believed memory, a nonbelieved memory, and a believed-not-remembered event and rated each on occurrence, recollection, accuracy, and related covariates. In Study 2, 876 individuals described and rated 1 of these events, as well as an event about which they were uncertain about their memory. Confirmatory structural equation modeling indicated that the measurement dissociation between occurrence, recollection, and accuracy held across all types of events examined. Relative to believed memories, the relationship between recollection and belief in accuracy was meaningfully lower for the other event types. These findings support the claim that recollection and accuracy arise from distinct underlying mechanisms.

  1. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-01-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy of a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and high-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs. PMID:27338408

  2. Gesture recognition for smart home applications using portable radar sensors.

    PubMed

    Wan, Qian; Li, Yiran; Li, Changzhi; Pal, Ranadip

    2014-01-01

    In this article, we consider the design of a human gesture recognition system based on pattern recognition of signatures from a portable smart radar sensor. Powered by AAA batteries, the smart radar sensor operates in the 2.4 GHz industrial, scientific and medical (ISM) band. We analyzed the feature space using principal components and application-specific time and frequency domain features extracted from radar signals for two different sets of gestures. We illustrate that a nearest neighbor based classifier can achieve greater than 95% accuracy for multiclass classification using 10-fold cross-validation when features are extracted based on magnitude differences and Doppler shifts, as compared to features extracted through orthogonal transformations. The reported results illustrate the potential of intelligent radars integrated with a pattern recognition system for high accuracy smart home and health monitoring purposes. PMID:25571464
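
    The evaluation protocol described above is easy to reproduce in outline. The sketch below scores a nearest-neighbour classifier with 10-fold cross-validation using scikit-learn; the synthetic features merely stand in for the radar magnitude-difference and Doppler features, which are not public, and the class count and sample sizes are illustrative.

      from sklearn.datasets import make_classification
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      # Synthetic stand-in for the radar feature vectors; 4 gesture classes.
      X, y = make_classification(n_samples=400, n_features=12, n_informative=8,
                                 n_classes=4, random_state=0)

      clf = KNeighborsClassifier(n_neighbors=1)     # nearest-neighbour classifier
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      scores = cross_val_score(clf, X, y, cv=cv)    # accuracy in each of the 10 folds
      print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")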

  3. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    PubMed Central

    Yilmaz, Nihat; Inan, Onur

    2013-01-01

    This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines for classification. The purpose of this paper is to test the effect of eliminating unimportant and obsolete features of the datasets on the success of classification with the SVM classifier. The approach is applied to the diagnosis of liver diseases and diabetes, which are commonly observed conditions that reduce quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained using the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other reported results and seems very promising for pattern recognition applications. PMID:23983632
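
    The ABC search itself is not available in standard libraries, but the wrapper evaluation it plugs into can be sketched: select a feature subset with a classifier in the loop, then report SVM accuracy under 10-fold cross-validation. In the sketch below, a greedy sequential selector stands in for the ABC algorithm, and scikit-learn's built-in breast cancer data stands in for the UCI hepatitis, liver disorders and diabetes sets; both substitutions are assumptions for illustration only.

      from sklearn.datasets import load_breast_cancer
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      X, y = load_breast_cancer(return_X_y=True)
      svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

      # Greedy stand-in for the ABC feature search: add features one at a
      # time, keeping whichever addition most improves cross-validated accuracy.
      selector = SequentialFeatureSelector(svm, n_features_to_select=10, cv=10)
      X_sel = selector.fit_transform(X, y)

      full = cross_val_score(svm, X, y, cv=10).mean()      # all 30 features
      sel = cross_val_score(svm, X_sel, y, cv=10).mean()   # selected subset
      print(f"all features: {full:.4f}   selected subset: {sel:.4f}")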

  5. Fruit bruise detection based on 3D meshes and machine learning technologies

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Zhang, Ping

    2016-05-01

    This paper studies bruise detection in apples using 3-D imaging. Bruise detection based on 3-D imaging overcomes many limitations of bruise detection based on 2-D imaging, such as low accuracy and sensitivity to lighting conditions. In this paper, apple bruise detection is divided into two parts: feature extraction and classification. For feature extraction, we use a framework that can directly extract local binary patterns from mesh data. For classification, we study support vector machines. Bruise detection using 3-D imaging is compared with bruise detection using 2-D imaging, and 10-fold cross-validation is used to evaluate the performance of the two systems. Experimental results show that bruise detection using 3-D imaging achieves better classification accuracy than bruise detection based on 2-D imaging.
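
    The comparison rests on running the same classifier and the same 10-fold protocol over two feature representations. A minimal sketch, with synthetic features standing in for the 2-D image and 3-D mesh local-binary-pattern descriptors (the class-signal strengths are chosen arbitrarily to mimic a weaker and a stronger representation):

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      y = rng.integers(0, 2, 300)                   # bruised vs. not bruised
      noise = lambda: rng.normal(0.0, 1.0, (300, 20))
      X_2d = y[:, None] * 0.8 + noise()             # weaker class signal (2-D stand-in)
      X_3d = y[:, None] * 1.5 + noise()             # stronger class signal (3-D stand-in)

      for name, X in [("2-D features", X_2d), ("3-D features", X_3d)]:
          acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=10).mean()
          print(f"{name}: 10-fold CV accuracy = {acc:.3f}")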

  6. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given, together with a listing of the computer program written to implement them. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is presented, along with results of error matrices from the mapping effort of the San Juan National Forest. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is described, as is a proposed method for determining the reliability of change detection between two maps of the same area produced at different times.
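
    Among the discrete multivariate techniques referred to above, the most commonly used error-matrix statistics are overall accuracy and the Kappa (KHAT) coefficient of agreement. A minimal sketch of both, using a hypothetical three-class error matrix:

      import numpy as np

      # Error matrix for a thematic map: rows = classified, columns = reference.
      m = np.array([[65.0,  4.0,  2.0],
                    [ 6.0, 81.0,  5.0],
                    [ 3.0,  7.0, 90.0]])

      n = m.sum()
      overall = np.trace(m) / n                 # proportion of samples classified correctly
      chance = (m.sum(axis=0) * m.sum(axis=1)).sum() / n ** 2  # agreement expected by chance
      kappa = (overall - chance) / (1.0 - chance)               # KHAT statistic
      print(f"overall accuracy = {overall:.3f}, kappa = {kappa:.3f}")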

  7. Accuracy of laser beam center and width calculations.

    PubMed

    Mana, G; Massa, E; Rovera, A

    2001-03-20

    The application of lasers in high-precision measurements and the demand for accuracy make the plane-wave model of laser beams unsatisfactory. Measurements of the variance of the transverse components of the photon impulse are essential for wavelength determination. Accuracy evaluation of the relevant calculations is thus an integral part of the assessment of the wavelength of stabilized-laser radiation. We present a propagation-of-error analysis on variance calculations when digitized intensity profiles are obtained by means of silicon video cameras. Image clipping criteria are obtained that maximize the accuracy of the computed result.
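
    The quantities in question are the first and second moments of the digitized intensity profile. A minimal sketch of the computation on a synthetic camera frame, with an arbitrary clipping threshold standing in for the criteria derived in the paper:

      import numpy as np

      # Synthetic beam image: Gaussian spot plus sensor noise (all values illustrative).
      x = np.arange(256)
      X, Y = np.meshgrid(x, x)
      I = np.exp(-((X - 130.0) ** 2 + (Y - 120.0) ** 2) / (2.0 * 18.0 ** 2))
      I += np.random.default_rng(2).normal(0.0, 0.01, I.shape)

      I = np.clip(I - 0.05, 0.0, None)   # clip the background before taking moments
      w = I / I.sum()                    # normalized intensity weights
      cx, cy = (w * X).sum(), (w * Y).sum()   # beam center (first moments)
      var_x = (w * (X - cx) ** 2).sum()       # transverse variance (second moment)
      print(f"center = ({cx:.2f}, {cy:.2f}) px, width (1-sigma) = {np.sqrt(var_x):.2f} px")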

  8. A study of laseruler accuracy and precision (1986-1987)

    SciTech Connect

    Ramachandran, R.S.; Armstrong, K.P.

    1989-06-22

    A study was conducted to investigate Laserruler accuracy and precision. Tests were performed on 0.050 in., 0.100 in., and 0.120 in. gauge block standards. Results showed an accuracy of 3.7 μin. for the 0.120 in. standard, with higher accuracies for the two thinner blocks. The Laserruler precision was 4.83 μin. for the 0.120 in. standard, 3.83 μin. for the 0.100 in. standard, and 4.2 μin. for the 0.050 in. standard.

  9. Integrated zone comparison polygraph technique accuracy with scoring algorithms.

    PubMed

    Gordon, Nathan J; Mohamed, Feroze B; Faro, Scott H; Platek, Steven M; Ahmad, Harris; Williams, J Michael

    2006-02-28

    The Integrated Zone Comparison Technique (IZCT) was utilized with computerized polygraph instrumentation as part of a blind study in the detection of deception. Three scoring algorithms: ASIT Poly Suite (Academy for Scientific Investigative Training's Horizontal Scoring and Algorithm for Chart Interpretation), PolyScore 5.5, and the Objective Scoring System (OSS) were assessed in the interpretation of the charts generated. Where "Inconclusives" were excluded, accuracy for the IZCT with all three algorithms was 100%. When "Inconclusives" were counted as errors, overall accuracy for the IZCT with ASIT Poly Suite was 90% and accuracy with PolyScore and the Objective Scoring System was 72%.
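
    The two headline figures differ only in how "Inconclusive" calls enter the denominator. A toy computation with illustrative counts (chosen so that the two conventions happen to reproduce the 90% and 72% pattern reported above):

      correct, wrong, inconclusive = 36, 4, 10     # illustrative call counts

      acc_excluded = correct / (correct + wrong)                  # inconclusives excluded
      acc_as_error = correct / (correct + wrong + inconclusive)   # inconclusives as errors
      print(f"excluded: {acc_excluded:.0%}   counted as errors: {acc_as_error:.0%}")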

  10. Air traffic control surveillance accuracy and update rate study

    NASA Technical Reports Server (NTRS)

    Craigie, J. H.; Morrison, D. D.; Zipper, I.

    1973-01-01

    The results of an air traffic control surveillance accuracy and update rate study are presented. The objective of the study was to establish quantitative relationships between the surveillance accuracies, update rates, and the communication load associated with the tactical control of aircraft for conflict resolution. The relationships are established for typical types of aircraft, phases of flight, and types of airspace. Specific cases are analyzed to determine the surveillance accuracies and update rates required to prevent two aircraft from approaching each other too closely.

  11. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  12. The H50Q mutation induces a 10-fold decrease in the solubility of α-synuclein.

    PubMed

    Porcari, Riccardo; Proukakis, Christos; Waudby, Christopher A; Bolognesi, Benedetta; Mangione, P Patrizia; Paton, Jack F S; Mullin, Stephen; Cabrita, Lisa D; Penco, Amanda; Relini, Annalisa; Verona, Guglielmo; Vendruscolo, Michele; Stoppini, Monica; Tartaglia, Gian Gaetano; Camilloni, Carlo; Christodoulou, John; Schapira, Anthony H V; Bellotti, Vittorio

    2015-01-23

    The mechanism by which α-synuclein converts from its intrinsically disordered monomeric state into the fibrillar cross-β aggregates characteristically present in Lewy bodies is largely unknown. The investigation of α-synuclein variants causative of familial forms of Parkinson disease can provide unique insights into the conditions that promote or inhibit aggregate formation. It has been shown recently that a newly identified pathogenic mutation of α-synuclein, H50Q, aggregates faster than the wild-type. We investigate here its aggregation propensity by using a sequence-based prediction algorithm, NMR chemical shift analysis of secondary structure populations in the monomeric state, and determination of thermodynamic stability of the fibrils. Our data show that the H50Q mutation induces only a small increment in polyproline II structure around the site of the mutation and a slight increase in the overall aggregation propensity. We also find, however, that the H50Q mutation strongly stabilizes α-synuclein fibrils by 5.0 ± 1.0 kJ mol−1, thus increasing the supersaturation of monomeric α-synuclein within the cell, and strongly favors its aggregation process. We further show that wild-type α-synuclein can decelerate the aggregation kinetics of the H50Q variant in a dose-dependent manner when coaggregating with it. These last findings suggest that the precise balance of α-synuclein synthesized from the wild-type and mutant alleles may influence the natural history and heterogeneous clinical phenotype of Parkinson disease. PMID:25505181

  13. Prediction of 10-fold coordinated TiO2 and SiO2 structures at multimegabar pressures

    PubMed Central

    Lyle, Matthew J.; Pickard, Chris J.; Needs, Richard J.

    2015-01-01

    We predict by first-principles methods a phase transition in TiO2 at 6.5 Mbar from the Fe2P-type polymorph to a ten-coordinated structure with space group I4/mmm. This is the first report, to our knowledge, of the pressure-induced phase transition to the I4/mmm structure among all dioxide compounds. The I4/mmm structure was found to be up to 3.3% denser across all pressures investigated. Significant differences were found in the electronic properties of the two structures, and the metallization of TiO2 was calculated to occur concomitantly with the phase transition to I4/mmm. The implications of our findings were extended to SiO2, and an analogous Fe2P-type to I4/mmm transition was found to occur at 10 TPa. This is consistent with the lower-pressure phase transitions of TiO2, which are well-established models for the phase transitions in other AX2 compounds, including SiO2. As in TiO2, the transition to I4/mmm corresponds to the metallization of SiO2. This transformation is in the pressure range reached in the interiors of recently discovered extrasolar planets and calls for a reformulation of the equations of state used to model them. PMID:25991859

  14. The H50Q Mutation Induces a 10-fold Decrease in the Solubility of α-Synuclein*

    PubMed Central

    Porcari, Riccardo; Proukakis, Christos; Waudby, Christopher A.; Bolognesi, Benedetta; Mangione, P. Patrizia; Paton, Jack F. S.; Mullin, Stephen; Cabrita, Lisa D.; Penco, Amanda; Relini, Annalisa; Verona, Guglielmo; Vendruscolo, Michele; Stoppini, Monica; Tartaglia, Gian Gaetano; Camilloni, Carlo; Christodoulou, John; Schapira, Anthony H. V.; Bellotti, Vittorio

    2015-01-01

    The mechanism by which α-synuclein converts from its intrinsically disordered monomeric state into the fibrillar cross-β aggregates characteristically present in Lewy bodies is largely unknown. The investigation of α-synuclein variants causative of familial forms of Parkinson disease can provide unique insights into the conditions that promote or inhibit aggregate formation. It has been shown recently that a newly identified pathogenic mutation of α-synuclein, H50Q, aggregates faster than the wild-type. We investigate here its aggregation propensity by using a sequence-based prediction algorithm, NMR chemical shift analysis of secondary structure populations in the monomeric state, and determination of thermodynamic stability of the fibrils. Our data show that the H50Q mutation induces only a small increment in polyproline II structure around the site of the mutation and a slight increase in the overall aggregation propensity. We also find, however, that the H50Q mutation strongly stabilizes α-synuclein fibrils by 5.0 ± 1.0 kJ mol−1, thus increasing the supersaturation of monomeric α-synuclein within the cell, and strongly favors its aggregation process. We further show that wild-type α-synuclein can decelerate the aggregation kinetics of the H50Q variant in a dose-dependent manner when coaggregating with it. These last findings suggest that the precise balance of α-synuclein synthesized from the wild-type and mutant alleles may influence the natural history and heterogeneous clinical phenotype of Parkinson disease. PMID:25505181

  15. Prediction of 10-fold coordinated TiO2 and SiO2 structures at multimegabar pressures.

    PubMed

    Lyle, Matthew J; Pickard, Chris J; Needs, Richard J

    2015-06-01

    We predict by first-principles methods a phase transition in TiO2 at 6.5 Mbar from the Fe2P-type polymorph to a ten-coordinated structure with space group I4/mmm. This is the first report, to our knowledge, of the pressure-induced phase transition to the I4/mmm structure among all dioxide compounds. The I4/mmm structure was found to be up to 3.3% denser across all pressures investigated. Significant differences were found in the electronic properties of the two structures, and the metallization of TiO2 was calculated to occur concomitantly with the phase transition to I4/mmm. The implications of our findings were extended to SiO2, and an analogous Fe2P-type to I4/mmm transition was found to occur at 10 TPa. This is consistent with the lower-pressure phase transitions of TiO2, which are well-established models for the phase transitions in other AX2 compounds, including SiO2. As in TiO2, the transition to I4/mmm corresponds to the metallization of SiO2. This transformation is in the pressure range reached in the interiors of recently discovered extrasolar planets and calls for a reformulation of the equations of state used to model them. PMID:25991859

  16. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background: Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods: This study used an instrumented gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2-minute motion trials (2MT) and 12-minute multiple dynamic phase motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings: Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for the initial inertial frame estimation reference for each phase. Interpretation: The variation in AHRS accuracy observed between the different systems and over time can be attributed in part to the dynamic estimation error, but also, and foremost, to the ability of AHRS units to locate the same inertial frame. Conclusions: Mean accuracies obtained under the gimbal table's sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvements in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their full potential.

  17. Vibrational Spectroscopy of HD+ with 2-ppb Accuracy

    SciTech Connect

    Koelemeij, J. C. J.; Roth, B.; Wicht, A.; Ernsting, I.; Schiller, S.

    2007-04-27

    By measurement of the frequency of a vibrational overtone transition in the molecular hydrogen ion HD+, we demonstrate the first optical spectroscopy of trapped molecular ions with submegahertz accuracy. We use a diode laser, locked to a stable frequency comb, to perform resonance-enhanced multiphoton dissociation spectroscopy on sympathetically cooled HD+ ions at 50 mK. The achieved 2-ppb relative accuracy is a factor of 150 higher than previous results for HD+, and the measured transition frequency agrees well with recent high-accuracy ab initio calculations, which include high-order quantum electrodynamic effects. We also show that our method bears potential for achieving considerably higher accuracy and may, if combined with slightly improved theoretical calculations, lead to a new and improved determination of the electron-proton mass ratio.

  18. Atmospheric effects and ultimate ranging accuracy for lunar laser ranging

    NASA Astrophysics Data System (ADS)

    Currie, Douglas G.; Prochazka, Ivan

    2014-10-01

    The deployment of next-generation lunar laser retroreflectors is planned in the near future. With proper robotic deployment, these will support single-shot, single-photoelectron ranging accuracy at the 100-micron level or better. Technologies are available to support this accuracy at advanced ground stations; however, the major question is the ultimate limit imposed on ranging accuracy by changing timing delays due to turbulence and horizontal gradients in the Earth's atmosphere. In particular, there are questions of the delay and temporal broadening of a very narrow laser pulse. Theoretical and experimental results will be discussed that address estimates of the magnitudes of these effects and the issue of precision vs. accuracy.

  19. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

    An accuracy analysis of various high-order schemes is performed from an interpolation point of view. The analysis indicates that classical high-order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate a uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.

  20. Using satellite data to increase accuracy of PMF calculations

    SciTech Connect

    Mettel, M.C.

    1992-03-01

    The accuracy of a flood severity estimate depends on the data used. The more detailed and precise the data, the more accurate the estimate. Earth observation satellites gather detailed data for determining the probable maximum flood at hydropower projects.

  1. Portable, high intensity isotopic neutron source provides increased experimental accuracy

    NASA Technical Reports Server (NTRS)

    Mohr, W. C.; Stewart, D. C.; Wahlgren, M. A.

    1968-01-01

    Small portable, high intensity isotopic neutron source combines twelve curium-americium beryllium sources. This high intensity of neutrons, with a flux which slowly decreases at a known rate, provides for increased experimental accuracy.

  2. What do we mean by accuracy in geomagnetic measurements?

    USGS Publications Warehouse

    Green, A.W.

    1990-01-01

    High accuracy is what distinguishes measurements made at the world's magnetic observatories from other types of geomagnetic measurements. High accuracy in determining the absolute values of the components of the Earth's magnetic field is essential to studying geomagnetic secular variation and processes at the core-mantle boundary, as well as some magnetospheric processes. In some applications of geomagnetic data, precision (or resolution) of measurements may also be important. In addition to accuracy and resolution in the amplitude domain, it is necessary to consider these same quantities in the frequency and space domains. New developments in geomagnetic instruments and communications make real-time, high accuracy, global geomagnetic observatory data sets a real possibility. There is a growing realization in the scientific community of the unique relevance of geomagnetic observatory data to the principal contemporary problems in solid Earth and space physics. Together, these factors provide the promise of a 'renaissance' of the world's geomagnetic observatory system.

  3. Assessing and ensuring GOES-R magnetometer accuracy

    NASA Astrophysics Data System (ADS)

    Carter, Delano; Todirita, Monica; Kronenwetter, Jeffrey; Dahya, Melissa; Chu, Donald

    2016-05-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma error per axis. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma error per axis. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, and from inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. With the proposed calibration regimen, both suggest that the magnetometer subsystem will meet its accuracy requirements.
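
    The accuracy metric itself (absolute mean plus k sigma per axis) is straightforward to evaluate by Monte Carlo once an error model is assumed. A minimal sketch with an invented bias-plus-noise error model; all numbers are illustrative, not GOES-R values:

      import numpy as np

      rng = np.random.default_rng(3)
      bias = np.array([0.3, -0.2, 0.1])                     # nT, assumed residual bias per axis
      err = bias + rng.normal(0.0, 0.4, size=(100_000, 3))  # nT, simulated per-axis errors

      k = 3                                                 # quiet-time definition uses 3 sigma
      metric = np.abs(err.mean(axis=0)) + k * err.std(axis=0)
      print("per-axis |mean| + 3*sigma (nT):", np.round(metric, 2))
      print("meets 1.7 nT requirement:", bool((metric < 1.7).all()))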

  4. Hydraulic servo system increases accuracy in fatigue testing

    NASA Technical Reports Server (NTRS)

    Dixon, G. V.; Kibler, K. S.

    1967-01-01

    Hydraulic servo system increases accuracy in applying fatigue loading to a specimen under test. An error sensing electronic control loop, coupled to the hydraulic proportional closed loop cyclic force generator, provides an accurately controlled peak force to the specimen.

  5. High-Accuracy Asteroid Astrometry from Table Mountain Observatory

    NASA Technical Reports Server (NTRS)

    Owen, W. M.; Synnott, S. P.; Null, G. W.

    1998-01-01

    We have installed a large-format CCD camera on the 0.6-meter telescope at JPL's Table Mountain Observatory and used it to obtain high-accuracy astrometric observations of asteroids and other solar system targets of interest.

  6. Accuracy testing of a new intraoral 3D camera.

    PubMed

    Mehl, A; Ender, A; Mörmann, W; Attin, T

    2009-01-01

    Surveying intraoral structures by optical means has reached the stage where it is being discussed as a serious clinical alternative to conventional impression taking. Ease of handling and, more importantly, accuracy are important criteria for the clinical suitability of these systems. This article presents a new intraoral camera for the Cerec procedure. It reports on a study investigating the accuracy of this camera and its potential clinical indications. Single-tooth and quadrant images were taken with the camera and the results compared to those obtained with a reference scanner and with the previous 3D camera model. Differences were analyzed by superimposing the data records. Accuracy was higher with the new camera than with the previous model, reaching up to 19 µm in single-tooth images. Quadrant images can also be taken with sufficient accuracy (ca. 35 µm) and are simple to perform in clinical practice, thanks to built-in shake detection in automatic capture mode.

  7. Accuracy of Reduced and Extended Thin-Wire Kernels

    SciTech Connect

    Burke, G J

    2008-11-24

    Some results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function; results are shown for simple wire structures.

  8. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires the precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy, composition precision). The spatial accuracy of this microscope is evaluated in this paper through the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure, composed of an oxide layer of high electrical permittivity (a high-k dielectric) that separates the metal gate from the semiconductor channel of a field-effect transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortion. An analytic model is used to interpret inaccuracy in the depth reconstruction of these devices in APT.

  9. Noise limitations on monopulse accuracy in a multibeam antenna

    NASA Astrophysics Data System (ADS)

    Loraine, J.; Wallington, J. R.

    A multibeam system allowing target tracking using monopulse processing switched from beamset to beamset is considered. Attention is given to the accuracy of target angular position estimation. An analytical method is used to establish performance limits under low SNR conditions for a multibeam system. It is shown that, in order to achieve accuracies comparable to those of conventional monopulse systems, much higher SNRs are needed.

  10. Pulse oximetry: accuracy of methods of interpreting graphic summaries.

    PubMed

    Lafontaine, V M; Ducharme, F M; Brouillette, R T

    1996-02-01

    Although pulse oximetry has been used to determine the frequency and extent of hemoglobin desaturation during sleep, movement artifact can result in overestimation of desaturation unless valid desaturations can be identified accurately. Therefore, we determined the accuracy of pulmonologists' and technicians' interpretations of graphic displays of desaturation events, derived an objective method for interpreting such events, and validated the method on an independent data set. Eighty-seven randomly selected desaturation events were classified as valid (58) or artifactual (29) based on cardiorespiratory recordings (gold standard) that included pulse waveform and respiratory inductive plethysmography signals. Using oximetry recordings (test method), nine pediatric pulmonologists and three respiratory technicians ("readers") averaged 50 +/- 11% (SD) accuracy for event classification. A single variable, the pulse amplitude modulation range (PAMR) prior to desaturation, performed better in discriminating valid from artifactual events with 76% accuracy (P < 0.05). Following a seminar on oximetry and the use of the PAMR method, the readers' accuracy increased to 73 +/- 2%. In an independent set of 73 apparent desaturation events (74% valid, 26% artifactual), the PAMR method of assessing oximetry graphs yielded 82% accuracy; transcutaneous oxygen tension records confirmed a drop in oxygenation during 49 of 54 (89%) valid desaturation events. In conclusion, the most accurate method (91%) of assessing desaturation events requires recording of the pulse and respiratory waveforms. However, a practical, easy-to-use method of interpreting pulse oximetry recordings achieved 76-82% accuracy, which constitutes a significant improvement from previous subjective interpretations.

  11. Throwing speed and accuracy in baseball and cricket players.

    PubMed

    Freeston, Jonathan; Rooney, Kieron

    2014-06-01

    Throwing speed and accuracy are both critical to sports performance but cannot be optimized simultaneously. This speed-accuracy trade-off (SATO) is evident across a number of throwing groups but remains poorly understood. The goal was to describe the SATO in baseball and cricket players and determine the speed that optimizes accuracy. 20 grade-level baseball and cricket players performed 10 throws at 80% and 100% of maximal throwing speed (MTS) toward a cricket stump. Baseball players then performed a further 10 throws at 70%, 80%, 90%, and 100% of MTS toward a circular target. Baseball players threw faster with greater accuracy than cricket players at both speeds. Both groups demonstrated a significant SATO as vertical error increased with increases in speed; the trade-off was worse for cricketers than baseball players. Accuracy was optimized at 70% of MTS for baseballers. Throwing athletes should decrease speed when accuracy is critical. Cricket players could adopt baseball-training practices to improve throwing performance.

  12. A Stable and Conservative Interface Treatment of Arbitrary Spatial Accuracy

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Nordstrom, Jan; Gottlieb, David

    1998-01-01

    Stable and accurate interface conditions are derived for the linear advection-diffusion equation. The conditions are functionally independent of the spatial order of accuracy and rely only on the form of the discrete operator. We focus on high-order finite-difference operators that satisfy the summation-by-parts (SBP) property. We prove that stability is a natural consequence of the SBP operators used in conjunction with the new boundary conditions. In addition, we show that the interface treatments are conservative. New finite-difference operators of spatial accuracy up to sixth order are constructed; these operators satisfy the SBP property. Finite-difference operators are shown to admit design accuracy (pth-order global accuracy) when (p-1)th-order stencil closures are used near the boundaries, provided the physical boundary conditions are implemented to at least pth-order accuracy. Stability and accuracy are demonstrated on the nonlinear Burgers' equation for a twelve-subdomain problem with randomly distributed interfaces.

  13. Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.

    PubMed

    Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L J; Maris, Gunter

    2016-01-01

    We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses. PMID:27167518

  14. Bayesian Estimation of Combined Accuracy for Tests with Verification Bias

    PubMed Central

    Broemeling, Lyle D.

    2011-01-01

    This presentation will emphasize the estimation of the combined accuracy of two or more tests when verification bias is present. Verification bias occurs when some of the subjects are not verified by the gold standard. The approach is Bayesian, where the estimation of test accuracy is based on the posterior distribution of the relevant parameter. Accuracy of two combined binary tests is estimated employing either the “believe the positive” or “believe the negative” rule; the true and false positive fractions for each rule are then computed for the two tests. In order to perform the analysis, the missing-at-random assumption is imposed, and an interesting example is provided by estimating the combined accuracy of CT and MRI to diagnose lung cancer. The Bayesian approach is extended to two ordinal tests when verification bias is present, and the accuracy of the combined tests is based on the ROC area of the risk function. An example involving mammography with two readers with extreme verification bias illustrates the estimation of combined test accuracy for ordinal tests. PMID:26859487
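
    For two binary tests, the combined operating points under the two rules have simple closed forms if the tests are assumed conditionally independent given disease status (an assumption made here purely for illustration, not one the Bayesian model requires). A minimal sketch with invented sensitivities and specificities:

      se1, sp1 = 0.85, 0.90          # test 1 sensitivity and specificity (illustrative)
      se2, sp2 = 0.75, 0.95          # test 2

      # "Believe the positive": call positive if either test is positive.
      tpf_bp = 1.0 - (1.0 - se1) * (1.0 - se2)   # combined true-positive fraction
      fpf_bp = 1.0 - sp1 * sp2                   # combined false-positive fraction
      # "Believe the negative": call positive only if both tests are positive.
      tpf_bn = se1 * se2
      fpf_bn = (1.0 - sp1) * (1.0 - sp2)

      print(f"BP rule: TPF = {tpf_bp:.3f}, FPF = {fpf_bp:.3f}")
      print(f"BN rule: TPF = {tpf_bn:.3f}, FPF = {fpf_bn:.3f}")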

  15. Haptic perception accuracy depending on self-produced movement.

    PubMed

    Park, Chulwook; Kim, Seonjin

    2014-01-01

    This study examined whether self-produced movement influences haptic perception ability (experiment 1) as well as the factors associated with levels of influence (experiment 2) in racket sports. For experiment 1, the haptic perception accuracy levels of five male table tennis experts and five male novices were examined under two different conditions (no movement vs. movement). For experiment 2, the haptic afferent subsystems of five male table tennis experts and five male novices were investigated in only the self-produced-movement-coupled condition. Inferential statistics (ANOVA, t-test) of the data and custom-made devices (shock & vibration sensor, Qualisys Track Manager) were used to determine haptic perception accuracy (experiments 1 and 2) and its association with expertise. The results of this research show that expert-level players achieve higher accuracy with less variability (racket vibration and angle) than novice-level players, especially in their self-produced-movement-coupled performances. The important finding from this result is that, in terms of accuracy, skill-associated differences were enlarged during self-produced movement. To explain the origin of this difference between experts and novices, the functional variability of haptic afferent subsystems can serve as a reference. These two factors (self-produced accuracy and the variability of haptic features), as investigated in this study, would be useful criteria for educators in racket sports and suggest a broader hypothesis for further research into the effects of haptic accuracy related to variability.

  16. Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1997-01-01

    An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. An expression is developed for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates when the output residuals are colored. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle. As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, whereas conventional parameter accuracy measures were optimistic.

  17. Collective animal decisions: preference conflict and decision accuracy

    PubMed Central

    Conradt, Larissa

    2013-01-01

    Social animals frequently share decisions that involve uncertainty and conflict. It has been suggested that conflict can enhance decision accuracy. In order to judge the practical relevance of such a suggestion, it is necessary to explore how general such findings are. Using a model, I examine whether conflicts between animals in a group with respect to preferences for avoiding false positives versus avoiding false negatives could, in principle, enhance the accuracy of collective decisions. I found that decision accuracy nearly always peaked when there was maximum conflict in groups in which individuals had different preferences. However, groups with no preferences were usually even more accurate. Furthermore, a relatively slight skew towards more animals with a preference for avoiding false negatives decreased the rate of expected false negatives versus false positives considerably (and vice versa), while resulting in only a small loss of decision accuracy. I conclude that in ecological situations in which decision accuracy is crucial for fitness and survival, animals cannot ‘afford’ preferences with respect to avoiding false positives versus false negatives. When decision accuracy is less crucial, animals might have such preferences. A slight skew in the number of animals with different preferences will bias the group towards avoiding the type of error that the majority of group members prefers to avoid. The model also indicated that knowing the average success rate (‘base rate’) of a decision option can be very misleading, and that animals should ignore such base rates unless further information is available. PMID:24516716

  18. Accuracy of GIPSY PPP from a denser network

    NASA Astrophysics Data System (ADS)

    Gokhan Hayal, Adem; Ugur Sanli, Dogan

    2015-04-01

    Researchers need to know the accuracy of GPS when planning their field surveys, so as to obtain reliable positions as well as deformation rates. Geophysical applications such as monitoring the development of a fault creep or of crustal motion for global sea-level-rise studies necessitate the use of continuous GPS, whereas applications such as determining co-seismic displacements where permanent GPS sites are sparsely scattered require episodic campaigns. Recently, real-time applications of GPS for the early warning of earthquakes and tsunamis have come into focus. Studying the static positioning accuracy of GPS has been of interest to researchers for more than a decade now. Various software packages and modeling strategies have been tested so far, and relative positioning accuracy has been compared with PPP accuracy. For relative positioning, observing session duration and the network geometry of reference stations appear to be the dominant factors in GPS accuracy, whereas observing session duration seems to be the only factor influencing PPP accuracy. We believe that the latest developments concerning the accuracy of static GPS from well-established software will form a basis for the quality of the GPS field work mentioned above, especially for real-time applications, which are referred to more frequently nowadays. To assess GPS accuracy, conventionally a network of some 10 to 30 regionally or globally scattered GPS stations is used. In this study, we enlarge the GPS network to 70 globally scattered IGS stations to observe the changes in our previous accuracy modeling, which employed only 13 stations. We use the latest version 6.3 of the GIPSY/OASIS II software and download the data from SOPAC archives. Noting the effect of the ionosphere on our previous accuracy modeling, here we selected GPS days on which the k-index values were lower than 4. This enabled us to extend the interval of observing session duration used for the

  19. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.

  20. Speed and accuracy of visual image discrimination by rats.

    PubMed

    Reinagel, Pamela

    2013-01-01

    The trade-off between speed and accuracy of sensory discrimination has most often been studied using sensory stimuli that evolve over time, such as random dot motion discrimination tasks. We previously reported that when rats perform motion discrimination, correct trials have longer reaction times than errors, accuracy increases with reaction time, and reaction time increases with stimulus ambiguity. In such experiments, new sensory information is continually presented, which could partly explain interactions between reaction time and accuracy. The present study shows that a changing physical stimulus is not essential to those findings. Freely behaving rats were trained to discriminate between two static visual images in a self-paced, two-alternative forced-choice reaction time task. Each trial was initiated by the rat, and the two images were presented simultaneously and persisted until the rat responded, with no time limit. Reaction times were longer in correct trials than in error trials, and accuracy increased with reaction time, comparable to results previously reported for rats performing motion discrimination. In the motion task, coherence has been used to vary discrimination difficulty. Here morphs between the previously learned images were used to parametrically vary the image similarity. In randomly interleaved trials, rats took more time on average to respond in trials in which they had to discriminate more similar stimuli. For both the motion and image tasks, the dependence of reaction time on ambiguity is weak, as if rats prioritized speed over accuracy. Therefore we asked whether rats can change the priority of speed and accuracy adaptively in response to a change in reward contingencies. For two rats, the penalty delay was increased from 2 to 6 s. When the penalty was longer, reaction times increased, and accuracy improved. This demonstrates that rats can flexibly adjust their behavioral strategy in response to the cost of errors.

  1. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in the high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but controlling the errors and improving the accuracy at the design and manufacturing stage still requires further effort. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using the first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a larger effect on the end-effector accuracy. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.

  2. Genomic selection and association mapping in rice (Oryza sativa): effect of trait genetic architecture, training population composition, marker number and statistical model on accuracy of rice genomic selection in elite, tropical rice breeding lines.

    PubMed

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R

    2015-02-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline. PMID:25689273
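
    Since RR-BLUP is equivalent to ridge regression on the marker genotypes, the cross-validated prediction-accuracy protocol can be sketched compactly. The data below are synthetic (363 lines, with 2,000 markers standing in for the 73,147 GBS markers), and accuracy is reported, as is conventional in genomic selection, as the correlation between predicted and observed phenotypes in the held-out folds:

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(4)
      X = rng.integers(0, 3, size=(363, 2000)).astype(float)  # 0/1/2 genotype codes
      beta = rng.normal(0.0, 0.05, 2000)                      # small polygenic marker effects
      y = X @ beta + rng.normal(0.0, 1.0, 363)                # phenotype = genetic value + noise

      accs = []
      for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
          model = Ridge(alpha=100.0).fit(X[tr], y[tr])        # ridge regression ~ RR-BLUP
          accs.append(np.corrcoef(model.predict(X[te]), y[te])[0, 1])
      print(f"mean predictive accuracy (r): {np.mean(accs):.2f}")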

  3. Genomic Selection and Association Mapping in Rice (Oryza sativa): Effect of Trait Genetic Architecture, Training Population Composition, Marker Number and Statistical Model on Accuracy of Rice Genomic Selection in Elite, Tropical Rice Breeding Lines

    PubMed Central

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R.

    2015-01-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline. PMID:25689273

  4. Genomic selection and association mapping in rice (Oryza sativa): effect of trait genetic architecture, training population composition, marker number and statistical model on accuracy of rice genomic selection in elite, tropical rice breeding lines.

    PubMed

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R

    2015-02-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline.
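
    The five-fold cross-validation scheme used in this study is straightforward to prototype. The sketch below is a minimal illustration rather than the authors' pipeline: it generates synthetic marker and phenotype data of illustrative size, uses plain ridge regression as a stand-in for RR-BLUP (the two are closely related, differing mainly in how the shrinkage parameter is chosen), and reports accuracy as the correlation between predicted and observed phenotypes in the held-out folds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 363 lines x 1,000 markers coded -1/0/1 (illustrative sizes).
n_lines, n_markers = 363, 1000
X = rng.integers(-1, 2, size=(n_lines, n_markers)).astype(float)
true_effects = rng.normal(0.0, 0.05, n_markers)
y = X @ true_effects + rng.normal(0.0, 1.0, n_lines)   # phenotype = genetics + noise

def ridge_fit_predict(X_tr, y_tr, X_te, lam=100.0):
    """Closed-form ridge regression; lam is a hypothetical shrinkage value
    (RR-BLUP would estimate it from variance components)."""
    p = X_tr.shape[1]
    beta = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(p), X_tr.T @ y_tr)
    return X_te @ beta

# Five-fold cross-validation: accuracy = cor(predicted, observed) on held-out lines.
folds = np.array_split(rng.permutation(n_lines), 5)
accuracies = []
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n_lines), test_idx)
    y_pred = ridge_fit_predict(X[train_idx], y[train_idx], X[test_idx])
    accuracies.append(np.corrcoef(y_pred, y[test_idx])[0, 1])

print("fold accuracies:", np.round(accuracies, 2))
print("mean accuracy:", round(float(np.mean(accuracies)), 2))
```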

  5. Investigation of the Accuracy of Google Earth Elevation Data

    NASA Astrophysics Data System (ADS)

    El-Ashmawy, Khalid L. A.

    2016-09-01

    Digital Elevation Models (DEMs) are a valuable source of the elevation data required for many engineering applications; contour lines and slope-aspect maps are among their many uses. Moreover, DEMs are often used in geographic information systems (GIS) and are the most common basis for digitally produced relief maps. This paper proposes a method of generating a DEM from Google Earth elevation data, which is easy to use and free. The case study consisted of three small regions on the northern coast of Egypt. The accuracy of the Google Earth-derived elevation data is reported using the root mean square error (RMSE), mean error (ME), and maximum absolute error (MAE). All these accuracy statistics were computed using the ground coordinates of 200 reference points for each region of the case study; the reference data were collected with a total station survey. The results showed that the accuracy of the prepared DEMs is suitable for certain engineering applications but inadequate to meet the standard required for fine/small-scale DEMs in very precise engineering studies. The accuracies obtained for terrain with small height differences can be used for preparing large-area cadastral, city planning, or land classification maps. In general, Google Earth elevation data can be used only for investigations and preliminary studies at low cost, and users of Google Earth should test the accuracy of the elevation data against reference data before using it.
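
    The three accuracy statistics used in this study are simple to reproduce once the Google Earth elevations have been paired with reference elevations at the check points. A minimal sketch with hypothetical check-point values; note that MAE here denotes the maximum absolute error, as in the abstract, not the mean absolute error.

```python
import numpy as np

def dem_accuracy(z_dem, z_ref):
    """RMSE, mean error, and maximum absolute error of DEM elevations
    against reference (e.g., total-station) elevations."""
    err = np.asarray(z_dem, float) - np.asarray(z_ref, float)
    rmse = np.sqrt(np.mean(err**2))
    me = np.mean(err)              # signed bias
    mae = np.max(np.abs(err))      # worst-case error (the abstract's MAE)
    return rmse, me, mae

# Hypothetical check-point elevations (metres) for one region:
z_ref = np.array([12.1, 15.4, 9.8, 11.0, 14.2])
z_ge  = np.array([12.9, 14.7, 10.5, 10.2, 15.1])
print("RMSE=%.2f m  ME=%.2f m  MAE=%.2f m" % dem_accuracy(z_ge, z_ref))
```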

  6. Martial arts striking hand peak acceleration, accuracy and consistency.

    PubMed

    Neto, Osmar Pinto; Marzullo, Ana Carolina De Miranda; Bolander, Richard P; Bir, Cynthia A

    2013-01-01

    The goal of this paper was to investigate the possible trade-off between peak hand acceleration and the accuracy and consistency of hand strikes performed by martial artists of different training experiences. Ten male martial artists with training experience ranging from one to nine years volunteered to participate in the experiment. Each participant performed 12 maximum-effort goal-directed strikes. Hand acceleration during the strikes was obtained using a tri-axial accelerometer block. A pressure sensor matrix was used to determine the accuracy and consistency of the strikes. Accuracy was estimated by the radial distance between the centroid of each subject's 12 strikes and the target, whereas consistency was estimated by the square root of the 12 strikes' mean squared distance from their centroid. We found that training experience was significantly correlated with hand peak acceleration prior to impact (r² = 0.456, p = 0.032) and accuracy (r² = 0.621, p = 0.012). These correlations suggest that more experienced participants exhibited higher hand peak accelerations and at the same time were more accurate. Training experience, however, was not correlated with consistency (r² = 0.085, p = 0.413). Overall, our results suggest that martial arts training may lead practitioners to achieve higher striking hand accelerations with better accuracy and no change in striking consistency.
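
    The accuracy and consistency definitions above translate directly into a few lines of array arithmetic. A minimal sketch with hypothetical impact coordinates, taking the target as the reference point:

```python
import numpy as np

def accuracy_and_consistency(xy_hits, xy_target=(0.0, 0.0)):
    """Accuracy: radial distance from the centroid of the strikes to the target.
    Consistency: RMS distance of the strikes from their own centroid."""
    hits = np.asarray(xy_hits, float)
    centroid = hits.mean(axis=0)
    accuracy = np.linalg.norm(centroid - np.asarray(xy_target, float))
    consistency = np.sqrt(np.mean(np.sum((hits - centroid) ** 2, axis=1)))
    return accuracy, consistency

# Hypothetical (x, y) impact points in mm for one participant's 12 strikes:
rng = np.random.default_rng(1)
hits = rng.normal(loc=[5.0, -3.0], scale=8.0, size=(12, 2))
acc, con = accuracy_and_consistency(hits)
print(f"accuracy = {acc:.1f} mm, consistency = {con:.1f} mm")
```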

  7. The accuracy of breast volume measurement methods: A systematic review.

    PubMed

    Choppin, S B; Wheat, J S; Gee, M; Goyal, A

    2016-08-01

    Breast volume is a key metric in breast surgery and there are a number of different methods which measure it. However, a lack of knowledge regarding a method's accuracy and comparability has made it difficult to establish a clinical standard. We have performed a systematic review of the literature to examine the various techniques for measurement of breast volume and to assess their accuracy and usefulness in clinical practice. Each of the fifteen studies we identified had more than ten live participants and assessed volume measurement accuracy using a gold standard based on the volume, or mass, of a mastectomy specimen. Many of the studies in this review report large (>200 ml) uncertainty in breast volume and many fail to assess measurement accuracy using appropriate statistical tools. Of the methods assessed, MRI scanning consistently demonstrated the highest accuracy, with three studies reporting errors lower than 10% for small (250 ml), medium (500 ml) and large (1000 ml) breasts. However, as MRI is a high-cost, non-routine assessment, other methods may be more appropriate. PMID:27288864

  8. Evidence for Enhanced Interoceptive Accuracy in Professional Musicians

    PubMed Central

    Schirmer-Mokwa, Katharina L.; Fard, Pouyan R.; Zamorano, Anna M.; Finkel, Sebastian; Birbaumer, Niels; Kleber, Boris A.

    2015-01-01

    Interoception is defined as the perceptual activity involved in the processing of internal bodily signals. While the ability to perceive internal signals is considered a relatively stable trait, recent data suggest that learning to integrate multisensory information can modulate it. Making music is a uniquely rich multisensory experience that has been shown to alter motor, sensory, and multimodal representations in the brains of musicians. We hypothesized that musical training also heightens interoceptive accuracy, comparable to its effects on other perceptual modalities. Thirteen professional singers, twelve string players, and thirteen matched non-musicians were examined using a well-established heartbeat discrimination paradigm complemented by self-reported dispositional traits. Results revealed that both groups of musicians displayed higher interoceptive accuracy than non-musicians, whereas no differences were found between singers and string players. Regression analyses showed that accumulated musical practice explained about 49% of the variation in heartbeat perception accuracy in singers but not in string players. Psychometric data yielded a number of psychologically plausible inter-correlations in musicians related to performance anxiety; however, dispositional traits were not a confounding factor on heartbeat discrimination accuracy. Together, these data provide first evidence that professional musicians show enhanced interoceptive accuracy compared to non-musicians, and we argue that musical training largely accounted for this effect. PMID:26733836

  9. New Reconstruction Accuracy Metric for 3D PIV

    NASA Astrophysics Data System (ADS)

    Bajpayee, Abhishek; Techet, Alexandra

    2015-11-01

    Reconstruction for 3D PIV typically relies on recombining images captured from different viewpoints via multiple cameras/apertures. Ideally, the quality of reconstruction dictates the accuracy of the derived velocity field. A reconstruction quality parameter Q is commonly used as a measure of the accuracy of reconstruction algorithms; by definition, a high Q value requires the intensity peak levels and shapes in the reconstructed and reference volumes to match. We show that accurate velocity fields rely only on the peak locations in the volumes, not on intensity peak levels and shapes. In synthetic aperture (SA) PIV reconstructions, intensity peak shapes vary with the number of cameras, and peak heights vary with spatial/temporal particle intensity variation; this lowers Q but not the accuracy of the derived velocity field. We introduce a new velocity vector correlation factor Qv as a metric to assess the accuracy of 3D PIV techniques, which provides a better indication of algorithm accuracy. For SAPIV, the number of cameras required for a high Qv is lower than that for a high Q. We discuss Qv in the context of 3D PIV and also present a preliminary comparison of the performance of TomoPIV and SAPIV based on Qv.
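
    The abstract does not spell out the formula for Qv, but a natural reading of "velocity vector correlation factor" is a normalized cross-correlation between the reconstructed and reference velocity fields, analogous to the intensity-based Q. A sketch under that assumption; the field shapes and noise level are hypothetical:

```python
import numpy as np

def q_velocity(u_rec, u_ref):
    """Normalized cross-correlation of two 3D velocity vector fields
    (assumed form of Qv; arrays of shape [nx, ny, nz, 3]).
    Qv = 1 for identical fields."""
    num = np.sum(u_rec * u_ref)
    den = np.sqrt(np.sum(u_rec**2) * np.sum(u_ref**2))
    return num / den

rng = np.random.default_rng(2)
u_ref = rng.normal(size=(16, 16, 8, 3))
u_rec = u_ref + 0.1 * rng.normal(size=u_ref.shape)  # slightly noisy reconstruction
print(f"Qv = {q_velocity(u_rec, u_ref):.3f}")
```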

  10. Changes in limb striking pattern: effects of speed and accuracy.

    PubMed

    Southard, D

    1989-12-01

    This study investigated the changes in an arm striking pattern as a result of practice and the effects of speed and accuracy requirements on such changes. The task was to strike a baseball-size foam ball from a batting tee adjusted to the height of each subject's iliac crest. Ten right-handed subjects, initially displaying an inefficient striking pattern, volunteered for this study. All subjects performed the task under the following conditions: (1) speed, (2) accuracy, and (3) speed and accuracy. Each subject completed 10 trials in each condition (randomly ordered) for five consecutive days. A high-speed camera (64 fps) was used to photograph subjects' striking patterns for each condition over the 5-day period. Analysis of variance of joint angles at arm reversal and contact, and of the velocity of the hand relative to the glenohumeral axis at contact, revealed that subjects initially constrained limb segments to act in a unitary fashion; then, with practice, a more efficient pattern developed. The requirement of speed was found to enhance a change in limb configuration, whereas the requirement of accuracy, and the consequent reduction in speed, impeded the development of a more efficient striking pattern. Analysis of radial error revealed no differences in accuracy to the target by either condition or day of practice. A graphic analysis of segmental angular momentum versus relative time showed that joint angle changes allowed subjects to transfer angular momentum and thereby increase the velocity of the hand at contact. PMID:2489862

  11. Vertical resolution and accuracy of atmospheric infrared sounding spectrometers

    NASA Technical Reports Server (NTRS)

    Huang, Hung-Lung; Smith, William L.; Woolf, Harold M.

    1992-01-01

    A theoretical analysis is performed to evaluate the accuracy and vertical resolution of atmospheric profiles obtained with the HIRS/2, GOES I/M, and HIS instruments. In addition, a linear simultaneous retrieval algorithm is used with aircraft observations to validate the theoretical predictions. Both theoretical and observational results clearly indicate that the accuracy and vertical resolution of the retrieved profile are improved by high spectral resolution and broad spectral coverage of infrared radiance measurements. The HIS is found to provide the equivalent of 11 independent pieces of precise temperature information and 9 of water vapor information. The characteristics for temperature include a vertical resolution of 1-6 km with an accuracy of 1 K, and for water vapor a vertical resolution of 0.5-3.0 km with an accuracy of 3 K in dewpoint temperature. The HIS is a factor of 2-3 better in vertical resolution and a factor of 2 better in accuracy than the GOES I/M and HIRS/2 filter radiometers.

  12. COMPASS time synchronization and dissemination—Toward centimetre positioning accuracy

    NASA Astrophysics Data System (ADS)

    Wang, ZhengBo; Zhao, Lu; Wang, ShiGuang; Zhang, JianWei; Wang, Bo; Wang, LiJun

    2014-09-01

    In this paper we investigate methods to achieve highly accurate time synchronization among the satellites of the COMPASS global navigation satellite system (GNSS). Owing to the special design of COMPASS, which includes several geostationary (GEO) satellites, time synchronization can be made highly accurate via microwave links from ground stations to the GEO satellites. Serving as space-borne relay stations, the GEO satellites can further disseminate time and frequency signals to other satellites in the system, such as the inclined geosynchronous (IGSO) and medium Earth orbit (MEO) satellites. It is shown that, because of this accuracy in clock synchronization, the theoretical accuracy of COMPASS positioning and navigation will surpass that of GPS. In addition, the COMPASS system can provide its entire positioning, navigation, and time-dissemination service even without the ground link, making it much more robust and secure. We further show that time dissemination from the COMPASS GEO satellites to earth-fixed stations can achieve very high accuracy, reaching 100 ps in time dissemination and 3 cm in positioning. We also analyze two feasible synchronization plans, and all special and general relativistic effects related to the frequency and time shifts of COMPASS clocks are given. We conclude that COMPASS can reach centimeter-level positioning accuracy and discuss potential applications.

  13. Range accuracy analysis of streak tube imaging lidar systems

    NASA Astrophysics Data System (ADS)

    Ye, Guangchao; Fan, Rongwei; Chen, Zhaodong; Yuan, Wei; Chen, Deying; He, Ping

    2016-02-01

    Streak tube imaging lidar (STIL) is an active imaging system that achieves high range accuracy and a wide range gate by using a pulsed laser transmitter and a streak tube receiver to produce 3D range images. This work investigates the range accuracy performance of STIL systems based on a peak detection algorithm, taking into account the effects of image blurring. A theoretical model of the time-resolved signal distribution, including the static blurring width in addition to the laser pulse width, is presented, resulting in a modified range accuracy analysis. The model indicates that the static blurring width has a significant effect on the range accuracy, which is validated by both simulation and experimental results. By using the optimal static blurring width, the range accuracies are enhanced in both indoor and outdoor experiments, with stand-off distances of 10 m and 1700 m, respectively; the corresponding best range errors of 0.06 m and 0.25 m were achieved in a daylight environment.

  14. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided implant placement systems. An electronic literature search yielded 661 articles; after reviewing 139 of them, the authors chose 52 articles for full-text evaluation, of which 24 studies involved accuracy measurements. Fourteen of the selected references were clinical and ten were in vitro (model or cadaver). Analysis of variance (Tukey's post hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 measurements, the average mean error at the entry point was 0.98 mm, the average deviation at the level of the apex was 1.29 mm, and the mean angular deviation was 3.96 degrees. A significant difference was observed between the two methods of implant placement (partially and fully guided sequences) in terms of deviation at the entry point, at the apex, and in angulation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and new developments will further improve the accuracy of guided implant placement. To draw dependable conclusions, and for further evaluation of the parameters used for accuracy measurements, randomized controlled single- or multi-center clinical trials are necessary. PMID:27544966
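
    The three deviation measures pooled in this review (entry-point deviation, apex deviation, and angular deviation) can each be computed from the planned and placed implant axes. A minimal sketch with hypothetical coordinates:

```python
import numpy as np

def implant_deviations(entry_plan, apex_plan, entry_placed, apex_placed):
    """Entry/apex deviations (mm) and angular deviation (degrees) between
    a planned and a placed implant, each defined by entry and apex points."""
    entry_dev = np.linalg.norm(np.subtract(entry_placed, entry_plan))
    apex_dev = np.linalg.norm(np.subtract(apex_placed, apex_plan))
    axis_plan = np.subtract(apex_plan, entry_plan)
    axis_placed = np.subtract(apex_placed, entry_placed)
    cos_a = np.dot(axis_plan, axis_placed) / (
        np.linalg.norm(axis_plan) * np.linalg.norm(axis_placed))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return entry_dev, apex_dev, angle

# Hypothetical coordinates in mm (an 11 mm implant, slightly misplaced):
e, a, ang = implant_deviations((0, 0, 0), (0, 0, -11),
                               (0.8, 0.4, 0.1), (1.3, 0.9, -10.8))
print(f"entry = {e:.2f} mm, apex = {a:.2f} mm, angle = {ang:.2f} deg")
```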

  15. Research of measuring accuracy of laser tracker system

    NASA Astrophysics Data System (ADS)

    Ouyang, Jianfei; Liang, Zhiyong; Zhang, Haixin; Yan, Yonggang

    2006-11-01

    This paper presents results from a project funded by the China NSFC. The Laser Tracker System (LTS) is a portable 3D measuring system for large-scale metrology. Measuring conditions such as warm-up time and ambient temperature can greatly affect the measuring accuracy of an LTS, and this paper focuses on quantifying these effects. A Coordinate Measuring Machine (CMM) was employed as a higher-accuracy reference instrument to validate the LTS, and experiments were carried out to determine how time and temperature affect its accuracy. The experiments show that the LTS reaches its highest measuring accuracy after a three-hour warm-up, but becomes unstable, with decreasing accuracy, after 10 hours of operation, so it needs calibration and compensation every 10 hours. They also show that the measuring error can reach 29.6 μm at a temperature of 30.5 °C, whereas it is below 5.9 μm when the temperature is between 20 °C and 23.8 °C. This research provides useful guidance for the application of LTS.

  16. Cost and accuracy of advanced breeding trial designs in apple

    PubMed Central

    Harshman, Julia M; Evans, Kate M; Hardner, Craig M

    2016-01-01

    Trialing advanced candidates in tree fruit crops is expensive due to the long-term nature of the plantings and the labor-intensive evaluations required to make selection decisions, so how closely the trait evaluations approximate the true trait value must be balanced against the cost of the program. Field trial designs for advanced apple candidates that reduced the number of locations, the number of years, and the number of harvests per year were modeled to investigate their effect on cost and accuracy in an operational breeding program. The aim was to find designs that would allow evaluation of the most additional candidates while sacrificing the least accuracy. Critical percentage difference, response to selection, and correlated response were used to examine changes in the accuracy of trait evaluations. For the quality traits evaluated, accuracy and response to selection were not substantially reduced for most trial designs. Risk management influences the decision to change trial design, and some designs carried greater risk than others. Balancing cost and accuracy against risk yields valuable insight into advanced breeding trial design. The methods outlined in this analysis would be well suited to other horticultural crop breeding programs. PMID:27019717

  17. Changes in limb striking pattern: effects of speed and accuracy.

    PubMed

    Southard, D

    1989-12-01

    This study investigated the changes in an arm striking pattern as a result of practice and the effects of speed and accuracy requirements on such changes. The task was to strike a baseball-size foam ball from a batting tee adjusted to the height of each subject's iliac crest. Ten right-handed subjects, initially displaying an inefficient striking pattern, volunteered for this study. All subjects performed the task under the following conditions: (1) speed, (2) accuracy, and (3) speed and accuracy. Each subject completed 10 trials in each condition (randomly ordered) for five consecutive days. A high-speed camera (64 fps) was used to photograph subjects' striking patterns for each condition over the 5-day period. Analysis of variance of joint angles at arm reversal and contact, and of the velocity of the hand relative to the glenohumeral axis at contact, revealed that subjects initially constrained limb segments to act in a unitary fashion; then, with practice, a more efficient pattern developed. The requirement of speed was found to enhance a change in limb configuration, whereas the requirement of accuracy, and the consequent reduction in speed, impeded the development of a more efficient striking pattern. Analysis of radial error revealed no differences in accuracy to the target by either condition or day of practice. A graphic analysis of segmental angular momentum versus relative time showed that joint angle changes allowed subjects to transfer angular momentum and thereby increase the velocity of the hand at contact.

  18. Evidence for Enhanced Interoceptive Accuracy in Professional Musicians.

    PubMed

    Schirmer-Mokwa, Katharina L; Fard, Pouyan R; Zamorano, Anna M; Finkel, Sebastian; Birbaumer, Niels; Kleber, Boris A

    2015-01-01

    Interoception is defined as the perceptual activity involved in the processing of internal bodily signals. While the ability to perceive internal signals is considered a relatively stable trait, recent data suggest that learning to integrate multisensory information can modulate it. Making music is a uniquely rich multisensory experience that has been shown to alter motor, sensory, and multimodal representations in the brains of musicians. We hypothesized that musical training also heightens interoceptive accuracy, comparable to its effects on other perceptual modalities. Thirteen professional singers, twelve string players, and thirteen matched non-musicians were examined using a well-established heartbeat discrimination paradigm complemented by self-reported dispositional traits. Results revealed that both groups of musicians displayed higher interoceptive accuracy than non-musicians, whereas no differences were found between singers and string players. Regression analyses showed that accumulated musical practice explained about 49% of the variation in heartbeat perception accuracy in singers but not in string players. Psychometric data yielded a number of psychologically plausible inter-correlations in musicians related to performance anxiety; however, dispositional traits were not a confounding factor on heartbeat discrimination accuracy. Together, these data provide first evidence that professional musicians show enhanced interoceptive accuracy compared to non-musicians, and we argue that musical training largely accounted for this effect. PMID:26733836

  19. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided implant placement systems. An electronic literature search yielded 661 articles; after reviewing 139 of them, the authors chose 52 articles for full-text evaluation, of which 24 studies involved accuracy measurements. Fourteen of the selected references were clinical and ten were in vitro (model or cadaver). Analysis of variance (Tukey's post hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 measurements, the average mean error at the entry point was 0.98 mm, the average deviation at the level of the apex was 1.29 mm, and the mean angular deviation was 3.96 degrees. A significant difference was observed between the two methods of implant placement (partially and fully guided sequences) in terms of deviation at the entry point, at the apex, and in angulation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and new developments will further improve the accuracy of guided implant placement. To draw dependable conclusions, and for further evaluation of the parameters used for accuracy measurements, randomized controlled single- or multi-center clinical trials are necessary.

  20. Diagnostic Accuracy of Procalcitonin in Bacterial Meningitis Versus Nonbacterial Meningitis

    PubMed Central

    Wei, Ting-Ting; Hu, Zhi-De; Qin, Bao-Dong; Ma, Ning; Tang, Qing-Qin; Wang, Li-Li; Zhou, Lin; Zhong, Ren-Qian

    2016-01-01

    Several studies have investigated the diagnostic accuracy of procalcitonin (PCT) levels in blood or cerebrospinal fluid (CSF) for bacterial meningitis (BM), but the results have been heterogeneous. The aim of the present study was to ascertain the diagnostic accuracy of PCT as a marker for BM detection. A systematic search of the EMBASE, Scopus, Web of Science, and PubMed databases was performed to identify studies published before December 7, 2015 investigating the diagnostic accuracy of PCT for BM. The quality of the eligible studies was assessed using the revised Quality Assessment for Studies of Diagnostic Accuracy method. The overall diagnostic accuracy of PCT measured in CSF or blood was pooled using a bivariate model. Twenty-two studies involving 2058 subjects were included in this systematic review and meta-analysis. The overall specificities and sensitivities were 0.86 and 0.80 for CSF PCT, and 0.97 and 0.95 for blood PCT, respectively. Areas under the summary receiver operating characteristic curves were 0.90 and 0.98 for CSF PCT and blood PCT, respectively. The major limitations of this systematic review and meta-analysis were the small number of studies included and the heterogeneous diagnostic thresholds adopted by the eligible studies. Our meta-analysis shows that PCT is a useful biomarker for BM diagnosis. PMID:26986140
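
    The study-level ingredients of such a meta-analysis are plain 2x2 confusion tables. The sketch below computes per-study sensitivity and specificity and a crude pooled estimate from hypothetical counts; the bivariate random-effects model actually used in the paper additionally accounts for between-study heterogeneity and the sensitivity-specificity correlation.

```python
import numpy as np

# Hypothetical 2x2 counts per study: (TP, FN, TN, FP)
studies = np.array([
    [40,  8,  90, 12],
    [25,  5,  60,  6],
    [55, 10, 120, 15],
])

tp, fn, tn, fp = studies.T
sens = tp / (tp + fn)          # per-study sensitivity
spec = tn / (tn + fp)          # per-study specificity
print("per-study sensitivity:", np.round(sens, 2))
print("per-study specificity:", np.round(spec, 2))

# Crude pooled estimate by summing counts (ignores between-study
# heterogeneity, unlike the bivariate model used in the meta-analysis):
print("pooled sensitivity: %.2f" % (tp.sum() / (tp.sum() + fn.sum())))
print("pooled specificity: %.2f" % (tn.sum() / (tn.sum() + fp.sum())))
```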

  1. Robust methods for assessing the accuracy of linear interpolated DEM

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Shi, Wenzhong; Liu, Eryong

    2015-02-01

    This paper studies methods for assessing the accuracy of a digital elevation model (DEM), with emphasis on robust methods. Based on the squared DEM residual population generated by the bi-linear interpolation method, three average-error statistics, (a) the mean, (b) the median, and (c) an M-estimator, are thoroughly investigated for measuring interpolated DEM accuracy. Correspondingly, confidence intervals are constructed for each average-error statistic to further evaluate DEM quality. The first method mainly utilizes the Student distribution, while the second and third are derived from robust theory. These robust methods are able to counteract the effects of outliers, and even of skew-distributed residuals, in DEM accuracy assessment. Experimental studies using Monte Carlo simulation investigated the asymptotic convergence behavior of the confidence intervals constructed by these three methods as the sample size increases. It is demonstrated that the robust methods produce more reliable DEM accuracy assessment results than the classical t-distribution-based method. Consequently, these robust methods are strongly recommended for assessing DEM accuracy, particularly in cases where the DEM residual population is evidently non-normal or heavily contaminated with outliers.
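
    The contrast among the three average-error statistics is easy to demonstrate on a contaminated residual sample. The sketch below uses a Huber M-estimator as one common choice of M-estimator; the paper's exact estimator and confidence-interval construction may differ, and the residuals here are synthetic.

```python
import numpy as np

def huber_m_estimate(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimator of location via iteratively reweighted averaging."""
    x = np.asarray(x, float)
    mu = np.median(x)
    scale = 1.4826 * np.median(np.abs(x - mu))   # robust scale (MAD)
    for _ in range(max_iter):
        r = (x - mu) / scale
        w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))  # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(3)
residuals = rng.normal(0.0, 0.5, 500)
residuals[:25] += 10.0                    # 5% gross outliers
print("mean   :", round(float(np.mean(residuals)), 3))   # dragged by outliers
print("median :", round(float(np.median(residuals)), 3))
print("Huber M:", round(huber_m_estimate(residuals), 3))
```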

  2. High Accuracy Temperature Measurements Using RTDs with Current Loop Conditioning

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.

    1997-01-01

    To measure temperatures with a greater degree of accuracy than is possible with thermocouples, RTDs (Resistive Temperature Detectors) are typically used. Calibration standards use specialized high-precision RTD probes with accuracies approaching 0.001 F. These are extremely delicate devices, and far too costly to be used in test facility instrumentation. Less costly sensors designed for aeronautical wind tunnel testing are available and can be readily adapted to probes, rakes, and test rigs. With proper signal conditioning of the sensor, temperature accuracies of 0.1 F are obtainable. For reasons explored in this paper, the Anderson current loop is the preferred method of signal conditioning. This scheme has been used in NASA Lewis Research Center's 9 x 15 Low Speed Wind Tunnel and is detailed here.
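
    Whatever signal-conditioning scheme is used, the measured RTD resistance must still be converted to temperature. A minimal sketch using the standard Callendar-Van Dusen relation for a PT100 at or above 0 °C, with IEC 60751 coefficients (note the abstract quotes accuracies in °F); the Anderson current-loop electronics themselves are outside the scope of a short example.

```python
import math

R0 = 100.0       # PT100 resistance at 0 degC (ohms)
A = 3.9083e-3    # IEC 60751 Callendar-Van Dusen coefficients (T >= 0 degC)
B = -5.775e-7

def pt100_temperature(r_meas):
    """Invert R(T) = R0 * (1 + A*T + B*T^2) for T >= 0 degC."""
    # Quadratic formula applied to B*T^2 + A*T + (1 - r_meas/R0) = 0
    c = 1.0 - r_meas / R0
    return (-A + math.sqrt(A * A - 4.0 * B * c)) / (2.0 * B)

print("%.3f degC" % pt100_temperature(119.40))  # approximately 50 degC
```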

  3. Quality--a radiology imperative: interpretation accuracy and pertinence.

    PubMed

    Lee, Joseph K T

    2007-03-01

    Physicians as a group have neither consistently defined nor systematically measured the quality of medical practice. To referring clinicians and patients, a good radiologist is one who is accessible, recommends appropriate imaging studies, and provides timely consultation and reports with high interpretation accuracy. For determining the interpretation accuracy of cases with pathologic or surgical proof, the author proposes tracking data on positive predictive value, disease detection rates, and abnormal interpretation rates for individual radiologists. For imaging studies with no pathologic proof or adequate clinical follow-up, the author proposes measuring the concordance and discordance of the interpretations within a peer group. The monitoring of interpretation accuracy can be achieved through periodic imaging, pathologic correlation, regular peer review of randomly selected cases, or subscription to the ACR's RADPEER system. Challenges facing the implementation of an effective peer-review system include physician time, subjectivity in assessing discordant interpretations, lengthy and equivocal interpretations, and the potential misassignment of false-positive interpretations.

  4. Total Variation Diminishing (TVD) schemes of uniform accuracy

    NASA Technical Reports Server (NTRS)

    Hartwich, Peter M.; Hsu, Chung-Hao; Liu, C. H.

    1988-01-01

    Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.

  5. Follow your breath: Respiratory interoceptive accuracy in experienced meditators

    PubMed Central

    Daubenmier, Jennifer; Sze, Jocelyn; Kerr, Catherine E.; Kemeny, Margaret E.; Mehling, Wolf

    2014-01-01

    Attention to internal bodily sensations is a core feature of mindfulness meditation. Previous studies have not detected differences in interoceptive accuracy between meditators and nonmeditators on heartbeat detection and perception tasks. We compared differences in respiratory interoceptive accuracy between meditators and nonmeditators in the ability to detect and discriminate respiratory resistive loads and sustain accurate perception of respiratory tidal volume during nondistracted and distracted conditions. Groups did not differ in overall performance on the detection and discrimination tasks; however, meditators were more accurate in discriminating the resistive load with the lowest ceiling effect. Meditators were also more accurate during the nondistracted tracking task at a lag time of 1 s following the breath. Results provide initial support for the notion that meditators have greater respiratory interoceptive accuracy compared to nonmeditators. PMID:23692525

  6. High accuracy wavelength calibration for a scanning visible spectrometer

    SciTech Connect

    Scotti, Filippo; Bell, Ronald E.

    2010-10-15

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ~0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
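
    With a sine drive, wavelength is nearly linear in the drive position, so the core of such a calibration is a low-order least-squares fit of known reference lines against motor counts. The sketch below is a simplified illustration, not the authors' multi-parameter fit: the neon wavelengths are standard lamp lines, while the motor counts are hypothetical.

```python
import numpy as np

# Known lamp wavelengths (Angstrom) vs. motor counts at which each line was
# observed. A small quadratic term absorbs residual nonlinearity of the drive.
lam_ref = np.array([5852.49, 6074.34, 6402.25, 6678.28, 7032.41])  # Ne I lines
counts  = np.array([10231.0, 12010.2, 14639.7, 16853.3, 19692.8])  # hypothetical

xc = counts - counts.mean()            # center counts for numerical stability
coef = np.polyfit(xc, lam_ref, deg=2)  # wavelength as a quadratic in counts
residuals = lam_ref - np.polyval(coef, xc)

print("fit residuals (Angstrom):", np.round(residuals, 3))
print("RMS residual: %.3f Angstrom" % np.sqrt(np.mean(residuals**2)))
```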

  7. Accuracy enhancements for overset grids using a defect correction approach

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Pulliam, Thomas H.

    1994-01-01

    A defect-correction approach is investigated as a means of enhancing the accuracy of flow computations on overset grids. Typically, overset-grid techniques process and pass information only at grid boundaries. In the current approach, error corrections at all overlapped interior points are injected between grids by using a defect-correction scheme. In some cases this is found to enhance the overall accuracy of the overset-grid method. Locally refined overset grids can be used to provide an efficient solution-adaptation method. The defect correction can also be utilized as an error-correction technique for a coarse grid by evaluating the residual using a fine base grid, but solving the implicit equations only on the coarse grid. Numerical examples include an accuracy and dissipation study of an unsteady decaying vortex flow, the flow over a NACA 0012 airfoil, and the flow over a multi-element high-lift airfoil.
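
    The idea of defect correction is compact enough to show on a toy problem: iterate with a cheap low-order operator, but drive the iteration with the residual (defect) of the accurate high-order operator, so the converged solution inherits the high-order accuracy. A one-dimensional sketch, not the paper's overset-grid machinery:

```python
import numpy as np

n = 63
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Cheap operator: standard second-order three-point Laplacian (u'').
A_lo = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
        + np.diag(np.ones(n - 1), -1)) / h**2

# Accurate operator: fourth-order five-point stencil in the interior,
# falling back to the three-point stencil next to the boundaries.
A_hi = A_lo.copy()
for i in range(2, n - 2):
    A_hi[i, i - 2:i + 3] = np.array([-1.0, 16.0, -30.0, 16.0, -1.0]) / (12.0 * h**2)

f = -np.pi**2 * np.sin(np.pi * x)        # u'' = f has exact solution sin(pi*x)
u = np.zeros(n)
for k in range(30):
    defect = f - A_hi @ u                # residual of the *accurate* operator
    if np.linalg.norm(defect) < 1e-8:
        break
    u += np.linalg.solve(A_lo, defect)   # correction via the *cheap* solve

print(f"converged in {k} iterations")
print("max error vs exact solution:", np.abs(u - np.sin(np.pi * x)).max())
```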

  8. Interpersonal orientation and the accuracy of personality judgments.

    PubMed

    Vogt, Dawne S; Colvin, C Randall

    2003-04-01

    Are those who are more invested in developing and maintaining interpersonal relationships able to provide more accurate judgments of others' personality characteristics? Previous research has produced mixed findings. In the present study, a conceptual framework was presented and methods were used that overcome many of the problems encountered in past research on judgmental accuracy. On four occasions, 102 judges watched a 12-min videotaped dyadic interaction and described the personality of a designated target person. Judges' personality characteristics were described by self, parents, and friends. Results revealed that psychological communion was positively associated with judges' accuracy in rating targets' personality characteristics. In addition, whereas women were more communal and provided more accurate judgments than men, the relationship between communion and accuracy held after controlling for the effect of gender. Finally, preliminary findings suggested that interpersonally oriented individuals may sometimes draw on information about themselves and about stereotypical others to facilitate accurate judgments of others.

  9. High Accuracy Wavelength Calibration For A Scanning Visible Spectrometer

    SciTech Connect

    Scotti, Filippo; Bell, Ronald E.

    2010-07-29

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration for a scanning spectrometer has been developed to achieve high wavelength accuracy over the visible spectrum, stable over time and environmental conditions, without the need to recalibrate after each grating movement. The method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, accuracies of ~0.25 Å have been demonstrated. With the addition of a high resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  10. Accuracy Improvement of Neutron Nuclear Data on Minor Actinides

    NASA Astrophysics Data System (ADS)

    Harada, Hideo; Iwamoto, Osamu; Iwamoto, Nobuyuki; Kimura, Atsushi; Terada, Kazushi; Nakao, Taro; Nakamura, Shoji; Mizuyama, Kazuhito; Igashira, Masayuki; Katabuchi, Tatsuya; Sano, Tadafumi; Takahashi, Yoshiyuki; Takamiya, Koichi; Pyeon, Cheol Ho; Fukutani, Satoshi; Fujii, Toshiyuki; Hori, Jun-ichi; Yagi, Takahiro; Yashima, Hiroshi

    2015-05-01

    Improved accuracy of neutron nuclear data for minor actinides (MAs) and long-lived fission products (LLFPs) is required for developing innovative nuclear systems that transmute these nuclei. To meet this requirement, the project "Research and development for Accuracy Improvement of neutron nuclear data on Minor ACtinides (AIMAC)" was started as one of the "Innovative Nuclear Research and Development Program" projects in Japan in October 2013. The AIMAC project team is composed of researchers in four different fields: differential nuclear data measurement, integral nuclear data measurement, nuclear chemistry, and nuclear data evaluation. By integrating the forefront knowledge and techniques of these fields, the team aims at improving the accuracy of the data. The background and research plan of the AIMAC project are presented.

  11. Precision and accuracy in diffusion tensor magnetic resonance imaging.

    PubMed

    Jones, Derek K

    2010-04-01

    This article reviews some of the key factors influencing the accuracy and precision of quantitative metrics derived from diffusion magnetic resonance imaging data. It focuses on the study pipeline, from the choice of imaging protocol through preprocessing and model fitting up to the point of extracting quantitative estimates for subsequent analysis. The aim is to give newcomers to the field sufficient knowledge of how their decisions at each stage along this pipeline might impact precision and accuracy, so that they can design their study or approach accordingly and use diffusion tensor magnetic resonance imaging in the clinic. More specifically, emphasis is placed on improving accuracy and precision, and I illustrate how careful choices along the way can substantially affect the sample size needed to make an inference from the data.

  12. Factors affecting the accuracy of chest compression depth estimation

    PubMed Central

    Kang, Jung Hee; Cha, Won Chul; Chae, Minjung Kathy; Park, Hang A; Hwang, Sung Yeon; Jin, Sang Chan; Lee, Tae Rim; Shin, Tae Gun; Sim, Min Seob; Jo, Ik Joon; Song, Keun Jeong; Rhee, Joong Eui; Jeong, Yeon Kwon

    2014-01-01

    Objective: We aimed to estimate the accuracy of visual estimation of chest compression depth and to identify potential factors affecting that accuracy. Methods: This simulation study used a basic life support mannequin, the Ambu Man. We recorded chest compressions at 7 different depths, from 1 to 7 cm, with each video clip covering one compression cycle and filmed from three different viewpoints. After filming, 25 clips were randomly selected, and health care providers in an emergency department were asked to estimate the compression depth while watching them. Examiner determinants such as experience and cardiopulmonary resuscitation training, and environmental determinants such as the location of the camera (examiner), were collected and analyzed. An estimated depth was considered correct if it was consistent with the recorded one. A multivariate analysis predicting the accuracy of compression depth estimation was performed. Results: Overall, 103 subjects were enrolled in the study; 42 (40.8%) were physicians, 56 (54.4%) nurses, and 5 (4.8%) emergency medical technicians. The mean accuracy was 0.89 (standard deviation, 0.76). Among examiner determinants, only subjects' occupation and clinical experience showed associations with the outcome (P=0.03 and P=0.08, respectively), while all environmental determinants were significantly associated with it (all P<0.001). Multivariate analysis showed that the accuracy rate was significantly associated with occupation, camera position, and compression depth. Conclusions: The accuracy rate of chest compression depth estimation was 0.89 and was significantly related to the examiner's occupation, the camera view position, and the compression depth.

  13. Accuracy of discrimination, rate of responding, and resistance to change.

    PubMed Central

    Nevin, John A; Milo, Jessica; Odum, Amy L; Shahan, Timothy A

    2003-01-01

    Pigeons were trained on multiple schedules in which responding on a center key produced matching-to-sample trials according to the same variable-interval 30-s schedules in both components. Matching trials consisted of a vertical or tilted line sample on the center key followed by vertical and tilted comparisons on the side keys. Correct responses to comparison stimuli were reinforced with probability .80 in the rich component and .20 in the lean component. Baseline response rates and matching accuracies generally were higher in the rich component, consistent with previous research. When performance was disrupted by prefeeding, response-independent food during intercomponent intervals, intrusion of a delay between sample and comparison stimuli, or extinction, both response rates and matching accuracies generally decreased. Proportions of baseline response rate were greater in the rich component for all disrupters except delay, which had relatively small and inconsistent effects on response rate. By contrast, delay had large and consistent effects on matching accuracy, and proportions of baseline matching accuracy were greater in the rich component for all four disrupters. The dissociation of response rate and accuracy with delay reflects the localized impact of delay on matching performance. The similarity of the data for response rate and accuracy with prefeeding, response-independent food, and extinction shows that matching performance, like response rate, is more resistant to change in a rich than in a lean component. This result extends resistance to change analyses from the frequency of response emission to the degree of stimulus control, and suggests that the strength of discriminating, like the strength of responding, is positively related to rate of reinforcement. PMID:12908760

  14. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing industrial hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance, and a tendency for the hygienists to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p < 0.001), to 63%, when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate, or over-estimated the true exposure by one category, 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors, was fair to moderate (κ = 0.39); Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing hygienists (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to the quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in improving the accuracy of qualitative exposure judgments.
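
    The agreement statistics quoted here are standard. A minimal sketch of unweighted Cohen's kappa for two raters assigning exposure categories (the ratings are hypothetical; Fleiss' kappa for more than two raters follows the same observed-versus-chance-agreement logic):

```python
import numpy as np

def cohens_kappa(r1, r2, n_categories):
    """Unweighted Cohen's kappa for two raters over the same items."""
    conf = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        conf[a, b] += 1
    conf /= conf.sum()
    p_obs = np.trace(conf)                          # observed agreement
    p_exp = conf.sum(axis=1) @ conf.sum(axis=0)     # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical exposure categories (0-3) assigned by two assessors:
rater1 = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1, 2, 2]
rater2 = [0, 1, 2, 3, 3, 1, 1, 2, 3, 1, 2, 1]
print("kappa = %.2f" % cohens_kappa(rater1, rater2, 4))
```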

  15. Accuracy Analysis of a Box-wing Theoretical SRP Model

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the BeiDou Navigation Satellite System (BDS), a high-accuracy solar radiation pressure (SRP) model is necessary for high-precision applications, especially as the global BDS constellation is established, and the accuracy of the BDS broadcast ephemeris needs to be improved. We therefore established a box-wing theoretical SRP model with fine structural detail, including a conical shadow factor for the Earth and Moon. We verified this SRP model using the GPS Block IIF satellites, with calculations performed on data from the PRN 1, 24, 25, and 27 satellites. The results show that the physical SRP model yields higher accuracy for precise orbit determination (POD) and orbit prediction of GPS IIF satellites than the Bern empirical model: the 3D RMS of the orbits is about 20 centimeters, and while POD accuracy is similar for the two models, prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day, and 7-day orbit predictions; the longer the prediction arc, the more significant the improvement. Orbit prediction accuracies with the physical SRP model for 1-day, 3-day, and 7-day arcs are 0.4 m, 2.0 m, and 10.0 m respectively, versus 0.9 m, 5.5 m, and 30 m with the Bern empirical model. We applied this approach to the BDS and derived an SRP model for the BeiDou satellites, then tested and verified the model with one month of BeiDou data. Initial results show that the model is good but needs more data for verification and improvement: the orbit residual RMS is similar to that of our empirical force model, which estimates only the along-track and across-track forces and a y-bias, but the orbit overlap and SLR observation evaluations show some improvement. The remaining empirical force is reduced significantly for the present BeiDou constellation.
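
    A box-wing model treats the satellite as a small set of flat plates (bus faces plus solar panels), each with its own area and optical properties. The sketch below implements the standard flat-plate SRP acceleration, splitting incident photons into absorbed, specularly reflected, and diffusely reflected fractions; the plate areas, optical coefficients, and 1000 kg mass are hypothetical, and the Earth/Moon conical shadow factors mentioned in the abstract are omitted.

```python
import numpy as np

SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU
C_LIGHT = 299792458.0    # m/s

def plate_srp_accel(n_hat, area, alpha, rho, delta, s_hat, mass):
    """SRP acceleration (m/s^2) of one flat plate.
    n_hat: outward unit normal; s_hat: unit vector from Sun to satellite
    (photon travel direction); alpha + rho + delta = 1 for the absorbed,
    specular, and diffuse fractions."""
    cos_t = -np.dot(s_hat, n_hat)
    if cos_t <= 0.0:                     # plate not illuminated
        return np.zeros(3)
    p = SOLAR_FLUX / C_LIGHT             # radiation pressure (N/m^2)
    f = p * area * cos_t * ((alpha + delta) * s_hat
                            - (2.0 * rho * cos_t + 2.0 * delta / 3.0) * n_hat)
    return f / mass

# Hypothetical box-wing: one bus face plus a Sun-tracking panel, 1000 kg.
s_hat = np.array([-1.0, 0.0, 0.0])       # Sun on the +X side
plates = [
    # (normal,              area m^2, alpha, rho,  delta)
    (np.array([1.0, 0, 0]),  2.5,      0.35,  0.45, 0.20),   # bus +X face
    (np.array([1.0, 0, 0]), 20.0,      0.72,  0.20, 0.08),   # solar array
]
a = sum(plate_srp_accel(n, A, al, rh, de, s_hat, 1000.0)
        for n, A, al, rh, de in plates)
print("SRP acceleration (m/s^2):", a)
```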

  16. Evaluation of radiographers’ mammography screen-reading accuracy in Australia

    SciTech Connect

    Debono, Josephine C; Poulos, Ann E; Houssami, Nehmat; Turner, Robin M; Boyages, John

    2015-03-15

    This study aimed to evaluate the accuracy of radiographers’ screen-reading of mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has successfully encouraged radiographers to take on the role as one of two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers’ screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes from pathology results, interval matching and 6-year client follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and 74.8% and 96.2% respectively. Pooled screen-reader accuracy across the radiographers estimated sensitivity as 82.2% and specificity as 89.5%. Areas under the reading operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting has adequate accuracy levels when screen-reading mammograms. It is expected that with formal screen-reading training, accuracy levels will improve, and with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes.

  17. Accuracy and Consistency of Respiratory Gating in Abdominal Cancer Patients

    SciTech Connect

    Ge, Jiajia; Santanam, Lakshmi; Yang, Deshan; Parikh, Parag J.

    2013-03-01

    Purpose: To evaluate respiratory gating accuracy and intrafractional consistency for abdominal cancer patients treated with respiratory gated treatment on a regular linear accelerator system. Methods and Materials: Twelve abdominal patients implanted with fiducials were treated with amplitude-based respiratory-gated radiation therapy. On the basis of daily orthogonal fluoroscopy, the operator readjusted the couch position and gating window such that the fiducial was within a setup margin (fiducial-planning target volume [f-PTV]) when RPM indicated “beam-ON.” Fifty-five pre- and post-treatment fluoroscopic movie pairs with synchronized respiratory gating signal were recorded. Fiducial motion traces were extracted from the fluoroscopic movies using a template matching algorithm and correlated with f-PTV by registering the digitally reconstructed radiographs with the fluoroscopic movies. Treatment was determined to be “accurate” if 50% of the fiducial area stayed within f-PTV while beam-ON. For movie pairs that lost gating accuracy, a MATLAB program was used to assess whether the gating window was optimized, the external-internal correlation (EIC) changed, or the patient moved between movies. A series of safety margins from 0.5 mm to 3 mm was added to f-PTV for reassessing gating accuracy. Results: A decrease in gating accuracy was observed in 44% of movie pairs from daily fluoroscopic movies of 12 abdominal patients. Three main causes for inaccurate gating were identified as change of global EIC over time (∼43%), suboptimal gating setup (∼37%), and imperfect EIC within movie (∼13%). Conclusions: Inconsistent respiratory gating accuracy may occur within 1 treatment session even with a daily adjusted gating window. To improve or maintain gating accuracy during treatment, we suggest using at least a 2.5-mm safety margin to account for gating and setup uncertainties.

  18. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing industrial hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance, and a tendency for the hygienists to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p < 0.001), to 63%, when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate, or over-estimated the true exposure by one category, 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors, was fair to moderate (κ = 0.39); Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing hygienists (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to the quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in improving the accuracy of qualitative exposure judgments.

  19. Presynaptic Spontaneous Activity Enhances the Accuracy of Latency Coding.

    PubMed

    Levakova, Marie; Tamborrino, Massimiliano; Kostal, Lubomir; Lansky, Petr

    2016-10-01

    The time to the first spike after stimulus onset typically varies with the stimulation intensity. Experimental evidence suggests that neural systems use such response latency to encode information about the stimulus. We investigate the decoding accuracy of the latency code in relation to the level of noise in the form of presynaptic spontaneous activity. Paradoxically, the optimal performance is achieved at a nonzero level of noise and suprathreshold stimulus intensities. We argue that this phenomenon results from the influence of the spontaneous activity on the stabilization of the membrane potential in the absence of stimulation. The reported decoding accuracy improvement represents a novel manifestation of the noise-aided signal enhancement. PMID:27557098

  20. An evaluation of information retrieval accuracy with simulated OCR output

    SciTech Connect

    Croft, W.B.; Harding, S.M.; Taghva, K.; Borsack, J.

    1994-12-31

    Optical Character Recognition (OCR) is a critical part of many text-based applications. Although some commercial systems use the output from OCR devices to index documents without editing, there is very little quantitative data on the impact of OCR errors on the accuracy of a text retrieval system. Because of the difficulty of constructing test collections to obtain this data, we have carried out an evaluation using simulated OCR output on a variety of databases. The results show that high-quality OCR devices have little effect on the accuracy of retrieval, but low-quality devices used with databases of short documents can result in significant degradation.