Science.gov

Sample records for 10-fold cross-validation accuracy

  1. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    PubMed

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation and in medical applications to infer diagnosis. Its results are often assessed through either a binomial or a permutation test. Here, we simulated classification of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution, so the binomial test is not appropriate. In contrast, the permutation test was unaffected by the cross-validation scheme. The influence of cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study of patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, mental imagery of gait discriminated significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing can lead to biased estimates of significance and to false positive or negative results. In our view, permutation testing is thus recommended for clinical applications of classification with cross-validation.
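
    As a rough sketch of the recommended procedure (assuming NumPy and scikit-learn; the classifier, fold count, and data below are placeholders rather than the study's pipeline), the observed cross-validated accuracy can be compared against a null distribution obtained by re-running the identical cross-validation on permuted labels:

```python
# Sketch: permutation test for cross-validated accuracy (hypothetical data X, y).
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))          # random features: true accuracy ~ chance
y = np.repeat([0, 1], 20)              # two balanced classes

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = SVC(kernel="linear")

observed = cross_val_score(clf, X, y, cv=cv).mean()

# Null distribution: repeat the identical CV scheme on shuffled labels.
n_perm = 200
null = np.empty(n_perm)
for i in range(n_perm):
    y_perm = rng.permutation(y)
    null[i] = cross_val_score(clf, X, y_perm, cv=cv).mean()

# p-value: fraction of permutations at least as accurate as the observed score.
p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(f"observed accuracy = {observed:.2f}, permutation p = {p_value:.3f}")
```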

  2. Using cross-validation to evaluate predictive accuracy of survival risk classifiers based on high-dimensional data.

    PubMed

    Simon, Richard M; Subramanian, Jyothi; Li, Ming-Chung; Menezes, Supriya

    2011-05-01

    Developments in whole genome biotechnology have stimulated statistical focus on prediction methods. We review here methodology for classifying patients into survival risk groups and for using cross-validation to evaluate such classifications. Measures of discrimination for survival risk models include separation of survival curves, time-dependent ROC curves and Harrell's concordance index. For high-dimensional data applications, however, computing these measures as re-substitution statistics on the same data used for model development results in highly biased estimates. Most developments in methodology for survival risk modeling with high-dimensional data have utilized separate test data sets for model evaluation. Cross-validation has sometimes been used for optimization of tuning parameters. In many applications, however, the data available are too limited for effective division into training and test sets and consequently authors have often either reported re-substitution statistics or analyzed their data using binary classification methods in order to utilize familiar cross-validation. In this article we have tried to indicate how to utilize cross-validation for the evaluation of survival risk models; specifically how to compute cross-validated estimates of survival distributions for predicted risk groups and how to compute cross-validated time-dependent ROC curves. We have also discussed evaluation of the statistical significance of a survival risk model and evaluation of whether high-dimensional genomic data adds predictive accuracy to a model based on standard covariates alone.
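
    A minimal sketch of the general idea, assuming NumPy and scikit-learn are available: each patient's risk-group label comes from a model fitted without that patient's fold, and survival curves are then estimated for the pooled cross-validated groups. A crude log-time least-squares score and a hand-rolled Kaplan-Meier estimator stand in for the survival risk models discussed in the article:

```python
# Sketch: cross-validated survival curves for predicted risk groups. A proper
# analysis would use a survival model (e.g., Cox regression); the least-squares
# log-time score below is only a self-contained stand-in.
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(10)
n, p = 200, 30
X = rng.normal(size=(n, p))
risk_true = X[:, 0] - 0.5 * X[:, 1]
time = rng.exponential(scale=np.exp(-risk_true))      # survival times
event = rng.random(n) < 0.7                            # ~30% censoring

def kaplan_meier(t, e):
    """Return event times and Kaplan-Meier survival probabilities (ties handled naively)."""
    order = np.argsort(t)
    t, e = t[order], e[order]
    at_risk, surv, times, probs = len(t), 1.0, [], []
    for i in range(len(t)):
        if e[i]:
            surv *= 1.0 - 1.0 / at_risk
            times.append(t[i])
            probs.append(surv)
        at_risk -= 1
    return np.array(times), np.array(probs)

def surv_at(times, probs, t0):
    """Step-function value of the KM curve at time t0."""
    idx = np.searchsorted(times, t0, side="right") - 1
    return 1.0 if idx < 0 else probs[idx]

# Cross-validated risk groups: every patient's label comes from a model fitted
# without that patient's fold.
group = np.empty(n, dtype=int)
for train, test in KFold(5, shuffle=True, random_state=10).split(X):
    w = np.linalg.lstsq(X[train], np.log(time[train] + 1e-8), rcond=None)[0]
    score = -(X[test] @ w)                              # higher score = higher risk
    group[test] = (score > np.median(score)).astype(int)

for g in (0, 1):
    t_g, s_g = kaplan_meier(time[group == g], event[group == g])
    print(f"cross-validated risk group {g}: S(1.0) ~ {surv_at(t_g, s_g, 1.0):.2f}")
```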

  3. How Nonrecidivism Affects Predictive Accuracy: Evidence from a Cross-Validation of the Ontario Domestic Assault Risk Assessment (ODARA)

    ERIC Educational Resources Information Center

    Hilton, N. Zoe; Harris, Grant T.

    2009-01-01

    Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…

  4. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French space agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and it remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and the mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits are discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  5. Assessing genomic prediction accuracy for Holstein sires using bootstrap aggregation sampling and leave-one-out cross validation.

    PubMed

    Mikshowsky, Ashley A; Gianola, Daniel; Weigel, Kent A

    2017-01-01

    Since the introduction of genome-enabled prediction for dairy cattle in 2009, genomic selection has markedly changed many aspects of the dairy genetics industry and enhanced the rate of response to selection for most economically important traits. Young dairy bulls are genotyped to obtain their genomic predicted transmitting ability (GPTA) and reliability (REL) values. These GPTA are a main factor in most purchasing, marketing, and culling decisions until bulls reach 5 yr of age and their milk-recorded offspring become available. At that time, daughter yield deviations (DYD) can be compared with the GPTA computed several years earlier. For most bulls, the DYD align well with the initial predictions. However, for some bulls, the difference between DYD and corresponding GPTA is quite large, and published REL are of limited value in identifying such bulls. A method of bootstrap aggregation sampling (bagging) using genomic BLUP (GBLUP) was applied to predict the GPTA of 2,963, 2,963, and 2,803 young Holstein bulls for protein yield, somatic cell score, and daughter pregnancy rate (DPR), respectively. For each trait, 50 bootstrap samples from a reference population comprising 2011 DYD of 8,610, 8,405, and 7,945 older Holstein bulls were used. Leave-one-out cross validation was also performed to assess prediction accuracy when removing specific bulls from the reference population. The main objectives of this study were (1) to assess the extent to which current REL values and alternative measures of variability, such as the bootstrap standard deviation (SD) of predictions, could detect bulls whose daughter performance deviates significantly from early genomic predictions, and (2) to identify factors associated with the reference population that inform about inaccurate genomic predictions. The SD of bootstrap predictions was a mildly useful metric for identifying bulls whose future daughter performance may deviate significantly from early GPTA for protein and DPR. Leave

  6. Accuracy of Population Validity and Cross-Validity Estimation: An Empirical Comparison of Formula-Based, Traditional Empirical, and Equal Weights Procedures.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.

    1999-01-01

    Performed an empirical Monte Carlo study using predictor and criterion data from 84,808 U.S. Air Force enlistees. Compared formula-based, traditional empirical, and equal-weights procedures. Discusses issues for basic research on validation and cross-validation. (SLD)
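
    For context, the sketch below contrasts a formula-based shrinkage estimate with a simple empirical holdout estimate on synthetic data. Wherry's adjustment is shown only as one classical example of a formula-based procedure (it targets population validity; cross-validity formulas follow the same pattern) and is not necessarily among the estimators evaluated in this study; scikit-learn is assumed:

```python
# Sketch: formula-based shrinkage vs. a simple empirical cross-validation split.
# Wherry's adjusted R^2 is one well-known formula-based estimator; it is shown
# for illustration only, not as the exact procedure used in the study above.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, k = 200, 6                                   # hypothetical sample size and predictors
X = rng.normal(size=(n, k))
y = X @ rng.normal(size=k) + rng.normal(scale=2.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

model = LinearRegression().fit(X_tr, y_tr)
r2_fit = model.score(X_tr, y_tr)                # optimistic re-substitution R^2
r2_cross = model.score(X_te, y_te)              # empirical cross-validity estimate

n_tr = len(y_tr)
r2_wherry = 1 - (1 - r2_fit) * (n_tr - 1) / (n_tr - k - 1)   # formula-based shrinkage

print(f"fit R^2 = {r2_fit:.3f}, Wherry-adjusted = {r2_wherry:.3f}, "
      f"holdout R^2 = {r2_cross:.3f}")
```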

  7. Cross-Validation of the Risk Matrix 2000 Sexual and Violent Scales

    ERIC Educational Resources Information Center

    Craig, Leam A.; Beech, Anthony; Browne, Kevin D.

    2006-01-01

    The predictive accuracy of the newly developed actuarial risk measures Risk Matrix 2000 Sexual/Violence (RMS, RMV) were cross validated and compared with two risk assessment measures (SVR-20 and Static-99) in a sample of sexual (n = 85) and nonsex violent (n = 46) offenders. The sexual offense reconviction rate for the sex offender group was 18%…

  8. Cross validation in LASSO and its acceleration

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Kabashima, Yoshiyuki

    2016-05-01

    We investigate leave-one-out cross validation (CV) as a determinator of the weight of the penalty term in the least absolute shrinkage and selection operator (LASSO). First, on the basis of the message passing algorithm and a perturbative discussion assuming that the number of observations is sufficiently large, we provide simple formulas for approximately assessing two types of CV errors, which enable us to significantly reduce the necessary cost of computation. These formulas also provide a simple connection of the CV errors to the residual sums of squares between the reconstructed and the given measurements. Second, on the basis of this finding, we analytically evaluate the CV errors when the design matrix is given as a simple random matrix in the large size limit by using the replica method. Finally, these results are compared with those of numerical simulations on finite-size systems and are confirmed to be correct. We also apply the simple formulas of the first type of CV error to an actual dataset of the supernovae.
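
    One widely used approximation in this spirit, not necessarily the exact message-passing formula derived in the paper, treats the LASSO active set as fixed and applies the usual leverage correction to the training residuals (sketch assuming NumPy and scikit-learn):

```python
# Sketch: approximate leave-one-out CV error for LASSO by treating the active set
# as fixed and applying the standard leverage correction. This follows the spirit
# of the approximate-CV formulas discussed above, not necessarily their exact form.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 100, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 1.0
y = X @ beta + rng.normal(scale=0.5, size=n)

lam = 0.05
fit = Lasso(alpha=lam, fit_intercept=False).fit(X, y)
resid = y - X @ fit.coef_

active = np.flatnonzero(fit.coef_)              # selected (active) predictors
XA = X[:, active]
# Leverages of the least-squares fit restricted to the active set.
H = XA @ np.linalg.solve(XA.T @ XA, XA.T)
h = np.diag(H)

loo_resid_approx = resid / (1.0 - h)            # approximate leave-one-out residuals
cv_error_approx = np.mean(loo_resid_approx ** 2)
print(f"{len(active)} active predictors, approx LOO-CV error = {cv_error_approx:.4f}")
```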

  9. Modified cross-validation as a method for estimating parameter

    NASA Astrophysics Data System (ADS)

    Shi, Chye Rou; Adnan, Robiah

    2014-12-01

    Best subsets regression is an effective approach for identifying models that meet their objectives with as few predictors as is prudent. Subset models may estimate the regression coefficients and predict future responses with smaller variance than the full model using all predictors. The choice of subset size λ involves a trade-off between bias and variance, and there are various methods for picking it. A common rule is to pick the smallest model that minimizes an estimate of the expected prediction error. Because datasets are often small, repeated K-fold cross-validation is the most widely used method to estimate prediction error and select a model; the data are reshuffled and re-stratified before each round. However, the "one-standard-error" rule of repeated K-fold cross-validation tends to pick an overly parsimonious model. The objective of this research is to modify the existing cross-validation method to avoid overfitting and underfitting the model, and a modified cross-validation method is proposed. This paper compares the existing and modified cross-validation methods. Our results indicate that the modified cross-validation method is better at submodel selection and evaluation than the other methods.
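
    For reference, a sketch of the standard repeated K-fold procedure and the one-standard-error rule discussed above, using a LASSO penalty grid as a stand-in for subset size (scikit-learn assumed; this is not the authors' modified method):

```python
# Sketch: repeated K-fold CV over a grid of penalties, with the "one-standard-error"
# rule mentioned above (a Lasso alpha grid stands in for best-subset size).
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 20))
y = X[:, :4] @ np.ones(4) + rng.normal(scale=1.0, size=120)

alphas = np.logspace(-3, 0, 10)                 # candidate penalty strengths
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=3)

means, sems = [], []
for a in alphas:
    scores = -cross_val_score(Lasso(alpha=a, max_iter=10000), X, y, cv=cv,
                              scoring="neg_mean_squared_error")
    means.append(scores.mean())
    sems.append(scores.std(ddof=1) / np.sqrt(len(scores)))
means, sems = np.array(means), np.array(sems)

best = means.argmin()
# One-SE rule: most parsimonious model (largest penalty) within one SE of the best.
threshold = means[best] + sems[best]
one_se = max(i for i in range(len(alphas)) if means[i] <= threshold)
print(f"best alpha = {alphas[best]:.4f}, one-SE alpha = {alphas[one_se]:.4f}")
```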

  10. Correcting evaluation bias of relational classifiers with network cross validation

    DOE PAGES

    Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; ...

    2011-01-04

    Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that will result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that, combined with paired t-tests, produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1–Type II error).

  11. Correcting evaluation bias of relational classifiers with network cross validation

    SciTech Connect

    Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; Wang, Tao

    2011-01-04

    Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that will result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that combined with paired t-tests produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1–Type II error).

  12. A cross-validation package driving Netica with python

    USGS Publications Warehouse

    Fienen, Michael N.; Plant, Nathaniel G.

    2014-01-01

    Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross-validation is a technique to avoid the overfitting that results from overly complex BNs; overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented, due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and to read, rebuild, and learn BNs from data. Insights gained from cross-validation and implications for prediction versus description are illustrated with a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, and that overfitting incurs computational costs as well as reducing prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).

  13. Cross-Validation for Nonlinear Mixed Effects Models

    PubMed Central

    Colby, Emily; Bair, Eric

    2013-01-01

    Cross-validation is frequently used for model selection in a variety of applications. However, it is difficult to apply cross-validation to mixed effects models (including nonlinear mixed effects models or NLME models) due to the fact that cross-validation requires “out-of-sample” predictions of the outcome variable, which cannot be easily calculated when random effects are present. We describe two novel variants of cross-validation that can be applied to nonlinear mixed effects models. One variant, where out-of-sample predictions are based on post hoc estimates of the random effects, can be used to select the overall structural model. Another variant, where cross-validation seeks to minimize the estimated random effects rather than the estimated residuals, can be used to select covariates to include in the model. We show that these methods produce accurate results in a variety of simulated data sets and apply them to two publicly available population pharmacokinetic data sets. PMID:23532511

  14. Genomic predictions in Angus cattle: comparisons of sample size, response variables, and clustering methods for cross-validation.

    PubMed

    Boddhireddy, P; Kelly, M J; Northcutt, S; Prayaga, K C; Rumph, J; DeNise, S

    2014-02-01

    Advances in genomics, molecular biology, and statistical genetics have created a paradigm shift in the way livestock producers pursue genetic improvement in their herds. The nexus of these technologies has resulted in combining genotypic and phenotypic information to compute genomically enhanced measures of genetic merit of individual animals. However, large numbers of genotyped and phenotyped animals are required to produce robust estimates of the effects of SNP that are summed together to generate direct genomic breeding values (DGV). Data on 11,756 Angus animals genotyped with the Illumina BovineSNP50 BeadChip were used to develop genomic predictions for 17 traits reported by the American Angus Association through Angus Genetics Inc. in their National Cattle Evaluation program. Marker effects were computed using a 5-fold cross-validation approach and a Bayesian model averaging algorithm. The accuracies were examined with EBV and deregressed EBV (DEBV) response variables and with K-means and identical-by-state (IBS)-based cross-validation methodologies, as sketched after this abstract. The cross-validation accuracies obtained using EBV response variables were consistently greater than those obtained using DEBV (average correlations were 0.64 vs. 0.57). The accuracies obtained using K-means cross-validation were consistently smaller than accuracies obtained with the IBS-based cross-validation approach (average correlations were 0.58 vs. 0.64 with EBV used as a response variable). Comparing the results from the current study with the results from a similar study consisting of only 2,253 records indicated that a larger training population resulted in higher accuracies in validation animals and explained on average 18% (a 69% improvement) additional genetic variance across all traits.
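
    A minimal sketch of the two cross-validation designs compared above, on hypothetical genotype data: random folds versus folds formed by clustering animals on genomic principal components, with ridge regression standing in for the Bayesian model-averaging predictor (scikit-learn assumed):

```python
# Sketch: random K-fold vs. clustering-based folds, in the spirit of the K-means
# cross-validation described above (hypothetical genotype matrix and phenotype).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, GroupKFold, cross_val_predict

rng = np.random.default_rng(4)
n_animals, n_snps = 500, 1000
G = rng.binomial(2, 0.3, size=(n_animals, n_snps)).astype(float)   # 0/1/2 genotypes
y = G[:, :50] @ rng.normal(size=50) * 0.05 + rng.normal(size=n_animals)

model = Ridge(alpha=10.0)                      # stand-in for GBLUP-style prediction

# Random 5-fold CV: related animals may end up in both training and validation.
pred_random = cross_val_predict(model, G, y, cv=KFold(5, shuffle=True, random_state=4))

# Clustering-based folds: group animals by genomic similarity, then hold out groups.
pcs = PCA(n_components=10).fit_transform(G)
groups = KMeans(n_clusters=5, n_init=10, random_state=4).fit_predict(pcs)
pred_kmeans = cross_val_predict(model, G, y, cv=GroupKFold(5), groups=groups)

print("random-fold accuracy:", np.corrcoef(y, pred_random)[0, 1].round(3))
print("K-means-fold accuracy:", np.corrcoef(y, pred_kmeans)[0, 1].round(3))
```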

  15. A Cross-Validation Study of the Posttraumatic Growth Inventory

    ERIC Educational Resources Information Center

    Sheikh, Alia I.; Marotta, Sylvia A.

    2005-01-01

    This article is a cross-validation of R. G. Tedeschi and L. G. Calhoun's (1996) original study of the development of the Posttraumatic Growth Inventory (PTGI). It describes several psychometric properties of scores on the PTGI in a sample of middle- to old-aged adults with a history of cardiovascular disease. The results did not support the…

  16. Attrition from an Adolescent Addiction Treatment Program: A Cross Validation.

    ERIC Educational Resources Information Center

    Mathisen, Kenneth S.; Meyers, Kathleen

    Treatment attrition is a major problem for programs treating adolescent substance abusers. To isolate and cross validate factors which are predictive of addiction treatment attrition among adolescent substance abusers, screening interview and diagnostic variables from 119 adolescent in-patients were submitted to a discriminant equation analysis.…

  17. The Cross Validation of the Attitudes toward Mainstreaming Scale (ATMS).

    ERIC Educational Resources Information Center

    Berryman, Joan D.; Neal, W. R. Jr.

    1980-01-01

    Reliability and factorial validity of the Attitudes Toward Mainstreaming Scale was supported in a cross-validation study with teachers. Three factors emerged: learning capability, general mainstreaming, and traditional limiting disabilities. Factor intercorrelations varied from .42 to .55; correlations between total scores and individual factors…

  18. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    SciTech Connect

    Pražnikar, Jure; Turk, Dušan

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R{sub free} or may leave it out completely.

  19. Cross-Validation of Predictor Equations for Armor Crewman Performance

    DTIC Science & Technology

    1980-01-01

    assignment rather than specify the contents of the test battery to be used. Eaton, Bessemer, and Kristiansen (1979) evaluated the relationship... Bessemer, & Kristiansen, 1979). Consideration of their results suggested the potential for two cross-validation strategies. The first was to attempt to... formulas and their correlations with the criteria data from Eaton, Bessemer, and Kristiansen (1979) are shown in Table 2. By comparing Tables 1 and 2, one

  20. Feature selection based on fusing mutual information and cross-validation

    NASA Astrophysics Data System (ADS)

    Li, Wei-wei; Liu, Chun-ping; Chen, Ning-qiang; Wang, Zhao-hui

    2009-10-01

    Many algorithms have been proposed in the literature for feature selection; unfortunately, none of them guarantees a perfect result. Here we propose an adaptive sequential floating forward feature selection algorithm that achieves higher accuracy than existing algorithms and naturally adapts the number of best features to be selected. The basic idea of the proposed algorithm is to adopt two relatively well-established algorithms for the problem at hand and to combine mutual information and cross-validation through suitable fusion techniques, with the aim of exploiting the adopted algorithms' capabilities while limiting their deficiencies. The method adaptively obtains the number of features to be selected according to the dimensions of the original feature set, and Dempster-Shafer evidential theory is used to fuse Max-Relevance, Min-Redundancy and CVFS. Extensive experiments show that higher classification accuracy and lower feature redundancy can be achieved.

  1. Cross-validated detection of crack initiation in aerospace materials

    NASA Astrophysics Data System (ADS)

    Vanniamparambil, Prashanth A.; Cuadra, Jefferson; Guclu, Utku; Bartoli, Ivan; Kontsos, Antonios

    2014-03-01

    A cross-validated nondestructive evaluation approach was employed to detect in situ the onset of damage in an aluminum alloy compact tension specimen. The approach consisted of the coordinated use of acoustic emission, primarily, combined with the infrared thermography and digital image correlation methods. Tensile loads were applied and the specimen was continuously monitored using the nondestructive approach. Crack initiation was witnessed visually and was confirmed by the characteristic load drop accompanying the ductile fracture process. The full-field deformation map provided by the nondestructive approach validated the formation of a pronounced plasticity zone near the crack tip. At the time of crack initiation, a burst in the temperature field ahead of the crack tip as well as a sudden increase in the acoustic recordings were observed. Although such experiments have been attempted and reported in the literature before, the presented approach provides for the first time a cross-validated nondestructive dataset that can be used for quantitative analyses of the crack initiation information content. It further allows future development of automated procedures for real-time identification of damage precursors, including the rarely explored crack incubation stage in fatigue conditions.

  2. The Religious Support Scale: construction, validation, and cross-validation.

    PubMed

    Fiala, William E; Bjorck, Jeffrey P; Gorsuch, Richard

    2002-12-01

    Cutrona and Russell's social support model was used to develop a religious support measure (C. E. Cutrona & D. W. Russell, 1987), including 3 distinct but related subscales respectively measuring support from God, the congregation, and church leadership. Factor analyses with the main sample's data (249 Protestants) and cross-validation (93 additional Protestants) supported the scales' reliability and validity. All 3 types of religious support were related to lower depression and greater life satisfaction. Moreover, several relationships between the 3 subscales and psychological functioning variables remained significant after controlling for variance because of church attendance and social support. Results suggest that religious attendance does not automatically imply religious support, and that religious support can provide unique resources for religious persons, above and beyond those furnished by social support. Findings are discussed regarding relevance to community psychology.

  3. Robust cross-validation of linear regression QSAR models.

    PubMed

    Konovalov, Dmitry A; Llewellyn, Lyndon E; Vander Heyden, Yvan; Coomans, Danny

    2008-10-01

    A quantitative structure-activity relationship (QSAR) model is typically developed to predict the biochemical activity of untested compounds from the compounds' molecular structures. "The gold standard" of model validation is the blindfold prediction when the model's predictive power is assessed from how well the model predicts the activity values of compounds that were not considered in any way during the model development/calibration. However, during the development of a QSAR model, it is necessary to obtain some indication of the model's predictive power. This is often done by some form of cross-validation (CV). In this study, the concepts of the predictive power and fitting ability of a multiple linear regression (MLR) QSAR model were examined in the CV context allowing for the presence of outliers. Commonly used predictive power and fitting ability statistics were assessed via Monte Carlo cross-validation when applied to percent human intestinal absorption, blood-brain partition coefficient, and toxicity values of saxitoxin QSAR data sets, as well as three known benchmark data sets with known outlier contamination. It was found that (1) a robust version of MLR should always be preferred over the ordinary-least-squares MLR, regardless of the degree of outlier contamination and that (2) the model's predictive power should only be assessed via robust statistics. The Matlab and java source code used in this study is freely available from the QSAR-BENCH section of www.dmitrykonovalov.org for academic use. The Web site also contains the java-based QSAR-BENCH program, which could be run online via java's Web Start technology (supporting Windows, Mac OSX, Linux/Unix) to reproduce most of the reported results or apply the reported procedures to other data sets.
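
    A small sketch of the Monte Carlo cross-validation comparison described above, with Huber regression standing in for the robust MLR variant and contaminated synthetic data in place of the QSAR sets (scikit-learn assumed):

```python
# Sketch: Monte Carlo cross-validation under outlier contamination, comparing
# ordinary least squares with a robust fit (Huber loss as a stand-in for the
# robust MLR variant discussed above).
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import median_absolute_error

rng = np.random.default_rng(5)
n = 150
X = rng.normal(size=(n, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.5, size=n)
outliers = rng.choice(n, size=10, replace=False)
y[outliers] += rng.normal(scale=15.0, size=10)   # contaminate 10 responses

mccv = ShuffleSplit(n_splits=100, test_size=0.3, random_state=5)
errs = {"OLS": [], "Huber": []}
for train, test in mccv.split(X):
    for name, model in [("OLS", LinearRegression()), ("Huber", HuberRegressor())]:
        model.fit(X[train], y[train])
        # Robust error statistic (median absolute error) on the held-out split.
        errs[name].append(median_absolute_error(y[test], model.predict(X[test])))

for name, e in errs.items():
    print(f"{name}: median holdout error = {np.median(e):.3f}")
```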

  4. Cross-validation of Waist-Worn GENEA Accelerometer Cut-Points

    PubMed Central

    Welch, Whitney A.; Bassett, David R.; Freedson, Patty S.; John, Dinesh; Steeves, Jeremy A.; Conger, Scott A.; Ceaser, Tyrone G.; Howe, Cheryl A.; Sasaki, Jeffer E.

    2014-01-01

    Purpose: The purpose of this study was to determine the classification accuracy of the waist GENEA cut-points developed by Esliger et al. for predicting intensity categories across a range of lifestyle activities. Methods: Each participant performed one of two routines, consisting of seven lifestyle activities (home/office, ambulatory, and sport). The GENEA was worn on the right waist and oxygen uptake was continuously measured using the Oxycon mobile. A one-way chi-square was used to determine the classification accuracy of the GENEA cut-points. Cross-tabulation tables provided information on under- and over-estimations, and sensitivity and specificity analyses of the waist cut-points were also performed. Results: Spearman's rank order correlation for the GENEA SVMgs and Oxycon mobile MET values was 0.73. For all activities combined, the GENEA accurately predicted intensity classification 55.3% of the time, which increased to 58.3% when stationary cycling was removed from the analysis. The sensitivity of the cut-points for the four intensity categories ranged from 0.244 to 0.958 and the specificity ranged from 0.576 to 0.943. Conclusion: In this cross-validation study, the proposed GENEA cut-points had a low overall accuracy rate for classifying intensity (55.3%) when engaging in 14 different lifestyle activities. PMID:24496118

  5. SU-E-T-231: Cross-Validation of 3D Gamma Comparison Tools

    SciTech Connect

    Alexander, KM; Jechel, C; Pinter, C; Lasso, A; Fichtinger, G; Salomons, G; Schreiner, LJ

    2015-06-15

    Purpose: Moving the computational analysis for 3D gel dosimetry into the 3D Slicer (www.slicer.org) environment has made gel dosimetry more clinically accessible. To ensure accuracy, we cross-validate the 3D gamma comparison module in 3D Slicer with an independently developed algorithm using simulated and measured dose distributions. Methods: Two reference dose distributions were generated using the Varian Eclipse treatment planning system. The first distribution consisted of a four-field box irradiation delivered to a plastic water phantom and the second, a VMAT plan delivered to a gel dosimeter phantom. The first reference distribution was modified within Eclipse to create an evaluated dose distribution by spatially shifting one field by 3mm, increasing the monitor units of the second field, applying a dynamic wedge for the third field, and leaving the fourth field unchanged. The VMAT plan was delivered to a gel dosimeter and the evaluated dose in the gel was calculated from optical CT measurements. Results from the gamma comparison tool built into the SlicerRT toolbox were compared to results from our in-house gamma algorithm implemented in Matlab (via MatlabBridge in 3D Slicer). The effects of noise, resolution and the exchange of reference and evaluated designations on the gamma comparison were also examined. Results: Perfect agreement was found between the gamma results obtained using the SlicerRT tool and our Matlab implementation for both the four-field box and gel datasets. The behaviour of the SlicerRT comparison with respect to changes in noise, resolution and the role of the reference and evaluated dose distributions was consistent with previous findings. Conclusion: Two independently developed gamma comparison tools have been cross-validated and found to be identical. As we transition our gel dosimetry analysis from Matlab to 3D Slicer, this validation serves as an important test towards ensuring the consistency of dose comparisons using the 3D Slicer
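
    For reference, the gamma comparison being cross-validated above combines a dose-difference criterion with a distance-to-agreement criterion. A naive brute-force implementation of that definition for small grids might look like the following (NumPy assumed; production tools such as the SlicerRT module use optimized searches):

```python
# Sketch: brute-force gamma index between a reference and an evaluated dose grid
# (global dose-difference / distance-to-agreement criterion). This is the naive
# definition, suitable only for small grids.
import numpy as np

def gamma_index(ref, ev, spacing_mm, dd_percent=3.0, dta_mm=3.0):
    """Return the gamma value at every voxel of the reference grid."""
    dd = dd_percent / 100.0 * ref.max()          # global dose-difference criterion
    coords = np.array(np.meshgrid(*[np.arange(s) * spacing_mm for s in ref.shape],
                                  indexing="ij"))
    pts = coords.reshape(ref.ndim, -1).T         # (n_voxels, ndim) positions in mm
    ev_flat = ev.ravel()
    gamma = np.empty(ref.size)
    for i, (p, d_ref) in enumerate(zip(pts, ref.ravel())):
        dist2 = ((pts - p) ** 2).sum(axis=1) / dta_mm ** 2
        dose2 = (ev_flat - d_ref) ** 2 / dd ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dose2))
    return gamma.reshape(ref.shape)

# Toy 2D example: a slightly shifted and rescaled field.
ref = np.pad(np.ones((6, 6)), 4) * 2.0
ev = np.roll(ref, 1, axis=0) * 1.02
g = gamma_index(ref, ev, spacing_mm=2.0)
print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0):.2%}")
```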

  6. Cross-validating a bidimensional mathematics anxiety scale.

    PubMed

    Bai, Haiyan

    2011-03-01

    The psychometric properties of a 14-item bidimensional Mathematics Anxiety Scale-Revised (MAS-R) were empirically cross-validated with two independent samples consisting of 647 secondary school students. An exploratory factor analysis on the scale yielded strong construct validity with a clear two-factor structure. The results from a confirmatory factor analysis indicated an excellent model fit (χ² = 98.32, df = 62; normed fit index = .92, comparative fit index = .97; root mean square error of approximation = .04). The internal consistency (.85), test-retest reliability (.71), interfactor correlation (.26, p < .001), and positive discrimination power indicated that MAS-R is a psychometrically reliable and valid instrument for measuring mathematics anxiety. Math anxiety, as measured by MAS-R, correlated negatively with student achievement scores (r = -.38), suggesting that MAS-R may be a useful tool for classroom teachers and other educational personnel tasked with identifying students at risk of reduced math achievement because of anxiety.

  7. Splenectomy Causes 10-Fold Increased Risk of Portal Venous System Thrombosis in Liver Cirrhosis Patients.

    PubMed

    Qi, Xingshun; Han, Guohong; Ye, Chun; Zhang, Yongguo; Dai, Junna; Peng, Ying; Deng, Han; Li, Jing; Hou, Feifei; Ning, Zheng; Zhao, Jiancheng; Zhang, Xintong; Wang, Ran; Guo, Xiaozhong

    2016-07-19

    BACKGROUND Portal venous system thrombosis (PVST) is a life-threatening complication of liver cirrhosis. We conducted a retrospective study to comprehensively analyze the prevalence and risk factors of PVST in liver cirrhosis. MATERIAL AND METHODS All cirrhotic patients without malignancy admitted between June 2012 and December 2013 were eligible if they underwent contrast-enhanced CT or MRI scans. Independent predictors of PVST in liver cirrhosis were calculated in multivariate analyses. Subgroup analyses were performed according to the severity of PVST (any PVST, main portal vein [MPV] thrombosis >50%, and clinically significant PVST) and splenectomy. Odds ratios (ORs) and 95% confidence intervals (CIs) were reported. RESULTS Overall, 113 cirrhotic patients were enrolled. The prevalence of PVST was 16.8% (19/113). Splenectomy (any PVST: OR=11.494, 95%CI=2.152-61.395; MPV thrombosis >50%: OR=29.987, 95%CI=3.247-276.949; clinically significant PVST: OR=40.415, 95%CI=3.895-419.295) and higher hemoglobin (any PVST: OR=0.974, 95%CI=0.953-0.996; MPV thrombosis >50%: OR=0.936, 95%CI=0.895-0.980; clinically significant PVST: OR=0.935, 95%CI=0.891-0.982) were the independent predictors of PVST. The prevalence of PVST was 13.3% (14/105) after excluding splenectomy. Higher hemoglobin was the only independent predictor of MPV thrombosis >50% (OR=0.952, 95%CI=0.909-0.997). No independent predictors of any PVST or clinically significant PVST were identified in multivariate analyses. Additionally, PVST patients who underwent splenectomy had a significantly higher proportion of clinically significant PVST but lower MELD score than those who did not undergo splenectomy. In all analyses, the in-hospital mortality was not significantly different between cirrhotic patient with and without PVST. CONCLUSIONS Splenectomy may increase by at least 10-fold the risk of PVST in liver cirrhosis independent of severity of liver dysfunction.

  8. Cross-validation pitfalls when selecting and assessing regression and classification models

    PubMed Central

    2014-01-01

    Background: We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing, which enables routine use of previously infeasible approaches. Methods: We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning, we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. Results: We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. Conclusions: We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error. PMID:24678909
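
    A minimal sketch of repeated nested cross-validation in the spirit of the algorithms described above (scikit-learn assumed; the SVM parameter grid is illustrative): an inner loop tunes parameters, an outer loop assesses the tuned procedure, and the whole scheme is repeated over different random splits:

```python
# Sketch: repeated nested cross-validation for model assessment, in the spirit of
# the algorithms described above (illustrative SVM grid, synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=6)
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]}

outer_scores = []
for repeat in range(10):                          # repeat nested CV with new splits
    inner = StratifiedKFold(5, shuffle=True, random_state=repeat)
    outer = StratifiedKFold(5, shuffle=True, random_state=100 + repeat)
    tuned = GridSearchCV(SVC(), param_grid, cv=inner)      # inner loop: tuning
    scores = cross_val_score(tuned, X, y, cv=outer)        # outer loop: assessment
    outer_scores.append(scores.mean())

outer_scores = np.array(outer_scores)
print(f"nested CV accuracy: {outer_scores.mean():.3f} "
      f"(spread across repeats: {outer_scores.std(ddof=1):.3f})")
```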

  9. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…

  10. Block-Regularized m × 2 Cross-Validated Estimator of the Generalization Error.

    PubMed

    Wang, Ruibo; Wang, Yu; Li, Jihong; Yang, Xingli; Yang, Jing

    2017-02-01

    A cross-validation method based on m replications of two-fold cross-validation is called an m×2 cross-validation. An m×2 cross-validation is used in estimating the generalization error and comparing algorithms' performance in machine learning. However, the variance of the estimator of the generalization error in m×2 cross-validation is easily affected by random partitions. Poor data partitioning may cause a large fluctuation in the number of overlapping samples between any two training (test) sets in m×2 cross-validation. This fluctuation results in a large variance in the m×2 cross-validated estimator. The influence of the random partitions on variance becomes serious as m increases. Thus, in this study, the partitions with a restricted number of overlapping samples between any two training (test) sets are defined as a block-regularized partition set. The corresponding cross-validation is called block-regularized m×2 cross-validation (m×2 BCV). It can effectively reduce the influence of random partitions. We prove that the variance of the m×2 BCV estimator of the generalization error is smaller than the variance of the m×2 cross-validated estimator and reaches the minimum in a special situation. An analytical expression of the variance can also be derived in this special situation. This conclusion is validated through simulation experiments. Furthermore, a practical construction method of m×2 BCV by a two-level orthogonal array is provided. Finally, a conservative estimator is proposed for the variance of the estimator of the generalization error.
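
    For context, a plain (unregularized) m×2 cross-validated estimator simply averages the held-out errors over m random half-splits, as sketched below (scikit-learn assumed). The block-regularized variant proposed above additionally constrains the overlap between the training sets of different replications, which this sketch does not attempt:

```python
# Sketch: a plain m x 2 cross-validated estimate of the generalization error
# (m replications of two-fold CV). The block-regularized variant constrains the
# overlap between training sets of different replications; this sketch does not.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=15, random_state=7)
m = 5
fold_errors = []
for r in range(m):                                 # m independent half-splits
    two_fold = StratifiedKFold(n_splits=2, shuffle=True, random_state=r)
    for train, test in two_fold.split(X, y):
        clf = DecisionTreeClassifier(random_state=0).fit(X[train], y[train])
        fold_errors.append(1.0 - clf.score(X[test], y[test]))

errors = np.array(fold_errors)                     # 2m held-out error estimates
print(f"m x 2 CV error estimate: {errors.mean():.3f} (+/- {errors.std(ddof=1):.3f})")
```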

  11. Density-preserving sampling: robust and efficient alternative to cross-validation for error estimation.

    PubMed

    Budka, Marcin; Gabrys, Bogdan

    2013-01-01

    Estimation of the generalization ability of a classification or regression model is an important issue, as it indicates the expected performance on previously unseen data and is also used for model selection. Currently used generalization error estimation procedures, such as cross-validation (CV) or bootstrap, are stochastic and, thus, require multiple repetitions in order to produce reliable results, which can be computationally expensive, if not prohibitive. The correntropy-inspired density-preserving sampling (DPS) procedure proposed in this paper eliminates the need for repeating the error estimation procedure by dividing the available data into subsets that are guaranteed to be representative of the input dataset. This allows the production of low-variance error estimates with an accuracy comparable to 10 times repeated CV at a fraction of the computations required by CV. This method can also be used for model ranking and selection. This paper derives the DPS procedure and investigates its usability and performance using a set of public benchmark datasets and standard classifiers.

  12. Cross-validation of a Shortened Battery for the Assessment of Dysexecutive Disorders in Alzheimer Disease.

    PubMed

    Godefroy, Olivier; Martinaud, Olivier; Verny, Marc; Mosca, Chrystèle; Lenoir, Hermine; Bretault, Eric; Devendeville, Agnès; Diouf, Momar; Pere, Jean-Jacques; Bakchine, Serge; Delabrousse-Mayoux, Jean-Philippe; Roussel, Martine

    2016-01-01

    The frequency of executive disorders in mild-to-moderate Alzheimer disease (AD) has been demonstrated by the application of a comprehensive battery. The present study analyzed data from 2 recent multicenter studies based on the same executive battery. The objective was to derive a shortened battery by using the GREFEX population as a training dataset and by cross-validating the results in the REFLEX population. A total of 102 AD patients from the GREFEX study (MMSE=23.2±2.9) and 72 patients from the REFLEX study (MMSE=20.8±3.5) were included. Tests were selected and receiver operating characteristic curves were generated relative to the performance of 780 controls from the GREFEX study. Stepwise logistic regression identified 3 cognitive tests (Six Elements Task, categorical fluency and Trail Making Test B error) and behavioral disorders globally referred to as global hypoactivity (all P=0.0001). This shortened battery was as accurate as the entire GREFEX battery in diagnosing dysexecutive disorders in both the training group and the validation group. A bootstrap procedure confirmed the stability of the AUC. A shortened battery based on 3 cognitive tests and 3 behavioral domains provides high diagnostic accuracy for executive disorders in mild-to-moderate AD.

  13. Comparison of cross-validation and bootstrap aggregating for building a seasonal streamflow forecast model

    NASA Astrophysics Data System (ADS)

    Schick, Simon; Rössler, Ole; Weingartner, Rolf

    2016-10-01

    Based on a hindcast experiment for the period 1982-2013 in 66 sub-catchments of the Swiss Rhine, the present study compares two approaches of building a regression model for seasonal streamflow forecasting. The first approach selects a single "best guess" model, which is tested by leave-one-out cross-validation. The second approach implements the idea of bootstrap aggregating, where bootstrap replicates are employed to select several models, and out-of-bag predictions provide model testing. The target value is mean streamflow for durations of 30, 60 and 90 days, starting with the 1st and 16th day of every month. Compared to the best guess model, bootstrap aggregating reduces the mean squared error of the streamflow forecast by seven percent on average. Thus, if resampling is anyway part of the model building procedure, bootstrap aggregating seems to be a useful strategy in statistical seasonal streamflow forecasting. Since the improved accuracy comes at the cost of a less interpretable model, the approach might be best suited for pure prediction tasks, e.g. as in operational applications.
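
    A minimal sketch of the two model-building strategies compared above, on hypothetical predictor data, with ordinary linear regression standing in for the seasonal forecast model: a single model tested by leave-one-out cross-validation versus bootstrap aggregating tested by out-of-bag predictions (scikit-learn assumed):

```python
# Sketch: single "best guess" model tested by leave-one-out CV versus bootstrap
# aggregating tested by out-of-bag predictions (hypothetical predictors and target).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(8)
n = 32                                             # e.g. one value per forecast year
X = rng.normal(size=(n, 3))                        # hypothetical seasonal predictors
y = X @ np.array([1.0, 0.5, -0.8]) + rng.normal(scale=0.7, size=n)

# Approach 1: one model, leave-one-out cross-validation.
pred_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())

# Approach 2: bootstrap aggregating with out-of-bag predictions.
n_boot = 200
oob_sum = np.zeros(n)
oob_count = np.zeros(n)
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)               # bootstrap replicate
    oob = np.setdiff1d(np.arange(n), idx)          # out-of-bag cases
    model = LinearRegression().fit(X[idx], y[idx])
    oob_sum[oob] += model.predict(X[oob])
    oob_count[oob] += 1
pred_bag = oob_sum / np.maximum(oob_count, 1)

print("LOO MSE:", round(np.mean((y - pred_loo) ** 2), 3))
print("bag MSE:", round(np.mean((y - pred_bag) ** 2), 3))
```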

  14. Cross-validation of resting metabolic rate prediction equations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Background: Knowledge of the resting metabolic rate (RMR) is necessary for determining individual total energy requirements. Measurement of RMR is time consuming and requires specialized equipment. Prediction equations provide an easy method to estimate RMR; however, the accuracy of these equations...

  15. A cross-validated cytoarchitectonic atlas of the human ventral visual stream.

    PubMed

    Rosenke, M; Weiner, K S; Barnett, M A; Zilles, K; Amunts, K; Goebel, R; Grill-Spector, K

    2017-02-14

    The human ventral visual stream consists of several areas considered processing stages essential for perception and recognition. A fundamental microanatomical feature differentiating areas is cytoarchitecture, which refers to the distribution, size, and density of cells across cortical layers. Because cytoarchitectonic structure is measured in 20-micron-thick histological slices of postmortem tissue, it is difficult to assess (a) how anatomically consistent these areas are across brains and (b) how they relate to brain parcellations obtained with prevalent neuroimaging methods, acquired at the millimeter and centimeter scale. Therefore, the goal of this study was to (a) generate a cross-validated cytoarchitectonic atlas of the human ventral visual stream on a whole brain template that is commonly used in neuroimaging studies and (b) to compare this atlas to a recently published retinotopic parcellation of visual cortex (Wang, 2014). To achieve this goal, we generated an atlas of eight cytoarchitectonic areas: four areas in the occipital lobe (hOc1-hOc4v) and four in the fusiform gyrus (FG1-FG4) and tested how alignment technique affects the accuracy of the atlas. Results show that both cortex-based alignment (CBA) and nonlinear volumetric alignment (NVA) generate an atlas with better cross-validation performance than affine volumetric alignment (AVA). Additionally, CBA outperformed NVA in 6/8 of the cytoarchitectonic areas. Finally, the comparison of the cytoarchitectonic atlas to a retinotopic atlas shows a clear correspondence between cytoarchitectonic and retinotopic areas in the ventral visual stream. The successful performance of CBA suggests a coupling between cytoarchitectonic areas and macroanatomical landmarks in the human ventral visual stream, and furthermore that this coupling can be utilized towards generating an accurate group atlas. In addition, the coupling between cytoarchitecture and retinotopy highlights the potential use of this atlas in

  16. Correcting Evaluation Bias of Relational Classifiers with Network Cross Validation

    DTIC Science & Technology

    2010-01-01

    Computer Sciences (with a minor in Mathematical Statistics) at the University of Wisconsin-Madison in 2001. Broadly speaking, Tina's research interests... These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been... the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it

  17. Estimators of the Squared Cross-Validity Coefficient: A Monte Carlo Investigation.

    ERIC Educational Resources Information Center

    Drasgow, Fritz; And Others

    1979-01-01

    A Monte Carlo experiment was used to evaluate four procedures for estimating the population squared cross-validity of a sample least squares regression equation. One estimator was particularly recommended. (Author/BH)

  18. The Performance of Cross-Validation Indices Used to Select among Competing Covariance Structure Models under Multivariate Nonnormality Conditions

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Stapleton, Laura M.

    2006-01-01

    Cudeck and Browne (1983) proposed using cross-validation as a model selection technique in structural equation modeling. The purpose of this study is to examine the performance of eight cross-validation indices under conditions not yet examined in the relevant literature, such as nonnormality and cross-validation design. The performance of each…

  19. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging.

    PubMed

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J; Lu, Yang; Sellke, Eric W; Fan, Wensheng; DiMaio, J Michael; Thatcher, Jeffrey E

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm's burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
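
    A minimal sketch of the general workflow described above, with a per-class z-score screen standing in for the paper's multistage Z-test and univariate analysis, and synthetic data in place of the MSI database (scikit-learn assumed):

```python
# Sketch: per-class z-score screening of training samples (a simple stand-in for
# the multistage Z-test / univariate outlier analysis described above), followed
# by a 10-fold CV comparison of accuracy with and without outlier removal.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
X, y = make_classification(n_samples=400, n_features=8, n_informative=5, random_state=9)
noisy = rng.choice(len(y), size=40, replace=False)
X[noisy] += rng.normal(scale=6.0, size=(40, 8))    # simulate extreme/mislabeled samples

def zscore_filter(X, y, z_max=3.0):
    """Keep samples whose features all lie within z_max SDs of their class mean."""
    keep = np.ones(len(y), dtype=bool)
    for c in np.unique(y):
        Xc = X[y == c]
        z = np.abs((Xc - Xc.mean(axis=0)) / Xc.std(axis=0))
        keep[np.flatnonzero(y == c)[np.any(z > z_max, axis=1)]] = False
    return keep

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc_raw = cross_val_score(clf, X, y, cv=10).mean()
keep = zscore_filter(X, y)
acc_clean = cross_val_score(clf, X[keep], y[keep], cv=10).mean()
print(f"10-fold CV accuracy: raw = {acc_raw:.2f}, after outlier removal = {acc_clean:.2f}")
```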

  20. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J.; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm's burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.

  1. Prediction of maize single cross hybrids using the total effects of associated markers approach assessed by cross-validation and regional trials.

    PubMed

    Melo, Wagner Mateus Costa; Pinho, Renzo Garcia Von; Balestre, Marcio

    2014-01-01

    The present study aimed to predict the performance of maize hybrids and assess whether the total effects of associated markers (TEAM) method can correctly predict hybrids using cross-validation and regional trials. The training was performed in 7 locations of Southern Brazil during the 2010/11 harvest. The regional assays were conducted in 6 different South Brazilian locations during the 2011/12 harvest. In the training trial, 51 lines from different backgrounds were used to create 58 single cross hybrids. Seventy-nine microsatellite markers were used to genotype these 51 lines. In the cross-validation method the predictive accuracy ranged from 0.10 to 0.96, depending on the sample size. Furthermore, the accuracy was 0.30 when the values of hybrids that were not used in the training population (119) were predicted for the regional assays. Regarding selective loss, the TEAM method correctly predicted 50% of the hybrids selected in the regional assays. There was also loss in only 33% of cases; that is, only 33% of the materials predicted to be good in training trial were considered to be bad in regional assays. Our results show that the predictive validation of different crop conditions is possible, and the cross-validation results strikingly represented the field performance.

  2. Airborne environmental endotoxin: a cross-validation of sampling and analysis techniques.

    PubMed Central

    Walters, M; Milton, D; Larsson, L; Ford, T

    1994-01-01

    A standard method for measurement of airborne environmental endotoxin was developed and field tested in a fiberglass insulation-manufacturing facility. This method involved sampling with a capillary-pore membrane filter, extraction in buffer using a sonication bath, and analysis by the kinetic-Limulus assay with resistant-parallel-line estimation (KLARE). Cross-validation of the extraction and assay method was performed by comparison with methanolysis of samples followed by 3-hydroxy fatty acid (3-OHFA) analysis by gas chromatography-mass spectrometry. Direct methanolysis of filter samples and methanolysis of buffer extracts of the filters yielded similar 3-OHFA content (P = 0.72); the average difference was 2.1%. Analysis of buffer extracts for endotoxin content by the KLARE method and by gas chromatography-mass spectrometry for 3-OHFA content produced similar results (P = 0.23); the average difference was 0.88%. The source of endotoxin was gram-negative bacteria growing in recycled washwater used to clean the insulation-manufacturing equipment. The endotoxin and bacteria become airborne during spray cleaning operations. The types of 3-OHFAs in bacteria cultured from the washwater, present in the washwater and in the air, were similar. Virtually all of the bacteria cultured from air and water were gram negative composed mostly of two species, Deleya aesta and Acinetobacter johnsonii. Airborne countable bacteria correlated well with endotoxin (r2 = 0.64). Replicate sampling showed that results with the standard sampling, extraction, and Limulus assay by the KLARE method were highly reproducible (95% confidence interval for endotoxin measurement +/- 0.28 log10). These results demonstrate the accuracy, precision, and sensitivity of the standard procedure proposed for airborne environmental endotoxin. PMID:8161191

  3. Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models.

    PubMed

    Shieh, Gwowen

    2009-01-01

    In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference procedures of the squared multiple correlation coefficient have been extensively developed. In contrast, a full range of statistical methods for the analysis of the squared cross-validity coefficient is considerably far from complete. This article considers a distinct expression for the definition of the squared cross-validity coefficient as the direct connection and monotone transformation to the squared multiple correlation coefficient. Therefore, all the currently available exact methods for interval estimation, power calculation, and sample size determination of the squared multiple correlation coefficient are naturally modified and extended to the analysis of the squared cross-validity coefficient. The adequacies of the existing approximate procedures and the suggested exact method are evaluated through a Monte Carlo study. Furthermore, practical applications in areas of psychology and management are presented to illustrate the essential features of the proposed methodologies. The first empirical example uses 6 control variables related to driver characteristics and traffic congestion and their relation to stress in bus drivers, and the second example relates skills, cognitive performance, and personality to team performance measures. The results in this article can facilitate the recommended practice of cross-validation in psychological and other areas of social science research.

  4. Cross-Validating Chinese Language Mental Health Recovery Measures in Hong Kong

    ERIC Educational Resources Information Center

    Bola, John; Chan, Tiffany Hill Ching; Chen, Eric HY; Ng, Roger

    2016-01-01

    Objectives: Promoting recovery in mental health services is hampered by a shortage of reliable and valid measures, particularly in Hong Kong. We seek to cross validate two Chinese language measures of recovery and one of recovery-promoting environments. Method: A cross-sectional survey of people recovering from early episode psychosis (n = 121)…

  5. A Cross-Validation of MMPI Scales of Aggression on Male Criminal Criterion Groups

    ERIC Educational Resources Information Center

    Deiker, Thomas E.

    1974-01-01

    The 13 basic Minnesota Multiphasic Personality Inventory (MMPI) scales, 21 experimental scales of hostility and control, and four response-bias scales are cross-validated on 168 male criminals assigned to four aggressive criterion groups (nonviolent, threat, battery, and homicide). (Author)

  6. Reliable Digit Span: A Systematic Review and Cross-Validation Study

    ERIC Educational Resources Information Center

    Schroeder, Ryan W.; Twumasi-Ankrah, Philip; Baade, Lyle E.; Marshall, Paul S.

    2012-01-01

    Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these…

  7. The Employability of Psychologists in Academic Settings: A Cross-Validation.

    ERIC Educational Resources Information Center

    Quereshi, M. Y.

    1983-01-01

    Analyzed the curriculum vitae (CV) of 117 applicants for the position of assistant professor of psychology to yield four cross-validated factors. Comparisons of the results with those of four years ago indicated considerable stability of the factors. Scholarly publications remain an important factor. (JAC)

  8. A Cross-Validation of Sex Differences in the Expression of Depression.

    ERIC Educational Resources Information Center

    Chino, Allan F.; Funabiki, Dean

    1984-01-01

    Presents results of a cross-validational test of previous findings that men and women express depression differently. Reports that when depressed, females are more prone to somatic symptoms, self-deprecating statements, and less selectiveness than males in seeking out others. Qualifies findings, however, by positing a possible gap between reported…

  9. Validity Evidence in Scale Development: The Application of Cross Validation and Classification-Sequencing Validation

    ERIC Educational Resources Information Center

    Acar, Tu¨lin

    2014-01-01

    In literature, it has been observed that many enhanced criteria are limited by factor analysis techniques. Besides examinations of statistical structure and/or psychological structure, such validity studies as cross validation and classification-sequencing studies should be performed frequently. The purpose of this study is to examine cross…

  10. Cross-Validation of FITNESSGRAM® Health-Related Fitness Standards in Hungarian Youth

    ERIC Educational Resources Information Center

    Laurson, Kelly R.; Saint-Maurice, Pedro F.; Karsai, István; Csányi, Tamás

    2015-01-01

    Purpose: The purpose of this study was to cross-validate FITNESSGRAM® aerobic and body composition standards in a representative sample of Hungarian youth. Method: A nationally representative sample (N = 405) of Hungarian adolescents from the Hungarian National Youth Fitness Study (ages 12-18.9 years) participated in an aerobic capacity assessment…

  11. A New Symptom Model for Autism Cross-Validated in an Independent Sample

    ERIC Educational Resources Information Center

    Boomsma, A.; Van Lang, N. D. J.; De Jonge, M. V.; De Bildt, A. A.; Van Engeland, H.; Minderaa, R. B.

    2008-01-01

    Background: Results from several studies indicated that a symptom model other than the DSM triad might better describe symptom domains of autism. The present study focused on a) investigating the stability of a new symptom model for autism by cross-validating it in an independent sample and b) examining the invariance of the model regarding three…

  12. Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2009-01-01

    In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…

  13. Cross-Validation of the Quick Word Test as an Estimator of Adult Mental Ability

    ERIC Educational Resources Information Center

    Grotelueschen, Arden; McQuarrie, Duncan

    1970-01-01

    This report provides additional evidence that the Quick Word Test (Level 2, Form AM) is valid for estimating adult mental ability as defined by the Wechsler Adult Intelligence Scale. The validation sample is also described to facilitate use of the conversion table developed in the cross-validation analysis. (Author/LY)

  14. Actuarial assessment of sex offender recidivism risk: a cross-validation of the RRASOR and the Static-99 in Sweden.

    PubMed

    Sjöstedt, G; Långström, N

    2001-12-01

    We cross-validated two actuarial risk assessment tools, the RRASOR (R. K. Hanson, 1997) and the Static-99 (R. K. Hanson & D. Thornton, 1999), in a retrospective follow-up (mean follow-up time = 3.69 years) of all sex offenders released from Swedish prisons during 1993-1997 (N = 1,400, all men, age > or =18 years). File-based data were collected by a researcher blind to the outcome (registered criminal recidivism), and individual risk factors as well as complete instrument characteristics were explored. Both the RRASOR and the Static-99 showed similar and moderate predictive accuracy for sexual reconvictions whereas the Static-99 exhibited a significantly higher accuracy for the prediction of any violent recidivism as compared to the RRASOR. Although particularly the Static-99 proved moderately robust as an actuarial measure of recidivism risk among sexual offenders in Sweden, both procedures may need further evaluation, for example, with sex offender subpopulations differing ethnically or with respect to offense characteristics. The usefulness of actuarial methods for the assessment of sex offender recidivism risk is discussed in the context of current practice.

  15. Methodology Review: Estimation of Population Validity and Cross-Validity, and the Use of Equal Weights in Prediction.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.

    1997-01-01

    This review finds that formula-based procedures can be used in place of empirical validation for estimating population validity or in place of empirical cross-validation for estimating population cross-validity. Discusses conditions under which the equal weights procedure is a viable alternative. (SLD)

  16. Cross-Validation of easyCBM Reading Cut Scores in Oregon: 2009-2010. Technical Report #1108

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    This technical report presents results from a cross-validation study designed to identify optimal cut scores when using easyCBM[R] reading tests in Oregon. The cross-validation study analyzes data from the 2009-2010 academic year for easyCBM[R] reading measures. A sample of approximately 2,000 students per grade, randomly split into two groups of…

  17. Automatic regularization parameter selection by generalized cross-validation for total variational Poisson noise removal.

    PubMed

    Zhang, Xiongjun; Javidi, Bahram; Ng, Michael K

    2017-03-20

    In this paper, we propose an alternating minimization algorithm with an automatic selection of the regularization parameter for image reconstruction of photon-counted images. By using the generalized cross-validation technique, the regularization parameter can be updated in the iterations of the alternating minimization algorithm. Experimental results show that our proposed algorithm outperforms the two existing methods, the maximum likelihood expectation maximization estimator with total variation regularization and the primal dual method, where the parameters must be set in advance.

  18. Application of robust Generalised Cross-Validation to the inverse problem of electrocardiology.

    PubMed

    Barnes, Josef P; Johnston, Peter R

    2016-02-01

    Robust Generalised Cross-Validation was proposed recently as a method for determining near optimal regularisation parameters in inverse problems. It was introduced to overcome a problem with the regular Generalised Cross-Validation method in which the function that is minimised to obtain the regularisation parameter often has a broad, flat minimum, resulting in a poor estimate for the parameter. The robust method defines a new function to be minimised which has a narrower minimum, but at the expense of introducing a new parameter called the robustness parameter. In this study, the Robust Generalised Cross-Validation method is applied to the inverse problem of electrocardiology. It is demonstrated that, for realistic situations, the robustness parameter can be set to zero. With this choice of robustness parameter, it is shown that the robust method is able to obtain estimates of the regularisation parameter in the inverse problem of electrocardiology that are comparable to, or better than, many of the standard methods that are applied to this inverse problem.

  19. Cross-validation of matching correlation analysis by resampling matching weights.

    PubMed

    Shimodaira, Hidetoshi

    2016-03-01

    The strength of association between a pair of data vectors is represented by a nonnegative real number, called matching weight. For dimensionality reduction, we consider a linear transformation of data vectors, and define a matching error as the weighted sum of squared distances between transformed vectors with respect to the matching weights. Given data vectors and matching weights, the optimal linear transformation minimizing the matching error is solved by the spectral graph embedding of Yan et al. (2007). This method is a generalization of canonical correlation analysis, and will be called matching correlation analysis (MCA). In this paper, we consider a novel sampling scheme where the observed matching weights are randomly sampled from underlying true matching weights with small probability, whereas the data vectors are treated as constants. We then investigate a cross-validation by resampling the matching weights. Our asymptotic theory shows that the cross-validation, if rescaled properly, computes an unbiased estimate of the matching error with respect to the true matching weights. Existing ideas of cross-validation for resampling data vectors, instead of resampling matching weights, are not applicable here. MCA can be used for data vectors from multiple domains with different dimensions via an embarrassingly simple idea of coding the data vectors. This method will be called cross-domain matching correlation analysis (CDMCA), and an interesting connection to the classical associative memory model of neural networks is also discussed.

  20. Remote sensing and GIS-based landslide hazard analysis and cross-validation using multivariate logistic regression model on three test areas in Malaysia

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet

    2010-05-01

    This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis in the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map landcover and vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, landcover, rainfall precipitation, and normalized difference vegetation index (ndvi), were extracted from the spatial database and the logistic regression coefficient of each factor was computed. Then the landslide hazard was analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also using the logistic regression coefficients calculated from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification of the model, the results of the analyses were then compared with the field-verified landslide locations. Among the three cases of the application of the logistic regression coefficients in the same study area, the case of Selangor based on the Selangor logistic regression coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross application of the logistic regression coefficients in the other two areas, the case of Selangor based on the logistic regression coefficients of Cameron showed the highest (90%) prediction accuracy, whereas the case of Penang based on the Selangor logistic regression coefficients showed the lowest accuracy (79%). Qualitatively, the cross

  1. A statistical method (cross-validation) for bone loss region detection after spaceflight

    PubMed Central

    Zhao, Qian; Li, Wenjun; Li, Caixia; Chu, Philip W.; Kornak, John; Lang, Thomas F.

    2010-01-01

    Astronauts experience bone loss after long spaceflight missions. Identifying specific regions that undergo the greatest losses (e.g. the proximal femur) could reveal information about the processes of bone loss in disuse and disease. Methods for detecting such regions, however, remain an open problem. This paper focuses on statistical methods to detect such regions. We perform statistical parametric mapping to get t-maps of changes in images, and propose a new cross-validation method to select an optimum suprathreshold for forming clusters of pixels. Once these candidate clusters are formed, we use permutation testing of longitudinal labels to derive significant changes. PMID:20632144

  2. Burn injury diagnostic imaging device's accuracy improved by outlier detection and removal

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Lu, Yang; Squiers, John J.; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffery E.

    2015-05-01

    Multispectral imaging (MSI) was implemented to develop a burn diagnostic device that will assist burn surgeons in planning and performing burn debridement surgery by classifying burn tissue. In order to build a burn classification model, training data that accurately represents the burn tissue is needed. Acquiring accurate training data is difficult, in part because the labeling of raw MSI data to the appropriate tissue classes is prone to errors. We hypothesized that these difficulties could be surmounted by removing outliers from the training dataset, leading to an improvement in the classification accuracy. A swine burn model was developed to build an initial MSI training database and study an algorithm's ability to classify clinically important tissues present in a burn injury. Once the ground-truth database was generated from the swine images, we then developed a multi-stage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data from wavelength space, and test accuracy was improved from 63% to 76%. Establishing this simple method of conditioning for the training data improved the accuracy of the algorithm to match the current standard of care in burn injury assessment. Given that there are few burn surgeons and burn care facilities in the United States, this technology is expected to improve the standard of burn care for burn patients with less access to specialized facilities.
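
    A minimal sketch of the general workflow described above, with assumed details that differ from the paper (synthetic data instead of the swine MSI database, a simple per-class z-score filter instead of the multi-stage Z-test/univariate method, and an SVM classifier): remove outlying training samples, then compare 10-fold cross-validated accuracy with and without the cleaning step.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the MSI training data (the swine dataset is not public)
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

def remove_class_outliers(X, y, z_thresh=3.0):
    """Drop samples whose within-class feature z-score exceeds z_thresh
    (a simplified stand-in for the paper's multi-stage Z-test/univariate filter)."""
    keep = np.ones(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        z = np.abs((X[idx] - X[idx].mean(axis=0)) / X[idx].std(axis=0))
        keep[idx] = (z < z_thresh).all(axis=1)
    return X[keep], y[keep]

clf = SVC(kernel="rbf", gamma="scale")
acc_raw = cross_val_score(clf, X, y, cv=10).mean()
X_clean, y_clean = remove_class_outliers(X, y)
acc_clean = cross_val_score(clf, X_clean, y_clean, cv=10).mean()
print(f"10-fold CV accuracy: raw = {acc_raw:.2f}, after outlier removal = {acc_clean:.2f}")
```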

  3. Global mapping of highly pathogenic avian influenza H5N1 and H5Nx clade 2.3.4.4 viruses with spatial cross-validation

    PubMed Central

    Dhingra, Madhur S; Artois, Jean; Robinson, Timothy P; Linard, Catherine; Chaiban, Celia; Xenarios, Ioannis; Engler, Robin; Liechti, Robin; Kuznetsov, Dmitri; Xiao, Xiangming; Dobschuetz, Sophie Von; Claes, Filip; Newman, Scott H; Dauphin, Gwenaëlle; Gilbert, Marius

    2016-01-01

    Global disease suitability models are essential tools to inform surveillance systems and enable early detection. We present the first global suitability model of highly pathogenic avian influenza (HPAI) H5N1 and demonstrate that reliable predictions can be obtained at global scale. Best predictions are obtained using spatial predictor variables describing host distributions, rather than land use or eco-climatic spatial predictor variables, with a strong association with domestic duck and extensively raised chicken densities. Our results also support a more systematic use of spatial cross-validation in large-scale disease suitability modelling compared to standard random cross-validation, which can lead to unreliable measures of extrapolation accuracy. A global suitability model of the H5 clade 2.3.4.4 viruses, a group of viruses that recently spread extensively in Asia and the US, shows in comparison a lower spatial extrapolation capacity than the HPAI H5N1 models, with a stronger association with intensively raised chicken densities and anthropogenic factors. DOI: http://dx.doi.org/10.7554/eLife.19571.001 PMID:27885988
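
    The contrast between random and spatial cross-validation can be sketched as follows. This is not the authors' modelling pipeline: the data are synthetic, the model is a random forest, and spatial blocking is approximated with scikit-learn's GroupKFold over assumed grid-cell identifiers. Because the toy response carries spatial structure that the covariate does not explain, the random CV score tends to be optimistic relative to the blocked score.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n = 2000
lon, lat = rng.uniform(0, 10, n), rng.uniform(0, 10, n)
covariate = rng.normal(size=n)

# 2x2-degree grid cells: the source of spatial autocorrelation in the toy response,
# and the hold-out blocks for the spatial cross-validation
cell_i, cell_j = (lon // 2).astype(int), (lat // 2).astype(int)
cell_effect = rng.normal(scale=2.0, size=(5, 5))[cell_i, cell_j]
y = (covariate + cell_effect + rng.normal(size=n) > 0).astype(int)

X = np.column_stack([lon, lat, covariate])
blocks = cell_i * 5 + cell_j
model = RandomForestClassifier(n_estimators=200, random_state=0)

auc_random = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0),
                             scoring="roc_auc").mean()
auc_spatial = cross_val_score(model, X, y, cv=GroupKFold(5), groups=blocks,
                              scoring="roc_auc").mean()
print(f"random CV AUC = {auc_random:.2f}, spatial (block) CV AUC = {auc_spatial:.2f}")
```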

  4. Predicting IQ change from brain structure: A cross-validation study

    PubMed Central

    Price, C.J.; Ramsden, S.; Hope, T.M.H.; Friston, K.J.; Seghier, M.L.

    2013-01-01

    Procedures that can predict cognitive abilities from brain imaging data are potentially relevant to educational assessments and studies of functional anatomy in the developing brain. Our aim in this work was to quantify the degree to which IQ change in the teenage years could be predicted from structural brain changes. Two well-known k-fold cross-validation analyses were applied to data acquired from 33 healthy teenagers – each tested at Time 1 and Time 2 with a 3.5 year interval. One approach, a Leave-One-Out procedure, predicted IQ change for each subject on the basis of structural change in a brain region that was identified from all other subjects (i.e., independent data). This approach predicted 53% of verbal IQ change and 14% of performance IQ change. The other approach used half the sample, to identify regions for predicting IQ change in the other half (i.e., a Split half approach); however – unlike the Leave-One-Out procedure – regions identified using half the sample were not significant. We discuss how these out-of-sample estimates compare to in-sample estimates; and draw some recommendations for k-fold cross-validation procedures when dealing with small datasets that are typical in the neuroimaging literature. PMID:23567505
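
    A minimal sketch of the key design point of the Leave-One-Out procedure described above: the predictive region (here, a single "voxel") is re-selected within every training fold, so the held-out subject never influences region selection. The data, the univariate correlation-based selection rule, and the linear model are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_subjects, n_voxels = 33, 500
X = rng.normal(size=(n_subjects, n_voxels))       # structural change per "voxel"
y = 2.0 * X[:, 0] + rng.normal(size=n_subjects)   # IQ change driven by one region

preds = np.empty(n_subjects)
for i in range(n_subjects):
    train = np.delete(np.arange(n_subjects), i)
    # Select the voxel most correlated with IQ change in the training subjects only
    r = [np.corrcoef(X[train, v], y[train])[0, 1] for v in range(n_voxels)]
    best = int(np.argmax(np.abs(r)))
    model = LinearRegression().fit(X[train, best:best + 1], y[train])
    preds[i] = model.predict(X[i:i + 1, best:best + 1])[0]

# Out-of-sample variance explained (squared correlation of predicted vs observed change)
r_oos = np.corrcoef(preds, y)[0, 1]
print(f"LOOCV predicted vs observed r^2 = {r_oos**2:.2f}")
```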

  5. Interrelations between temperament, character, and parental rearing among delinquent adolescents: a cross-validation.

    PubMed

    Richter, Jörg; Krecklow, Beate; Eisemann, Matrin

    2002-01-01

    We performed a cross-validation of results from investigations in juvenile delinquents in Russia and Germany concerning relationships of personality characteristics in terms of temperament and character with parental rearing. Both studies used the Temperament and Character Inventory (TCI) based on Cloninger's psychobiological theory, and the Own Memories on Parenting (Egna Minnen Beträffande Uppfostran-Swedish [EMBU]) questionnaire on parental rearing based on Perris' vulnerability model. The inter-relatedness of parental rearing, temperament, and character traits in socially normally integrated adolescents, as well as in delinquent adolescents, implying direct and indirect pathways from personality and parental rearing to delinquency, could be cross-validated. Differences between delinquents and socially normally integrated adolescents are rather based on different levels of expressions of various temperament traits, harm avoidance and novelty seeking in particular, and the character trait self-directedness, as well as on parental rearing behavior (predominantly parental rejection and emotional warmth) than on different structures within related developmental processes.

  6. Evaluation of a cross-validation stopping rule in MLE SPECT reconstruction.

    PubMed

    Falcón, C; Juvells, I; Pavía, J; Ros, D

    1998-05-01

    One of the problems in the routine use of the maximum-likelihood estimator method-expectation maximization (MLE-EM) algorithm is to decide when the iterative process should be stopped. We studied a cross-validation stopping rule to assess its usefulness in SPECT. We tested this stopping rule criterion in the MLE-EM algorithm without acceleration as well as in two accelerating algorithms, the successive substitutions algorithm (SSA) and the additive algorithm (AA). Different values of an acceleration factor were tested in SSA and AA. Our results from numerical and physical phantoms show that the stopping rule based on the cross-validation ratio (CVR) takes into account the similarity of the reconstructed image to the ideal image, noise and the contrast of the image. CVR yields reconstructed images with balanced values of the figures of merit (FOM) employed to assess the image quality. The CVR criterion can be used in the original MLE-EM algorithm as well as in SSA and AA. The reconstructed images obtained with SSA and AA showed FOM values that were very similar. These results were justified by considering AA to be an approximate form of SSA. The range of validity for the acceleration factor in SSA and AA was found to be [1, 2]. In this range, an inverse function connects the acceleration factor to the number of iterations needed to attain prefixed values of FOMs.

  7. Predicting IQ change from brain structure: a cross-validation study.

    PubMed

    Price, C J; Ramsden, S; Hope, T M H; Friston, K J; Seghier, M L

    2013-07-01

    Procedures that can predict cognitive abilities from brain imaging data are potentially relevant to educational assessments and studies of functional anatomy in the developing brain. Our aim in this work was to quantify the degree to which IQ change in the teenage years could be predicted from structural brain changes. Two well-known k-fold cross-validation analyses were applied to data acquired from 33 healthy teenagers - each tested at Time 1 and Time 2 with a 3.5 year interval. One approach, a Leave-One-Out procedure, predicted IQ change for each subject on the basis of structural change in a brain region that was identified from all other subjects (i.e., independent data). This approach predicted 53% of verbal IQ change and 14% of performance IQ change. The other approach used half the sample, to identify regions for predicting IQ change in the other half (i.e., a Split half approach); however--unlike the Leave-One-Out procedure--regions identified using half the sample were not significant. We discuss how these out-of-sample estimates compare to in-sample estimates; and draw some recommendations for k-fold cross-validation procedures when dealing with small datasets that are typical in the neuroimaging literature.

  8. Cross-validation analysis of bias models in Bayesian multi-model projections of climate

    NASA Astrophysics Data System (ADS)

    Huttunen, J. M. J.; Räisänen, J.; Nissinen, A.; Lipponen, A.; Kolehmainen, V.

    2017-03-01

    Climate change projections are commonly based on multi-model ensembles of climate simulations. In this paper we consider the choice of bias models in Bayesian multimodel predictions. Buser et al. (Clim Res 44(2-3):227-241, 2010a) introduced a hybrid bias model which combines commonly used constant bias and constant relation bias assumptions. The hybrid model includes a weighting parameter which balances these bias models. In this study, we use a cross-validation approach to study which bias model or bias parameter leads to, in a specific sense, optimal climate change projections. The analysis is carried out for summer and winter season means of 2 m-temperatures spatially averaged over the IPCC SREX regions, using 19 model runs from the CMIP5 data set. The cross-validation approach is applied to calculate optimal bias parameters (in the specific sense) for projecting the temperature change from the control period (1961-2005) to the scenario period (2046-2090). The results are compared to the results of the Buser et al. (Clim Res 44(2-3):227-241, 2010a) method which includes the bias parameter as one of the unknown parameters to be estimated from the data.

  9. Variational cross-validation of slow dynamical modes in molecular kinetics

    PubMed Central

    Pande, Vijay S.

    2015-01-01

    Markov state models are a widely used method for approximating the eigenspectrum of the molecular dynamics propagator, yielding insight into the long-timescale statistical kinetics and slow dynamical modes of biomolecular systems. However, the lack of a unified theoretical framework for choosing between alternative models has hampered progress, especially for non-experts applying these methods to novel biological systems. Here, we consider cross-validation with a new objective function for estimators of these slow dynamical modes, a generalized matrix Rayleigh quotient (GMRQ), which measures the ability of a rank-m projection operator to capture the slow subspace of the system. It is shown that a variational theorem bounds the GMRQ from above by the sum of the first m eigenvalues of the system’s propagator, but that this bound can be violated when the requisite matrix elements are estimated subject to statistical uncertainty. This overfitting can be detected and avoided through cross-validation. These results make it possible to construct Markov state models for protein dynamics in a way that appropriately captures the tradeoff between systematic and statistical errors. PMID:25833563

  10. Assessing the performance of spectroscopic models for cancer diagnostics using cross-validation and permutation testing

    NASA Astrophysics Data System (ADS)

    Lloyd, G. R.; Hutchings, J.; Almond, L. M.; Barr, H.; Kendall, C.; Stone, N.

    2012-01-01

    Multivariate classifiers (such as Linear Discriminant Analysis, Support Vector Machines etc) are known to be useful tools for making diagnostic decisions based on spectroscopic data. However, robust techniques for assessing their performance (e.g. by sensitivity and specificity) are vital if the application of these methods is to be successful in the clinic. In this work the application of repeated cross-validation for estimating confidence intervals for sensitivity and specificity of multivariate classifiers is presented. Furthermore, permutation testing is presented as a suitable technique for estimating the probability of obtaining the observed sensitivity and specificity by chance. Both approaches are demonstrated through their application to a Raman spectroscopic model of gastrointestinal cancer.
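
    A minimal sketch of the permutation-testing idea described above, using scikit-learn's permutation_test_score on synthetic data in place of Raman spectra: the classifier is refit under shuffled labels to build a null distribution for the cross-validated score, from which a p-value is obtained. The repeated-cross-validation confidence intervals discussed in the abstract are not shown here.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, permutation_test_score

# Toy stand-in for a spectral dataset (n samples x n "wavenumber" features)
X, y = make_classification(n_samples=120, n_features=50, n_informative=5, random_state=0)

clf = LinearDiscriminantAnalysis()
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
score, perm_scores, p_value = permutation_test_score(
    clf, X, y, cv=cv, n_permutations=1000, random_state=0)
print(f"10-fold CV accuracy = {score:.2f}, permutation p-value = {p_value:.3f}")
```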

  11. Test, revision, and cross-validation of the Physical Activity Self-Definition Model.

    PubMed

    Kendzierski, Deborah; Morganstein, Mara S

    2009-08-01

    Structural equation modeling was used to test an extended version of the Kendzierski, Furr, and Schiavoni (1998) Physical Activity Self-Definition Model. A revised model using data from 622 runners fit the data well. Cross-validation indices supported the revised model, and this model also provided a good fit to data from 397 cyclists. Partial invariance was found across activities. In both samples, perceived commitment and perceived ability had direct effects on self-definition, and perceived wanting, perceived trying, and enjoyment had indirect effects. The contribution of perceived ability to self-definition did not differ across activities. Implications concerning the original model, indirect effects, skill salience, and the role of context in self-definition are discussed.

  12. Cross Validation for Selection of Cortical Interaction Models From Scalp EEG or MEG

    PubMed Central

    Cheung, Bing Leung Patrick; Nowak, Robert; Lee, Hyong Chol; van Drongelen, Wim; Van Veen, Barry D.

    2012-01-01

    A cross-validation (CV) method based on a state-space framework is introduced for comparing the fidelity of different cortical interaction models to the measured scalp electroencephalogram (EEG) or magnetoencephalography (MEG) data being modeled. A state equation models the cortical interaction dynamics and an observation equation represents the scalp measurement of cortical activity and noise. The measured data are partitioned into training and test sets. The training set is used to estimate model parameters and the model quality is evaluated by computing test data innovations for the estimated model. Two CV metrics, normalized mean square error and log-likelihood, are estimated by averaging over different training/test partitions of the data. The effectiveness of this method of model selection is illustrated by comparing two linear modeling methods and two nonlinear modeling methods on simulated EEG data derived using both known dynamic systems and measured electrocorticography data from an epilepsy patient. PMID:22084038

  13. Cross-validation of the 20- versus 30-s Wingate anaerobic test.

    PubMed

    Laurent, C Matthew; Meyers, Michael C; Robinson, Clay A; Green, J Matt

    2007-08-01

    The 30-s Wingate anaerobic test (30-WAT) is the most widely accepted protocol for measuring anaerobic response, despite documented physical side effects. Abbreviation of the 30-WAT without loss of data could enhance subject compliance while maintaining test applicability. The intent of this study was to quantify the validity of the 20-s Wingate anaerobic test (20-WAT) versus the traditional 30-WAT. Fifty males (mean +/- SEM; age = 20.5 +/- 0.3 years; Ht = 1.6 +/- 0.01 m; Wt = 75.5 +/- 2.6 kg) were randomly selected to either a validation (N = 35) or cross-validation group (N = 15) and completed a 20-WAT and 30-WAT in double blind, random order on separate days to determine peak power (PP; W kg(-1)), mean power (MP; W kg(-1)), and fatigue index (FI; %). Utilizing power outputs (relative to body mass) recorded during each second of both protocols, a non-linear regression equation (Y(20WAT+10) = 31.4697 x e^(-0.5 x [ln(X(second)/1174.3961)/2.6369]^2); r^2 = 0.97; SEE = 0.56 W kg(-1)) successfully predicted (error approximately 10%) the final 10 s of power outputs in the cross-validation population. There were no significant differences in MP or FI between the 20-WAT that included the predicted 10 s of power outputs (20-WAT+10) and the 30-WAT. When derived data were subjected to Bland-Altman analyses, the majority of plots (93%) fell within the limits of agreement (+/-2SD). Therefore, when compared to the 30-WAT, the 20-WAT may be considered a valid alternative when used with the predictive non-linear regression equation to derive the final power output values.

  14. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.

  15. Cross-validation and Peeling Strategies for Survival Bump Hunting using Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a framework to build a survival/risk bump hunting model with a censored time-to-event response. Our Survival Bump Hunting (SBH) method is based on a recursive peeling procedure that uses a specific survival peeling criterion derived from non/semi-parametric statistics such as the hazards-ratio, the log-rank test or the Nelson--Aalen estimator. To optimize the tuning parameter of the model and validate it, we introduce an objective function based on survival or prediction-error statistics, such as the log-rank test and the concordance error rate. We also describe two alternative cross-validation techniques adapted to the joint task of decision-rule making by recursive peeling and survival estimation. Numerical analyses show the importance of replicated cross-validation and the differences between criteria and techniques in both low and high-dimensional settings. Although several non-parametric survival models exist, none addresses the problem of directly identifying local extrema. We show how SBH efficiently estimates extreme survival/risk subgroups unlike other models. This provides an insight into the behavior of commonly used models and suggests alternatives to be adopted in practice. Finally, our SBH framework was applied to a clinical dataset. In it, we identified subsets of patients characterized by clinical and demographic covariates with a distinct extreme survival outcome, for which tailored medical interventions could be made. An R package PRIMsrc (Patient Rule Induction Method in Survival, Regression and Classification settings) is available on CRAN (Comprehensive R Archive Network) and GitHub. PMID:27034730

  16. 8x8 and 10x10 Hyperspace Representations of SU(3) and 10-fold Point-Symmetry Group of Quasicrystals

    NASA Astrophysics Data System (ADS)

    Animalu, Alexander

    2012-02-01

    In order to further elucidate the unexpected 10-fold point-symmetry group structure of quasi-crystals, for which the 2011 Nobel Prize in chemistry was awarded to Daniel Shechtman, we explore a correspondence principle between the number of (projective) geometric elements (points[vertices] + lines[edges] + planes[faces]) of primitive cells of periodic or quasi-periodic arrangements of hard or deformable spheres in the 3-dimensional space of crystallography and elements of quantum field theory of particle physics [points → particles, lines → particles, planes → currents], and hence construct 8x8 = 64 = 28 + 36 = 26 + 38, and 10x10 = 100 = 64 + 36 = 74 + 26 hyperspace representations of the SU(3) symmetry of elementary particle physics and quasicrystals of condensed matter (solid state) physics, respectively. As a result, we predict the Cabibbo-like angles in leptonic decay of hadrons in elementary-particle physics and the observed 10-fold symmetric diffraction pattern of quasi-crystals.

  17. Cross-validation and hypothesis testing in neuroimaging: an irenic comment on the exchange between Friston and Lindquist et al

    PubMed Central

    Reiss, Philip T.

    2016-01-01

    The “ten ironic rules for statistical reviewers” presented by Friston (2012) prompted a rebuttal by Lindquist et al. (2013), which was followed by a rejoinder by Friston (2013). A key issue left unresolved in this discussion is the use of cross-validation to test the significance of predictive analyses. This note discusses the role that cross-validation-based and related hypothesis tests have come to play in modern data analyses, in neuroimaging and other fields. It is shown that such tests need not be suboptimal and can fill otherwise-unmet inferential needs. PMID:25918034

  18. A leave-one-out cross-validation SAS macro for the identification of markers associated with survival.

    PubMed

    Rushing, Christel; Bulusu, Anuradha; Hurwitz, Herbert I; Nixon, Andrew B; Pang, Herbert

    2015-02-01

    A proper internal validation is necessary for the development of a reliable and reproducible prognostic model for external validation. Variable selection is an important step for building prognostic models. However, not many existing approaches couple the ability to specify the number of covariates in the model with a cross-validation algorithm. We describe a user-friendly SAS macro that implements a score selection method and a leave-one-out cross-validation approach. We discuss the method and applications behind this algorithm, as well as details of the SAS macro.

  19. Credible Intervals for Precision and Recall Based on a K-Fold Cross-Validated Beta Distribution.

    PubMed

    Wang, Yu; Li, Jihong

    2016-08-01

    In typical machine learning applications such as information retrieval, precision and recall are two commonly used measures for assessing an algorithm's performance. Symmetrical confidence intervals based on K-fold cross-validated t distributions are widely used for the inference of precision and recall measures. As we confirmed through simulated experiments, however, these confidence intervals often exhibit lower degrees of confidence, which may easily lead to liberal inference results. Thus, it is crucial to construct faithful confidence (credible) intervals for precision and recall with a high degree of confidence and a short interval length. In this study, we propose two posterior credible intervals for precision and recall based on K-fold cross-validated beta distributions. The first credible interval for precision (or recall) is constructed based on the beta posterior distribution inferred by all K data sets corresponding to K confusion matrices from a K-fold cross-validation. Second, considering that each data set corresponding to a confusion matrix from a K-fold cross-validation can be used to infer a beta posterior distribution of precision (or recall), the second proposed credible interval for precision (or recall) is constructed based on the average of K beta posterior distributions. Experimental results on simulated and real data sets demonstrate that the first credible interval proposed in this study almost always resulted in degrees of confidence greater than 95%. With an acceptable degree of confidence, both of our two proposed credible intervals have shorter interval lengths than those based on a corrected K-fold cross-validated t distribution. Meanwhile, the average ranks of these two credible intervals are superior to that of the confidence interval based on a K-fold cross-validated t distribution for the degree of confidence and are superior to that of the confidence interval based on a corrected K-fold cross-validated t distribution for the
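
    A minimal sketch of the first construction described above (under assumptions: a uniform Beta(1, 1) prior and invented fold counts): the true-positive and false-positive counts from the K confusion matrices are pooled into a single Beta posterior for precision, and the credible interval is read off its quantiles. The averaged-posterior variant proceeds analogously, with one Beta posterior per fold.

```python
import numpy as np
from scipy.stats import beta

# Invented confusion-matrix counts from a 10-fold cross-validation: (TP, FP) per fold
folds = [(18, 4), (20, 3), (17, 5), (19, 2), (21, 4),
         (16, 6), (18, 3), (22, 2), (19, 5), (20, 4)]

tp = sum(t for t, _ in folds)
fp = sum(f for _, f in folds)

# Posterior for precision with a uniform Beta(1, 1) prior (an assumption, not necessarily the paper's choice)
posterior = beta(tp + 1, fp + 1)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"precision estimate = {tp / (tp + fp):.3f}, 95% credible interval = ({lo:.3f}, {hi:.3f})")
```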

  20. Cross-validation and hypothesis testing in neuroimaging: An irenic comment on the exchange between Friston and Lindquist et al.

    PubMed

    Reiss, Philip T

    2015-08-01

    The "ten ironic rules for statistical reviewers" presented by Friston (2012) prompted a rebuttal by Lindquist et al. (2013), which was followed by a rejoinder by Friston (2013). A key issue left unresolved in this discussion is the use of cross-validation to test the significance of predictive analyses. This note discusses the role that cross-validation-based and related hypothesis tests have come to play in modern data analyses, in neuroimaging and other fields. It is shown that such tests need not be suboptimal and can fill otherwise-unmet inferential needs.

  1. Cross-validation of recent and longstanding resting metabolic rate prediction equations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Resting metabolic rate (RMR) measurement is time consuming and requires specialized equipment. Prediction equations provide an easy method to estimate RMR; however, their accuracy likely varies across individuals. Understanding the factors that influence predicted RMR accuracy at the individual lev...

  2. Characterization of Small Focal Renal Lesions: Diagnostic Accuracy with Single-Phase Contrast-enhanced Dual-Energy CT with Material Attenuation Analysis Compared with Conventional Attenuation Measurements.

    PubMed

    Marin, Daniele; Davis, Drew; Roy Choudhury, Kingshuk; Patel, Bhavik; Gupta, Rajan T; Mileto, Achille; Nelson, Rendon C

    2017-03-28

    Purpose: To determine whether single-phase contrast material-enhanced dual-energy material attenuation analysis improves the characterization of small (1-4 cm) renal lesions compared with conventional attenuation measurements by using histopathologic analysis and follow-up imaging as the clinical reference standards. Materials and Methods: In this retrospective, HIPAA-compliant, institutional review board-approved study, 136 consecutive patients (95 men and 41 women; mean age, 54 years) with 144 renal lesions (111 benign, 33 malignant) measuring 1-4 cm underwent single-energy unenhanced and contrast-enhanced dual-energy computed tomography (CT) of the abdomen. For each renal lesion, attenuation measurements were obtained; attenuation change of greater than or equal to 15 HU was considered evidence of enhancement. Dual-energy attenuation measurements were also obtained by using iodine-water, water-iodine, calcium-water, and water-calcium material basis pairs. Mean lesion attenuation values and material densities were compared between benign and malignant renal lesions by using the two-sample t test. Diagnostic accuracy of attenuation measurements and dual-energy material densities was assessed and validated by using 10-fold cross-validation to limit the effect of optimistic bias. Results: By using cross-validated optimal thresholds at 100% sensitivity, iodine-water material attenuation images significantly improved specificity for differentiating between benign and malignant renal lesions compared with conventional enhancement measurements (93% [103 of 111]; 95% confidence interval: 86%, 97%; vs 81% [90 of 111]; 95% confidence interval: 73%, 88%) (P = .02). Sensitivity with iodine-water and calcium-water material attenuation images was also higher than that with conventional enhancement measurements, although the difference was not statistically significant. Conclusion: Contrast-enhanced dual-energy CT with material attenuation analysis improves specificity for

  3. A Cross-Validation of easyCBM[R] Mathematics Cut Scores in Oregon: 2009-2010. Technical Report #1104

    ERIC Educational Resources Information Center

    Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    In this technical report, we document the results of a cross-validation study designed to identify optimal cut-scores for the use of the easyCBM[R] mathematics test in Oregon. A large sample, randomly split into two groups of roughly equal size, was used for this study. Students' performance classification on the Oregon state test was used as the…

  4. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

  5. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    SciTech Connect

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards; New, Joshua Ryan

    2013-01-01

    Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on computational and memory requirements for working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real-world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.

  6. Sound quality indicators for urban places in Paris cross-validated by Milan data.

    PubMed

    Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre

    2015-10-01

    A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross-validation of the quality models extracted from Paris data was carried out by conducting the same survey in Milan. The proposed sound quality general model is correlated with the real perceived sound quality (72%). Another model without visual amenity and familiarity is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed by Kohonen's Artificial Neural Network algorithm, and seven specific class models were developed. These specific models attribute more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments assessed by Italian people.

  7. Cross Validation Through Two-dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor; Tay, Keng; Romano, Walter; Li, Shuo

    2016-06-08

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
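
    For contrast with the solution-surface approach above, the baseline it is designed to improve on can be sketched as an ordinary two-parameter grid search under cross-validation. Here the two cost parameters are taken to be scikit-learn SVC's C and the positive-class weight, which is an assumption standing in for the paper's two regularization parameters; the data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

# Imbalanced toy problem standing in for a cost-sensitive learning task
X, y = make_classification(n_samples=800, weights=[0.9, 0.1], random_state=0)

# Two cost/regularization parameters: C and the class-1 misclassification weight
param_grid = {"C": np.logspace(-2, 2, 9),
              "class_weight": [{1: w} for w in (1, 2, 5, 10, 20)]}

search = GridSearchCV(SVC(kernel="linear"), param_grid,
                      cv=StratifiedKFold(5, shuffle=True, random_state=0),
                      scoring="balanced_accuracy")
search.fit(X, y)
print("best parameters:", search.best_params_, "CV score:", round(search.best_score_, 3))
```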

  8. The generalized cross-validation method applied to geophysical linear traveltime tomography

    NASA Astrophysics Data System (ADS)

    Bassrei, A.; Oliveira, N. P.

    2009-12-01

    The oil industry is the major user of Applied Geophysics methods for subsurface imaging. Among different methods, the so-called seismic (or exploration seismology) methods are the most important. Tomography was originally developed for medical imaging and was introduced in exploration seismology in the 1980s. There are two main classes of geophysical tomography: those that use only the traveltimes between sources and receivers, which is a cinematic approach, and those that use the wave amplitude itself, being a dynamic approach. Tomography is a kind of inverse problem, and since inverse problems are usually ill-posed, it is necessary to use some method to reduce their deficiencies. These difficulties of the inverse procedure are associated with the fact that the involved matrix is ill-conditioned. To compensate for this shortcoming, it is appropriate to use some technique of regularization. In this work we make use of regularization with derivative matrices, also called smoothing. There is a crucial problem in regularization, which is the selection of the regularization parameter lambda. We use generalized cross validation (GCV) as a tool for the selection of lambda. GCV chooses the regularization parameter associated with the best average prediction for all possible omissions of one datum, corresponding to the minimizer of the GCV function. GCV is used for an application in traveltime tomography, where the objective is to obtain the 2-D velocity distribution from the measured values of the traveltimes between sources and receivers. We present results with synthetic data, using a geological model that simulates different features, like a fault and a reservoir. The results using GCV are very good, including those obtained from noise-contaminated data, and also using different regularization orders, attesting the feasibility of this technique.
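
    A minimal sketch of the GCV criterion described above on a toy linear (Tikhonov-regularized) problem: GCV(lambda) = n * ||(I - A(lambda)) d||^2 / [tr(I - A(lambda))]^2, where A(lambda) is the influence matrix mapping the data to their predictions. The matrix G, the noise level, and the identity penalty are assumptions; the study itself uses derivative (smoothing) matrices, which would simply replace the identity in the penalty term.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_cells = 60, 40
G = rng.random((n_rays, n_cells))                     # toy ray-path (tomography) matrix
m_true = np.ones(n_cells)
m_true[15:25] = 2.0                                   # toy slowness anomaly
d = G @ m_true + rng.normal(scale=0.05, size=n_rays)  # noisy traveltimes

L = np.eye(n_cells)  # identity penalty here; a derivative matrix in the smoothing case

def gcv(lam):
    """GCV(lam) = n * ||(I - A) d||^2 / tr(I - A)^2 with A = G (G'G + lam L'L)^(-1) G'."""
    A = G @ np.linalg.solve(G.T @ G + lam * (L.T @ L), G.T)
    resid = (np.eye(n_rays) - A) @ d
    return n_rays * (resid @ resid) / np.trace(np.eye(n_rays) - A) ** 2

lambdas = np.logspace(-4, 2, 60)
best = lambdas[np.argmin([gcv(l) for l in lambdas])]
print(f"GCV-selected regularization parameter: {best:.4g}")
```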

  9. Accuracy estimation for supervised learning algorithms

    SciTech Connect

    Glover, C.W.; Oblow, E.M.; Rao, N.S.V.

    1997-04-01

    This paper illustrates the relative merits of three methods - k-fold Cross Validation, Error Bounds, and Incremental Halting Test - to estimate the accuracy of a supervised learning algorithm. For each of the three methods we point out the problem they address, some of the important assumptions they are based on, and illustrate them through an example. Finally, we discuss the relative advantages and disadvantages of each method.
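
    A minimal sketch of the first two methods compared above on a toy problem (the incremental halting test is omitted): a 10-fold cross-validation estimate of accuracy, and a hold-out estimate with a distribution-free Hoeffding-style error bound. The data, the classifier, and the 95% confidence level are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000)

# (1) k-fold cross-validation estimate of accuracy
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# (2) Hold-out estimate with a distribution-free (Hoeffding) error bound
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
delta = 0.05
eps = np.sqrt(np.log(2 / delta) / (2 * len(y_te)))
print(f"hold-out accuracy: {acc:.3f}, within +/-{eps:.3f} of true accuracy with prob {1 - delta}")
```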

  10. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  11. The computation of generalized cross-validation functions through householder tridiagonalization with applications to the fitting of interaction spline models

    NASA Technical Reports Server (NTRS)

    Gu, Chong; Bates, Douglas M.; Chen, Zehua; Wahba, Grace

    1989-01-01

    An efficient algorithm for computing the generalized cross-validation function for the general cross-validated regularization/smoothing problem is provided. This algorithm is appropriate for problems where no natural structure is available, and the regularization/smoothing problem is solved (exactly) in a reproducing kernel Hilbert space. It is particularly appropriate for certain multivariate smoothing problems with irregularly spaced data, and certain remote sensing problems, such as those that occur in meteorology, where the sensors are arranged irregularly. The algorithm is applied to the fitting of interaction spline models with irregularly spaced data and two smoothing parameters; favorable timing results are presented. The algorithm may be extended to the computation of certain generalized maximum likelihood (GML) functions. Application of the GML algorithm to a problem in numerical weather forecasting, and to a broad class of hypothesis testing problems, is noted.

  12. Calibration and Cross-Validation of the ActiGraph wGT3X+ Accelerometer for the Estimation of Physical Activity Intensity in Children with Intellectual Disabilities

    PubMed Central

    McGarty, Arlene M.; Penpraze, Victoria; Melville, Craig A.

    2016-01-01

    Background: Valid objective measurement is integral to increasing our understanding of physical activity and sedentary behaviours. However, no population-specific cut points have been calibrated for children with intellectual disabilities. Therefore, this study aimed to calibrate and cross-validate the first population-specific accelerometer intensity cut points for children with intellectual disabilities. Methods: Fifty children with intellectual disabilities were randomly assigned to the calibration (n = 36; boys = 28, 9.53±1.08yrs) or cross-validation (n = 14; boys = 9, 9.57±1.16yrs) group. Participants completed a semi-structured school-based activity session, which included various activities ranging from sedentary to vigorous intensity. Direct observation (SOFIT tool) was used to calibrate the ActiGraph wGT3X+, which participants wore on the right hip. Receiver Operating Characteristic curve analyses determined the optimal cut points for sedentary, moderate, and vigorous intensity activity for the vertical axis and vector magnitude. Classification agreement was investigated using sensitivity, specificity, total agreement, and Cohen’s kappa scores against the criterion measure of SOFIT. Results: The optimal (AUC = .87−.94) vertical axis cut points (cpm) were ≤507 (sedentary), 1008−2300 (moderate), and ≥2301 (vigorous), which demonstrated high sensitivity (81−88%) and specificity (81−85%). The optimal (AUC = .86−.92) vector magnitude cut points (cpm) of ≤1863 (sedentary), 2610−4214 (moderate), and ≥4215 (vigorous) demonstrated comparable, albeit marginally lower, accuracy than the vertical axis cut points (sensitivity = 80−86%; specificity = 77−82%). Classification agreement ranged from moderate to almost perfect (κ = .51−.85) with high sensitivity and specificity, and confirmed the trend that accuracy increased with intensity, and vertical axis cut points provide higher classification agreement than vector magnitude cut points
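
    A minimal sketch of the calibration step described above, with invented data in place of the SOFIT-labelled accelerometer counts: for each intensity category, "at or above this intensity" is treated as the positive class, and the counts-per-minute threshold maximizing Youden's J on the ROC curve is taken as the cut point. The study's exact criterion for choosing cut points may differ.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Toy accelerometer output (counts per minute) with directly observed intensity labels
# 0 = sedentary, 1 = light, 2 = moderate, 3 = vigorous
labels = rng.integers(0, 4, size=500)
cpm = np.clip(labels * 1200 + rng.normal(scale=400, size=500), 0, None)

def optimal_cut_point(cpm, labels, min_intensity):
    """Counts-per-minute threshold maximizing Youden's J for 'intensity >= min_intensity'."""
    positive = (labels >= min_intensity).astype(int)
    fpr, tpr, thresholds = roc_curve(positive, cpm)
    return thresholds[np.argmax(tpr - fpr)]

for name, level in [("moderate", 2), ("vigorous", 3)]:
    print(f"{name} cut point ~ {optimal_cut_point(cpm, labels, level):.0f} cpm")
```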

  13. Predicting Chinese Children and Youth's Energy Expenditure Using ActiGraph Accelerometers: A Calibration and Cross-Validation Study

    ERIC Educational Resources Information Center

    Zhu, Zheng; Chen, Peijie; Zhuang, Jie

    2013-01-01

    Purpose: The purpose of this study was to develop and cross-validate an equation based on ActiGraph accelerometer GT3X output to predict children and youth's energy expenditure (EE) of physical activity (PA). Method: Participants were 367 Chinese children and youth (179 boys and 188 girls, aged 9 to 17 years old) who wore 1 ActiGraph GT3X…

  14. The female sexual function index (FSFI): cross-validation and development of clinical cutoff scores.

    PubMed

    Wiegel, Markus; Meston, Cindy; Rosen, Raymond

    2005-01-01

    , independently. Discriminant validity testing confirmed the ability of both total and domain scores to differentiate between functional and nondysfunctional women. On the basis of sensitivity and specificity analyses and the CART procedure, we found an FSFI total score of 26.55 to be the optimal cut score for differentiating women with and without sexual dysfunction. On the basis of this cut-off, we found 70.7% of women with sexual dysfunction and 88.1% of the sexually functional women in the cross-validation sample to be correctly classified. Addition of the lubrication score in the model resulted in slightly improved specificity (from .707 to .772) at a slight cost of sensitivity (from .881 to .854) for identifying women without sexual dysfunction. We discuss the results in terms of potential strengths and weaknesses of the FSFI, as well as in terms of further clinical and research implications.

  15. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821

  16. Quantification of rainfall prediction uncertainties using a cross-validation based technique. Methodology description and experimental validation.

    NASA Astrophysics Data System (ADS)

    Fraga, Ignacio; Cea, Luis; Puertas, Jerónimo; Salsón, Santiago; Petazzi, Alberto

    2016-04-01

    In this paper we present a new methodology to compute rainfall fields including the quantification of prediction uncertainties using raingauge network data. The proposed methodology comprises two steps. Firstly, the ordinary kriging technique is used to determine the estimated rainfall depth at every point of the study area. Then multiple equi-probable error fields, which comprise both interpolation and measuring uncertainties, are added to the kriged field, resulting in multiple rainfall predictions. To compute these error fields, first the standard deviation of the kriging estimation is determined following the cross-validation-based procedure described in Delrieu et al. (2014). Then, the standard deviation field is sampled using non-conditioned Gaussian random fields. The proposed methodology was applied to study 7 rain events in a 60x60 km area of the west coast of Galicia, in the Northwest of Spain. Due to its location at the junction between tropical and polar regions, the study area suffers from frequent intense rainfalls characterized by a great variability in terms of both space and time. Rainfall data from the tipping bucket raingauge network operated by MeteoGalicia were used to estimate the rainfall fields using the proposed methodology. The obtained predictions were then validated using rainfall data from 3 additional rain gauges installed within the CAPRI project (Probabilistic flood prediction with high resolution hydrologic models from radar rainfall estimates, funded by the Spanish Ministry of Economy and Competitiveness. Reference CGL2013-46245-R.). Results show that both the mean hyetographs and the peak intensities are correctly predicted. The computed hyetographs present a good fit to the experimental data and most of the measured values fall within the 95% confidence intervals. Also, most of the experimental values outside the confidence bounds correspond to time periods of low rainfall depths, where the inaccuracy of the measuring devices
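
    The ensemble-generation step (adding sigma-scaled, spatially correlated, non-conditioned Gaussian noise to the kriged field) can be sketched as follows; the kriged field, the standard-deviation field, the correlation length, and the ensemble size below are placeholders, not outputs of the cross-validation procedure of Delrieu et al. (2014).

```python
# Minimal sketch: build equi-probable rainfall realisations by adding spatially correlated,
# sigma-scaled noise to a kriged field (all fields and parameters are invented placeholders).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
ny, nx = 60, 60
kriged = rng.gamma(2.0, 2.0, size=(ny, nx))          # placeholder kriged rainfall depth (mm)
sigma_k = 0.2 * kriged + 0.5                          # placeholder kriging std-dev field (mm)

def correlated_unit_field(shape, corr_px=5):
    """White noise smoothed to the desired correlation length, rescaled to unit variance."""
    f = gaussian_filter(rng.standard_normal(shape), corr_px)
    return (f - f.mean()) / f.std()

# multiple equi-probable realisations: kriged estimate + sigma-scaled correlated noise
ensemble = np.stack([kriged + sigma_k * correlated_unit_field((ny, nx)) for _ in range(200)])
p_low, p_high = np.percentile(ensemble, [2.5, 97.5], axis=0)   # pointwise 95% prediction band
```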

  17. Embedded Performance Validity Measures with Postdeployment Veterans: Cross-Validation and Efficiency with Multiple Measures.

    PubMed

    Shura, Robert D; Miskey, Holly M; Rowland, Jared A; Yoash-Gantz, Ruth E; Denning, John H

    2016-01-01

    Embedded validity measures support comprehensive assessment of performance validity. The purpose of this study was to evaluate the accuracy of individual embedded measures and to reduce them to the most efficient combination. The sample included 212 postdeployment veterans (average age = 35 years, average education = 14 years). Thirty embedded measures were initially identified as predictors of Green's Word Memory Test (WMT) and were derived from the California Verbal Learning Test-Second Edition (CVLT-II), Conners' Continuous Performance Test-Second Edition (CPT-II), Trail Making Test, Stroop, Wisconsin Card Sorting Test-64, the Wechsler Adult Intelligence Scale-Third Edition Letter-Number Sequencing, Rey Complex Figure Test (RCFT), Brief Visuospatial Memory Test-Revised, and the Finger Tapping Test. Eight nonoverlapping measures with the highest area-under-the-curve (AUC) values were retained for entry into a logistic regression analysis. Embedded measure accuracy was also compared to cutoffs found in the existing literature. Twenty-one percent of the sample failed the WMT. Previously developed cutoffs for individual measures showed poor sensitivity (SN) in the current sample except for the CPT-II (Total Errors, SN = .41). The CVLT-II (Trials 1-5 Total) showed the best overall accuracy (AUC = .80). After redundant measures were statistically eliminated, the model included the RCFT (Recognition True Positives), CPT-II (Total Errors), and CVLT-II (Trials 1-5 Total) and increased overall accuracy compared with the CVLT-II alone (AUC = .87). The combination of just 3 measures from the CPT-II, CVLT-II, and RCFT was the most accurate/efficient in predicting WMT performance.

  18. Use of n-fold cross-validation to evaluate three methods to calculate heavy truck annual average daily traffic and vehicle miles traveled.

    PubMed

    Hallmark, Shauna L; Souleyrette, Reginald; Lamptey, Stephen

    2007-01-01

    Reliable estimates of heavy-truck volumes in the United States are important in a number of transportation applications including pavement design and management, traffic safety, and traffic operations. Additionally, because heavy vehicles emit pollutants at much higher rates than passenger vehicles, reliable volume estimates are critical to computing accurate inventories of on-road emissions. Accurate baseline inventories are also necessary to forecast future scenarios. The research presented in this paper evaluated three different methods commonly used by transportation agencies to estimate annual average daily traffic (AADT), which is used to determine vehicle miles traveled (VMT). Traffic data from continuous count stations provided by the Iowa Department of Transportation were used to estimate AADT for single-unit and multiunit trucks for rural freeways and rural primary highways using the three methods. The first method developed general expansion factors, which apply to all vehicles. AADT, representing all vehicles, was estimated for short-term counts and was multiplied by statewide average truck volumes for the corresponding roadway type to obtain AADT for each truck category. The second method also developed general expansion factors and AADT estimates. Truck AADT for the second method was calculated by multiplying the general AADT by truck volumes from the short-term counts. The third method developed expansion factors specific to each truck group. AADT estimates for each truck group were estimated from short-term counts using corresponding expansion factors. Accuracy of the three methods was determined by comparing actual AADT from count station data to estimates from the three methods. Accuracy of the three methods was compared using n-fold cross-validation. Mean squared error of prediction was used to estimate the difference between estimated and actual AADT. Prediction error was lowest for the method that developed separate expansion factors for trucks
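
    The evaluation design, independent of the specific expansion-factor procedures, is an n-fold cross-validation in which each method is scored by the mean squared error of prediction on held-out count stations. A minimal sketch of that loop is given below; the two "methods" and the synthetic AADT values are stand-ins for the procedures described in the abstract.

```python
# Minimal sketch of comparing estimation methods with n-fold cross-validation and
# mean squared error of prediction (synthetic stand-ins for the AADT methods).
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(42)
truth = rng.gamma(3.0, 500.0, 200)                      # "actual" truck AADT at count stations

def method_a(train_idx, test_idx):
    # e.g., apply a statewide average estimated on the training stations (placeholder)
    return np.full(len(test_idx), truth[train_idx].mean())

def method_b(train_idx, test_idx):
    # e.g., a noisier site-specific estimate (synthetic stand-in, not a real procedure)
    return truth[test_idx] * rng.normal(1.0, 0.15, len(test_idx))

for name, method in [("general factors", method_a), ("truck-specific factors", method_b)]:
    errs = []
    for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(truth):
        pred = method(train_idx, test_idx)
        errs.append(np.mean((pred - truth[test_idx]) ** 2))
    print(f"{name}: mean squared error of prediction = {np.mean(errs):,.0f}")
```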

  19. Accuracy of genomic selection in European maize elite breeding populations.

    PubMed

    Zhao, Yusheng; Gowda, Manje; Liu, Wenxin; Würschum, Tobias; Maurer, Hans P; Longin, Friedrich H; Ranc, Nicolas; Reif, Jochen C

    2012-03-01

    Genomic selection is a promising breeding strategy for rapid improvement of complex traits. The objective of our study was to investigate the prediction accuracy of genomic breeding values through cross validation. The study was based on experimental data of six segregating populations from a half-diallel mating design with 788 testcross progenies from an elite maize breeding program. The plants were intensively phenotyped in multi-location field trials and fingerprinted with 960 SNP markers. We used random regression best linear unbiased prediction in combination with fivefold cross validation. The prediction accuracy across populations was higher for grain moisture (0.90) than for grain yield (0.58). The accuracy of genomic selection realized for grain yield corresponds to the precision of phenotyping at unreplicated field trials in 3-4 locations. As up to three generations per year are feasible for maize, selection gain per unit time is high and, consequently, genomic selection holds great promise for maize breeding programs.
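
    The accuracy figures quoted above come from correlating cross-validated predictions with observed testcross performance. The sketch below reproduces that calculation with simulated genotypes and phenotypes, using sklearn's Ridge as a stand-in for random regression BLUP; the marker count, fold number, and shrinkage parameter are illustrative.

```python
# Minimal sketch of fivefold cross-validated prediction accuracy (accuracy = Pearson correlation
# between observed and predicted phenotypes); data simulated, Ridge stands in for RR-BLUP.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(7)
n_lines, n_markers = 400, 960
M = rng.binomial(2, 0.5, size=(n_lines, n_markers)).astype(float)   # SNP genotypes (0/1/2)
effects = rng.normal(0, 0.05, n_markers)
y = M @ effects + rng.normal(0, 1.0, n_lines)                        # phenotypes

accs = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(M):
    model = Ridge(alpha=100.0).fit(M[train], y[train])
    accs.append(np.corrcoef(y[test], model.predict(M[test]))[0, 1])
print(f"fivefold CV prediction accuracy: {np.mean(accs):.2f}")
```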

  20. Boosted leave-many-out cross-validation: the effect of training and test set diversity on PLS statistics.

    PubMed

    Clark, Robert D

    2003-01-01

    It is becoming increasingly common in quantitative structure/activity relationship (QSAR) analyses to use external test sets to evaluate the likely stability and predictivity of the models obtained. In some cases, such as those involving variable selection, an internal test set--i.e., a cross-validation set--is also used. Care is sometimes taken to ensure that the subsets used exhibit response and/or property distributions similar to those of the data set as a whole, but more often the individual observations are simply assigned 'at random.' In the special case of MLR without variable selection, it can be analytically demonstrated that this strategy is inferior to others. Most particularly, D-optimal design performs better if the form of the regression equation is known and the variables involved are well behaved. This report introduces an alternative, non-parametric approach termed 'boosted leave-many-out' (boosted LMO) cross-validation. In this method, relatively small training sets are chosen by applying optimizable k-dissimilarity selection (OptiSim) using a small subsample size (k = 4, in this case), with the unselected observations being reserved as a test set for the corresponding reduced model. Predictive errors for the full model are then estimated by aggregating results over several such analyses. The countervailing effects of training and test set size, diversity, and representativeness on PLS model statistics are described for CoMFA analysis of a large data set of COX2 inhibitors.

  1. Bayesian cross-validation for model evaluation and selection, with application to the North American Breeding Bird Survey

    USGS Publications Warehouse

    Link, William; Sauer, John R.

    2016-01-01

    The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion. We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
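
    Of the quantities mentioned, the Watanabe-Akaike information criterion is the most mechanical to compute once posterior samples of the pointwise log-likelihood are available: WAIC = −2(lppd − p_WAIC). A minimal numpy/scipy sketch is shown below; the log-likelihood draws are simulated placeholders, not output from a BBS model.

```python
# Minimal sketch of the WAIC computation from posterior samples of pointwise log-likelihoods
# (log_lik has shape [n_posterior_draws, n_observations]); values here are simulated.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
S, n = 2000, 150
log_lik = rng.normal(-1.0, 0.3, size=(S, n))            # placeholder posterior log-likelihood draws

lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))   # log pointwise predictive density
p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))        # effective number of parameters
waic = -2 * (lppd - p_waic)                             # deviance scale; smaller is better
print(f"lppd = {lppd:.1f}, p_waic = {p_waic:.1f}, WAIC = {waic:.1f}")
```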

  2. Cross-validation of the reduced form of the Food Craving Questionnaire-Trait using confirmatory factor analysis

    PubMed Central

    Iani, Luca; Barbaranelli, Claudio; Lombardo, Caterina

    2015-01-01

    Objective: The Food Craving Questionnaire-Trait (FCQ-T) is commonly used to assess habitual food cravings among individuals. Previous studies have shown that a brief version of this instrument (FCQ-T-r) has good reliability and validity. This article is the first to use Confirmatory factor analysis to examine the psychometric properties of the FCQ-T-r in a cross-validation study. Method: Habitual food cravings, as well as emotion regulation strategies, affective states, and disordered eating behaviors, were investigated in two independent samples of non-clinical adult volunteers (Sample 1: N = 368; Sample 2: N = 246). Confirmatory factor analyses were conducted to simultaneously test model fit statistics and dimensionality of the instrument. FCQ-T-r reliability was assessed by computing the composite reliability coefficient. Results: Analysis supported the unidimensional structure of the scale and fit indices were acceptable for both samples. The FCQ-T-r showed excellent reliability and moderate to high correlations with negative affect and disordered eating. Conclusion: Our results indicate that the FCQ-T-r scores can be reliably used to assess habitual cravings in an Italian non-clinical sample of adults. The robustness of these results is tested by a cross-validation of the model using two independent samples. Further research is required to expand on these findings, particularly in children and adolescents. PMID:25918510

  3. Fear factors: cross validation of specific phobia domains in a community-based sample of African American adults.

    PubMed

    Chapman, L Kevin; Vines, Lauren; Petrie, Jenny

    2011-05-01

    The current study attempted a cross-validation of specific phobia domains in a community-based sample of African American adults based on a previous model of phobia domains in a college student sample of African Americans. Subjects were 100 African American community-dwelling adults who completed the Fear Survey Schedule-Second Edition (FSS-II). Domains of fear were created using a similar procedure as the original, college sample of African American adults. A model including all of the phobia domains from the FSS-II was initially tested and resulted in poor model fit. Cross-validation was subsequently attempted through examining the original factor pattern of specific phobia domains from the college sample (Chapman, Kertz, Zurlage, & Woodruff-Borden, 2008). Data from the current, community based sample of African American adults provided poor fit to this model. The trimmed model for the current sample included the animal and social anxiety factors as in the original model. The natural environment-type specific phobia factor did not provide adequate fit for the community-based sample of African Americans. Results indicated that although different factor loading patterns of fear may exist among community-based African Americans as compared to African American college students, both animal and social fears are nearly identical in both groups, indicating a possible cultural homogeneity for phobias in African Americans. Potential explanations of these findings and future directions are discussed.

  4. Bayesian cross-validation for model evaluation and selection, with application to the North American Breeding Bird Survey.

    PubMed

    Link, William A; Sauer, John R

    2016-07-01

    The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion. We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.

  5. Revealing Latent Value of Clinically Acquired CTs of Traumatic Brain Injury Through Multi-Atlas Segmentation in a Retrospective Study of 1,003 with External Cross-Validation.

    PubMed

    Plassard, Andrew J; Kelly, Patrick D; Asman, Andrew J; Kang, Hakmook; Patel, Mayur B; Landman, Bennett A

    2015-03-20

    Medical imaging plays a key role in guiding treatment of traumatic brain injury (TBI) and for diagnosing intracranial hemorrhage; most commonly rapid computed tomography (CT) imaging is performed. Outcomes for patients with TBI are variable and difficult to predict upon hospital admission. Quantitative outcome scales (e.g., the Marshall classification) have been proposed to grade TBI severity on CT, but such measures have had relatively low value in staging patients by prognosis. Herein, we examine a cohort of 1,003 subjects admitted for TBI and imaged clinically to identify potential prognostic metrics using a "big data" paradigm. For all patients, a brain scan was segmented with multi-atlas labeling, and intensity/volume/texture features were computed in a localized manner. In a 10-fold cross-validation approach, the explanatory value of the image-derived features is assessed for length of hospital stay (days), discharge disposition (five point scale from death to return home), and the Rancho Los Amigos functional outcome score (Rancho Score). Image-derived features increased the predictive R² to 0.38 (from 0.18) for length of stay, to 0.51 (from 0.4) for discharge disposition, and to 0.31 (from 0.16) for Rancho Score (over models consisting only of non-imaging admission metrics, but including positive/negative radiological CT findings). This study demonstrates that high volume retrospective analysis of clinical imaging data can reveal imaging signatures with prognostic value. These targets are suited for follow-up validation and represent targets for future feature selection efforts. Moreover, the increase in prognostic value would improve staging for intervention assessment and provide more reliable guidance for patients.
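
    The reported gains are increases in cross-validated explained variance when image-derived features are added to admission covariates. The sketch below shows that comparison pattern with 10-fold cross-validation on simulated data; the feature sets, the model (ordinary least squares), and the sample size are placeholders, not the study's pipeline.

```python
# Minimal sketch of the evaluation design: 10-fold cross-validated R^2 for a model using
# admission covariates only versus admission covariates plus image-derived features
# (all data simulated; feature names are placeholders).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n = 1003
admission = rng.standard_normal((n, 5))          # e.g., age, admission scores, radiological flags
imaging = rng.standard_normal((n, 20))           # e.g., volume/intensity/texture features
outcome = admission @ rng.normal(0, 1, 5) + imaging @ rng.normal(0, 0.5, 20) + rng.normal(0, 2, n)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, X in [("admission only", admission), ("admission + imaging", np.hstack([admission, imaging]))]:
    pred = cross_val_predict(LinearRegression(), X, outcome, cv=cv)
    print(f"{name}: cross-validated R^2 = {r2_score(outcome, pred):.2f}")
```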

  6. Cross-validation of a portable, six-degree-of-freedom load cell for use in lower-limb prosthetics research.

    PubMed

    Koehler, Sara R; Dhaher, Yasin Y; Hansen, Andrew H

    2014-04-11

    The iPecs load cell is a lightweight, six-degree-of-freedom force transducer designed to fit easily into an endoskeletal prosthesis via a universal mounting interface. Unlike earlier tethered systems, it is capable of wireless data transmission and on-board memory storage, which facilitate its use in both clinical and real-world settings. To date, however, the validity of the iPecs load cell has not been rigorously established, particularly for loading conditions that represent typical prosthesis use. The aim of this study was to assess the accuracy of an iPecs load cell during in situ human subject testing by cross-validating its force and moment measurements with those of a typical gait analysis laboratory. Specifically, the gait mechanics of a single person with transtibial amputation were simultaneously measured using an iPecs load cell, multiple floor-mounted force platforms, and a three-dimensional motion capture system. Overall, the forces and moments measured by the iPecs were highly correlated with those measured by the gait analysis laboratory (r>0.86) and RMSEs were less than 3.4% and 5.2% full scale output across all force and moment channels, respectively. Despite this favorable comparison, however, the results of a sensitivity analysis suggest that care should be taken to accurately identify the axes and instrumentation center of the load cell in situations where iPecs data will be interpreted in a coordinate system other than its own (e.g., inverse dynamics analysis).

  7. A cross-validation procedure for stopping the EM algorithm and deconvolution of neutron depth profiling spectra

    SciTech Connect

    Coakley, K.J.

    1991-02-01

    The iterative EM algorithm is used to deconvolve neutron depth profiling spectra. Because of statistical noise in the data, artifacts in the estimated particle emission rate profile appear after too many iterations of the EM algorithm. To avoid artifacts, the EM algorithm is stopped using a cross-validation procedure. The data are split into two independent halves. The EM algorithm is applied to one half of the data to get an estimate of the emission rates. The algorithm is stopped when the conditional likelihood of the other half of the data passes through its maximum. The roles of the two halves of the data are then switched to get a second estimate of the emission rates. The two estimates are then averaged.
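
    The stopping rule is easy to express in code: iterate the EM update on one half of the counts while monitoring the Poisson log-likelihood of the other half, and stop when that likelihood passes its maximum. The sketch below uses a Richardson-Lucy-type update with a synthetic Gaussian response kernel and simulated counts; the kernel, the profile, and the iteration cap are assumptions made only for illustration.

```python
# Minimal sketch of cross-validated EM stopping for a Poisson deconvolution problem
# (synthetic stand-in for a neutron depth-profiling spectrum).
import numpy as np

rng = np.random.default_rng(0)
n = 80
depth = np.arange(n)
true_profile = np.exp(-0.5 * ((depth - 30) / 6.0) ** 2) * 50           # emission-rate profile
K = np.exp(-0.5 * ((depth[:, None] - depth[None, :]) / 3.0) ** 2)       # response (blurring) kernel
K /= K.sum(axis=0, keepdims=True)                                       # columns sum to 1
expected = K @ true_profile
half1, half2 = rng.poisson(expected / 2), rng.poisson(expected / 2)     # two independent halves

def em_with_cv_stop(y_fit, y_val, n_iter=500):
    x = np.full(n, y_fit.mean())                 # flat initial estimate
    best_ll, best_x = -np.inf, x
    for _ in range(n_iter):
        pred = K @ x
        x = x * (K.T @ (y_fit / np.maximum(pred, 1e-12)))               # EM / Richardson-Lucy update
        val_pred = K @ x
        ll = np.sum(y_val * np.log(np.maximum(val_pred, 1e-12)) - val_pred)
        if ll < best_ll:                         # held-out likelihood passed its maximum: stop
            break
        best_ll, best_x = ll, x
    return best_x

# average the two estimates obtained by swapping the roles of the halves
profile = 0.5 * (em_with_cv_stop(half1, half2) + em_with_cv_stop(half2, half1))
```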

  8. Significant Association of Urinary Toxic Metals and Autism-Related Symptoms—A Nonlinear Statistical Analysis with Cross Validation

    PubMed Central

    Adams, James; Kruger, Uwe; Geis, Elizabeth; Gehn, Eva; Fimbres, Valeria; Pollard, Elena; Mitchell, Jessica; Ingram, Julie; Hellmers, Robert; Quig, David; Hahn, Juergen

    2017-01-01

    Introduction A number of previous studies examined a possible association of toxic metals and autism, and over half of those studies suggest that toxic metal levels are different in individuals with Autism Spectrum Disorders (ASD). Additionally, several studies found that those levels correlate with the severity of ASD. Methods In order to further investigate these points, this paper performs the most detailed statistical analysis to date of a data set in this field. First morning urine samples were collected from 67 children and adults with ASD and 50 neurotypical controls of similar age and gender. The samples were analyzed to determine the levels of 10 urinary toxic metals (UTM). Autism-related symptoms were assessed with eleven behavioral measures. Statistical analysis was used to distinguish participants on the ASD spectrum and neurotypical participants based upon the UTM data alone. The analysis also included examining the association of autism severity with toxic metal excretion data using linear and nonlinear analysis. “Leave-one-out” cross-validation was used to ensure statistical independence of results. Results and Discussion Average excretion levels of several toxic metals (lead, tin, thallium, antimony) were significantly higher in the ASD group. However, ASD classification using univariate statistics proved difficult due to large variability, but nonlinear multivariate statistical analysis significantly improved ASD classification with Type I/II errors of 15% and 18%, respectively. These results clearly indicate that the urinary toxic metal excretion profiles of participants in the ASD group were significantly different from those of the neurotypical participants. Similarly, nonlinear methods determined a significantly stronger association between the behavioral measures and toxic metal excretion. The association was strongest for the Aberrant Behavior Checklist (including subscales on Irritability, Stereotypy, Hyperactivity, and Inappropriate

  9. Simulating California Reservoir Operation Using the Classification and Regression Tree Algorithm Combined with a Shuffled Cross-Validation Scheme

    NASA Astrophysics Data System (ADS)

    Yang, T.; Gao, X.; Sorooshian, S.; Li, X.

    2015-12-01

    The controlled outflows from a reservoir or dam are highly dependent on the decisions made by the reservoir operators, rather than on a natural hydrological process. Differences exist between the natural upstream inflows to reservoirs and the controlled outflows from reservoirs that supply the downstream users. With the decision maker's awareness of changing climate, reservoir management requires adaptable means to incorporate more information into decision making, such as the consideration of policy and regulation, environmental constraints, dry/wet conditions, etc. In this paper, a reservoir outflow simulation model is presented, which incorporates one of the well-developed data-mining models (Classification and Regression Tree) to predict the complicated human-controlled reservoir outflows and extract the reservoir operation patterns. A shuffled cross-validation approach is further implemented to improve the model's predictive performance. An application study of 9 major reservoirs in California is carried out, and the simulated results from different decision-tree approaches, including the original CART and Random Forest, are compared with observations. The statistical measurements show that CART combined with the shuffled cross-validation scheme gives better predictive performance than the other two methods, especially in simulating the peak flows. The results for simulated controlled outflow, storage changes, and storage trajectories also show that the proposed model is able to consistently and reasonably predict human reservoir operation decisions. In addition, we found that the operations in Trinity Lake, Oroville Lake, and Shasta Lake are greatly influenced by policy and regulation, while low-elevation reservoirs are more sensitive to inflow amount than others.
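
    One plausible reading of why the shuffled scheme helps is that shuffling the records before k-fold splitting prevents each fold from being a contiguous, seasonally biased block of the time series. The sketch below illustrates that comparison on a synthetic inflow/outflow series, with sklearn's DecisionTreeRegressor standing in for CART; the data, features, and tree depth are invented.

```python
# Minimal sketch of ordered vs shuffled k-fold cross-validation for a regression tree
# fitted to time-ordered reservoir data (all series are synthetic placeholders).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(8)
t = np.arange(2000)
inflow = 50 + 30 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 5, t.size)   # seasonal inflow
storage = np.cumsum(rng.normal(0, 1, t.size))                                # slow storage drift
X = np.column_stack([inflow, storage, t % 365])
outflow = 0.8 * inflow + 0.05 * storage + rng.normal(0, 3, t.size)

tree = DecisionTreeRegressor(max_depth=6, random_state=0)
for name, cv in [("ordered folds", KFold(5)), ("shuffled folds", KFold(5, shuffle=True, random_state=0))]:
    r2 = cross_val_score(tree, X, outflow, cv=cv, scoring="r2")
    print(f"{name}: mean R^2 = {r2.mean():.3f}")
```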

  10. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically-relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
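
    The comparison protocol itself (mean accuracy over 10-fold cross-validation for classifiers of increasing complexity) can be sketched briefly. In the example below the classifiers are generic sklearn implementations of a subset of the algorithm families named above, and the data are synthetic, so neither the feature space nor the accuracy ordering should be read as the study's.

```python
# Minimal sketch of comparing classifiers by mean 10-fold cross-validated accuracy
# (synthetic six-class data; sklearn models stand in for the paper's algorithms).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier

X, y = make_classification(n_samples=600, n_features=8, n_informative=6, n_classes=6, random_state=0)
models = {
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "EN-DT": BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)          # default scoring: accuracy
    print(f"{name}: mean 10-fold CV accuracy = {scores.mean():.3f}")
```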

  11. Methane Cross-Validation Between Spaceborne Solar Occultation Observations from ACE-FTS, Spaceborne Nadir Sounding from Gosat, and Ground-Based Solar Absorption Measurements, at a High Arctic Site.

    NASA Astrophysics Data System (ADS)

    Holl, G.; Walker, K. A.; Conway, S. A.; Saitoh, N.; Boone, C. D.; Strong, K.; Drummond, J. R.

    2014-12-01

    We present cross-validation of remote sensing observations of methane profiles in the Canadian High Arctic. Methane is the third most important greenhouse gas on Earth, and second only to carbon dioxide in its contribution to anthropogenic global warming. Accurate and precise observations of methane are essential to understand quantitatively its role in the climate system and in global change. The Arctic is a particular region of concern, as melting permafrost and disappearing sea ice might lead to accelerated release of methane into the atmosphere. Global observations require spaceborne instruments, in particular in the Arctic, where surface measurements are sparse and expensive to perform. Satellite-based remote sensing is an underconstrained problem, and specific validation under Arctic circumstances is required. Here, we show a cross-validation between two spaceborne instruments and ground-based measurements, all Fourier Transform Spectrometers (FTS). We consider the Canadian SCISAT ACE-FTS, a solar occultation spectrometer operating since 2004, and the Japanese GOSAT TANSO-FTS, a nadir-pointing FTS operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker Fourier Transform Infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environmental and Atmospheric Research Laboratory (PEARL) at Eureka, Nunavut (80°N, 86°W) since 2006. Measurements are collocated considering temporal, spatial, and geophysical criteria and regridded to a common vertical grid. We perform smoothing on the higher-resolution instrument results to account for different vertical resolutions. Then, profiles of differences for each pair of instruments are examined. Any bias between instruments, or any accuracy that is worse than expected, needs to be understood prior to using the data. The results of the study will serve as a guideline on how to use the vertically resolved methane products from ACE and

  12. Cross-validation of generalised body composition equations with diverse young men and women: the Training Intervention and Genetics of Exercise Response (TIGER) Study

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Generalised skinfold equations developed in the 1970s are commonly used to estimate laboratory-measured percentage fat (BF%). The equations were developed on predominately white individuals using Siri's two-component percentage fat equation (BF%-GEN). We cross-validated the Jackson-Pollock (JP) gene...

  13. Cross-Validation of a Recently Published Equation Predicting Energy Expenditure to Run or Walk a Mile in Normal-Weight and Overweight Adults

    ERIC Educational Resources Information Center

    Morris, Cody E.; Owens, Scott G.; Waddell, Dwight E.; Bass, Martha A.; Bentley, John P.; Loftin, Mark

    2014-01-01

    An equation published by Loftin, Waddell, Robinson, and Owens (2010) was cross-validated using ten normal-weight walkers, ten overweight walkers, and ten distance runners. Energy expenditure was measured at preferred walking (normal-weight walker and overweight walkers) or running pace (distance runners) for 5 min and corrected to a mile. Energy…

  14. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    Summary: In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, model selection is usually based on measures of goodness-of-fit between the model predictions and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed percentage leave out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has widely been adopted in Chemometrics and Econometrics) in RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
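
    Operationally, MCCV repeats a random train/test split many times, whereas LOO leaves out exactly one site per fit. The sketch below contrasts the two on simulated catchment data, with ordinary least squares standing in for GLS regression; the split fraction, number of repetitions, and descriptors are illustrative.

```python
# Minimal sketch contrasting Monte Carlo cross-validation (many random splits) with
# leave-one-out validation for a regression-based prediction equation (simulated data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import ShuffleSplit, LeaveOneOut, cross_val_score

rng = np.random.default_rng(5)
X = rng.standard_normal((80, 4))                                  # catchment descriptors
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(0, 1, 80)     # flood quantile (log space)

mccv = ShuffleSplit(n_splits=200, test_size=0.3, random_state=0)  # Monte Carlo CV
loo = LeaveOneOut()
for name, cv in [("MCCV", mccv), ("LOO", loo)]:
    mse = -cross_val_score(LinearRegression(), X, y, cv=cv, scoring="neg_mean_squared_error")
    print(f"{name}: mean prediction MSE = {mse.mean():.3f}")
```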

  15. Cross-validation and discriminant validity of Adolescent Health Promotion Scale among overweight and nonoverweight adolescents in Taiwan.

    PubMed

    Chen, Mei-Yen; Wang, Edward K; Chang, Chee-Jen

    2006-01-01

    This study used cross-validation and discriminant analysis to evaluate the construct and discriminant validity of the Adolescent Health Promotion (AHP) scale between overweight and nonoverweight adolescents in Taiwan. A cross-sectional survey method was used and 660 adolescents participated in this study. Cluster and discriminant analyses were used to analyze the data. Our findings indicate that the AHP is a valid and reliable scale to discriminate between the health-promoting behaviors of overweight and nonoverweight adolescents. For the total scale, cluster analyses revealed two distinct patterns, which we designated the healthy and unhealthy groups. Discriminant analysis supported this clustering as having good discriminant validity, as nonoverweight adolescents tended to be classified as healthy, while the overweight tended to be in the unhealthy group. In general, overweight adolescents practiced health-related behaviors at a significantly lower frequency than the nonoverweight. These included exercise behavior, stress management, life appreciation, health responsibility, and social support. These findings can be used to further develop and refine knowledge of adolescent overweight and related strategies for intervention.

  16. Simulating California reservoir operation using the classification and regression-tree algorithm combined with a shuffled cross-validation scheme

    NASA Astrophysics Data System (ADS)

    Yang, Tiantian; Gao, Xiaogang; Sorooshian, Soroosh; Li, Xin

    2016-03-01

    The controlled outflows from a reservoir or dam are highly dependent on the decisions made by the reservoir operators, rather than on a natural hydrological process. Differences exist between the natural upstream inflows to reservoirs and the controlled outflows from reservoirs that supply the downstream users. With the decision maker's awareness of changing climate, reservoir management requires adaptable means to incorporate more information into decision making, such as water delivery requirement, environmental constraints, dry/wet conditions, etc. In this paper, a robust reservoir outflow simulation model is presented, which incorporates one of the well-developed data-mining models (Classification and Regression Tree) to predict the complicated human-controlled reservoir outflows and extract the reservoir operation patterns. A shuffled cross-validation approach is further implemented to improve CART's predictive performance. An application study of nine major reservoirs in California is carried out. Results produced by the enhanced CART, original CART, and random forest are compared with observations. The statistical measurements show that the enhanced CART and random forest outperform the CART control run in general, and the enhanced CART algorithm gives better predictive performance than random forest in simulating the peak flows. The results also show that the proposed model is able to consistently and reasonably predict the expert release decisions. Experiments indicate that the release operation in Oroville Lake is significantly dominated by SWP allocation amount and reservoirs with low elevation are more sensitive to inflow amount than others.

  17. An Efficient Leave-One-Out Cross-Validation-Based Extreme Learning Machine (ELOO-ELM) With Minimal User Intervention.

    PubMed

    Shao, Zhifei; Er, Meng Joo; Wang, Ning

    2016-08-01

    It is well known that the architecture of the extreme learning machine (ELM) significantly affects its performance and how to determine a suitable set of hidden neurons is recognized as a key issue to some extent. The leave-one-out cross-validation (LOO-CV) is usually used to select a model with good generalization performance among potential candidates. The primary reason for using the LOO-CV is that it is unbiased and reliable as long as similar distribution exists in the training and testing data. However, the LOO-CV has rarely been implemented in practice because of its notorious slow execution speed. In this paper, an efficient LOO-CV formula and an efficient LOO-CV-based ELM (ELOO-ELM) algorithm are proposed. The proposed ELOO-ELM algorithm can achieve fast learning speed similar to the original ELM without compromising the reliability feature of the LOO-CV. Furthermore, minimal user intervention is required for the ELOO-ELM, thus it can be easily adopted by nonexperts and implemented in automation processes. Experimentation studies on benchmark datasets demonstrate that the proposed ELOO-ELM algorithm can achieve good generalization with limited user intervention while retaining the efficiency feature.
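
    The key ingredient that makes LOO-CV cheap for models with a linear output layer (such as the ELM) is the closed-form leave-one-out residual e_i / (1 − H_ii), where H is the hat matrix of the output-weight least-squares fit, so no refitting is needed. The sketch below verifies that identity against brute-force refitting; it is a generic PRESS computation under that assumption, not the authors' exact ELOO-ELM formula.

```python
# Minimal sketch of closed-form leave-one-out residuals (the PRESS idea) for a linear
# output layer; X here stands in for hidden-layer outputs of an ELM (simulated data).
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.normal(0, 0.5, n)

H = X @ np.linalg.solve(X.T @ X, X.T)                   # hat (influence) matrix
resid = y - H @ y
loo_resid = resid / (1 - np.diag(H))                    # leave-one-out residuals, no refitting
press = np.sum(loo_resid ** 2)                          # predicted residual sum of squares

# brute-force check: refit with one sample left out each time
brute = []
for i in range(n):
    mask = np.arange(n) != i
    beta = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    brute.append(y[i] - X[i] @ beta)
assert np.allclose(loo_resid, brute)
print(f"PRESS = {press:.2f}")
```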

  18. Crossing the North Sea seems to make DCD disappear: cross-validation of Movement Assessment Battery for Children-2 norms.

    PubMed

    Niemeijer, Anuschka S; van Waelvelde, Hilde; Smits-Engelsman, Bouwien C M

    2015-02-01

    The Movement Assessment Battery for Children has been revised as the Movement ABC-2 (Henderson, Sugden, & Barnett, 2007). In Europe, the 15th percentile score on this test is recommended for one of the DSM-IV diagnostic criteria for Developmental Coordination Disorder (DCD). A representative sample of Dutch and Flemish children was tested to cross-validate the UK standard scores, including the 15th percentile score. First, the mean, SD, and percentile scores of Dutch children were compared to those of UK normative samples. Item standard scores of Dutch-speaking children deviated from the UK reference values, suggesting that adjustments were necessary. Except for very young children, the Dutch-speaking samples performed better. Second, based on the mean, SD, and clinically relevant cut-off scores (5th and 15th percentiles), norms were adjusted for the Dutch population. For diagnostic use, researchers and clinicians should use the reference norms that are valid for the group of children they are testing. The results indicate that there may be an effect of the testing procedure in other countries that validated the UK norms, and/or a cultural influence on the age norms of the Movement ABC-2. It is suggested that criterion-based norms be formulated for age groups in addition to statistical norms.

  19. Comparison, cross-validation and consolidation of the results from two different geodynamic projects working in the eastern Carpathians

    NASA Astrophysics Data System (ADS)

    Ambrosious, B.; Dinter, G.; van der Hoeven, A.; Mocanu, V.; Nutto, M.; Schmitt, G.; Spakman, W.

    2003-04-01

    Since 1995, several projects/programmes have been working in the Vrancea region in Romania, with partly different intentions. First of all, the CERGOP project installed the CEGRN-network and performed GPS-measurements ('95,'96,'97,'99,'01), mainly to realise a geodetic reference frame for local geodynamic projects. In the framework of the Collaborative Research Center CRC461 "Strong Earthquakes", the Geodetic Institute of the University of Karlsruhe (GIK) densified the network to 35 stations and carried out three GPS-campaigns ('97, '98 and '00). First results of this project were presented at the EGS-meeting 2001 in Nice. In 2002 a new geodynamic research project was initiated at the Delft Institute of Earth-Oriented Space Research (DEOS). In the context of this project, 4 permanent stations and 10 new campaign stations were installed, which led to a common network of about 50 stations. In close cooperation with the GIK and the University of Bucharest (Department of Geophysics), the most recent GPS campaign was successfully carried out in 2002. The great challenge, and at the same time the great difficulty, is a correct combination of all available GPS datasets, particularly in consideration of station eccentricities and variations of antenna and receiver types. Different evaluation strategies and software packages (Bernese-GPS-Software, GIPSY) were used to analyse the GPS data and to estimate the station velocities. The main focus of this joint presentation is the comparison of the results from the German and Dutch geodynamic projects. The results of the two working groups are cross-validated and finally joined together in the most reasonable solution. Although three-dimensional analysis is in progress, the presentation is limited to the horizontal component.

  20. Diagnostic Accuracy of the Child Behavior Checklist Scales for Attention-Deficit Hyperactivity Disorder: A Receiver-Operating Characteristic Analysis.

    ERIC Educational Resources Information Center

    Chen, Wei J.; And Others

    1994-01-01

    Examined diagnostic accuracy of Child Behavior Checklist (CBCL) scales for attention-deficit hyperactivity disorder (ADHD). Estimated 3 logistic regression models in 121 children with and without ADHD, then tested models in cross-validation sample (n=122) and among 219 siblings of samples. In all four groups, CBCL Attention Problems scale had…

  1. The effects of relatedness and GxE interaction on prediction accuracies in genomic selection: a study in cassava

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Prior to implementation of genomic selection, an evaluation of the potential accuracy of prediction can be obtained by cross validation. In this procedure, a population with both phenotypes and genotypes is split into training and validation sets. The prediction model is fitted using the training se...

  2. Systematic bias of correlation coefficient may explain negative accuracy of genomic prediction.

    PubMed

    Zhou, Yao; Isabel Vales, M; Wang, Aoxue; Zhang, Zhiwu

    2016-07-19

    Accuracy of genomic prediction is commonly calculated as the Pearson correlation coefficient between the predicted and observed phenotypes in the inference population by using cross-validation analysis. More frequently than expected, significant negative accuracies of genomic prediction have been reported in genomic selection studies. These negative values are surprising, given that the minimum value for prediction accuracy should hover around zero when randomly permuted data sets are analyzed. We reviewed the two common approaches for calculating the Pearson correlation and hypothesized that these negative accuracy values reflect potential bias owing to artifacts caused by the mathematical formulas used to calculate prediction accuracy. The first approach, Instant accuracy, calculates correlations for each fold and reports prediction accuracy as the mean of correlations across folds. The other approach, Hold accuracy, predicts all phenotypes in all folds and calculates the correlation between the observed and predicted phenotypes at the end of the cross-validation process. Using simulated and real data, we demonstrated that our hypothesis is true. Both approaches are biased downward under certain conditions. The biases become larger when more folds are employed and when the expected accuracy is low. The bias of Instant accuracy can be corrected using a modified formula.
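
    The two definitions are easy to contrast in code: "Instant accuracy" averages the per-fold correlations, while "Hold accuracy" computes a single correlation over all held-out predictions collected at the end. The sketch below computes both on simulated marker data, with ridge regression as a stand-in predictor; the sample size, marker count, and fold number are illustrative.

```python
# Minimal sketch of the two cross-validated accuracy definitions discussed in the abstract
# (simulated genotypes/phenotypes; Ridge stands in for a genomic prediction model).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(11)
n, p = 120, 300
X = rng.standard_normal((n, p))
y = X[:, :5] @ rng.normal(0, 0.2, 5) + rng.normal(0, 1, n)    # low-heritability trait

fold_corrs, y_pred = [], np.empty(n)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=50.0).fit(X[train], y[train])
    y_pred[test] = model.predict(X[test])
    fold_corrs.append(np.corrcoef(y[test], y_pred[test])[0, 1])

print(f"Instant accuracy (mean of per-fold r): {np.mean(fold_corrs):.3f}")
print(f"Hold accuracy (single r over all folds): {np.corrcoef(y, y_pred)[0, 1]:.3f}")
```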

  3. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy

    PubMed Central

    2017-01-01

    Background Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor’s activity for the purposes of quality assurance, safety, and continuing professional development. Objective The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors’ professional performance in the United Kingdom. Methods We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians’ colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Results Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to “popular” (recall=.97), “innovator” (recall=.98), and “respected” (recall=.87) codes and was lower for the “interpersonal” (recall=.80) and “professional” (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as “respected,” “professional,” and “interpersonal” related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Scores did not vary between doctors who were rated as popular or

  4. Carboxylation of cytosine (5caC) in the CG dinucleotide in the E-box motif (CGCAG|GTG) increases binding of the Tcf3|Ascl1 helix-loop-helix heterodimer 10-fold.

    PubMed

    Golla, Jaya Prakash; Zhao, Jianfei; Mann, Ishminder K; Sayeed, Syed K; Mandal, Ajeet; Rose, Robert B; Vinson, Charles

    2014-06-27

    Three oxidative products of 5-methylcytosine (5mC) occur in mammalian genomes. We evaluated if these cytosine modifications in a CG dinucleotide altered DNA binding of four B-HLH homodimers and three heterodimers to the E-Box motif CGCAG|GTG. We examined 25 DNA probes containing all combinations of cytosine in a CG dinucleotide and none changed binding except for carboxylation of cytosine (5caC) in the strand CGCAG|GTG. 5caC enhanced binding of all examined B-HLH homodimers and heterodimers, particularly the Tcf3|Ascl1 heterodimer which increased binding ~10-fold. These results highlight a potential function of the oxidative products of 5mC, changing the DNA binding of sequence-specific transcription factors.

  5. Cross validation of geotechnical and geophysical site characterization methods: near surface data from selected accelerometric stations in Crete (Greece)

    NASA Astrophysics Data System (ADS)

    Loupasakis, C.; Tsangaratos, P.; Rozos, D.; Rondoyianni, Th.; Vafidis, A.; Kritikakis, G.; Steiakakis, M.; Agioutantis, Z.; Savvaidis, A.; Soupios, P.; Papadopoulos, I.; Papadopoulos, N.; Sarris, A.; Mangriotis, M.-D.; Dikmen, U.

    2015-06-01

    The specification of the near surface ground conditions is highly important for the design of civil constructions. These conditions determine primarily the ability of the foundation formations to bear loads, the stress-strain relations and the corresponding settlements, as well as the soil amplification and corresponding peak ground motion in case of dynamic loading. The static and dynamic geotechnical parameters as well as the ground-type/soil-category can be determined by combining geotechnical and geophysical methods, such as engineering geological surface mapping, geotechnical drilling, in situ and laboratory testing and geophysical investigations. The above-mentioned methods were combined, through the Thalis "Geo-Characterization" project, for the site characterization in selected sites of the Hellenic Accelerometric Network (HAN) in the area of Crete Island. The combination of the geotechnical and geophysical methods in thirteen (13) sites provided sufficient information about their limitations, setting up the minimum test requirements in relation to the type of the geological formations. The reduced accuracy of the surface mapping in urban sites, the uncertainties introduced by the geophysical survey in sites with complex geology and the 1D data provided by the geotechnical drills are some of the causes affecting the right order and the quantity of the necessary investigation methods. This study presents the gradual improvement in the accuracy of site characterization data by providing characteristic examples from a total of thirteen sites.

  6. Derivation and Cross-Validation of Cutoff Scores for Patients With Schizophrenia Spectrum Disorders on WAIS-IV Digit Span-Based Performance Validity Measures.

    PubMed

    Glassmire, David M; Toofanian Ross, Parnian; Kinney, Dominique I; Nitch, Stephen R

    2016-06-01

    Two studies were conducted to identify and cross-validate cutoff scores on the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span-based embedded performance validity (PV) measures for individuals with schizophrenia spectrum disorders. In Study 1, normative scores were identified on Digit Span-embedded PV measures among a sample of patients (n = 84) with schizophrenia spectrum diagnoses who had no known incentive to perform poorly and who put forth valid effort on external PV tests. Previously identified cutoff scores resulted in unacceptable false positive rates and lower cutoff scores were adopted to maintain specificity levels ≥90%. In Study 2, the revised cutoff scores were cross-validated within a sample of schizophrenia spectrum patients (n = 96) committed as incompetent to stand trial. Performance on Digit Span PV measures was significantly related to Full Scale IQ in both studies, indicating the need to consider the intellectual functioning of examinees with psychotic spectrum disorders when interpreting scores on Digit Span PV measures.

  7. A self-adaptive genetic algorithm-artificial neural network algorithm with leave-one-out cross validation for descriptor selection in QSAR study.

    PubMed

    Wu, Jingheng; Mei, Juan; Wen, Sixiang; Liao, Siyan; Chen, Jincan; Shen, Yong

    2010-07-30

    Based on the quantitative structure-activity relationship (QSAR) models developed by artificial neural networks (ANNs), a genetic algorithm (GA) was used in the variable-selection approach with molecular descriptors and helped to improve the back-propagation training algorithm as well. Leave-one-out cross-validation was used to examine the validity of the generated ANN model and the preferable variable combinations derived in the GAs. A self-adaptive GA-ANN model was successfully established by using a new estimate function for avoiding the over-fitting phenomenon in ANN training. Compared with the variables selected in two recent QSAR studies that were based on stepwise multiple linear regression (MLR) models, the variables selected in the self-adaptive GA-ANN model are superior in constructing the ANN model, as they revealed a higher cross validation (CV) coefficient (Q²) and a lower root mean square deviation both in the established model and biological activity prediction. The introduced methods for validation, including leave-multiple-out, Y-randomization, and external validation, proved the superiority of the established GA-ANN models over MLR models in both stability and predictive power. The self-adaptive GA-ANN thus shows promise for improving QSAR models.

  8. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes ('building', 'hedge and bush', 'grass', 'road and parking lot', 'tree', 'wall and car port') had to be derived. Two classification methods were applied ('Decision Tree' and 'Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
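
    Given a class-by-class confusion matrix from the stratified check points, the accuracy measures named above follow directly. The sketch below computes overall accuracy, producer's and user's accuracy, and the kappa coefficient from an invented three-class matrix; the class names and counts are placeholders, not the study's results.

```python
# Minimal sketch of thematic-accuracy measures computed from a confusion matrix
# (rows = reference class, columns = mapped class); the matrix values are invented.
import numpy as np

classes = ["building", "grass", "tree"]
cm = np.array([[95,  3,  2],
               [ 4, 80, 16],
               [ 1, 10, 89]], dtype=float)

n = cm.sum()
overall = np.trace(cm) / n
producers = np.diag(cm) / cm.sum(axis=1)     # omission-error side: per reference class
users = np.diag(cm) / cm.sum(axis=0)         # commission-error side: per mapped class
p_e = np.sum(cm.sum(axis=1) * cm.sum(axis=0)) / n**2
kappa = (overall - p_e) / (1 - p_e)

for c, pa, ua in zip(classes, producers, users):
    print(f"{c}: producer's accuracy {pa:.2%}, user's accuracy {ua:.2%}")
print(f"overall accuracy {overall:.2%}, kappa {kappa:.2f}")
```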

  9. A fast cross-validation method for alignment of electron tomography images based on Beer-Lambert law.

    PubMed

    Yan, Rui; Edwards, Thomas J; Pankratz, Logan M; Kuhn, Richard J; Lanman, Jason K; Liu, Jun; Jiang, Wen

    2015-11-01

    In electron tomography, accurate alignment of tilt series is an essential step in attaining high-resolution 3D reconstructions. Nevertheless, quantitative assessment of alignment quality has remained a challenging issue, even though many alignment methods have been reported. Here, we report a fast and accurate method, tomoAlignEval, based on the Beer-Lambert law, for the evaluation of alignment quality. Our method is able to globally estimate the alignment accuracy by measuring the goodness of log-linear relationship of the beam intensity attenuations at different tilt angles. Extensive tests with experimental data demonstrated its robust performance with stained and cryo samples. Our method is not only significantly faster but also more sensitive than measurements of tomogram resolution using Fourier shell correlation method (FSCe/o). From these tests, we also conclude that while current alignment methods are sufficiently accurate for stained samples, inaccurate alignments remain a major limitation for high resolution cryo-electron tomography.
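
    The underlying idea can be sketched compactly: under the Beer-Lambert law, the log of the transmitted beam intensity should fall linearly with the relative path length 1/cos(tilt angle), so the goodness of a log-linear fit across the tilt series can serve as a global quality score. The example below fits that relation to simulated mean intensities; it is not the tomoAlignEval implementation, and the attenuation coefficient and noise level are invented.

```python
# Minimal sketch of a Beer-Lambert log-linear check across a tilt series (simulated intensities).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
tilts = np.deg2rad(np.arange(-60, 61, 2))                  # tilt-series angles
path = 1.0 / np.cos(tilts)                                  # relative path length through the slab
intensity = 1e4 * np.exp(-0.8 * path) * rng.normal(1.0, 0.02, tilts.size)   # mean image intensity

fit = stats.linregress(path, np.log(intensity))
print(f"log-linear fit: slope {fit.slope:.3f}, R^2 {fit.rvalue**2:.4f}")
```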

  10. A fast cross-validation method for alignment of electron tomography images based on Beer-Lambert law

    PubMed Central

    Yan, Rui; Edwards, Thomas J.; Pankratz, Logan M.; Kuhn, Richard J.; Lanman, Jason K.; Liu, Jun; Jiang, Wen

    2015-01-01

    In electron tomography, accurate alignment of tilt series is an essential step in attaining high-resolution 3D reconstructions. Nevertheless, quantitative assessment of alignment quality has remained a challenging issue, even though many alignment methods have been reported. Here, we report a fast and accurate method, tomoAlignEval, based on the Beer-Lambert law, for the evaluation of alignment quality. Our method is able to globally estimate the alignment accuracy by measuring the goodness of log-linear relationship of the beam intensity attenuations at different tilt angles. Extensive tests with experimental data demonstrated its robust performance with stained and cryo samples. Our method is not only significantly faster but also more sensitive than measurements of tomogram resolution using Fourier shell correlation method (FSCe/o). From these tests, we also conclude that while current alignment methods are sufficiently accurate for stained samples, inaccurate alignments remain a major limitation for high resolution cryo-electron tomography. PMID:26455556

  11. Electrochemical Performance and Stability of the Cathode for Solid Oxide Fuel Cells. I. Cross Validation of Polarization Measurements by Impedance Spectroscopy and Current-Potential Sweep

    SciTech Connect

    Zhou, Xiao Dong; Pederson, Larry R.; Templeton, Jared W.; Stevenson, Jeffry W.

    2009-12-09

    The aim of this paper is to address three issues in solid oxide fuel cells: (1) cross-validation of the polarization of a single cell measured using both dc and ac approaches, (2) the precise determination of the total areal specific resistance (ASR), and (3) understanding cathode polarization with LSCF cathodes. The ASR of a solid oxide fuel cell is a dynamic property, meaning that it changes with current density. The ASR measured using ac impedance spectroscopy (the low-frequency intercept with the real Z′ axis of the ac impedance spectrum) matches that measured from a dc I-V sweep (the tangent of the dc I-V curve). Due to the dynamic nature of ASR, we found that an ac impedance spectrum measured under open circuit voltage or on a half cell may not represent cathode performance under real operating conditions, particularly at high current density. In this work, the electrode polarization was governed by the cathode activation polarization; the anode contribution was negligible.

  12. SILAC-Pulse Proteolysis: A Mass Spectrometry-Based Method for Discovery and Cross-Validation in Proteome-Wide Studies of Ligand Binding

    NASA Astrophysics Data System (ADS)

    Adhikari, Jagat; Fitzgerald, Michael C.

    2014-12-01

    Reported here is the use of stable isotope labeling with amino acids in cell culture (SILAC) and pulse proteolysis (PP) for detection and quantitation of protein-ligand binding interactions on the proteomic scale. The incorporation of SILAC into PP enables the PP technique to be used for the unbiased detection and quantitation of protein-ligand binding interactions in complex biological mixtures (e.g., cell lysates) without the need for prefractionation. The SILAC-PP technique is demonstrated in two proof-of-principle experiments using proteins in a yeast cell lysate and two test ligands including a well-characterized drug, cyclosporine A (CsA), and a non-hydrolyzable adenosine triphosphate (ATP) analogue, adenylyl imidodiphosphate (AMP-PNP). The well-known tight-binding interaction between CsA and cyclophilin A was successfully detected and quantified in replicate analyses, and a total of 33 proteins from a yeast cell lysate were found to have AMP-PNP-induced stability changes. In control experiments, the method's false positive rate of protein target discovery was found to be in the range of 2.1% to 3.6%. SILAC-PP and the previously reported stability of protein from rates of oxidation (SPROX) technique both report on the same thermodynamic properties of proteins and protein-ligand complexes. However, they employ different probes and mass spectrometry-based readouts. This creates the opportunity to cross-validate SPROX results with SILAC-PP results, and vice-versa. As part of this work, the SILAC-PP results obtained here were cross-validated with previously reported SPROX results on the same model systems to help differentiate true positives from false positives in the two experiments.

  13. Body fat measurement by bioelectrical impedance and air displacement plethysmography: a cross-validation study to design bioelectrical impedance equations in Mexican adults

    PubMed Central

    Macias, Nayeli; Alemán-Mateo, Heliodoro; Esparza-Romero, Julián; Valencia, Mauro E

    2007-01-01

    Background The study of body composition in specific populations by techniques such as bio-impedance analysis (BIA) requires validation based on standard reference methods. The aim of this study was to develop and cross-validate a predictive equation for bioelectrical impedance using air displacement plethysmography (ADP) as standard method to measure body composition in Mexican adult men and women. Methods This study included 155 male and female subjects from northern Mexico, 20–50 years of age, from low, middle, and upper income levels. Body composition was measured by ADP. Body weight (BW, kg) and height (Ht, cm) were obtained by standard anthropometric techniques. Resistance, R (ohms) and reactance, Xc (ohms) were also measured. A random-split method was used to obtain two samples: one was used to derive the equation by the "all possible regressions" procedure and was cross-validated in the other sample to test predicted versus measured values of fat-free mass (FFM). Results and Discussion The final model was: FFM (kg) = 0.7374 * (Ht²/R) + 0.1763 * (BW) - 0.1773 * (Age) + 0.1198 * (Xc) - 2.4658. R² was 0.97; the square root of the mean square error (SRMSE) was 1.99 kg, and the pure error (PE) was 2.96. There was no difference between FFM predicted by the new equation (48.57 ± 10.9 kg) and that measured by ADP (48.43 ± 11.3 kg). The new equation did not differ from the line of identity, had a high R² and a low SRMSE, and showed no significant bias (0.87 ± 2.84 kg). Conclusion The new bioelectrical impedance equation based on the two-compartment model (2C) was accurate, precise, and free of bias. This equation can be used to assess body composition and nutritional status in populations similar in anthropometric and physical characteristics to this sample. PMID:17697388
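
    The published prediction equation can be applied directly; the short Python sketch below encodes it together with the pure-error statistic used in the cross-validation, with one invented subject as input.

      import numpy as np

      def ffm_bia(height_cm, resistance_ohm, weight_kg, age_yr, reactance_ohm):
          """Fat-free mass (kg) from the BIA equation reported above."""
          return (0.7374 * (height_cm ** 2 / resistance_ohm)
                  + 0.1763 * weight_kg
                  - 0.1773 * age_yr
                  + 0.1198 * reactance_ohm
                  - 2.4658)

      def pure_error(predicted, measured):
          """Pure error: root mean squared difference between predicted and reference FFM."""
          predicted, measured = np.asarray(predicted), np.asarray(measured)
          return np.sqrt(np.mean((predicted - measured) ** 2))

      # Invented subject: 170 cm, 480 ohm resistance, 70 kg, 35 years, 55 ohm reactance
      print("FFM (kg):", round(ffm_bia(170, 480, 70, 35, 55), 1))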

  14. Evaluating the accuracy of diffusion MRI models in white matter.

    PubMed

    Rokem, Ariel; Yeatman, Jason D; Pestilli, Franco; Kay, Kendrick N; Mezer, Aviv; van der Walt, Stefan; Wandell, Brian A

    2015-01-01

    Models of diffusion MRI within a voxel are useful for making inferences about the properties of the tissue and inferring fiber orientation distribution used by tractography algorithms. A useful model must fit the data accurately. However, evaluations of model-accuracy of commonly used models have not been published before. Here, we evaluate model-accuracy of the two main classes of diffusion MRI models. The diffusion tensor model (DTM) summarizes diffusion as a 3-dimensional Gaussian distribution. Sparse fascicle models (SFM) summarize the signal as a sum of signals originating from a collection of fascicles oriented in different directions. We use cross-validation to assess model-accuracy at different gradient amplitudes (b-values) throughout the white matter. Specifically, we fit each model to all the white matter voxels in one data set and then use the model to predict a second, independent data set. This is the first evaluation of model-accuracy of these models. In most of the white matter the DTM predicts the data more accurately than test-retest reliability; SFM model-accuracy is higher than test-retest reliability and also higher than the DTM model-accuracy, particularly for measurements with (a) a b-value above 1000 in locations containing fiber crossings, and (b) in the regions of the brain surrounding the optic radiations. The SFM also has better parameter-validity: it more accurately estimates the fiber orientation distribution function (fODF) in each voxel, which is useful for fiber tracking.

  15. Slips of Action and Sequential Decisions: A Cross-Validation Study of Tasks Assessing Habitual and Goal-Directed Action Control.

    PubMed

    Sjoerds, Zsuzsika; Dietrich, Anja; Deserno, Lorenz; de Wit, Sanne; Villringer, Arno; Heinze, Hans-Jochen; Schlagenhauf, Florian; Horstmann, Annette

    2016-01-01

    Instrumental learning and decision-making rely on two parallel systems: a goal-directed and a habitual system. In the past decade, several paradigms have been developed to study these systems in animals and humans by means of e.g., overtraining, devaluation procedures and sequential decision-making. These different paradigms are thought to measure the same constructs, but cross-validation has rarely been investigated. In this study we compared two widely used paradigms that assess aspects of goal-directed and habitual behavior. We correlated parameters from a two-step sequential decision-making task that assesses model-based (MB) and model-free (MF) learning with a slips-of-action paradigm that assesses the ability to suppress cue-triggered, learnt responses when the outcome has been devalued and is therefore no longer desirable. MB control during the two-step task showed a very moderately positive correlation with goal-directed devaluation sensitivity, whereas MF control did not show any associations. Interestingly, parameter estimates of MB and goal-directed behavior in the two tasks were positively correlated with higher-order cognitive measures (e.g., visual short-term memory). These cognitive measures seemed to (at least partly) mediate the association between MB control during sequential decision-making and goal-directed behavior after instructed devaluation. This study provides moderate support for a common framework to describe the propensity towards goal-directed behavior as measured with two frequently used tasks. However, we have to caution that the amount of shared variance between the goal-directed and MB system in both tasks was rather low, suggesting that each task does also pick up distinct aspects of goal-directed behavior. Further investigation of the commonalities and differences between the MF and habit systems as measured with these, and other, tasks is needed. Also, a follow-up cross-validation on the neural systems driving these constructs

  16. Slips of Action and Sequential Decisions: A Cross-Validation Study of Tasks Assessing Habitual and Goal-Directed Action Control

    PubMed Central

    Sjoerds, Zsuzsika; Dietrich, Anja; Deserno, Lorenz; de Wit, Sanne; Villringer, Arno; Heinze, Hans-Jochen; Schlagenhauf, Florian; Horstmann, Annette

    2016-01-01

    Instrumental learning and decision-making rely on two parallel systems: a goal-directed and a habitual system. In the past decade, several paradigms have been developed to study these systems in animals and humans by means of e.g., overtraining, devaluation procedures and sequential decision-making. These different paradigms are thought to measure the same constructs, but cross-validation has rarely been investigated. In this study we compared two widely used paradigms that assess aspects of goal-directed and habitual behavior. We correlated parameters from a two-step sequential decision-making task that assesses model-based (MB) and model-free (MF) learning with a slips-of-action paradigm that assesses the ability to suppress cue-triggered, learnt responses when the outcome has been devalued and is therefore no longer desirable. MB control during the two-step task showed a very moderately positive correlation with goal-directed devaluation sensitivity, whereas MF control did not show any associations. Interestingly, parameter estimates of MB and goal-directed behavior in the two tasks were positively correlated with higher-order cognitive measures (e.g., visual short-term memory). These cognitive measures seemed to (at least partly) mediate the association between MB control during sequential decision-making and goal-directed behavior after instructed devaluation. This study provides moderate support for a common framework to describe the propensity towards goal-directed behavior as measured with two frequently used tasks. However, we have to caution that the amount of shared variance between the goal-directed and MB system in both tasks was rather low, suggesting that each task does also pick up distinct aspects of goal-directed behavior. Further investigation of the commonalities and differences between the MF and habit systems as measured with these, and other, tasks is needed. Also, a follow-up cross-validation on the neural systems driving these constructs

  17. The joint WAIS-III and WMS-III factor structure: development and cross-validation of a six-factor model of cognitive functioning.

    PubMed

    Tulsky, David S; Price, Larry R

    2003-06-01

    During the standardization of the Wechsler Adult Intelligence Scale (3rd ed.; WAIS-III) and the Wechsler Memory Scale (3rd ed.; WMS-III) the participants in the normative study completed both scales. This "co-norming" methodology set the stage for full integration of the 2 tests and the development of an expanded structure of cognitive functioning. Until now, however, the WAIS-III and WMS-III had not been examined together in a factor analytic study. This article presents a series of confirmatory factor analyses to determine the joint WAIS-III and WMS-III factor structure. Using a structural equation modeling approach, a 6-factor model that included verbal, perceptual, processing speed, working memory, auditory memory, and visual memory constructs provided the best model fit to the data. Allowing select subtests to load simultaneously on 2 factors improved model fit and indicated that some subtests are multifaceted. The results were then replicated in a large cross-validation sample (N = 858).

  18. PLS/OPLS models in metabolomics: the impact of permutation of dataset rows on the K-fold cross-validation quality parameters.

    PubMed

    Triba, Mohamed N; Le Moyec, Laurence; Amathieu, Roland; Goossens, Corentine; Bouchemal, Nadia; Nahon, Pierre; Rutledge, Douglas N; Savarin, Philippe

    2015-01-01

    Among all the software packages available for discriminant analyses based on projection to latent structures (PLS-DA) or orthogonal projection to latent structures (OPLS-DA), SIMCA (Umetrics, Umeå Sweden) is the more widely used in the metabolomics field. SIMCA proposes many parameters or tests to assess the quality of the computed model (the number of significant components, R2, Q2, pCV-ANOVA, and the permutation test). Significance thresholds for these parameters are strongly application-dependent. Concerning the Q2 parameter, a significance threshold of 0.5 is generally admitted. However, during the last few years, many PLS-DA/OPLS-DA models built using SIMCA have been published with Q2 values lower than 0.5. The purpose of this opinion note is to point out that, in some circumstances frequently encountered in metabolomics, the values of these parameters strongly depend on the individuals that constitute the validation subsets. As a result of the way in which the software selects members of the calibration and validation subsets, a simple permutation of dataset rows can, in several cases, lead to contradictory conclusions about the significance of the models when a K-fold cross-validation is used. We believe that, when Q2 values lower than 0.5 are obtained, SIMCA users should at least verify that the quality parameters are stable towards permutation of the rows in their dataset.
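
    The sensitivity of Q² to row order can be demonstrated with a toy Python/numpy sketch (this is not SIMCA): when the K folds are taken as contiguous blocks in row order, permuting the rows of the data set changes which individuals form each validation subset and can noticeably change Q². Data, fold count, and model are invented for illustration.

      import numpy as np

      def q2_kfold(X, y, k=7, order=None):
          """Q^2 from K-fold CV with contiguous folds taken in the given row order."""
          n = len(y)
          idx = np.arange(n) if order is None else np.asarray(order)
          folds = np.array_split(idx, k)
          press = 0.0
          for fold in folds:
              train = np.setdiff1d(idx, fold)
              A = np.column_stack([np.ones(train.size), X[train]])
              coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
              At = np.column_stack([np.ones(fold.size), X[fold]])
              press += np.sum((y[fold] - At @ coef) ** 2)
          return 1.0 - press / np.sum((y - y.mean()) ** 2)

      rng = np.random.default_rng(2)
      X = rng.normal(size=(42, 5))
      y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.5, size=42)
      print("Q2, original row order:", round(q2_kfold(X, y), 3))
      print("Q2, rows permuted     :", round(q2_kfold(X, y, order=rng.permutation(42)), 3))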

  19. Partial cross-validation of the Wechsler Memory Scale-Revised (WMS-R) General Memory-Attention/Concentration Malingering Index in a nonlitigating sample.

    PubMed

    Hilsabeck, Robin C; Thompson, Matthew D; Irby, James W; Adams, Russell L; Scott, James G; Gouvier, Wm Drew

    2003-01-01

    The Wechsler Memory Scale-Revised (WMS-R) malingering indices proposed by Mittenberg, Azrin, Millsaps, and Heilbronner [Psychol Assess 5 (1993) 34.] were partially cross-validated in a sample of 200 nonlitigants. Nine diagnostic categories were examined, including participants with traumatic brain injury (TBI), brain tumor, stroke/vascular, senile dementia of the Alzheimer's type (SDAT), epilepsy, depression/anxiety, medical problems, and no diagnosis. Results showed that the discriminant function using WMS-R subtests misclassified only 6.5% of the sample as malingering, with significantly higher misclassification rates of SDAT and stroke/vascular groups. The General Memory Index-Attention/Concentration Index (GMI-ACI) difference score misclassified only 8.5% of the sample as malingering when a difference score of greater than 25 points was used as the cutoff criterion. No diagnostic group was significantly more likely to be misclassified. Results support the utility of the GMI-ACI difference score, as well as the WMS-R subtest discriminant function score, in detecting malingering.

  20. Estimation of influential points in any data set from coefficient of determination and its leave-one-out cross-validated counterpart.

    PubMed

    Tóth, Gergely; Bodai, Zsolt; Héberger, Károly

    2013-10-01

    The coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted Q² or R²cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reversed role, to detect uncommon points, i.e. influential points, in any data set. The term (1 - Q²)/(1 - R²) corresponds to the ratio of the predictive residual sum of squares to the residual sum of squares. This ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 - Q²)/(1 - R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns model builders to verify the training set, to perform an influence analysis, or even to switch to robust modeling.
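
    For an ordinary least-squares model these quantities can be computed directly, since the leave-one-out residuals follow from the hat matrix; the Python/numpy sketch below (with invented data and one planted outlier) returns R², Q², and the ratio (1 - Q²)/(1 - R²) = PRESS/RSS. The F-test threshold proposed by the authors is not reproduced here.

      import numpy as np

      def r2_q2_ratio(X, y):
          """R^2, leave-one-out Q^2 and (1-Q^2)/(1-R^2) = PRESS/RSS for an OLS model."""
          n = len(y)
          A = np.column_stack([np.ones(n), X])
          H = A @ np.linalg.pinv(A)                            # hat matrix
          resid = y - H @ y
          rss = np.sum(resid ** 2)
          press = np.sum((resid / (1.0 - np.diag(H))) ** 2)    # leave-one-out residuals
          ss_tot = np.sum((y - y.mean()) ** 2)
          r2, q2 = 1 - rss / ss_tot, 1 - press / ss_tot
          return r2, q2, (1 - q2) / (1 - r2)

      rng = np.random.default_rng(3)
      X = rng.normal(size=(25, 3))
      y = 2 * X[:, 0] + rng.normal(scale=0.5, size=25)
      y[0] += 6.0                                              # plant one influential point
      r2, q2, ratio = r2_q2_ratio(X, y)
      print(f"R2={r2:.3f}  Q2={q2:.3f}  (1-Q2)/(1-R2)={ratio:.2f}")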

  1. Classification of technical pitfalls in objective universal hearing screening by otoacoustic emissions, using an ARMA model of the stimulus waveform and bootstrap cross-validation.

    PubMed

    Vannier, E; Avan, P

    2005-10-01

    Transient-evoked otoacoustic emissions (TEOAE) are widely used for objective hearing screening in neonates. Their main shortcoming is their sensitivity to adverse conditions for sound transmission through the middle-ear, to and from the cochlea. We study here whether a close examination of the stimulus waveform (SW) recorded in the ear canal in the course of a screening test can pinpoint the most frequent middle-ear dysfunctions, thus allowing screeners to avoid misclassifying the corresponding babies as deaf for lack of TEOAE. Three groups of SWs were defined in infants (6-36 months of age) according to middle-ear impairment as assessed by independent testing procedures, and analyzed in the frequency domain where their properties are more readily interpreted than in the time domain. Synthetic SW parameters were extracted with the help of an autoregressive and moving average (ARMA) model, then classified using a maximum likelihood criterion and a bootstrap cross-validation. The best classification performance was 79% with a lower limit (with 90% confidence) of 60%, showing the results' consistency. We therefore suggest that new parameters and methodology based upon a more thorough analysis of SWs can improve the efficiency of TEOAE-based tests by helping the most frequent technical pitfalls to be identified.

  2. Shuffling cross-validation-bee algorithm as a new descriptor selection method for retention studies of pesticides in biopartitioning micellar chromatography.

    PubMed

    Zarei, Kobra; Atabati, Morteza; Ahmadi, Monire

    2017-02-22

    The bee algorithm (BA) is an optimization algorithm, inspired by the natural foraging behaviour of honey bees, for finding optimal solutions, and it can be applied to feature selection. In this paper, shuffling cross-validation-BA (CV-BA) was applied to select the descriptors that best describe the retention factor (log k) in the biopartitioning micellar chromatography (BMC) of 79 heterogeneous pesticides. Six descriptors were obtained using BA, and the selected descriptors were then used for model development with multiple linear regression (MLR). Descriptor selection was also performed using stepwise, genetic algorithm, and simulated annealing methods, with MLR applied for model development, and the results were compared with those obtained from shuffling CV-BA. The results showed that shuffling CV-BA can serve as a powerful descriptor selection method. A support vector machine (SVM) was also applied for model development using the six descriptors selected by BA. The statistical results obtained with SVM were better than those obtained with MLR: the root mean square error (RMSE) and correlation coefficient (R) for the whole data set (training and test) were 0.1863 and 0.9426, respectively, with shuffling CV-BA-MLR, compared with 0.0704 and 0.9922, respectively, for shuffling CV-BA-SVM.

  3. Adaptive smoothing of high angular resolution diffusion-weighted imaging data by generalized cross-validation improves Q-ball orientation distribution function reconstruction.

    PubMed

    Metwalli, Nader S; Hu, Xiaoping P; Carew, John D

    2010-09-01

    Q-ball imaging (QBI) is a high angular resolution diffusion-weighted imaging (HARDI) technique for reconstructing the orientation distribution function (ODF). Some form of smoothing or regularization is typically required in the ODF reconstruction from low signal-to-noise ratio HARDI data. The amount of smoothing or regularization is usually set a priori at the discretion of the investigator. In this article, we apply an adaptive and objective means of smoothing the raw HARDI data using the smoothing splines on the sphere method with generalized cross-validation (GCV) to estimate the diffusivity profile in each voxel. Subsequently, we reconstruct the ODF, from the smoothed data, based on the Funk-Radon transform (FRT) used in QBI. The spline method was applied to both simulated data and in vivo human brain data. Simulated data show that the smoothing splines on the sphere method with GCV smoothing reduces the mean squared error in estimates of the ODF as compared with the standard analytical QBI approach. The human data demonstrate the utility of the method for estimating smooth ODFs.
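
    The spherical smoothing spline itself is not reproduced here, but the way GCV picks the smoothing level objectively can be sketched with a generic linear smoother in Python/numpy: the GCV score n·||(I - S)y||² / (n - tr S)² is evaluated over a grid of candidate smoothing parameters and the minimiser is selected. The ridge smoother and 1-D signal below are stand-ins for the spherical spline and HARDI data.

      import numpy as np

      def gcv_score(y, S):
          """Generalized cross-validation score for a linear smoother y_hat = S @ y."""
          n = len(y)
          resid = y - S @ y
          return n * np.sum(resid ** 2) / (n - np.trace(S)) ** 2

      def ridge_smoother(X, lam):
          """Smoother matrix of ridge regression on basis X (stand-in for the spherical spline)."""
          return X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)

      # Noisy samples of a smooth 1-D signal expanded in a polynomial basis
      rng = np.random.default_rng(4)
      t = np.linspace(0, 1, 80)
      y = np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=t.size)
      X = np.vander(t, 10, increasing=True)
      lambdas = 10.0 ** np.arange(-6, 3)
      scores = [gcv_score(y, ridge_smoother(X, lam)) for lam in lambdas]
      print("GCV-selected smoothing parameter:", lambdas[int(np.argmin(scores))])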

  4. Sediment transport patterns in the San Francisco Bay Coastal System from cross-validation of bedform asymmetry and modeled residual flux

    USGS Publications Warehouse

    Barnard, Patrick L.; Erikson, Li H.; Elias, Edwin P.L.; Dartnell, Peter; Barnard, P.L.; Jaffee, B.E.; Schoellhamer, D.H.

    2013-01-01

    The morphology of ~ 45,000 bedforms from 13 multibeam bathymetry surveys was used as a proxy for identifying net bedload sediment transport directions and pathways throughout the San Francisco Bay estuary and adjacent outer coast. The spatially-averaged shape asymmetry of the bedforms reveals distinct pathways of ebb and flood transport. Additionally, the region-wide, ebb-oriented asymmetry of 5% suggests net seaward-directed transport within the estuarine-coastal system, with significant seaward asymmetry at the mouth of San Francisco Bay (11%), through the northern reaches of the Bay (7–8%), and among the largest bedforms (21% for λ > 50 m). This general indication for the net transport of sand to the open coast strongly suggests that anthropogenic removal of sediment from the estuary, particularly along clearly defined seaward transport pathways, will limit the supply of sand to chronically eroding, open-coast beaches. The bedform asymmetry measurements significantly agree (up to ~ 76%) with modeled annual residual transport directions derived from a hydrodynamically-calibrated numerical model, and the orientation of adjacent, flow-sculpted seafloor features such as mega-flute structures, providing a comprehensive validation of the technique. The methods described in this paper to determine well-defined, cross-validated sediment transport pathways can be applied to estuarine-coastal systems globally where bedforms are present. The results can inform and improve regional sediment management practices to more efficiently utilize often limited sediment resources and mitigate current and future sediment supply-related impacts.

  5. Sediment transport patterns in the San Francisco Bay Coastal System from cross-validation of bedform asymmetry and modeled residual flux

    USGS Publications Warehouse

    Barnard, Patrick L.; Erikson, Li H.; Elias, Edwin P.L.; Dartnell, Peter

    2013-01-01

    The morphology of ~ 45,000 bedforms from 13 multibeam bathymetry surveys was used as a proxy for identifying net bedload sediment transport directions and pathways throughout the San Francisco Bay estuary and adjacent outer coast. The spatially-averaged shape asymmetry of the bedforms reveals distinct pathways of ebb and flood transport. Additionally, the region-wide, ebb-oriented asymmetry of 5% suggests net seaward-directed transport within the estuarine-coastal system, with significant seaward asymmetry at the mouth of San Francisco Bay (11%), through the northern reaches of the Bay (7-8%), and among the largest bedforms (21% for λ > 50 m). This general indication for the net transport of sand to the open coast strongly suggests that anthropogenic removal of sediment from the estuary, particularly along clearly defined seaward transport pathways, will limit the supply of sand to chronically eroding, open-coast beaches. The bedform asymmetry measurements significantly agree (up to ~ 76%) with modeled annual residual transport directions derived from a hydrodynamically-calibrated numerical model, and the orientation of adjacent, flow-sculpted seafloor features such as mega-flute structures, providing a comprehensive validation of the technique. The methods described in this paper to determine well-defined, cross-validated sediment transport pathways can be applied to estuarine-coastal systems globally where bedforms are present. The results can inform and improve regional sediment management practices to more efficiently utilize often limited sediment resources and mitigate current and future sediment supply-related impacts.

  6. Cross-validation of serial optical coherence scanning and diffusion tensor imaging: a study on neural fiber maps in human medulla oblongata.

    PubMed

    Wang, Hui; Zhu, Junfeng; Reuter, Martin; Vinke, Louis N; Yendiki, Anastasia; Boas, David A; Fischl, Bruce; Akkin, Taner

    2014-10-15

    We established a strategy to perform cross-validation of serial optical coherence scanner imaging (SOCS) and diffusion tensor imaging (DTI) on a postmortem human medulla. Following DTI, the sample was serially scanned by SOCS, which integrates a vibratome slicer and a multi-contrast optical coherence tomography rig for large-scale three-dimensional imaging at microscopic resolution. The DTI dataset was registered to the SOCS space. An average correlation coefficient of 0.9 was found between the co-registered fiber maps constructed by fractional anisotropy and retardance contrasts. Pixelwise comparison of fiber orientations demonstrated good agreement between the DTI and SOCS measures. Details of the comparison were studied in regions exhibiting a variety of fiber organizations. DTI estimated the preferential orientation of small fiber tracts; however, it didn't capture their complex patterns as SOCS did. In terms of resolution and imaging depth, SOCS and DTI complement each other, and open new avenues for cross-modality investigations of the brain.

  7. FDDS: A Cross Validation Study.

    ERIC Educational Resources Information Center

    Sawyer, Judy Parsons

    The Family Drawing Depression Scale (FDDS) was created by Wright and McIntyre to provide a clear and reliable scoring method for the Kinetic Family Drawing as a procedure for detecting depression. A study was conducted to confirm the value of the FDDS as a systematic tool for interpreting family drawings with populations of depressed individuals.…

  8. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, G.G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
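
    The gap between resubstitution and cross-validated accuracy noted above is easy to reproduce; the sketch below assumes scikit-learn is available and uses synthetic presence/absence data in place of the lichen survey records, so the numbers are purely illustrative.

      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      # Synthetic presence/absence data standing in for the survey records
      X, y = make_classification(n_samples=400, n_features=8, n_informative=4,
                                 random_state=0)

      tree = DecisionTreeClassifier(random_state=0).fit(X, y)
      resub = tree.score(X, y)                                   # resubstitution accuracy
      cv = cross_val_score(DecisionTreeClassifier(random_state=0),
                           X, y, cv=10).mean()                   # 10-fold cross-validated accuracy
      print(f"resubstitution accuracy: {resub:.2f}   cross-validated accuracy: {cv:.2f}")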

  9. Methane cross-validation between three Fourier transform spectrometers: SCISAT ACE-FTS, GOSAT TANSO-FTS, and ground-based FTS measurements in the Canadian high Arctic

    NASA Astrophysics Data System (ADS)

    Holl, Gerrit; Walker, Kaley A.; Conway, Stephanie; Saitoh, Naoko; Boone, Chris D.; Strong, Kimberly; Drummond, James R.

    2016-05-01

    We present cross-validation of remote sensing measurements of methane profiles in the Canadian high Arctic. Accurate and precise measurements of methane are essential to understand quantitatively its role in the climate system and in global change. Here, we show a cross-validation between three data sets: two from spaceborne instruments and one from a ground-based instrument. All are Fourier transform spectrometers (FTSs). We consider the Canadian SCISAT Atmospheric Chemistry Experiment (ACE)-FTS, a solar occultation infrared spectrometer operating since 2004, and the thermal infrared band of the Japanese Greenhouse Gases Observing Satellite (GOSAT) Thermal And Near infrared Sensor for carbon Observation (TANSO)-FTS, a nadir/off-nadir scanning FTS instrument operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker 125HR Fourier transform infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) Ridge Laboratory at Eureka, Nunavut (80° N, 86° W) since 2006. For each pair of instruments, measurements are collocated within 500 km and 24 h. An additional collocation criterion based on potential vorticity values was found not to significantly affect differences between measurements. Profiles are regridded to a common vertical grid for each comparison set. To account for differing vertical resolutions, ACE-FTS measurements are smoothed to the resolution of either PEARL-FTS or TANSO-FTS, and PEARL-FTS measurements are smoothed to the TANSO-FTS resolution. Differences for each pair are examined in terms of profile and partial columns. During the period considered, the number of collocations for each pair is large enough to obtain a good sample size (from several hundred to tens of thousands depending on pair and configuration). Considering full profiles, the degrees of freedom for signal (DOFS) are between 0.2 and 0.7 for TANSO-FTS and

  10. Methane cross-validation between three Fourier Transform Spectrometers: SCISAT ACE-FTS, GOSAT TANSO-FTS, and ground-based FTS measurements in the Canadian high Arctic

    NASA Astrophysics Data System (ADS)

    Holl, G.; Walker, K. A.; Conway, S.; Saitoh, N.; Boone, C. D.; Strong, K.; Drummond, J. R.

    2015-12-01

    We present cross-validation of remote sensing measurements of methane profiles in the Canadian high Arctic. Accurate and precise measurements of methane are essential to understand quantitatively its role in the climate system and in global change. Here, we show a cross-validation between three datasets: two from spaceborne instruments and one from a ground-based instrument. All are Fourier Transform Spectrometers (FTSs). We consider the Canadian SCISAT Atmospheric Chemistry Experiment (ACE)-FTS, a solar occultation infrared spectrometer operating since 2004, and the thermal infrared band of the Japanese Greenhouse Gases Observing Satellite (GOSAT) Thermal And Near infrared Sensor for carbon Observation (TANSO)-FTS, a nadir/off-nadir scanning FTS instrument operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker 125HR Fourier Transform Infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) Ridge Lab at Eureka, Nunavut (80° N, 86° W) since 2006. For each pair of instruments, measurements are collocated within 500 km and 24 h. An additional criterion based on potential vorticity values was found not to significantly affect differences between measurements. Profiles are regridded to a common vertical grid for each comparison set. To account for differing vertical resolutions, ACE-FTS measurements are smoothed to the resolution of either PEARL-FTS or TANSO-FTS, and PEARL-FTS measurements are smoothed to the TANSO-FTS resolution. Differences for each pair are examined in terms of profile and partial columns. During the period considered, the number of collocations for each pair is large enough to obtain a good sample size (from several hundred to tens of thousands depending on pair and configuration). Considering full profiles, the degrees of freedom for signal (DOFS) are between 0.2 and 0.7 for TANSO-FTS and between 1.5 and 3

  11. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with EURO-CORDEX experiment- and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data
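
    The design of Experiment 1 can be sketched in a few lines of Python: the 1979-2008 record is split into five consecutive 6-year blocks, each block is held out in turn, and a downscaling method trained on the remaining years is evaluated on it. The fit_and_downscale function below is only a placeholder for an actual downscaling method.

      import numpy as np

      years = np.arange(1979, 2009)               # 30 years, 1979-2008
      folds = np.array_split(years, 5)            # five consecutive 6-year validation blocks

      def fit_and_downscale(train_years, test_years):
          """Placeholder: fit a downscaling method on train_years, score it on test_years."""
          return 0.6 + 0.01 * len(train_years)    # dummy skill value for the sketch

      for k, test_years in enumerate(folds, start=1):
          train_years = np.setdiff1d(years, test_years)
          skill = fit_and_downscale(train_years, test_years)
          print(f"fold {k}: validate on {test_years[0]}-{test_years[-1]}, skill={skill:.2f}")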

  12. A cross validation study of deep brain stimulation targeting: from experts to atlas-based, segmentation-based and automatic registration algorithms.

    PubMed

    Castro, F Javier Sanchez; Pollo, Claudio; Meuli, Reto; Maeder, Philippe; Cuisenaire, Olivier; Cuadra, Meritxell Bach; Villemure, Jean-Guy; Thiran, Jean-Philippe

    2006-11-01

    Validation of image registration algorithms is a difficult task and open-ended problem, usually application-dependent. In this paper, we focus on deep brain stimulation (DBS) targeting for the treatment of movement disorders like Parkinson's disease and essential tremor. DBS involves implantation of an electrode deep inside the brain to electrically stimulate specific areas shutting down the disease's symptoms. The subthalamic nucleus (STN) has turned out to be the optimal target for this kind of surgery. Unfortunately, the STN is in general not clearly distinguishable in common medical imaging modalities. Usual techniques to infer its location are the use of anatomical atlases and visible surrounding landmarks. Surgeons have to adjust the electrode intraoperatively using electrophysiological recordings and macrostimulation tests. We constructed a ground truth derived from specific patients whose STNs are clearly visible on magnetic resonance (MR) T2-weighted images. A patient is chosen as atlas both for the right and left sides. Then, by registering each patient with the atlas using different methods, several estimations of the STN location are obtained. Two studies are carried out using our proposed validation scheme. First, a comparison between different atlas-based and nonrigid registration algorithms, with an evaluation of their performance and usability to locate the STN automatically. Second, a study of which visible surrounding structures influence the STN location. The two studies are cross-validated against each other and against the experts' variability. Using this scheme, we evaluated the experts' ability against the estimation error provided by the tested algorithms and we demonstrated that automatic STN targeting is possible and as accurate as the expert-driven techniques currently used. We also show which structures have to be taken into account to accurately estimate the STN location.

  13. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  14. Landsat wildland mapping accuracy

    USGS Publications Warehouse

    Todd, William J.; Gehring, Dale G.; Haman, J. F.

    1980-01-01

    A Landsat-aided classification of ten wildland resource classes was developed for the Shivwits Plateau region of the Lake Mead National Recreation Area. Single stage cluster sampling (without replacement) was used to verify the accuracy of each class.

  15. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  16. Accuracy Assessment of a Uav-Based Landslide Monitoring System

    NASA Astrophysics Data System (ADS)

    Peppa, M. V.; Mills, J. P.; Moore, P.; Miller, P. E.; Chambers, J. E.

    2016-06-01

    Landslides are hazardous events with often disastrous consequences. Monitoring landslides with observations of high spatio-temporal resolution can help mitigate such hazards. Mini unmanned aerial vehicles (UAVs) complemented by structure-from-motion (SfM) photogrammetry and modern per-pixel image matching algorithms can deliver a time-series of landslide elevation models in an automated and inexpensive way. This research investigates the potential of a mini UAV, equipped with a Panasonic Lumix DMC-LX5 compact camera, to provide surface deformations at acceptable levels of accuracy for landslide assessment. The study adopts a self-calibrating bundle adjustment-SfM pipeline using ground control points (GCPs). It evaluates misalignment biases and unresolved systematic errors that are transferred through the SfM process into the derived elevation models. To cross-validate the research outputs, results are compared to benchmark observations obtained by standard surveying techniques. The data is collected with 6 cm ground sample distance (GSD) and is shown to achieve planimetric and vertical accuracy of a few centimetres at independent check points (ICPs). The co-registration error of the generated elevation models is also examined in areas of stable terrain. Through this error assessment, the study estimates that the vertical sensitivity to real terrain change of the tested landslide is equal to 9 cm.

  17. Numerical accuracy assessment

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-12-01

    A framework is provided for numerical accuracy assessment. The purpose of numerical flow simulations is formulated. This formulation concerns the classes of aeronautical configurations (boundaries), the desired flow physics (flow equations and their properties), the classes of flow conditions on flow boundaries (boundary conditions), and the initial flow conditions. Next, accuracy and economical performance requirements are defined; the final numerical flow simulation results of interest should have a guaranteed accuracy, and be produced for an acceptable FLOP-price. Within this context, the validation of numerical processes with respect to the well-known topics of consistency, stability, and convergence when the mesh is refined must be done by numerical experimentation because theory gives only partial answers. This requires careful design of test cases for numerical experimentation. Finally, the results of a few recent evaluation exercises of numerical experiments with a large number of codes on a few test cases are summarized.

  18. Near surface geotechnical and geophysical data cross validated for site characterization applications. The cases of selected accelerometric stations in Crete island (Greece)

    NASA Astrophysics Data System (ADS)

    Loupasakis, Constantinos; Tsangaratos, Paraskevas; Rozos, Dimitrios; Rondoyianni, Theodora; Vafidis, Antonis; Steiakakis, Emanouil; Agioutantis, Zacharias; Savvaidis, Alexandros; Soupios, Pantelis; Papadopoulos, Ioannis; Papadopoulos, Nikos; Sarris, Apostolos; Mangriotis, Maria-Dafni; Dikmen, Unal

    2015-04-01

    The near surface ground conditions are highly important for the design of civil constructions. These conditions determine primarily the ability of the foundation formations to bear loads, the stress-strain relations and the corresponding deformations, as well as the soil amplification and corresponding peak ground motion in case of dynamic loading. The static and dynamic geotechnical parameters as well as the ground-type/soil-category can be determined by combining geotechnical and geophysical methods, such as engineering geological surface mapping, geotechnical drilling, in situ and laboratory testing and geophysical investigations. The above-mentioned methods were combined for site characterization at selected sites of the Hellenic Accelerometric Network (HAN) in the area of Crete Island. The combination of the geotechnical and geophysical methods at thirteen (13) sites provided sufficient information about their limitations, setting the minimum test requirements in relation to the type of the geological formations. The reduced accuracy of the surface mapping in urban sites, the uncertainties introduced by the geophysical survey in sites with complex geology and the 1-D data provided by the geotechnical drills are some of the factors affecting the appropriate sequence and number of the necessary investigation methods. Through this study the gradual improvement in the accuracy of the site characterization data with regard to the applied investigation techniques is presented by providing characteristic examples from the total number of thirteen sites. As an example of the gradual improvement of the knowledge about the ground conditions, the case of AGN1 strong motion station, located at Agios Nikolaos city (Eastern Crete), is briefly presented. According to the medium scale geological map of IGME the station was supposed to be founded over limestone. The detailed geological mapping revealed that a few meters of loose alluvial deposits occupy the area, expected

  19. Direct spectral analysis of tea samples using 266 nm UV pulsed laser-induced breakdown spectroscopy and cross validation of LIBS results with ICP-MS.

    PubMed

    Gondal, M A; Habibullah, Y B; Baig, Umair; Oloore, L E

    2016-05-15

    Tea is one of the most common and popular beverages spanning a vast array of cultures all over the world. The main nutritional benefits of drinking tea are its anti-oxidant properties, presumed protection against certain cancers, inhibition of inflammation and possible protective effects against diabetes. A laser-induced breakdown spectrometer (LIBS) was assembled as a powerful tool for qualitative and quantitative analysis of various brands of tea samples using a 266 nm pulsed UV laser. LIBS spectra for six brands of tea samples in the wavelength range of 200-900 nm were recorded and all elements present in our tea samples were identified. The major toxic elements detected in several brands of tea samples were bromine, chromium and minerals like iron, calcium, potassium and silicon. The spectral assignment was conducted prior to the determination of the concentration of each element. For quantitative analysis, calibration curves were drawn for each element using standard samples prepared at known concentrations in the tea matrix. The plasma parameters (electron temperature and electron density) were also determined prior to the spectroscopic analysis of the tea samples. The concentration of iron, chromium, potassium, bromine, copper, silicon and calcium detected in all tea samples was between 378-656, 96-124, 1421-6785, 99-1476, 17-36, 2-11 and 92-130 mg L⁻¹ respectively. The limits of detection estimated for Fe, Cr, K, Br, Cu, Si, Ca in tea samples were 22, 12, 14, 11, 6, 1 and 12 mg L⁻¹ respectively. To further confirm the accuracy of our LIBS results, we determined the concentration of each element present in tea samples using a standard analytical technique, ICP-MS. The concentrations detected with our LIBS system are in excellent agreement with ICP-MS results. The system assembled for spectral analysis in this work could be highly applicable for testing the quality and purity of food and also pharmaceutical products.
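
    The calibration-curve step described above amounts to a simple linear regression of line intensity on standard concentration, with the limit of detection commonly taken as 3·sigma(blank)/slope; the Python/numpy sketch below uses invented standards and an invented blank noise level, so none of the numbers correspond to the reported measurements.

      import numpy as np

      # Invented calibration standards for one emission line (mg/L vs. intensity, a.u.)
      conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
      intensity = np.array([12.0, 260.0, 515.0, 1010.0, 2035.0])

      slope, intercept = np.polyfit(conc, intensity, 1)      # linear calibration curve

      def concentration_from_intensity(i):
          """Read an unknown concentration off the calibration curve."""
          return (i - intercept) / slope

      sigma_blank = 8.0                                      # invented blank standard deviation
      lod = 3.0 * sigma_blank / slope                        # limit of detection, mg/L
      print("unknown sample:", round(concentration_from_intensity(780.0), 1), "mg/L")
      print("limit of detection:", round(lod, 1), "mg/L")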

  20. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  1. Using Genetic Distance to Infer the Accuracy of Genomic Prediction

    PubMed Central

    Scutari, Marco; Mackay, Ian

    2016-01-01

    The prediction of phenotypic traits using high-density genomic data has many applications such as the selection of plants and animals of commercial interest; and it is expected to play an increasing role in medical diagnostics. Statistical models used for this task are usually tested using cross-validation, which implicitly assumes that new individuals (whose phenotypes we would like to predict) originate from the same population the genomic prediction model is trained on. In this paper we propose an approach based on clustering and resampling to investigate the effect of increasing genetic distance between training and target populations when predicting quantitative traits. This is important for plant and animal genetics, where genomic selection programs rely on the precision of predictions in future rounds of breeding. Therefore, estimating how quickly predictive accuracy decays is important in deciding which training population to use and how often the model has to be recalibrated. We find that the correlation between true and predicted values decays approximately linearly with respect to either FST or mean kinship between the training and the target populations. We illustrate this relationship using simulations and a collection of data sets from mice, wheat and human genetics. PMID:27589268

  2. Feasibility and Diagnostic Accuracy of Ischemic Stroke Territory Recognition Based on Two-Dimensional Projections of Three-Dimensional Diffusion MRI Data

    PubMed Central

    Wrosch, Jana Katharina; Volbers, Bastian; Gölitz, Philipp; Gilbert, Daniel Frederic; Schwab, Stefan; Dörfler, Arnd; Kornhuber, Johannes; Groemer, Teja Wolfgang

    2015-01-01

    This study was conducted to assess the feasibility and diagnostic accuracy of brain artery territory recognition based on geoprojected two-dimensional maps of diffusion MRI data in stroke patients. In this retrospective study, multiplanar diffusion MRI data of ischemic stroke patients was used to create a two-dimensional map of the entire brain. To guarantee correct representation of the stroke, a computer-aided brain artery territory diagnosis was developed and tested for its diagnostic accuracy. The test recognized the stroke-affected brain artery territory based on the position of the stroke in the map. The performance of the test was evaluated by comparing it to the reference standard of each patient’s diagnosed stroke territory on record. This study was designed and conducted according to Standards for Reporting of Diagnostic Accuracy (STARD). The statistical analysis included diagnostic accuracy parameters, cross-validation, and Youden Index optimization. After cross-validation on a cohort of 91 patients, the sensitivity of this territory diagnosis was 81% with a specificity of 87%. With this, the projection of strokes onto a two-dimensional map is accurate for representing the affected stroke territory and can be used to provide a static and printable overview of the diffusion MRI data. The projected map is compatible with other two-dimensional data such as EEG and will serve as a useful visualization tool. PMID:26635717
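
    Youden-index optimisation of a diagnostic cut-off, mentioned above, can be sketched in Python/numpy: the threshold on a continuous score is chosen to maximise sensitivity + specificity - 1. Scores and labels below are synthetic, not the study data.

      import numpy as np

      def youden_optimal_threshold(scores, labels):
          """Threshold maximising the Youden index J = sensitivity + specificity - 1."""
          best_t, best_j = None, -1.0
          for t in np.unique(scores):
              pred = scores >= t
              tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
              tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
              j = tp / (tp + fn) + tn / (tn + fp) - 1.0
              if j > best_j:
                  best_j, best_t = j, t
          return best_t, best_j

      rng = np.random.default_rng(5)
      labels = np.concatenate([np.ones(60, dtype=int), np.zeros(120, dtype=int)])
      scores = np.concatenate([rng.normal(1.2, 1.0, 60), rng.normal(0.0, 1.0, 120)])
      t, j = youden_optimal_threshold(scores, labels)
      print(f"optimal threshold: {t:.2f}   Youden index: {j:.2f}")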

  3. Accuracy of genomic selection models in a large population of open-pollinated families in white spruce

    PubMed Central

    Beaulieu, J; Doerksen, T; Clément, S; MacKay, J; Bousquet, J

    2014-01-01

    Genomic selection (GS) is of interest in breeding because of its potential for predicting the genetic value of individuals and increasing genetic gains per unit of time. To date, very few studies have reported empirical results of GS potential in the context of large population sizes and long breeding cycles such as for boreal trees. In this study, we assessed the effectiveness of marker-aided selection in an undomesticated white spruce (Picea glauca (Moench) Voss) population of large effective size using a GS approach. A discovery population of 1694 trees representative of 214 open-pollinated families from 43 natural populations was phenotyped for 12 wood and growth traits and genotyped for 6385 single-nucleotide polymorphisms (SNPs) mined in 2660 gene sequences. GS models were built to predict estimated breeding values using all the available SNPs or SNP subsets of the largest absolute effects, and they were validated using various cross-validation schemes. The accuracy of genomic estimated breeding values (GEBVs) varied from 0.327 to 0.435 when the training and the validation data sets shared half-sibs that were on average 90% of the accuracies achieved through traditionally estimated breeding values. The trend was also the same for validation across sites. As expected, the accuracy of GEBVs obtained after cross-validation with individuals of unknown relatedness was lower with about half of the accuracy achieved when half-sibs were present. We showed that with the marker densities used in the current study, predictions with low to moderate accuracy could be obtained within a large undomesticated population of related individuals, potentially resulting in larger gains per unit of time with GS than with the traditional approach. PMID:24781808

  4. Radiocarbon dating accuracy improved

    NASA Astrophysics Data System (ADS)

    Scientists have extended the accuracy of carbon-14 (14C) dating by correlating dates older than 8,000 years with uranium-thorium dates that span from 8,000 to 30,000 years before present (ybp, present = 1950). Edouard Bard, Bruno Hamelin, Richard Fairbanks and Alan Zindler, working at Columbia University's Lamont-Doherty Geological Observatory, dated corals from reefs off Barbados using both 14C and uranium-234/thorium-230 by thermal ionization mass spectrometry techniques. They found that the two age data sets deviated in a regular way, allowing the scientists to correlate the two sets of ages. The 14C dates were consistently younger than those determined by uranium-thorium, and the discrepancy increased to about 3,500 years at 20,000 ybp.

  5. Linear combination methods to improve diagnostic/prognostic accuracy on future observations

    PubMed Central

    Kang, Le; Liu, Aiyi; Tian, Lili

    2014-01-01

    Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combinations of biomarkers to maximise the area under the receiver operating characteristic curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; (3) to use the leave-one-pair-out cross-validation method, instead of the re-substitution method, which is overoptimistic and hence might lead to wrong conclusions, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under receiver operating characteristic curve. A data set of Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
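
    As a hypothetical sketch of the leave-one-pair-out idea, the snippet below evaluates the cross-validated AUC of a linear biomarker combination, with logistic regression standing in for the combination methods reviewed in the article; the biomarker values are simulated, not the Duchenne data.

      # Illustrative sketch only: leave-one-pair-out cross-validation of the AUC of a
      # linear biomarker combination (logistic regression used as a generic combiner).
      import numpy as np
      from itertools import product
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      X_pos = rng.normal(1.0, 1.0, size=(20, 3))   # diseased subjects, 3 biomarkers (synthetic)
      X_neg = rng.normal(0.0, 1.0, size=(25, 3))   # non-diseased subjects (synthetic)

      wins = 0.0
      pairs = list(product(range(len(X_pos)), range(len(X_neg))))
      for i, j in pairs:
          # Leave one diseased/non-diseased pair out, fit the combination on the rest.
          X_train = np.vstack([np.delete(X_pos, i, axis=0), np.delete(X_neg, j, axis=0)])
          y_train = np.r_[np.ones(len(X_pos) - 1), np.zeros(len(X_neg) - 1)]
          clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
          s_pos = clf.decision_function(X_pos[i:i + 1])[0]   # score of held-out diseased subject
          s_neg = clf.decision_function(X_neg[j:j + 1])[0]   # score of held-out non-diseased subject
          wins += 1.0 if s_pos > s_neg else (0.5 if s_pos == s_neg else 0.0)

      print(f"leave-one-pair-out cross-validated AUC: {wins / len(pairs):.3f}")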

  6. Summarising and validating test accuracy results across multiple studies for use in clinical practice.

    PubMed

    Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J

    2015-06-15

    Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV.
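
    Tailoring post-test probabilities to a new population rests on the standard relation between sensitivity, specificity, and prevalence; the sketch below shows that calculation with arbitrary example numbers, not values from the cited analysis.

      # Illustrative sketch only: post-test probabilities (PPV, NPV) recomputed from a
      # summary sensitivity/specificity and the prevalence of a new population.
      def post_test_probabilities(sensitivity, specificity, prevalence):
          tp = sensitivity * prevalence              # true positives per unit population
          fn = (1 - sensitivity) * prevalence        # false negatives
          tn = specificity * (1 - prevalence)        # true negatives
          fp = (1 - specificity) * (1 - prevalence)  # false positives
          ppv = tp / (tp + fp)                       # P(disease | positive test)
          npv = tn / (tn + fn)                       # P(no disease | negative test)
          return ppv, npv

      # The same test calibrates very differently as prevalence changes.
      for prev in (0.05, 0.20, 0.50):
          ppv, npv = post_test_probabilities(sensitivity=0.85, specificity=0.80, prevalence=prev)
          print(f"prevalence {prev:.2f}: PPV {ppv:.2f}, NPV {npv:.2f}")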

  7. Impact of selective genotyping in the training population on accuracy and bias of genomic selection.

    PubMed

    Zhao, Yusheng; Gowda, Manje; Longin, Friedrich H; Würschum, Tobias; Ranc, Nicolas; Reif, Jochen C

    2012-08-01

    Estimating marker effects based on routinely generated phenotypic data of breeding programs is a cost-effective strategy to implement genomic selection. Truncation selection in breeding populations, however, could have a strong impact on the accuracy to predict genomic breeding values. The main objective of our study was to investigate the influence of phenotypic selection on the accuracy and bias of genomic selection. We used experimental data of 788 testcross progenies from an elite maize breeding program. The testcross progenies were evaluated in unreplicated field trials in ten environments and fingerprinted with 857 SNP markers. Random regression best linear unbiased prediction method was used in combination with fivefold cross-validation based on genotypic sampling. We observed a substantial loss in the accuracy to predict genomic breeding values in unidirectional selected populations. In contrast, estimating marker effects based on bidirectional selected populations led to only a marginal decrease in the prediction accuracy of genomic breeding values. We concluded that bidirectional selection is a valuable approach to efficiently implement genomic selection in applied plant breeding programs.

  8. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  9. Groves model accuracy study

    NASA Astrophysics Data System (ADS)

    Peterson, Matthew C.

    1991-08-01

    The United States Air Force Environmental Technical Applications Center (USAFETAC) was tasked to review the scientific literature for studies of the Groves Neutral Density Climatology Model and compare the Groves Model with others in the 30-60 km range. The tasking included a request to investigate the merits of comparing accuracy of the Groves Model to rocketsonde data. USAFETAC analysts found the Groves Model to be state of the art for middle-atmospheric climatological models. In reviewing previous comparisons with other models and with space shuttle-derived atmospheric densities, good density vs altitude agreement was found in almost all cases. A simple technique involving comparison of the model with range reference atmospheres was found to be the most economical way to compare the Groves Model with rocketsonde data; an example of this type is provided. The Groves 85 Model is used routinely in USAFETAC's Improved Point Analysis Model (IPAM). To create this model, Dr. Gerald Vann Groves produced tabulations of atmospheric density based on data derived from satellite observations and modified by rocketsonde observations. Neutral Density as presented here refers to the monthly mean density in 10-degree latitude bands as a function of altitude. The Groves 85 Model zonal mean density tabulations are given in their entirety.

  10. Can lncRNAs be indicators for the diagnosis of early onset or acute schizophrenia and distinguish major depressive disorder and generalized anxiety disorder?-A cross validation analysis.

    PubMed

    Cui, Xuelian; Niu, Wei; Kong, Lingming; He, Mingjun; Jiang, Kunhong; Chen, Shengdong; Zhong, Aifang; Li, Wanshuai; Lu, Jim; Zhang, Liyi

    2017-03-28

    Depression and anxiety are apparent symptoms in the early onset or acute phase of schizophrenia (SZ), which complicate timely diagnosis and treatment. It is imperative to seek an indicator to distinguish schizophrenia from depressive and anxiety disorders. Using lncRNA microarray profiling and RT-PCR, three up-regulated lncRNAs in SZ, six down-regulated lncRNAs in major depressive disorder (MDD), and three up-regulated lncRNAs in generalized anxiety disorder (GAD) were identified as potential biomarkers. All the lncRNAs were then cross-validated in 40 SZ patients, 40 MDD patients, 40 GAD patients, and 40 normal controls. Compared with controls, the three up-regulated SZ lncRNAs had a significantly down-regulated expression in GAD, and no remarkable differences existed between MDD and the controls. Additionally, the six down-regulated MDD lncRNAs were expressed in an opposite fashion in SZ, and the expression of the three up-regulated GAD lncRNAs was significantly different between SZ and GAD. These results indicate that the expression patterns of the three up-regulated SZ lncRNAs could not be completely replicated in MDD and GAD, and vice versa. Thus, these three SZ lncRNAs appear to be viable indicators for diagnosing schizophrenia and distinguishing it from MDD and GAD. © 2017 Wiley Periodicals, Inc.

  11. Cross-Validation in Canonical Analysis.

    ERIC Educational Resources Information Center

    Taylor, Dianne L.

    The need for using invariance procedures to establish the external validity or generalizability of statistical results has been well documented. Invariance analysis is a tool that can be used to establish confidence in the replicability of research findings. Several approaches to invariance analysis are available that are broadly applicable across…

  12. Seed Quality Traits Can Be Predicted with High Accuracy in Brassica napus Using Genomic Data

    PubMed Central

    Liu, Peifa; Shi, Lei; Wang, Xiaohua; Wang, Meng; Meng, Jinling; Reif, Jochen Christoph

    2016-01-01

    Improving seed oil yield and quality are central targets in rapeseed (Brassica napus) breeding. The primary goal of our study was to examine and compare the potential and the limits of marker-assisted selection and genome-wide prediction of six important seed quality traits of B. napus. Our study is based on a bi-parental population comprising 202 doubled haploid lines and a diverse validation set including 117 B. napus inbred lines derived from interspecific crosses between B. rapa and B. carinata. We used phenotypic data for seed oil, protein, erucic acid, linolenic acid, stearic acid, and glucosinolate content. All lines were genotyped with a 60k SNP array. We performed five-fold cross-validations in combination with linkage mapping and four genome-wide prediction approaches in the bi-parental population. Quantitative trait loci (QTL) with large effects were detected for erucic acid, stearic acid, and glucosinolate content, blazing the trail for marker-assisted selection. Despite substantial differences in the complexity of the genetic architecture of the six traits, genome-wide prediction models had only minor impacts on the prediction accuracies. We evaluated the effects of training population size, marker density and phenotyping intensity on the prediction accuracy. The prediction accuracy in the independent and genetically very distinct validation set still amounted to 0.14 for protein content and 0.17 for oil content reflecting the utility of the developed calibration models even in very diverse backgrounds. PMID:27880793

  13. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  14. Reference measurement procedure for CSF amyloid beta (Aβ)1-42 and the CSF Aβ1-42 /Aβ1-40 ratio - a cross-validation study against amyloid PET.

    PubMed

    Pannee, Josef; Portelius, Erik; Minthon, Lennart; Gobom, Johan; Andreasson, Ulf; Zetterberg, Henrik; Hansson, Oskar; Blennow, Kaj

    2016-11-01

    A clinical diagnosis of Alzheimer's disease is currently made on the basis of results from cognitive tests in combination with medical history and general clinical evaluation, but the peptide amyloid-beta (Aβ) in cerebrospinal fluid (CSF) is increasingly used as a biomarker for amyloid pathology in clinical trials and in recently proposed revised clinical criteria for Alzheimer's disease. Recent analytical developments have resulted in mass spectrometry (MS) reference measurement procedures for absolute quantification of Aβ1-42 in CSF. The CSF Aβ1-42/Aβ1-40 ratio has been suggested to improve the detection of cerebral amyloid deposition, by compensating for inter-individual variations in total Aβ production. Our aim was to cross-validate the reference measurement procedure as well as the Aβ1-42/Aβ1-40 and Aβ1-42/Aβ1-38 ratios in CSF, measured by high-resolution MS, with the cortical level of Aβ fibrils as measured by amyloid ((18)F-flutemetamol) positron emission tomography (PET). We included 100 non-demented patients with cognitive symptoms from the Swedish BioFINDER study, all of whom had undergone both lumbar puncture and (18)F-flutemetamol PET. Comparing CSF Aβ1-42 concentrations with (18)F-flutemetamol PET showed high concordance with an area under the receiver operating characteristic curve of 0.85 and a sensitivity and specificity of 82% and 81%, respectively. The ratio of Aβ1-42/Aβ1-40 or Aβ1-42/Aβ1-38 significantly improved concordance with an area under the receiver operating characteristic curve of 0.95 and a sensitivity and specificity of 96% and 91%, respectively. These results show that the CSF Aβ1-42/Aβ1-40 and Aβ1-42/Aβ1-38 ratios using the described MS method are strongly associated with cortical Aβ fibrils measured by (18)F-flutemetamol PET.
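
    A minimal sketch of the kind of concordance analysis described above: ROC AUC plus sensitivity/specificity at a Youden-optimal cutoff for a ratio biomarker against a binary PET standard. The values are simulated, not BioFINDER data.

      # Illustrative sketch only: concordance of a CSF ratio biomarker with a binary
      # amyloid-PET classification, summarized as ROC AUC and sensitivity/specificity.
      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(3)
      pet_positive = rng.integers(0, 2, size=100)                        # PET reference standard (synthetic)
      ratio = 0.10 - 0.04 * pet_positive + rng.normal(0, 0.02, 100)      # lower ratio when PET-positive

      auc = roc_auc_score(pet_positive, -ratio)                          # low ratio indicates positivity
      fpr, tpr, thresholds = roc_curve(pet_positive, -ratio)
      best = np.argmax(tpr - fpr)                                        # Youden-optimal operating point
      print(f"AUC {auc:.2f}; sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f} "
            f"at ratio cutoff {-thresholds[best]:.3f}")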

  15. Accuracy of direct genomic values for functional traits in Brown Swiss cattle.

    PubMed

    Kramer, M; Erbe, M; Seefried, F R; Gredler, B; Bapst, B; Bieber, A; Simianer, H

    2014-03-01

    In this study, direct genomic values for the functional traits general temperament, milking temperament, aggressiveness, rank order in herd, milking speed, udder depth, position of labia, and days to first heat in Brown Swiss dairy cattle were estimated based on ~777,000 (777 K) single nucleotide polymorphism (SNP) information from 1,126 animals. Accuracy of direct genomic values was assessed by a 5-fold cross-validation with 10 replicates. Correlations between deregressed proofs and direct genomic values were 0.63 for general temperament, 0.73 for milking temperament, 0.69 for aggressiveness, 0.65 for rank order in herd, 0.69 for milking speed, 0.71 for udder depth, 0.66 for position of labia, and 0.74 for days to first heat. Using the information of ~54,000 (54K) SNP led to only marginal deviations in the observed accuracy. Trying to predict the 20% youngest bulls led to correlations of 0.55, 0.77, 0.73, 0.55, 0.64, 0.59, 0.67, and 0.77, respectively, for the traits listed above. Using a novel method to estimate the accuracy of a direct genomic value (defined as correlation between direct genomic value and true breeding value and accounting for the correlation between direct genomic values and conventional breeding values) revealed accuracies of 0.37, 0.20, 0.19, 0.27, 0.48, 0.45, 0.36, and 0.12, respectively, for the traits listed above. These values are much smaller but probably also more realistic than accuracies based on correlations, given the heritabilities and samples sizes in this study. Annotation of the largest estimated SNP effects revealed 2 candidate genes affecting the traits general temperament and days to first heat.

  16. Revealing latent value of clinically acquired CTs of traumatic brain injury through multi-atlas segmentation in a retrospective study of 1,003 with external cross-validation

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Kelly, Patrick D.; Asman, Andrew J.; Kang, Hakmook; Patel, Mayur B.; Landman, Bennett A.

    2015-03-01

    Medical imaging plays a key role in guiding treatment of traumatic brain injury (TBI) and for diagnosing intracranial hemorrhage; most commonly rapid computed tomography (CT) imaging is performed. Outcomes for patients with TBI are variable and difficult to predict upon hospital admission. Quantitative outcome scales (e.g., the Marshall classification) have been proposed to grade TBI severity on CT, but such measures have had relatively low value in staging patients by prognosis. Herein, we examine a cohort of 1,003 subjects admitted for TBI and imaged clinically to identify potential prognostic metrics using a "big data" paradigm. For all patients, a brain scan was segmented with multi-atlas labeling, and intensity/volume/texture features were computed in a localized manner. In a 10-fold cross-validation approach, the explanatory value of the image-derived features is assessed for length of hospital stay (days), discharge disposition (five point scale from death to return home), and the Rancho Los Amigos functional outcome score (Rancho Score). Image-derived features increased the predictive R2 to 0.38 (from 0.18) for length of stay, to 0.51 (from 0.4) for discharge disposition, and to 0.31 (from 0.16) for Rancho Score (over models consisting only of non-imaging admission metrics, but including positive/negative radiological CT findings). This study demonstrates that high volume retrospective analysis of clinical imaging data can reveal imaging signatures with prognostic value. These targets are suited for follow-up validation and represent targets for future feature selection efforts. Moreover, the increase in prognostic value would improve staging for intervention assessment and provide more reliable guidance for patients.
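
    The gain in cross-validated explanatory value from adding image-derived features can be illustrated with the following hypothetical sketch; the admission variables, imaging features, and outcome below are simulated stand-ins, not the study's data.

      # Illustrative sketch only: 10-fold cross-validated R^2 of an outcome model with
      # admission variables alone versus admission variables plus image-derived features.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import KFold, cross_val_score

      rng = np.random.default_rng(4)
      n = 1003
      admission = rng.normal(size=(n, 5))       # synthetic stand-ins for admission metrics
      imaging = rng.normal(size=(n, 20))        # synthetic stand-ins for volume/intensity/texture features
      outcome = admission[:, 0] + imaging[:, :3].sum(axis=1) + rng.normal(size=n)

      cv = KFold(n_splits=10, shuffle=True, random_state=4)
      r2_base = cross_val_score(LinearRegression(), admission, outcome, cv=cv, scoring="r2")
      r2_full = cross_val_score(LinearRegression(), np.hstack([admission, imaging]), outcome,
                                cv=cv, scoring="r2")
      print(f"admission-only R^2 {r2_base.mean():.2f} -> with imaging features {r2_full.mean():.2f}")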

  17. When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?

    PubMed Central

    Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B.; Hartz, Sarah M.; Johnson, Eric O.; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L.

    2015-01-01

    Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen’s kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation was conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
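
    The chance-agreement problem that motivates IQS can be illustrated with a simplified kappa calculation on best-guess genotypes (IQS itself is defined on imputed genotype probabilities, which this simplified version ignores); the rare-variant example below is synthetic.

      # Illustrative sketch only: why concordance rate can overstate imputation accuracy
      # for a rare variant, while a kappa-type statistic adjusts for chance agreement.
      import numpy as np

      def concordance_and_kappa(true_g, imputed_g, n_classes=3):
          table = np.zeros((n_classes, n_classes))
          for t, i in zip(true_g, imputed_g):
              table[t, i] += 1
          table /= table.sum()
          po = np.trace(table)                                  # observed agreement (concordance rate)
          pe = (table.sum(axis=1) * table.sum(axis=0)).sum()    # agreement expected by chance
          return po, (po - pe) / (1 - pe)

      rng = np.random.default_rng(5)
      true_g = rng.choice([0, 1, 2], size=2000, p=[0.96, 0.038, 0.002])  # rare variant (synthetic)
      imputed_g = np.zeros_like(true_g)          # a lazy "imputer" that always calls the major genotype

      po, kappa = concordance_and_kappa(true_g, imputed_g)
      print(f"concordance {po:.3f} looks excellent; kappa {kappa:.3f} reveals no information")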

  18. High Accuracy Time Transfer Synchronization

    DTIC Science & Technology

    1994-12-01

    Paul Wheeler, Paul Koppang, David Chalmers, Angela Davis, Anthony Kubik and William Powell, U.S. Naval Observatory, Washington, DC 20392. In July 1994, the US Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance

  19. Process Analysis Via Accuracy Control

    DTIC Science & Technology

    1982-02-01

    The National Shipbuilding Research Program, February 1982; U.S. Department of Transportation, Maritime Administration. Examples are contained in Appendix C, including examples of how “A/C” process analysis leads to design improvement and how a change in sequence can

  20. Effect of predictor traits on accuracy of genomic breeding values for feed intake based on a limited cow reference population.

    PubMed

    Pszczola, M; Veerkamp, R F; de Haas, Y; Wall, E; Strabel, T; Calus, M P L

    2013-11-01

    The genomic breeding value accuracy of scarcely recorded traits is low because of the limited number of phenotypic observations. One solution to increase the breeding value accuracy is to use predictor traits. This study investigated the impact of recording additional phenotypic observations for predictor traits on reference and evaluated animals on the genomic breeding value accuracy for a scarcely recorded trait. The scarcely recorded trait was dry matter intake (DMI, n = 869) and the predictor traits were fat-protein-corrected milk (FPCM, n = 1520) and live weight (LW, n = 1309). All phenotyped animals were genotyped and originated from research farms in Ireland, the United Kingdom and the Netherlands. Multi-trait REML was used to simultaneously estimate variance components and breeding values for DMI using available predictors. In addition, analyses using only pedigree relationships were performed. Breeding value accuracy was assessed through cross-validation (CV) and prediction error variance (PEV). CV groups (n = 7) were defined by splitting animals across genetic lines and management groups within country. With no additional traits recorded for the evaluated animals, both CV- and PEV-based accuracies for DMI were substantially higher for genomic than for pedigree analyses (CV: max. 0.26 for pedigree and 0.33 for genomic analyses; PEV: max. 0.45 and 0.52, respectively). With additional traits available, the differences between pedigree and genomic accuracies diminished. With additional recording for FPCM, pedigree accuracies increased from 0.26 to 0.47 for CV and from 0.45 to 0.48 for PEV. Genomic accuracies increased from 0.33 to 0.50 for CV and from 0.52 to 0.53 for PEV. With additional recording for LW instead of FPCM, pedigree accuracies increased to 0.54 for CV and to 0.61 for PEV. Genomic accuracies increased to 0.57 for CV and to 0.60 for PEV. With both FPCM and LW available for evaluated animals, accuracy was highest (0.62 for CV and 0.61 for PEV in
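
    As a hypothetical sketch of the two accuracy measures used above: an accuracy derived from prediction error variance (using the standard animal-breeding relation accuracy = sqrt(1 - PEV/additive variance)) and a cross-validation-style correlation between predictions and phenotypes. All numbers are made up.

      # Illustrative sketch only: PEV-based versus cross-validation-based accuracy.
      import numpy as np

      def pev_accuracy(pev, sigma2_additive):
          # Reliability = 1 - PEV / sigma^2_a; accuracy is its square root.
          return np.sqrt(1.0 - pev / sigma2_additive)

      print(f"PEV-based accuracy: {pev_accuracy(pev=1.8, sigma2_additive=2.5):.2f}")

      # Cross-validation style accuracy: correlation between predicted breeding values
      # and phenotypes of animals held out from the reference population (simulated).
      rng = np.random.default_rng(6)
      true_bv = rng.normal(0, 1, 300)
      phenotype = true_bv + rng.normal(0, 1.5, 300)            # scarcely recorded, noisy trait
      predicted_bv = 0.5 * true_bv + rng.normal(0, 0.6, 300)   # imperfect predictions
      print(f"CV-style accuracy (correlation): {np.corrcoef(predicted_bv, phenotype)[0, 1]:.2f}")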

  1. Astronomic Position Accuracy Capability Study.

    DTIC Science & Technology

    1979-10-01

    portion of F. E. Warren AFB, Wyoming. The three points were called THEODORE ECC, TRACY, and JIM and consisted of metal tribrachs plastered to cinder...sets were computed as a deviation from the standard. Accuracy figures were determined from these residuals. Homogeneity of variances was tested using

  2. The hidden KPI registration accuracy.

    PubMed

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually.

  3. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  4. Inventory accuracy in 60 days!

    PubMed

    Miller, G J

    1997-08-01

    Despite great advances in manufacturing technology and management science, thousands of organizations still don't have a handle on basic inventory accuracy. Many companies don't even measure it properly, or at all, and lack corrective action programs to improve it. This article offers an approach that has proven successful a number of times, when companies were quite serious about making improvements. Not only can it be implemented, but also it can likely be implemented within 60 days per area, if properly managed. The hardest part is selling people on the need to improve and then keeping them motivated. The net cost of such a program? Probably less than nothing, since the benefits gained usually far exceed the costs. Improved inventory accuracy can aid in enhancing customer service, determining purchasing and manufacturing priorities, reducing operating costs, and increasing the accuracy of financial records. This article also addresses the gap in contemporary literature regarding accuracy program features for repetitive, JIT, cellular, and process- and project-oriented environments.

  5. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.

  6. Cross-validated stable-isotope dilution GC-MS and LC-MS/MS assays for monoacylglycerol lipase (MAGL) activity by measuring arachidonic acid released from the endocannabinoid 2-arachidonoyl glycerol.

    PubMed

    Kayacelebi, Arslan Arinc; Schauerte, Celina; Kling, Katharina; Herbers, Jan; Beckmann, Bibiana; Engeli, Stefan; Jordan, Jens; Zoerner, Alexander A; Tsikas, Dimitrios

    2017-03-15

    2-Arachidonoyl glycerol (2AG) is an endocannabinoid that activates cannabinoid (CB) receptors CB1 and CB2. Monoacylglycerol lipase (MAGL) inactivates 2AG through hydrolysis to arachidonic acid (AA) and glycerol, thus modulating the activity at CB receptors. In the brain, AA released from 2AG by the action of MAGL serves as a substrate for cyclooxygenases which produce pro-inflammatory prostaglandins. Here we report stable-isotope GC-MS and LC-MS/MS assays for the reliable measurement of MAGL activity. The assays utilize deuterium-labeled 2AG (d8-2AG; 10μM) as the MAGL substrate and measure deuterium-labeled AA (d8-AA; range 0-1μM) as the MAGL product. Unlabelled AA (d0-AA, 1μM) serves as the internal standard. d8-AA and d0-AA are extracted from the aqueous buffered incubation mixtures by ethyl acetate. Upon solvent evaporation the residue is reconstituted in the mobile phase prior to LC-MS/MS analysis or in anhydrous acetonitrile for GC-MS analysis. LC-MS/MS analysis is performed in the negative electrospray ionization mode by selected-reaction monitoring the mass transitions [M-H](-)→[M-H - CO2](-), i.e., m/z 311→m/z 267 for d8-AA and m/z 303→m/z 259 for d0-AA. Prior to GC-MS analysis d8-AA and d0-AA were converted to their pentafluorobenzyl (PFB) esters by means of PFB-Br. GC-MS analysis is performed in the electron-capture negative-ion chemical ionization mode by selected-ion monitoring the ions [M-PFB](-), i.e., m/z 311 for d8-AA and m/z 303 for d0-AA. The GC-MS and LC-MS/MS assays were cross-validated. Linear regression analysis between the concentration (range, 0-1μM) of d8-AA measured by LC-MS/MS (y) and that by GC-MS (x) revealed a straight line (r(2)=0.9848) with the regression equation y=0.003+0.898x, indicating a good agreement. In dog liver, we detected MAGL activity that was inhibitable by the MAGL inhibitor JZL-184. Exogenous eicosatetraynoic acid is suitable as internal standard for the quantitative determination of d8-AA produced from d8
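
    The cross-validation of the two assays reduces to regressing the concentrations measured by one method on those measured by the other; the sketch below reproduces that kind of comparison with simulated paired measurements (the reported slope and intercept are reused only to generate the synthetic data).

      # Illustrative sketch only: agreement between two assays assessed by linear regression,
      # as in the reported relation y = 0.003 + 0.898x. Paired measurements are simulated.
      import numpy as np
      from scipy.stats import linregress

      rng = np.random.default_rng(7)
      conc_gcms = np.linspace(0.0, 1.0, 25)                               # d8-AA by GC-MS (uM), synthetic
      conc_lcms = 0.003 + 0.898 * conc_gcms + rng.normal(0, 0.02, 25)     # d8-AA by LC-MS/MS (uM), synthetic

      fit = linregress(conc_gcms, conc_lcms)
      print(f"y = {fit.intercept:.3f} + {fit.slope:.3f}x, r^2 = {fit.rvalue ** 2:.4f}")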

  7. MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS

    EPA Science Inventory

    Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...

  8. Accuracy of implant impression techniques.

    PubMed

    Assif, D; Marshak, B; Schmidt, A

    1996-01-01

    Three impression techniques were assessed for accuracy in a laboratory cast that simulated clinical practice. The first technique used autopolymerizing acrylic resin to splint the transfer copings. The second involved splinting of the transfer copings directly to an acrylic resin custom tray. In the third, only impression material was used to orient the transfer copings. The accuracy of stone casts with implant analogs was measured against a master framework. The fit of the framework on the casts was tested using strain gauges. The technique using acrylic resin to splint transfer copings in the impression material was significantly more accurate than the two other techniques. Stresses observed in the framework are described and discussed with suggestions to improve clinical and laboratory techniques.

  9. A high accuracy sun sensor

    NASA Astrophysics Data System (ADS)

    Bokhove, H.

    The High Accuracy Sun Sensor (HASS) is described, concentrating on measurement principle, the CCD detector used, the construction of the sensorhead and the operation of the sensor electronics. Tests on a development model show that the main aim of a 0.01-arcsec rms stability over a 10-minute period is closely approached. Remaining problem areas are associated with the sensor sensitivity to illumination level variations, the shielding of the detector, and the test and calibration equipment.

  10. Enhancing and evaluating diagnostic accuracy.

    PubMed

    Swets, J A; Getty, D J; Pickett, R M; D'Orsi, C J; Seltzer, S E; McNeil, B J

    1991-01-01

    Techniques that may enhance diagnostic accuracy in clinical settings were tested in the context of mammography. Statistical information about the relevant features among those visible in a mammogram and about their relative importances in the diagnosis of breast cancer was the basis of two decision aids for radiologists: a checklist that guides the radiologist in assigning a scale value to each significant feature of the images of a particular case, and a computer program that merges those scale values optimally to estimate a probability of malignancy. A test set of approximately 150 proven cases (including normals and benign and malignant lesions) was interpreted by six radiologists, first in their usual manner and later with the decision aids. The enhancing effect of these feature-analytic techniques was analyzed across subsets of cases that were restricted progressively to more and more difficult cases, where difficulty was defined in terms of the radiologists' judgements in the standard reading condition. Accuracy in both standard and enhanced conditions decreased regularly and substantially as case difficulty increased, but differentially, such that the enhancement effect grew regularly and substantially. For the most difficult case sets, the observed increases in accuracy translated into an increase of about 0.15 in sensitivity (true-positive proportion) for a selected specificity (true-negative proportion) of 0.85 or a similar increase in specificity for a selected sensitivity of 0.85. That measured accuracy can depend on case-set difficulty to different degrees for two diagnostic approaches has general implications for evaluation in clinical medicine. Comparative, as well as absolute, assessments of diagnostic performances--for example, of alternative imaging techniques--may be distorted by inadequate treatments of this experimental variable. Subset analysis, as defined and illustrated here, can be useful in alleviating the problem.

  11. A simple and fast approach to prediction of protein secondary structure from multiply aligned sequences with accuracy above 70%.

    PubMed Central

    Mehta, P. K.; Heringa, J.; Argos, P.

    1995-01-01

    To improve secondary structure predictions in protein sequences, the information residing in multiple sequence alignments of substituted but structurally related proteins is exploited. A database comprised of 70 protein families and a total of 2,500 sequences, some of which were aligned by tertiary structural superpositions, was used to calculate residue exchange weight matrices within alpha-helical, beta-strand, and coil substructures, respectively. Secondary structure predictions were made based on the observed residue substitutions in local regions of the multiple alignments and the largest possible associated exchange weights in each of the three matrix types. Comparison of the observed and predicted secondary structure on a per-residue basis yielded a mean accuracy of 72.2%. Individual alpha-helix, beta-strand, and coil states were respectively predicted at 66.7, and 75.8% correctness, representing a well-balanced three-state prediction. The accuracy level, verified by cross-validation through jack-knife tests on all protein families, dropped, on average, to only 70.9%, indicating the rigor of the prediction procedure. On the basis of robustness, conceptual clarity, accuracy, and executable efficiency, the method has considerable advantage, especially with its sole reliance on amino acid substitutions within structurally related proteins. PMID:8580842
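
    A minimal sketch of the per-residue scoring used above: three-state (Q3) accuracy and per-state correctness computed from observed and predicted secondary-structure strings. The strings are made-up examples, not output of the described method.

      # Illustrative sketch only: per-residue three-state (Q3) accuracy for secondary
      # structure prediction, plus per-state accuracies. Strings below are invented.
      observed  = "HHHHHCCCEEEECCHHHHCCCEEEE"
      predicted = "HHHHCCCCEEEHCCHHHCCCCEEEE"

      def q3(observed, predicted):
          # Fraction of residues whose predicted state matches the observed state.
          return sum(o == p for o, p in zip(observed, predicted)) / len(observed)

      def per_state(observed, predicted, state):
          idx = [i for i, o in enumerate(observed) if o == state]
          return sum(predicted[i] == state for i in idx) / len(idx)

      print(f"Q3 = {100 * q3(observed, predicted):.1f}%")
      for s in "HEC":
          print(f"state {s}: {100 * per_state(observed, predicted, s):.1f}% correct")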

  12. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
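
    As a hypothetical sketch of the benchmark comparison described above, the snippet below computes RMSE for a forecast and for a random-walk benchmark, and one common form of Theil's U (the ratio of the two RMSEs); the series are invented.

      # Illustrative sketch only: forecast accuracy versus a random-walk benchmark.
      import numpy as np

      actual   = np.array([104.0, 107.0, 103.0, 110.0, 114.0, 111.0, 117.0, 120.0])
      forecast = np.array([102.0, 108.0, 105.0, 108.0, 115.0, 113.0, 115.0, 121.0])

      def rmse(pred, obs):
          return np.sqrt(np.mean((pred - obs) ** 2))

      # Naive random-walk benchmark: forecast each period with the previous actual value.
      rw_forecast = actual[:-1]
      model_rmse = rmse(forecast[1:], actual[1:])
      rw_rmse = rmse(rw_forecast, actual[1:])
      theil_u = model_rmse / rw_rmse     # U < 1 means the model beats the no-change benchmark
      print(f"model RMSE {model_rmse:.2f}, random-walk RMSE {rw_rmse:.2f}, Theil U {theil_u:.2f}")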

  13. Comparison of the accuracy of kriging and IDW interpolations in estimating groundwater arsenic concentrations in Texas.

    PubMed

    Gong, Gordon; Mattevada, Sravan; O'Bryant, Sid E

    2014-04-01

    Exposure to arsenic causes many diseases. Most Americans in rural areas use groundwater for drinking, which may contain arsenic above the currently allowable level, 10µg/L. It is cost-effective to estimate groundwater arsenic levels based on data from wells with known arsenic concentrations. We compared the accuracy of several commonly used interpolation methods in estimating arsenic concentrations in >8000 wells in Texas by the leave-one-out-cross-validation technique. Correlation coefficient between measured and estimated arsenic levels was greater with inverse distance weighted (IDW) than kriging Gaussian, kriging spherical or cokriging interpolations when analyzing data from wells in the entire Texas (p<0.0001). Correlation coefficient was significantly lower with cokriging than any other methods (p<0.006) for wells in Texas, east Texas or the Edwards aquifer. Correlation coefficient was significantly greater for wells in southwestern Texas Panhandle than in east Texas, and was higher for wells in Ogallala aquifer than in Edwards aquifer (p<0.0001) regardless of interpolation methods. In regression analysis, the best models are when well depth and/or elevation were entered into the model as covariates regardless of area/aquifer or interpolation methods, and models with IDW are better than kriging in any area/aquifer. In conclusion, the accuracy in estimating groundwater arsenic level depends on both interpolation methods and wells' geographic distributions and characteristics in Texas. Taking well depth and elevation into regression analysis as covariates significantly increases the accuracy in estimating groundwater arsenic level in Texas with IDW in particular.
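
    A minimal sketch of leave-one-out cross-validation of inverse distance weighted (IDW) interpolation, with synthetic well coordinates and arsenic concentrations; the IDW power parameter and the data are assumptions, not values from the study.

      # Illustrative sketch only: IDW interpolation evaluated by leave-one-out cross-validation.
      import numpy as np

      rng = np.random.default_rng(8)
      xy = rng.uniform(0, 100, size=(200, 2))                    # well locations (km), synthetic
      arsenic = 5 + 0.1 * xy[:, 0] + rng.gamma(2.0, 2.0, 200)    # arsenic (ug/L), synthetic trend + noise

      def idw(xy_known, values, xy_target, power=2.0):
          d = np.linalg.norm(xy_known - xy_target, axis=1)
          w = 1.0 / np.maximum(d, 1e-9) ** power                 # inverse-distance weights
          return np.sum(w * values) / np.sum(w)

      estimates = np.array([
          idw(np.delete(xy, i, axis=0), np.delete(arsenic, i), xy[i])   # leave well i out
          for i in range(len(xy))
      ])
      r = np.corrcoef(estimates, arsenic)[0, 1]
      print(f"leave-one-out correlation between measured and IDW-estimated arsenic: {r:.2f}")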

  14. Assessment of the genomic prediction accuracy for feed efficiency traits in meat-type chickens

    PubMed Central

    Wang, Jie; Ma, Jie; Shu, Dingming; Lund, Mogens Sandø; Su, Guosheng; Qu, Hao

    2017-01-01

    Feed represents the major cost of chicken production. Selection for improving feed utilization is a feasible way to reduce feed cost and greenhouse gas emissions. The objectives of this study were to investigate the efficiency of genomic prediction for feed conversion ratio (FCR), residual feed intake (RFI), average daily gain (ADG) and average daily feed intake (ADFI) and to assess the impact of selection for feed efficiency traits FCR and RFI on eviscerating percentage (EP), breast muscle percentage (BMP) and leg muscle percentage (LMP) in meat-type chickens. Genomic prediction was assessed using a 4-fold cross-validation for two validation scenarios. The first scenario was a random family sampling validation (CVF), and the second scenario was a random individual sampling validation (CVR). Variance components were estimated based on the genomic relationship built with single nucleotide polymorphism markers. Genomic estimated breeding values (GEBV) were predicted using a genomic best linear unbiased prediction model. The accuracies of GEBV were evaluated in two ways: the correlation between GEBV and corrected phenotypic value divided by the square root of heritability, i.e., the correlation-based accuracy, and model-based theoretical accuracy. Breeding values were also predicted using a conventional pedigree-based best linear unbiased prediction model in order to compare accuracies of genomic and conventional predictions. The heritability estimates of FCR and RFI were 0.29 and 0.50, respectively. The heritability estimates of ADG, ADFI, EP, BMP and LMP ranged from 0.34 to 0.53. In the CVF scenario, the correlation-based accuracy and the theoretical accuracy of genomic prediction for FCR were slightly higher than those for RFI. The correlation-based accuracies for FCR, RFI, ADG and ADFI were 0.360, 0.284, 0.574 and 0.520, respectively, and the model-based theoretical accuracies were 0.420, 0.414, 0.401 and 0.382, respectively. In the CVR scenario, the correlation
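
    The correlation-based accuracy defined above (correlation between GEBV and corrected phenotype, divided by the square root of heritability) can be sketched as follows with simulated values; h2 = 0.29 is borrowed from the FCR estimate purely as an example.

      # Illustrative sketch only: correlation-based accuracy of genomic predictions.
      import numpy as np

      rng = np.random.default_rng(9)
      n, h2 = 400, 0.29
      true_bv = rng.normal(0, np.sqrt(h2), n)
      corrected_phenotype = true_bv + rng.normal(0, np.sqrt(1 - h2), n)
      gebv = 0.6 * true_bv + rng.normal(0, 0.3, n)          # imperfect genomic predictions

      corr = np.corrcoef(gebv, corrected_phenotype)[0, 1]
      accuracy = corr / np.sqrt(h2)                         # correlation-based accuracy
      print(f"corr(GEBV, phenotype) = {corr:.2f}, correlation-based accuracy = {accuracy:.2f}")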

  15. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  16. Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset

    PubMed Central

    Lipps, David; Devineni, Sree

    2016-01-01

    MiRNAs are short non-coding RNAs of about 22 nucleotides, which play critical roles in gene expression regulation. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict whether RNA transcripts contain miRNAs. Although very successful, these predictors have started to face multiple challenges in recent years. Many predictors were optimized using datasets of hundreds of miRNA samples. The sizes of these datasets are much smaller than the number of known miRNAs. Consequently, the prediction accuracy of these predictors in large datasets becomes unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity. These optimization strategies may bring in serious limitations in applications. Moreover, to meet continuously raised expectations on these computational tools, improving the prediction accuracy becomes extremely important. In this study, a meta-predictor mirMeta was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations, and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on the newly designed large dataset is improved by 7%, to 93%. The meta-predictor also proved to be less dependent on datasets and has a more refined balance between sensitivity and specificity. This study is important in two ways: first, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors. Second, a new miRNA predictor with significantly improved prediction accuracy

  17. The accuracy of the ACSM and a new cycle ergometry equation for young women.

    PubMed

    Latin, R W; Berg, K E

    1994-05-01

    The purpose of this study was to determine the accuracy of the American College of Sports Medicine's (ACSM) equation for estimating the oxygen cost of exercise performed by women on a cycle ergometer. Sixty healthy, young females performed a five-stage submaximal cycle ergometry test. Results indicated the SEE for the predicted oxygen values ranged from 79-156 ml.min-1, with total errors (E) ranging from 107-275 ml.min-1. Correlations between the actual and predicted values ranged from r = -0.22 to r = 0.38. The r, SEE, and E were 0.96, 118, and 172, respectively, for all of the power loads combined. A revised equation was developed based upon the actual VO2-power relationship. This equation appears as: VO2 (ml.min-1) = (work rate in kgm.min-1 x 1.6) + (3.5 ml.kg-1.min-1 x body weight in kg) + 205 ml.min-1. Cross-validation was performed on an independent sample of 40 subjects. All of the SEE and E were lower and all of the correlations were higher at each power load in the validation sample. Since the revised equation is based on an actual VO2-power relationship, it would appear that it provides a more accurate depiction of the cycle ergometry VO2-power relationship for women. These facts support its use.
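
    A small sketch implementing the revised equation quoted above; the example work rate and body weight are arbitrary.

      # Illustrative sketch only: the revised cycle-ergometry equation quoted above,
      # VO2 (ml/min) = work rate (kgm/min) x 1.6 + 3.5 x body weight (kg) + 205.
      def vo2_revised(work_rate_kgm_min, body_weight_kg):
          """Predicted oxygen uptake in ml/min for submaximal cycle ergometry."""
          return work_rate_kgm_min * 1.6 + 3.5 * body_weight_kg + 205.0

      # Example: a 60 kg woman cycling at 450 kgm/min (roughly 75 W).
      print(f"{vo2_revised(450, 60):.0f} ml/min")   # 450*1.6 + 3.5*60 + 205 = 1135 ml/min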

  18. High accuracy time transfer synchronization

    NASA Technical Reports Server (NTRS)

    Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.

    1995-01-01

    In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL) designed spread spectrum modem, built by Allen Osborne Associates (AOA). A Motorola six channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.

  19. Genomic Prediction in Pea: Effect of Marker Density and Training Population Size and Composition on Prediction Accuracy

    PubMed Central

    Tayeh, Nadim; Klein, Anthony; Le Paslier, Marie-Christine; Jacquin, Françoise; Houtin, Hervé; Rond, Céline; Chabert-Martinello, Marianne; Magnin-Robert, Jean-Bernard; Marget, Pascal; Aubert, Grégoire; Burstin, Judith

    2015-01-01

    Pea is an important food and feed crop and a valuable component of low-input farming systems. Improving resistance to biotic and abiotic stresses is a major breeding target to enhance yield potential and regularity. Genomic selection (GS) has lately emerged as a promising technique to increase the accuracy and gain of marker-based selection. It uses genome-wide molecular marker data to predict the breeding values of candidate lines to selection. A collection of 339 genetic resource accessions (CRB339) was subjected to high-density genotyping using the GenoPea 13.2K SNP Array. Genomic prediction accuracy was evaluated for thousand seed weight (TSW), the number of seeds per plant (NSeed), and the date of flowering (BegFlo). Mean cross-environment prediction accuracies reached 0.83 for TSW, 0.68 for NSeed, and 0.65 for BegFlo. For each trait, the statistical method, the marker density, and/or the training population size and composition used for prediction were varied to investigate their effects on prediction accuracy: the effect was large for the size and composition of the training population but limited for the statistical method and marker density. Maximizing the relatedness between individuals in the training and test sets, through the CDmean-based method, significantly improved prediction accuracies. A cross-population cross-validation experiment was further conducted using the CRB339 collection as a training population set and nine recombinant inbred lines populations as test set. Prediction quality was high with mean Q2 of 0.44 for TSW and 0.59 for BegFlo. Results are discussed in the light of current efforts to develop GS strategies in pea. PMID:26635819

  20. Accuracy of perturbative master equations.

    PubMed

    Fleming, C H; Cummings, N I

    2011-03-01

    We consider open quantum systems with dynamics described by master equations that have perturbative expansions in the system-environment interaction. We show that, contrary to intuition, full-time solutions of order-2n accuracy require an order-(2n+2) master equation. We give two examples of such inaccuracies in the solutions to an order-2n master equation: order-2n inaccuracies in the steady state of the system and order-2n positivity violations. We show how these arise in a specific example for which exact solutions are available. This result has a wide-ranging impact on the validity of coupling (or friction) sensitive results derived from second-order convolutionless, Nakajima-Zwanzig, Redfield, and Born-Markov master equations.

  1. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders containing identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without the reliance on combusting naturally occurring materials, thereby improving analytical accuracy.

  2. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and luminescent emission from PSP is obtained based on studies of radiative energy transports in PSP and photodetector response to luminescence. This relation provides insights into the physical origins of various elemental error sources and allows an estimate of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and the upper bounds of the elemental errors to meet required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flows is given to illustrate uncertainty estimates in PSP measurements.

  3. Issues of model accuracy and uncertainty evaluation in the context of multi-model analysis

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Foglia, L.; Mehl, S.; Burlando, P.

    2009-12-01

    Thorough consideration of alternative conceptual models is an important and often neglected step in the study of many natural systems, including groundwater systems. This means that many modelling efforts are less useful for system management than they could be because they exclude alternatives considered important by some stakeholders, which makes them more vulnerable to criticism. Important steps include identifying reasonable alternative models and possibly using model discrimination criteria and associated model averaging to improve predictions and measures of prediction uncertainty. Here we use the computer code MMA (Multi-Model Analysis) to: (1) manage the model discrimination statistics produced by many alternative models, (2) manage predictions, and (3) calculate measures of prediction uncertainty. Steps (1) to (3) also assist in understanding the physical processes most important to model fit and to the predictions of interest. We focus on the ability of a groundwater model constructed using MODFLOW to predict heads and flows in the Maggia Valley, Southern Switzerland, where connections between groundwater, surface water and ecology are of interest. Sixty-four alternative models were designed deterministically and differ in how the river, recharge, bedrock topography, and hydraulic conductivity are characterized. None of the models correctly represent heads and flows in the Northern and Southern part of the valley simultaneously. A cross-validation experiment was conducted to compare model discrimination results with the ability of the models to predict eight heads and three flows to the stream along three reaches midway along the valley where ecological consequences and, therefore, model accuracy are of great concern. Results suggest: (1) Model averaging appears to have improved prediction accuracy in the problem considered. (2) The most significant model improvements occurred with introduction of spatially distributed recharge and improved bedrock topography. (3) The

  4. High accuracy broadband infrared spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Venkataramanan

    Mueller matrix spectroscopy, or spectropolarimetry, combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies on infrared polarization properties of materials covering a broad spectral range have been scarce due to the lack of available instrumentation. This dissertation aims to fill the gap by the design, development, calibration and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3-12 μm waveband and offers better overall accuracy compared to the previous generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter and numerical simulations are conducted to study the effectiveness of the proposed technique. Insights into the geometrical structure of the polarimetric measurement matrix are provided to aid further research towards global optimization of Mueller matrix polarimeters. A high performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on Penicillin and pine pollen are also presented.

  5. ACCURACY OF CO2 SENSORS

    SciTech Connect

    Fisk, William J.; Faulkner, David; Sullivan, Douglas P.

    2008-10-01

    Are the carbon dioxide (CO2) sensors in your demand controlled ventilation systems sufficiently accurate? The data from these sensors are used to automatically modulate minimum rates of outdoor air ventilation. The goal is to keep ventilation rates at or above design requirements while adjusting the ventilation rate with changes in occupancy in order to save energy. Studies of energy savings from demand controlled ventilation and of the relationship of indoor CO2 concentrations with health and work performance provide a strong rationale for use of indoor CO2 data to control minimum ventilation rates [1-7]. However, this strategy will only be effective if, in practice, the CO2 sensors have a reasonable accuracy. The objective of this study was, therefore, to determine if CO2 sensor performance, in practice, is generally acceptable or problematic. This article provides a summary of study methods and findings; additional details are available in a paper in the proceedings of the ASHRAE IAQ 2007 Conference [8].

  6. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  7. [Accuracy of HDL cholesterol measurements].

    PubMed

    Niedmann, P D; Luthe, H; Wieland, H; Schaper, G; Seidel, D

    1983-02-01

    The widespread use of different methods for the determination of HDL-cholesterol (in Europe: sodium phosphotungstic acid/MgCl2 in connection with enzymatic procedures; in the USA: heparin/MnCl2 followed by the Liebermann-Burchard method) but common reference values makes it necessary to evaluate not only the accuracy, specificity, and precision of the precipitation step but also those of the subsequent cholesterol determination. A high ratio of serum vs. concentrated precipitation reagent (10:1 V/V) leads to the formation of variable amounts of delta-3,5-cholestadiene. This substance is not recognized by cholesterol oxidase but leads to a 1.6-fold overestimation by the Liebermann-Burchard method. Therefore, errors in HDL-cholesterol determination should be considered, and differences of up to 30% may occur between HDL-cholesterol values determined by the different techniques (heparin/MnCl2 - Liebermann-Burchard and NaPW/MgCl2 - CHOD-PAP).

  8. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  9. Phase segmentation of X-ray computer tomography rock images using machine learning techniques: an accuracy and performance study

    NASA Astrophysics Data System (ADS)

    Chauhan, Swarup; Rühaak, Wolfram; Anbergen, Hauke; Kabdenov, Alen; Freise, Marcus; Wille, Thorsten; Sass, Ingo

    2016-07-01

    Performance and accuracy of machine learning techniques to segment rock grains, matrix and pore voxels from a 3-D volume of X-ray tomographic (XCT) grayscale rock images were evaluated. The segmentation and classification capability of unsupervised (k-means, fuzzy c-means, self-organized maps), supervised (artificial neural networks, least-squares support vector machines) and ensemble classifiers (bagging and boosting) was tested using XCT images of andesite volcanic rock, Berea sandstone, Rotliegend sandstone and a synthetic sample. The averaged porosity obtained for andesite (15.8 ± 2.5 %), Berea sandstone (16.3 ± 2.6 %), Rotliegend sandstone (13.4 ± 7.4 %) and the synthetic sample (48.3 ± 13.3 %) is in very good agreement with the respective laboratory measurement data and varies by a factor of 0.2. The k-means algorithm is the fastest of all machine learning algorithms, whereas the least-squares support vector machine is the most computationally expensive. The metrics entropy, purity, root mean square error and the receiver operating characteristic curve, together with 10-fold cross-validation, were used to determine the accuracy of the unsupervised, supervised and ensemble classifier techniques. In general, the accuracy was found to be largely affected by the feature vector selection scheme. As it is always a trade-off between performance and accuracy, it is difficult to isolate one particular machine learning algorithm which is best suited for the complex phase segmentation problem. Therefore, our investigation provides parameters that can help in selecting the appropriate machine learning techniques for phase segmentation.
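    The sketch below illustrates the simplest of the unsupervised techniques named above, k-means clustering of grayscale voxel intensities into three phases, with porosity taken as the fraction of pore-phase voxels. The synthetic intensity populations and the three-phase assumption are illustrative only.

```python
# Hedged sketch: k-means phase segmentation of XCT-like grayscale voxels into
# pore, matrix and grain phases, with porosity as the fraction of pore voxels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic grayscale voxel intensities: three populations (pore, matrix, grain)
voxels = np.concatenate([
    rng.normal(50, 10, 5000),     # dark voxels ~ pores
    rng.normal(120, 15, 15000),   # intermediate ~ matrix
    rng.normal(200, 10, 5000),    # bright ~ grains
]).reshape(-1, 1)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)

# Identify the pore phase as the cluster with the lowest mean intensity
cluster_means = [voxels[labels == k].mean() for k in range(3)]
pore_cluster = int(np.argmin(cluster_means))
porosity = np.mean(labels == pore_cluster)
print(f"estimated porosity: {porosity:.3f}")
```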

  10. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    PubMed

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or no incentive) and peer-performance anchor (95%, 55%, or none). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased the anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of the anchoring effect, but if the anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  11. Spacecraft attitude determination accuracy from mission experience

    NASA Astrophysics Data System (ADS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-10-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  12. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-01-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  13. Automated characterisation of ultrasound images of ovarian tumours: the diagnostic accuracy of a support vector machine and image processing with a local binary pattern operator

    PubMed Central

    Khazendar, S.; Sayasneh, A.; Al-Assam, H.; Du, H.; Kaijser, J.; Ferrara, L.; Timmerman, D.; Jassim, S.; Bourne, T.

    2015-01-01

    Introduction: Preoperative characterisation of ovarian masses into benign or malignant is of paramount importance to optimise patient management. Objectives: In this study, we developed and validated a computerised model to characterise ovarian masses as benign or malignant. Materials and methods: Transvaginal 2D B mode static ultrasound images of 187 ovarian masses with known histological diagnosis were included. Images were first pre-processed and enhanced, and Local Binary Pattern Histograms were then extracted from 2 × 2 blocks of each image. A Support Vector Machine (SVM) was trained using stratified cross validation with randomised sampling. The process was repeated 15 times and in each round 100 images were randomly selected. Results: The SVM classified the original non-treated static images as benign or malignant masses with an average accuracy of 0.62 (95% CI: 0.59-0.65). This performance significantly improved to an average accuracy of 0.77 (95% CI: 0.75-0.79) when images were pre-processed, enhanced and treated with a Local Binary Pattern operator (mean difference 0.15; 95% CI: 0.11-0.19; p < 0.0001, two-tailed t test). Conclusion: We have shown that an SVM can classify static 2D B mode ultrasound images of ovarian masses into benign and malignant categories. The accuracy improves if texture-related LBP features extracted from the images are considered. PMID:25897367
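    A minimal sketch of the pipeline described above follows: Local Binary Pattern histograms as texture features for an SVM evaluated with stratified cross-validation. The synthetic textures, LBP parameters and SVM settings are assumptions, not the authors' exact protocol.

```python
# Hedged sketch: LBP texture histograms + SVM with stratified cross-validation.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)

def make_image(coarse):
    """Synthetic texture: coarse (blocky) vs fine-grained noise."""
    if coarse:
        return np.kron(rng.normal(size=(16, 16)), np.ones((4, 4)))  # 64x64 blocky texture
    return rng.normal(size=(64, 64))

def lbp_histogram(image, P=8, R=1.0):
    """Uniform LBP histogram of a single grayscale image."""
    img8 = ((image - image.min()) / np.ptp(image) * 255).astype(np.uint8)
    codes = local_binary_pattern(img8, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Two texture classes standing in for the benign/malignant image groups
images = [make_image(coarse) for coarse in (True, False) for _ in range(40)]
labels = np.array([0] * 40 + [1] * 40)
features = np.array([lbp_histogram(img) for img in images])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="rbf"), features, labels, cv=cv)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```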

  14. Canopy Temperature and Vegetation Indices from High-Throughput Phenotyping Improve Accuracy of Pedigree and Genomic Selection for Grain Yield in Wheat

    PubMed Central

    Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi

    2016-01-01

    Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362

  15. Stereotype Accuracy: Toward Appreciating Group Differences.

    ERIC Educational Resources Information Center

    Lee, Yueh-Ting, Ed.; And Others

    The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…

  16. [Upon scientific accuracy scheme at clinical specialties].

    PubMed

    Ortega Calvo, M

    2006-11-01

    Will medical specialties become like sciences in the future? Yes, progressively they will. Accuracy in clinical specialties will be different in the future because of advances in formal logic, mathematics and quantum physics, and the applications of relativity theory. Evidence-based medicine is now helping clinical specialties toward scientific accuracy by way of decision theory.

  17. Sound source localization identification accuracy: bandwidth dependencies.

    PubMed

    Yost, William A; Zhong, Xuan

    2014-11-01

    Sound source localization accuracy using a sound source identification task was measured in the front, right quarter of the azimuth plane as rms (root-mean-square) error (degrees) for stimulus conditions in which the bandwidth (1/20 to 2 octaves wide) and center frequency (250, 2000, 4000 Hz) of 200-ms noise bursts were varied. Tones of different frequencies (250, 2000, 4000 Hz) were also used. As stimulus bandwidth increases, there is an increase in sound source localization identification accuracy (i.e., rms error decreases). Wideband stimuli (>1 octave wide) produce best sound source localization accuracy (~6°-7° rms error), and localization accuracy for these wideband noise stimuli does not depend on center frequency. For narrow bandwidths (<1 octave) and tonal stimuli, accuracy does depend on center frequency such that highest accuracy is obtained for low-frequency stimuli (centered on 250 Hz), worse accuracy for mid-frequency stimuli (centered on 2000 Hz), and intermediate accuracy for high-frequency stimuli (centered on 4000 Hz).

  18. Accuracy of Parent Identification of Stuttering Occurrence

    ERIC Educational Resources Information Center

    Einarsdottir, Johanna; Ingham, Roger

    2009-01-01

    Background: Clinicians rely on parents to provide information regarding the onset and development of stuttering in their own children. The accuracy and reliability of their judgments of stuttering is therefore important and is not well researched. Aim: To investigate the accuracy of parent judgements of stuttering in their own children's speech…

  19. Increasing Deception Detection Accuracy with Strategic Questioning

    ERIC Educational Resources Information Center

    Levine, Timothy R.; Shaw, Allison; Shulman, Hillary C.

    2010-01-01

    One explanation for the finding of slightly above-chance accuracy in detecting deception experiments is limited variance in sender transparency. The current study sought to increase accuracy by increasing variance in sender transparency with strategic interrogative questioning. Participants (total N = 128) observed cheaters and noncheaters who…

  20. The Accuracy of Gender Stereotypes Regarding Occupations.

    ERIC Educational Resources Information Center

    Beyer, Sylvia; Finnegan, Andrea

    Given the salience of biological sex, it is not surprising that gender stereotypes are pervasive. To explore the prevalence of such stereotypes, the accuracy of gender stereotyping regarding occupations is presented in this paper. The paper opens with an overview of gender stereotype measures that use self-perceptions as benchmarks of accuracy,…

  1. Evaluating the accuracy of selenodesic reference grids

    NASA Technical Reports Server (NTRS)

    Koptev, A. A.

    1974-01-01

    Estimates were made of the accuracy of reference point grids using the technique of calculating the errors from theoretical analysis. Factors taken into consideration were: telescope accuracy, number of photographs, and libration amplitude. To solve the problem, formulas were used for the relationship between the coordinates of lunar surface points and their images on the photograph.

  2. Towards Arbitrary Accuracy Inviscid Surface Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Hixon, Ray

    2002-01-01

    Inviscid nonlinear surface boundary conditions are currently limited to third order accuracy in time for non-moving surfaces and actually reduce to first order in time when the surfaces move. For steady-state calculations it may be possible to achieve higher accuracy in space, but high accuracy in time is required for efficient simulation of multiscale unsteady phenomena. A surprisingly simple technique is shown here that can be used to correct the normal pressure derivatives of the flow at a surface on a Cartesian grid so that arbitrarily high order time accuracy is achieved in idealized cases. This work demonstrates that nonlinear high order time accuracy at a solid surface is possible and desirable, but it also shows that the current practice of only correcting the pressure is inadequate.

  3. Multi-atlas based segmentation of brain images: atlas selection and its effect on accuracy.

    PubMed

    Aljabar, P; Heckemann, R A; Hammers, A; Hajnal, J V; Rueckert, D

    2009-07-01

    Quantitative research in neuroimaging often relies on anatomical segmentation of human brain MR images. Recent multi-atlas based approaches provide highly accurate structural segmentations of the brain by propagating manual delineations from multiple atlases in a database to a query subject and combining them. The atlas databases which can be used for these purposes are growing steadily. We present a framework to address the consequent problems of scale in multi-atlas segmentation. We show that selecting a custom subset of atlases for each query subject provides more accurate subcortical segmentations than those given by non-selective combination of random atlas subsets. Using a database of 275 atlases, we tested an image-based similarity criterion as well as a demographic criterion (age) in a leave-one-out cross-validation study. Using a custom ranking of the database for each subject, we combined a varying number n of atlases from the top of the ranked list. The resulting segmentations were compared with manual reference segmentations using Dice overlap. Image-based selection provided better segmentations than random subsets (mean Dice overlap 0.854 vs. 0.811 for the estimated optimal subset size, n=20). Age-based selection resulted in a similar marked improvement. We conclude that selecting atlases from large databases for atlas-based brain image segmentation improves the accuracy of the segmentations achieved. We show that image similarity is a suitable selection criterion and give results based on selecting atlases by age that demonstrate the value of meta-information for selection.
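    The sketch below illustrates the two ingredients discussed above: an image-based similarity criterion used to rank atlases for a query subject, and the Dice overlap used to score the fused segmentation. The similarity measure (normalized cross-correlation), the majority-vote fusion and the synthetic data are illustrative assumptions.

```python
# Hedged sketch: image-similarity atlas ranking + Dice scoring of a fused labelling.
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def similarity(query, atlas_image):
    """Normalized cross-correlation used as an image-based selection criterion."""
    q = (query - query.mean()) / query.std()
    a = (atlas_image - atlas_image.mean()) / atlas_image.std()
    return float(np.mean(q * a))

rng = np.random.default_rng(2)
query_img = rng.normal(size=(32, 32))
reference_seg = rng.random((32, 32)) > 0.5          # manual reference (illustrative)

# Hypothetical atlas database: (intensity image, propagated label mask) pairs,
# degraded by increasing amounts of noise and label error
atlases = [(query_img + rng.normal(scale=s, size=(32, 32)),
            reference_seg ^ (rng.random((32, 32)) < s / 10))
           for s in (0.2, 0.5, 1.0, 2.0, 4.0)]

# Rank atlases by similarity to the query and fuse the top-n labels by majority vote
ranked = sorted(atlases, key=lambda pair: similarity(query_img, pair[0]), reverse=True)
top_n = 3
fused = np.mean([seg for _, seg in ranked[:top_n]], axis=0) >= 0.5
print(f"Dice of fused top-{top_n} segmentation: {dice(fused, reference_seg):.3f}")
```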

  4. The impact of free or standardized lifestyle and urine sampling protocol on metabolome recognition accuracy.

    PubMed

    Wallner-Liebmann, Sandra; Gralka, Ewa; Tenori, Leonardo; Konrad, Manuela; Hofmann, Peter; Dieber-Rotheneder, Martina; Turano, Paola; Luchinat, Claudio; Zatloukal, Kurt

    2015-01-01

    Urine contains a clear individual metabolic signature, although embedded within a large daily variability. Given the potential of metabolomics to monitor disease onset from deviations from the "healthy" metabolic state, we have evaluated the effectiveness of a standardized lifestyle in reducing the "metabolic" noise. Urine was collected from 24 (5 men and 19 women) healthy volunteers over a period of 10 days: phase I, days 1-7 in a real-life situation; phase II, days 8-10 on a standardized diet, with an exercise program added on day 10. Data on dietary intake and physical activity were analyzed by nation-specific software and monitored by published protocols. Urine samples were analyzed by (1)H NMR followed by multivariate statistics. The individual fingerprint emerged and consolidated as the number of samples increased, reaching ~100% cross-validated accuracy at about 40 samples. Diet standardization reduced both the intra-individual and the interindividual variability; the effect was due to a reduction in the dispersion of the concentration values of several metabolites. Under the standardized diet, however, the individual phenotype was still clearly visible, indicating that the individual's signature was a strong feature of the metabolome. Consequently, cohort studies designed to investigate the relation of individual metabolic traits and nutrition require multiple samples from each participant even under highly standardized lifestyle conditions in order to exploit the analytical potential of metabolomics. We have established criteria to facilitate the design of urine metabolomic studies aimed at monitoring the effects of drugs, lifestyle and dietary supplements, and at accurate determination of disease signatures.

  5. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones if existent or relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both Dice coefficient and Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only the measurement of individual users can change but also the ranking of users' segmentation skills may require reordering.
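    A minimal sketch of the idea follows: a zone-weighted variant of the Dice coefficient in which an anatomically critical zone contributes more to the score. The weight map and weighting scheme are assumptions for illustration, not the authors' exact anatomy-aware metric.

```python
# Hedged sketch: a per-pixel weighted Dice coefficient illustrating the idea of
# giving anatomically critical zones more influence on segmentation accuracy.
import numpy as np

def weighted_dice(seg, gold, weights):
    """Dice coefficient with a per-pixel anatomical weight map."""
    seg, gold = seg.astype(bool), gold.astype(bool)
    inter = (weights * (seg & gold)).sum()
    return 2.0 * inter / ((weights * seg).sum() + (weights * gold).sum())

rng = np.random.default_rng(3)
gold = rng.random((64, 64)) > 0.5
seg = gold ^ (rng.random((64, 64)) < 0.1)        # segmentation with ~10% label errors

# Hypothetical weight map: a "critical zone" in the image centre counts 3x
weights = np.ones((64, 64))
weights[24:40, 24:40] = 3.0

plain = weighted_dice(seg, gold, np.ones_like(weights))
aware = weighted_dice(seg, gold, weights)
print(f"plain Dice: {plain:.3f}, anatomy-weighted Dice: {aware:.3f}")
```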

  6. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  7. Accuracy of Small Base Metal Dental Castings,

    DTIC Science & Technology

    1980-07-10

    ... base metal alloys is countered by their inadequate casting accuracy. Until this problem can be overcome, the acceptance of such alloys for routine use ...

  8. Discrimination in measures of knowledge monitoring accuracy

    PubMed Central

    Was, Christopher A.

    2014-01-01

    Knowledge monitoring predicts academic outcomes in many contexts. However, measures of knowledge monitoring accuracy are often incomplete. In the current study, a measure of students’ ability to discriminate known from unknown information as a component of knowledge monitoring was considered. Undergraduate students’ knowledge monitoring accuracy was assessed and used to predict final exam scores in a specific course. It was found that gamma, a measure commonly used as the measure of knowledge monitoring accuracy, accounted for a small, but significant amount of variance in academic performance whereas the discrimination and bias indexes combined to account for a greater amount of variance in academic performance. PMID:25339979
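    The sketch below computes a Goodman-Kruskal gamma and a simple discrimination index from a 2x2 table of knowledge-monitoring judgments against test performance. The hit-rate-minus-false-alarm-rate form of the discrimination index is one common choice and may differ from the exact indexes used in the study; the counts are hypothetical.

```python
# Hedged sketch: gamma and a discrimination index from a 2x2 judgment/performance table.
# Hypothetical counts: rows = judgment ("know" / "don't know"), cols = (correct, incorrect)
a, b = 30, 10   # judged "know":       a correct, b incorrect
c, d = 8, 22    # judged "don't know": c correct, d incorrect

gamma = (a * d - b * c) / (a * d + b * c)   # concordance-based monitoring accuracy
hit_rate = a / (a + c)                      # P(judged "know" | correct)
false_alarm = b / (b + d)                   # P(judged "know" | incorrect)
discrimination = hit_rate - false_alarm

print(f"gamma = {gamma:.2f}, discrimination = {discrimination:.2f}")
```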

  9. Sun-pointing programs and their accuracy

    SciTech Connect

    Zimmerman, J.C.

    1981-05-01

    Several sun-pointing programs and their accuracy are described. FORTRAN program listings are given. Program descriptions are given for both Hewlett-Packard (HP-67) and Texas Instruments (TI-59) hand-held calculators.

  10. Critical thinking and accuracy of nurses' diagnoses.

    PubMed

    Lunney, Margaret

    2003-01-01

    Interpretations of patient data are complex and diverse, contributing to a risk of low accuracy nursing diagnoses. This risk is confirmed in research findings that accuracy of nurses' diagnoses varied widely from high to low. Highly accurate diagnoses are essential, however, to guide nursing interventions for the achievement of positive health outcomes. Development of critical thinking abilities is likely to improve accuracy of nurses' diagnoses. New views of critical thinking serve as a basis for critical thinking in nursing. Seven cognitive skills and ten habits of mind are identified as dimensions of critical thinking for use in the diagnostic process. Application of the cognitive skills of critical thinking illustrates the importance of using critical thinking for accuracy of nurses' diagnoses. Ten strategies are proposed for self-development of critical thinking abilities.

  11. Neural Mechanisms of Speed-Accuracy Tradeoff

    PubMed Central

    Heitz, Richard P.; Schall, Jeffrey D.

    2012-01-01

    SUMMARY Intelligent agents balance speed of responding with accuracy of deciding. Stochastic accumulator models commonly explain this speed-accuracy tradeoff by strategic adjustment of response threshold. Several laboratories identify specific neurons in prefrontal and parietal cortex with this accumulation process, yet no neurophysiological correlates of speed-accuracy tradeoff have been described. We trained macaque monkeys to trade speed for accuracy on cue during visual search and recorded the activity of neurons in the frontal eye field. Unpredicted by any model, we discovered that speed-accuracy tradeoff is accomplished through several distinct adjustments. Visually responsive neurons modulated baseline firing rate, sensory gain, and the duration of perceptual processing. Movement neurons triggered responses with activity modulated in a direction opposite of model predictions. Thus, current stochastic accumulator models provide an incomplete description of the neural processes accomplishing speed-accuracy tradeoffs. The diversity of neural mechanisms was reconciled with the accumulator framework through an integrated accumulator model constrained by requirements of the motor system. PMID:23141072

  12. Microsatellite mutation rates in the eastern tiger salamander (Ambystoma tigrinum tigrinum) differ 10-fold across loci.

    PubMed

    Bulut, Zafer; McCormick, Cory R; Gopurenko, David; Williams, Rod N; Bos, David H; DeWoody, J Andrew

    2009-07-01

    Microsatellites are commonly used for mapping and population genetics because of their high heterozygosities and allelic variability (i.e., polymorphism). Microsatellite markers are generally more polymorphic than other types of molecular markers such as allozymes or SNPs because the insertions/deletions that give rise to microsatellite variability are relatively common compared to nucleotide substitutions. Nevertheless, direct evidence of microsatellite mutation rates (MMRs) is lacking in most vertebrate groups despite the importance of such estimates to key population parameters (e.g., genetic differentiation or θ = 4N_e μ). Herein, we present empirical data on MMRs in eastern tiger salamanders (Ambystoma tigrinum tigrinum). We conducted captive breeding trials and genotyped over 1,000 offspring at a suite of microsatellite loci. These data on 7,906 allele transfers provide the first direct estimates of MMRs in amphibians, and they illustrate that MMRs can vary by more than an order of magnitude across loci within a given species (one locus had ten mutations whereas the others had none).

  13. Cesarean Delivery Rates Vary 10-Fold Among US Hospitals; Reducing Variation May Address Quality, Cost Issues

    PubMed Central

    Kozhimannil, Katy Backes; Law, Michael R.; Virnig, Beth A.

    2013-01-01

    Cesarean delivery is the most commonly performed surgical procedure in the United States, and cesarean rates are increasing. Working with 2009 data from 593 US hospitals nationwide, we found that cesarean rates varied tenfold across hospitals, from 7.1 percent to 69.9 percent. Even for women with lower-risk pregnancies, in which more limited variation might be expected, cesarean rates varied fifteen-fold, from 2.4 percent to 36.5 percent. Thus, vast differences in practice patterns are likely to be driving the costly overuse of cesarean delivery in many US hospitals. Because Medicaid pays for nearly half of US births, government efforts to decrease variation are warranted. We focus on four promising directions for reducing these variations, including better coordination of maternity care, more data collection and measurement, tying Medicaid payment to quality improvement, and enhancing patient-centered decision making through public reporting. PMID:23459732

  14. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number

  15. Effect of species rarity on the accuracy of species distribution models for reptiles and amphibians in southern California

    USGS Publications Warehouse

    Franklin, J.; Wejnert, K.E.; Hathaway, S.A.; Rochester, C.J.; Fisher, R.N.

    2009-01-01

    Aim: Several studies have found that more accurate predictive models of species' occurrences can be developed for rarer species; however, one recent study found the relationship between range size and model performance to be an artefact of sample prevalence, that is, the proportion of presence versus absence observations in the data used to train the model. We examined the effect of model type, species rarity class, species' survey frequency, detectability and manipulated sample prevalence on the accuracy of distribution models developed for 30 reptile and amphibian species. Location: Coastal southern California, USA. Methods: Classification trees, generalized additive models and generalized linear models were developed using species presence and absence data from 420 locations. Model performance was measured using sensitivity, specificity and the area under the curve (AUC) of the receiver-operating characteristic (ROC) plot based on twofold cross-validation, or on bootstrapping. Predictors included climate, terrain, soil and vegetation variables. Species were assigned to rarity classes by experts. The data were sampled to generate subsets with varying ratios of presences and absences to test for the effect of sample prevalence. Join count statistics were used to characterize spatial dependence in the prediction errors. Results: Species in classes with higher rarity were more accurately predicted than common species, and this effect was independent of sample prevalence. Although positive spatial autocorrelation remained in the prediction errors, it was weaker than was observed in the species occurrence data. The differences in accuracy among model types were slight. Main conclusions: Using a variety of modelling methods, more accurate species distribution models were developed for rarer than for more common species. This was presumably because it is difficult to discriminate suitable from unsuitable habitat for habitat generalists, and not as an artefact of the
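    As an illustration of the kind of evaluation described above, the sketch below fits a generalized linear model (logistic regression) to synthetic presence/absence data, estimates AUC with twofold cross-validation, and shows one crude way of manipulating sample prevalence by subsampling absences. Predictors, prevalence and model settings are assumptions.

```python
# Hedged sketch: twofold cross-validated AUC for a presence/absence distribution model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 420
X = rng.normal(size=(n, 4))                      # climate/terrain/soil/vegetation stand-ins
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 1.0      # intercept shifts species prevalence
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # presence (True) / absence (False)

def cv_auc(X, y, n_splits=2):
    aucs = []
    for train, test in StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0).split(X, y):
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))
    return np.mean(aucs)

print(f"prevalence = {y.mean():.2f}, twofold CV AUC = {cv_auc(X, y):.3f}")

# Manipulating sample prevalence: keep all presences, subsample absences to a 1:1 ratio
pres, abs_ = np.where(y)[0], np.where(~y)[0]
keep = np.concatenate([pres, rng.choice(abs_, size=len(pres), replace=False)])
print(f"balanced-prevalence twofold CV AUC = {cv_auc(X[keep], y[keep]):.3f}")
```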

  16. Concurrent Classification Accuracy of the HIV Dementia Scale for HIV-associated Neurocognitive Disorders in the CHARTER Cohort

    PubMed Central

    Sakamoto, Maiko; Marcotte, Thomas D.; Umlauf, Anya; Franklin, Donald; Heaton, Robert K.; Ellis, Ronald J.; Letendre, Scott; Alexander, Terry; McCutchan, J. Allen; Morgan, Erin E.; Woods, Steven Paul; Collier, Ann C.; Marra, Christina M.; Clifford, David B.; Gelman, Benjamin B.; McArthur, Justin C.; Morgello, Susan; Simpson, David; Grant, Igor

    2012-01-01

    Background The HIV Dementia Scale (HDS) was developed to screen for HIV-associated Neurocognitive Disorders (HAND), but concerns have persisted regarding its substandard sensitivity. This study aimed to examine the classification accuracy of the HDS using raw and norm-based cutpoints, and to evaluate the contribution of the HDS subtests to predicting HAND. Methods 1,580 HIV-infected participants from 6 U.S. sites completed the HDS, and a gold standard neuropsychological battery, on which 51% of participants were impaired. Results: Sensitivity and specificity to HAND using the standard raw HDS cutpoint were 24% and 92%, respectively. The raw HDS subtests of attention, recall, and psychomotor speed significantly contributed to classification of HAND, while visuomotor construction contributed the least. A modified raw cutpoint of 14 yielded sensitivity of 66% and specificity of 61%, with cross-validation. Using norms also significantly improved sensitivity to 69% with a concomitant reduction of specificity to 56%, while the positive predictive value declined from 75% to 62% and negative predictive value improved from 54% to 64%. The HDS showed similarly modest rates of sensitivity and specificity among subpopulations of individuals with minimal comorbidity and successful viral suppression. Conclusions Findings indicate that while the HDS is a statistically significant predictor of HAND, particularly when adjusted for demographic factors, its relatively low diagnostic classification accuracy continues to hinder its clinical utility. A raw cutpoint of 14 greatly improved the sensitivity of the previously established raw cutscore, but may be subject to ceiling effects, particularly on repeat assessments. PMID:23111573
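    The sketch below shows how sensitivity, specificity, PPV and NPV follow from a screening cutpoint compared against a gold-standard diagnosis. The scores, the cutpoint of 14 and the 51% impairment prevalence are used only to mirror the quantities discussed; the data are simulated, not CHARTER data.

```python
# Hedged sketch: classification accuracy of a screening cutpoint vs a gold standard.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
impaired = rng.random(n) < 0.51                          # gold-standard impairment status
# Hypothetical screening scores (lower = worse), shifted downward for impaired participants
score = np.where(impaired, rng.normal(12, 3, n), rng.normal(15, 2, n))

cutpoint = 14                                            # flag as impaired if score <= cutpoint
flagged = score <= cutpoint

tp = np.sum(flagged & impaired)
tn = np.sum(~flagged & ~impaired)
fp = np.sum(flagged & ~impaired)
fn = np.sum(~flagged & impaired)

sens = tp / (tp + fn)
spec = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```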

  17. Effect of flying altitude, scanning angle and scanning mode on the accuracy of ALS based forest inventory

    NASA Astrophysics Data System (ADS)

    Keränen, Juha; Maltamo, Matti; Packalen, Petteri

    2016-10-01

    Airborne laser scanning (ALS) is a widely used technology in the mapping of the environment and forests. Data acquisition costs and the accuracy of the forest inventory are closely dependent on some extrinsic parameters of the ALS survey. These parameters were assessed in numerous studies about a decade ago, but since then ALS devices have developed, and it is possible that previous findings do not hold true with newer technology. For this reason, the effects of flying altitude (2000, 2500 or 3000 m), scanning angle (±15° and ±20° off nadir) and scanning mode (single and multiple pulses in air) were studied here with the area-based approach, using a Leica ALS70HA laser scanner. The study was conducted in a managed pine-dominated forest area in Finland, where eight separate discrete-return ALS datasets were acquired. The comparison of datasets was based on the bootstrap approach with 5-fold cross validation. Results indicated that the narrower scanning angle (±15° i.e. 30°) led to slightly more accurate estimates of plot volume (RMSE%: 21-24 vs. 22.5-25) and mean height (RMSE%: 8.5-11 vs. 9-12). We also tested the use case where the models are constructed using one dataset and then applied to other datasets gathered with different parameters. The most accurate models were identified using the bootstrap approach and applied to different datasets with and without refitting. The bias increased without refitting the models (bias%: volume 0 ± 10, mean height 0 ± 3), but in most cases the results did not differ much in terms of RMSE%. This confirms previous observations that models should only be used for datasets collected under similar data acquisition conditions. We also calculated the proportions of echoes as a function of height for different echo categories. This indicated that the accuracy of the inventory is affected more by the height distribution than by the proportions of echo categories.
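    A minimal sketch of the cross-validated accuracy metrics used above follows: k-fold cross-validation of a plot-level regression model, reporting relative RMSE (RMSE%) and relative bias (bias%). The plot data, predictors and the linear model are synthetic placeholders rather than the study's ALS metrics and estimators.

```python
# Hedged sketch: k-fold cross-validated RMSE% and bias% for plot volume estimation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(6)
n_plots = 200
height_metrics = rng.uniform(5, 30, (n_plots, 3))      # stand-ins for ALS height/density metrics
volume = 8 * height_metrics[:, 0] + 3 * height_metrics[:, 1] + rng.normal(0, 20, n_plots)

pred = np.empty(n_plots)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(height_metrics):
    model = LinearRegression().fit(height_metrics[train], volume[train])
    pred[test] = model.predict(height_metrics[test])

errors = pred - volume
rmse_pct = 100 * np.sqrt(np.mean(errors ** 2)) / volume.mean()
bias_pct = 100 * errors.mean() / volume.mean()
print(f"RMSE% = {rmse_pct:.1f}, bias% = {bias_pct:.1f}")
```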

  18. Linear Models for Field Trials, Smoothing, and Cross-Validation.

    DTIC Science & Technology

    1984-01-01

    Spatial methods for the analysis of agricultural field trials ... in fertility and other environmental factors. Blocking methods are customarily used, even when blocks have no physical meaning in the experiment, but ... (Technical Summary Report #2779, December 1984)

  19. Internet Attack Traceback: Cross-Validation and Pebble-Trace

    DTIC Science & Technology

    2013-02-28

    Candidate key regions extracted from a memory image and from captured network traffic are passed through a pattern filter, an entropy filter, and a verifier (a characteristic verifier combined with an entropy verifier) to identify the correct keys. A key is a permutation of values in 0-255; because the randomness of ciphertext comes mostly from symmetric encryption schemes, the entropy verifier selects the candidate key with the largest entropy drop as the real key.

  20. Cross Validation of Selection of Variables in Multiple Regression.

    DTIC Science & Technology

    1979-12-01

    Predictor variables included navigation, sensory, and communication ratings for fighter, bomber, and cargo aircraft (FGTNAV, BOMNAV, CARNAV, FGTSEN, BOMSEN, FGTCOM). For the analysis, each variable was recoded to 0 if its value was less than 1 and to 1 if it was 1 or over.

  1. Cross-Validation of the Self-Motivation Inventory.

    ERIC Educational Resources Information Center

    Heiby, Elaine M.; And Others

    Because the literature suggests that aerobic exercise is associated with physical health and psychological well-being, there is a concern with discovering how to improve adherence to such exercise. There is growing evidence that self-motivation, as measured by the Dishman Self-Motivation Inventory (SMI), is a predictor of adherence to regular…

  2. Evaluation and cross-validation of Environmental Models

    NASA Astrophysics Data System (ADS)

    Lemaire, Joseph

    Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a Commission of professional experts appointed by an established International Union or Association (e.g. IAGA for Geomagnetism and Aeronomy, . . . ) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given, indicating that different values for the Earth's radius have been employed in different data processing laboratories, institutes or agencies, to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to the other, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model which had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more of them to be uncovered by careful, independent examination and benchmarking. A meter prototype, the standard unit of length, was determined on 20 May 1875 during the Diplomatic Conference of the Meter and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent wild, uncontrolled dissemination of pseudo Environmental Models and Standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the marketplace have been built consistently with the same units system, and that they are based on identical definitions of the coordinate systems, etc. Therefore, preliminary analyses should be carried out under the control and authority of an established international professional Organization or Association, before any final political decision is made by ISO to select specific Environmental Models, such as IGRF and DGRF. Of course, Commissions responsible for checking the consistency of definitions, methods and algorithms for data processing might consider delegating specific tasks (e.g. bench-marking the technical tools, the calibration procedures, the methods of data analysis, and the software algorithms employed in building the different types of models, as well as their usage) to private, intergovernmental or international organizations/agencies (e.g. NASA, ESA, AGU, EGU, COSPAR, . . . ); the latter should then report their conclusions to the Commission members appointed by IAGA or any established authority like IUGG.

  3. Cross-Validation of Experimental USAF Pilot Training Performance Models

    DTIC Science & Technology

    1990-05-01

    The BAT battery is a computerized test battery designed to measure individual differences in psychomotor skills and information processing abilities. Measures of individual differences in psychomotor skills, information processing abilities, personality, and attitudes helped to reduce uncertainty in making pilot candidate selection decisions.

  4. Cross-Ethnic Cross Validation of Aptitude Batteries.

    ERIC Educational Resources Information Center

    Pike, Lewis W.; Mahoney, Margaret H.

    How well an aptitude test battery predicts rated job performance for Negroes and whites, and how well a battery selected for one group predicts performance for the other, is examined. Supervisory ratings were used as the criterion of job performance. Tests selected to predict performance in the job of Medical Laboratory technicians were validated…

  5. [Cross validity of the UCLA Loneliness Scale factorization].

    PubMed

    Borges, Africa; Prieto, Pedro; Ricchetti, Giacinto; Hernández-Jorge, Carmen; Rodríguez-Naveiras, Elena

    2008-11-01

    Loneliness is an unpleasant experience that takes place when a person's network of social relationships is significantly deficient in quality and quantity, and it is associated with negative feelings. Loneliness is a fundamental construct that provides information about several psychological processes, especially in the clinical setting. It is well known that this construct is related to isolation and emotional loneliness. One of the most well-known psychometric instruments to measure loneliness is the revised UCLA Loneliness Scale, which has been factorized in several populations. A controversial issue related to the UCLA Loneliness Scale is its factor structure, because the test was first created based on a unidimensional structure; however, subsequent research has proved that its structure may be bipolar or even multidimensional. In the present work, the UCLA Loneliness Scale was completed by two populations: Spanish and Italian undergraduate university students. Results show a multifactorial structure in both samples. This research presents a theoretically and analytically coherent bifactorial structure.

  6. An L1 smoothing spline algorithm with cross validation

    NASA Astrophysics Data System (ADS)

    Bosworth, Ken W.; Lall, Upmanu

    1993-08-01

    We propose an algorithm for the computation of L1 (LAD) smoothing splines in the spaces $W_M(D)$. We assume one is given data of the form $y_i = f(t_i) + \epsilon_i$, $i = 1, \ldots, N$, with $\{t_i\}_{i=1}^N \subset D$, where the $\epsilon_i$ are errors with $E(\epsilon_i) = 0$ and $f$ is assumed to be in $W_M$. The LAD smoothing spline, for fixed smoothing parameter $\lambda \geq 0$, is defined as the solution $s_\lambda$ of the optimization problem $\min_g \, (1/N)\sum_{i=1}^N |y_i - g(t_i)| + \lambda J_M(g)$, where $J_M(g)$ is the seminorm consisting of the sum of the squared $L_2$ norms of the $M$th partial derivatives of $g$. Such an LAD smoothing spline, $s_\lambda$, would be expected to give robust smoothed estimates of $f$ in situations where the $\epsilon_i$ are from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing $s_\lambda$ is given which is based on considering a sequence of quadratic programming problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily if a good initial point is available. The "data-driven" selection of the smoothing parameter is achieved by minimizing a cross-validation score $CV(\lambda)$. The combined LAD-CV smoothing spline algorithm is a continuation scheme in $\lambda \searrow 0$ taken on the above SQPs parametrized in $\lambda$, with the optimal smoothing parameter taken to be that value of $\lambda$ at which the $CV(\lambda)$ score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.

  7. Towards Experimental Accuracy from the First Principles

    NASA Astrophysics Data System (ADS)

    Polyansky, O. L.; Lodi, L.; Tennyson, J.; Zobov, N. F.

    2013-06-01

    Producing ab initio ro-vibrational energy levels of small, gas-phase molecules with an accuracy of 0.10 cm^{-1} would constitute a significant step forward in theoretical spectroscopy and would place calculated line positions considerably closer to typical experimental accuracy. Such an accuracy has recently been achieved for the H_3^+ molecular ion for line positions up to 17 000 cm^{-1}. However, since H_3^+ is a two-electron system, the electronic structure methods used in this study are not applicable to larger molecules. A major breakthrough was reported in ref., where an accuracy of 0.10 cm^{-1} was achieved ab initio for seven water isotopologues. Calculated vibrational and rotational energy levels up to 15 000 cm^{-1} and J=25 resulted in a standard deviation of 0.08 cm^{-1} with respect to accurate reference data. As far as line intensities are concerned, we have already achieved for water a typical accuracy of 1%, which supersedes average experimental accuracy. Our results are being actively extended along two major directions. First, there are clear indications that our results for water can be improved to an accuracy of the order of 0.01 cm^{-1} by further, detailed ab initio studies. Such a level of accuracy would already be competitive with experimental results in some situations. A second, major, direction of study is the extension of such a 0.1 cm^{-1} accuracy to molecules containing more electrons or more than one non-hydrogen atom, or both. As examples of such developments we will present new results for CO, HCN and H_2S, as well as preliminary results for NH_3 and CH_4. O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky and A.G. Csaszar, Phil. Trans. Royal Soc. London A, 370, 5014-5027 (2012). O.L. Polyansky, R.I. Ovsyannikov, A.A. Kyuberis, L. Lodi, J. Tennyson and N.F. Zobov, J. Phys. Chem. A (in press). L. Lodi, J. Tennyson and O.L. Polyansky, J. Chem. Phys. 135, 034113 (2011).

  8. Activity monitor accuracy in persons using canes.

    PubMed

    Wendland, Deborah Michael; Sprigle, Stephen H

    2012-01-01

    The StepWatch activity monitor has not been validated on multiple indoor and outdoor surfaces in a population using ambulation aids. The aims of this technical report are to report on strategies to configure the StepWatch activity monitor on subjects using a cane and to report the accuracy of both leg-mounted and cane-mounted StepWatch devices on people ambulating over different surfaces while using a cane. Sixteen subjects aged 67 to 85 yr (mean 75.6) who regularly use a cane for ambulation participated. StepWatch calibration was performed by adjusting sensitivity and cadence. Following calibration optimization, accuracy was tested on both the leg-mounted and cane-mounted devices on different surfaces, including linoleum, sidewalk, grass, ramp, and stairs. The leg-mounted device had an accuracy of 93.4% across all surfaces, while the cane-mounted device had an aggregate accuracy of 84.7% across all surfaces. Accuracy of the StepWatch on the stairs was significantly less accurate (p < 0.001) when comparing surfaces using repeated measures analysis of variance. When monitoring community mobility, placement of a StepWatch on a person and his/her ambulation aid can accurately document both activity and device use.

  9. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multiplate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  10. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^{-15} for periods of 30-100 days.

  11. Recognition of extraversion level based on handwriting and support vector machines.

    PubMed

    Górska, Zuzanna; Janicki, Artur

    2012-06-01

    This study investigated whether it is possible to train a machine to discriminate levels of extraversion based on handwriting variables. Support vector machines (SVMs) were used as a learning algorithm. Handwriting of 883 people (404 men, 479 women) was examined. Extraversion was measured using the Polish version of the NEO-Five Factor Inventory. The handwriting samples were described by 48 variables. The support vector machines were separately trained and tested for each sex, using 10-fold cross-validation. Good recognition accuracy (around .7) was achieved for 10 handwriting variables, different for men and women. The results suggest the existence of a relationship between handwriting elements and extraversion.
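    The sketch below mirrors the evaluation design described above: an SVM trained on handwriting-style feature vectors and scored with 10-fold cross-validation, separately per sex. Feature values, labels, kernel and C are illustrative assumptions.

```python
# Hedged sketch: 10-fold cross-validated SVM accuracy on handwriting-style features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)

def evaluate(n_subjects, n_features=10):
    X = rng.normal(size=(n_subjects, n_features))                       # handwriting variables
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n_subjects)) > 0    # extraversion level
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return cross_val_score(clf, X, y, cv=10).mean()

print(f"men:   mean 10-fold CV accuracy = {evaluate(404):.2f}")
print(f"women: mean 10-fold CV accuracy = {evaluate(479):.2f}")
```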

  12. Predicting caspase substrate cleavage sites based on a hybrid SVM-PSSM method.

    PubMed

    Li, Dandan; Jiang, Zhenran; Yu, Weiming; Du, Lei

    2010-12-01

    Caspases play an important role in many critical non-apoptosis processes by cleaving relevant substrates at cleavage sites. Identification of caspase substrate cleavage sites is the key to understand these processes. This paper proposes a hybrid method using support vector machine (SVM) in conjunction with position specific scoring matrices (PSSM) for caspase substrate cleavage sites prediction. Three encoding schemes including orthonormal binary encoding, BLOSUM62 matrix profile and PSSM profile of neighborhood surrounding the substrate cleavage sites were regarded as the input of SVM. The 10-fold cross validation results demonstrate that the SVM-PSSM method performs well with an overall accuracy of 97.619% on a larger dataset.
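
    A hedged sketch of one of the encoding schemes mentioned above: orthonormal (one-hot) encoding of a short sequence window feeding an SVM evaluated by 10-fold cross-validation. The window length, the synthetic sequences, and the SVM settings are assumptions; a PSSM or BLOSUM62 profile would replace the encoder in the real method.

```python
# Minimal sketch (assumptions throughout): one-hot window encoding + SVM
# with 10-fold cross-validation on synthetic sequences and labels.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_window(window: str) -> np.ndarray:
    """Encode a peptide window as a flat len(window) x 20 binary vector."""
    vec = np.zeros((len(window), 20))
    for pos, aa in enumerate(window):
        vec[pos, AA_INDEX[aa]] = 1.0
    return vec.ravel()

# Synthetic example: random 8-residue windows with random cleavage labels.
rng = np.random.default_rng(1)
windows = ["".join(rng.choice(list(AMINO_ACIDS), 8)) for _ in range(300)]
labels = rng.integers(0, 2, size=300)

X = np.array([one_hot_window(w) for w in windows])
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=10)
print(f"10-fold CV accuracy on synthetic data: {scores.mean():.2f}")
```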

  13. An adaptive deep learning approach for PPG-based identification.

    PubMed

    Jindal, V; Birjandtalab, J; Pouyan, M Baran; Nourani, M

    2016-08-01

    Wearable biosensors have become increasingly popular in healthcare due to their capabilities for low-cost and long-term biosignal monitoring. This paper presents a novel two-stage technique to offer biometric identification using these biosensors through Deep Belief Networks and Restricted Boltzmann Machines. Our identification approach improves robustness of current monitoring procedures within clinical, e-health and fitness environments using Photoplethysmography (PPG) signals through deep learning classification models. The approach is tested on the TROIKA dataset using 10-fold cross validation and achieved an accuracy of 96.1%.
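
    The sketch below is not the authors' two-stage Deep Belief Network; it substitutes a single Restricted Boltzmann Machine feature layer (scikit-learn's BernoulliRBM) followed by a logistic-regression classifier, evaluated with 10-fold cross-validation on synthetic PPG-like feature vectors, just to show the shape of such a pipeline. Dataset shapes and hyperparameters are assumptions.

```python
# Minimal sketch (not the authors' DBN): RBM feature learning + classifier,
# evaluated with 10-fold cross-validation on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(7)
n_subjects, segments_per_subject, n_features = 12, 40, 64
X = rng.random((n_subjects * segments_per_subject, n_features))
y = np.repeat(np.arange(n_subjects), segments_per_subject)  # identity labels

model = make_pipeline(
    MinMaxScaler(),                                   # RBM expects values in [0, 1]
    BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(model, X, y, cv=10)
print(f"10-fold CV identification accuracy (synthetic): {scores.mean():.2f}")
```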

  14. Measurement accuracies in band-limited extrapolation

    NASA Technical Reports Server (NTRS)

    Kritikos, H. N.

    1982-01-01

    The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation the larger effective aperture L' that can be realized from a finite aperture L by oversampling is a function of the accuracy of measurements. It is shown that for sampling in the interval $L/b \le |x| \le L$, $b \ge 1$, the signal must be known within an error $e_N$ given by $e_N^2 \approx \tfrac{1}{4}(2kL')^3 \left(\frac{e}{8b}\,\frac{L}{L'}\right)^{2kL'}$, where L is the physical aperture, L' is the extrapolated aperture, and $k = 2\pi/\lambda$.

  15. The measurement accuracy of passive radon instruments.

    PubMed

    Beck, T R; Foerster, E; Buchröder, H; Schmidt, V; Döring, J

    2014-01-01

    This paper analyses the data gathered from interlaboratory comparisons of passive radon instruments over 10 y with respect to the measurement accuracy. The measurement accuracy is discussed in terms of the systematic and the random measurement error. The analysis shows that the systematic measurement error of most instruments issued by professional laboratory services can be within a range of ±10 % from the true value. A single radon measurement has an additional random measurement error, which is in the range of up to ±15 % for high exposures to radon (>2000 kBq h m⁻³). The random measurement error increases for lower exposures. The analysis applies especially to instruments with solid-state nuclear track detectors and leads to proposed criteria for testing the measurement accuracy. Instruments with electrets and charcoal have also been considered, but the small amount of data allows only a qualitative discussion.

  16. Why do delayed summaries improve metacomprehension accuracy?

    PubMed

    Anderson, Mary C M; Thiede, Keith W

    2008-05-01

    We showed that metacomprehension accuracy improved when participants (N=87 college students) wrote summaries of texts prior to judging their comprehension; however, accuracy only improved when summaries were written after a delay, not when written immediately after reading. We evaluated two hypotheses proposed to account for this delayed-summarization effect (the accessibility hypothesis and the situation model hypothesis). The data suggest that participants based metacomprehension judgments more on the gist of texts when they generated summaries after a delay, whereas they based judgments more on details when they generated summaries immediately after reading. Focusing on information relevant to the situation model of a text (the gist of a text) produced higher levels of metacomprehension accuracy, which is consistent with the situation model hypothesis.

  17. Increasing Accuracy and Increasing Tension in H0

    NASA Astrophysics Data System (ADS)

    Freedman, Wendy L.

    2017-01-01

    The Hubble Constant, H0, provides a measure of the current expansion rate of the universe. In recent decades, there has been a huge increase in the accuracy with which extragalactic distances, and hence H0, can be measured. While the historical factor-of-two uncertainty in H0 has been resolved, a new discrepancy has arisen between the value of H0 measured in the local universe and that estimated from cosmic microwave background measurements, assuming a Lambda cold dark matter model. I will review the advances that have led to the increase in accuracy in measurements of H0, as well as describe exciting future prospects with the James Webb Space Telescope (JWST) and Gaia, which will make it feasible to measure extragalactic distances at percent-level accuracy in the next decade.

  18. The accuracy of Halley's cometary orbits

    NASA Astrophysics Data System (ADS)

    Hughes, D. W.

    The accuracy of a scientific computation depends in the main on the data fed in and the analysis method used. This statement is certainly true of Edmond Halley's cometary orbit work. Considering the 420 comets that had been seen before Halley's era of orbital calculation (1695 - 1702), only 24, according to him, had been observed well enough for their orbits to be calculated. Two questions are considered in this paper: do all the orbits listed by Halley have the same accuracy, and, secondly, how accurate was Halley's method of calculation?

  19. Field Accuracy Test of Rpas Photogrammetry

    NASA Astrophysics Data System (ADS)

    Barry, P.; Coakley, R.

    2013-08-01

    Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data, to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt that it was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions, within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 ha site. We flew a UAV over the test area in a "lawnmower track" pattern with an 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and inputted these into our photo modelling software, Agisoft PhotoScan. The remaining GPS-coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy we achieved throughout the 45 check points was 95% reliably within 41 mm horizontally and 68 mm vertically, with an 11.7 mm ground sample distance taken from a flight altitude above ground level of 90 m. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 ha. This finding has shown
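
    A minimal sketch of the check-point comparison described above, assuming the photogrammetry-derived and RTK GPS coordinates are already in a common grid; the coordinate arrays here are synthetic stand-ins, not Baseline Surveys data.

```python
# Minimal sketch: compare photogrammetry XYZ against RTK GPS check points and
# report the error bound met by 95% of points. All values are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_checkpoints = 45

# Hypothetical RTK GPS "truth" and photogrammetry-derived XYZ, in metres.
gps_xyz = rng.uniform(0, 200, size=(n_checkpoints, 3))
photo_xyz = gps_xyz + rng.normal(0, [0.02, 0.02, 0.03], size=(n_checkpoints, 3))

horizontal_err = np.hypot(*(photo_xyz[:, :2] - gps_xyz[:, :2]).T)
vertical_err = np.abs(photo_xyz[:, 2] - gps_xyz[:, 2])

print(f"95% of points within {np.percentile(horizontal_err, 95)*1000:.0f} mm horizontally")
print(f"95% of points within {np.percentile(vertical_err, 95)*1000:.0f} mm vertically")
```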

  20. Size-Dependent Accuracy of Nanoscale Thermometers.

    PubMed

    Alicki, Robert; Leitner, David M

    2015-07-23

    The accuracy of two classes of nanoscale thermometers is estimated in terms of size and system-dependent properties using the spin-boson model. We consider solid state thermometers, where the energy splitting is tuned by thermal properties of the material, and fluorescent organic thermometers, in which the fluorescence intensity depends on the thermal population of conformational states of the thermometer. The results of the theoretical model compare well with the accuracy reported for several nanothermometers that have been used to measure local temperature inside living cells.

  1. Estimation and Accuracy after Model Selection

    PubMed Central

    Efron, Bradley

    2013-01-01

    Classical statistical theory ignores model selection in assessing estimation accuracy. Here we consider bootstrap methods for computing standard errors and confidence intervals that take model selection into account. The methodology involves bagging, also known as bootstrap smoothing, to tame the erratic discontinuities of selection-based estimators. A useful new formula for the accuracy of bagging then provides standard errors for the smoothed estimators. Two examples, nonparametric and parametric, are carried through in detail: a regression model where the choice of degree (linear, quadratic, cubic, …) is determined by the Cp criterion, and a Lasso-based estimation problem. PMID:25346558
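
    In the spirit of the abstract (though not Efron's exact accuracy formula), the sketch below bags a Cp-selected polynomial fit: the degree is re-selected on every bootstrap resample, the bagged prediction is the average over resamples, and the spread of the resampled predictions gives a standard error that reflects the model-selection step. Data, noise level, and the prediction point are assumptions.

```python
# Minimal sketch of bootstrap smoothing ("bagging") of a selection-based
# estimator: degree chosen by a Cp-type criterion on each resample.
import numpy as np

rng = np.random.default_rng(0)
n = 60
x = np.linspace(0, 1, n)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.3, n)
x0 = 0.8                      # point at which we want a prediction and its SE

def cp_select_and_predict(x, y, x0, max_degree=3, sigma2=0.3**2):
    """Pick the degree minimising an (assumed-sigma) Cp criterion, predict at x0."""
    best = None
    for d in range(1, max_degree + 1):
        coef = np.polyfit(x, y, d)
        rss = np.sum((np.polyval(coef, x) - y) ** 2)
        cp = rss + 2 * (d + 1) * sigma2
        if best is None or cp < best[0]:
            best = (cp, coef)
    return np.polyval(best[1], x0)

B = 500
boot_preds = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)                       # bootstrap resample
    boot_preds[b] = cp_select_and_predict(x[idx], y[idx], x0)

# The mean of the resampled predictions is the bagged (smoothed) estimate; the
# spread is a crude SE for the selection-based estimator (Efron's paper gives a
# sharper formula for the SE of the smoothed estimator itself).
print(f"bagged prediction at x0: {boot_preds.mean():.3f}")
print(f"bootstrap spread (crude SE): {boot_preds.std(ddof=1):.3f}")
```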

  2. Predictive accuracy in the neuroprediction of rearrest

    PubMed Central

    Aharoni, Eyal; Mallett, Joshua; Vincent, Gina M.; Harenski, Carla L.; Calhoun, Vince D.; Sinnott-Armstrong, Walter; Gazzaniga, Michael S.; Kiehl, Kent A.

    2014-01-01

    A recently published study by the present authors (Aharoni et al., 2013) reported evidence that functional changes in the anterior cingulate cortex (ACC) within a sample of 96 criminal offenders who were engaged in a Go/No-Go impulse control task significantly predicted their rearrest following release from prison. In an extended analysis, we use discrimination and calibration techniques to test the accuracy of these predictions relative to more traditional models and their ability to generalize to new observations in both full and reduced models. Modest to strong discrimination and calibration accuracy were found, providing additional support for the utility of neurobiological measures in predicting rearrest. PMID:24720689
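
    A hedged sketch of the two ingredients named above, discrimination and calibration, computed with scikit-learn on synthetic predictions for a sample of 96; it is not the authors' analysis, and the scores and outcomes are invented.

```python
# Minimal sketch: discrimination (AUC) and a simple calibration summary for
# synthetic rearrest predictions; sample size loosely follows the abstract.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 96
risk_score = rng.random(n)                            # hypothetical predicted rearrest probability
rearrest = (rng.random(n) < risk_score).astype(int)   # outcomes loosely tied to the score

auc = roc_auc_score(rearrest, risk_score)             # discrimination
frac_pos, mean_pred = calibration_curve(rearrest, risk_score, n_bins=5)

print(f"AUC (discrimination): {auc:.2f}")
for p, o in zip(mean_pred, frac_pos):                 # calibration: predicted vs observed
    print(f"predicted {p:.2f} -> observed {o:.2f}")
```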

  3. Final Technical Report: Increasing Prediction Accuracy.

    SciTech Connect

    King, Bruce Hardison; Hansen, Clifford; Stein, Joshua

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  4. Accuracy of genomic breeding values in multibreed beef cattle populations derived from deregressed breeding values and phenotypes.

    PubMed

    Weber, K L; Thallman, R M; Keele, J W; Snelling, W M; Bennett, G L; Smith, T P L; McDaneld, T G; Allan, M F; Van Eenennaam, A L; Kuehn, L A

    2012-12-01

    Genomic selection involves the assessment of genetic merit through prediction equations that allocate genetic variation with dense marker genotypes. It has the potential to provide accurate breeding values for selection candidates at an early age and facilitate selection for expensive or difficult to measure traits. Accurate across-breed prediction would allow genomic selection to be applied on a larger scale in the beef industry, but the limited availability of large populations for the development of prediction equations has delayed researchers from providing genomic predictions that are accurate across multiple beef breeds. In this study, the accuracy of genomic predictions for 6 growth and carcass traits were derived and evaluated using 2 multibreed beef cattle populations: 3,358 crossbred cattle of the U.S. Meat Animal Research Center Germplasm Evaluation Program (USMARC_GPE) and 1,834 high accuracy bull sires of the 2,000 Bull Project (2000_BULL) representing influential breeds in the U.S. beef cattle industry. The 2000_BULL EPD were deregressed, scaled, and weighted to adjust for between- and within-breed heterogeneous variance before use in training and validation. Molecular breeding values (MBV) trained in each multibreed population and in Angus and Hereford purebred sires of 2000_BULL were derived using the GenSel BayesCπ function (Fernando and Garrick, 2009) and cross-validated. Less than 10% of large effect loci were shared between prediction equations trained on (USMARC_GPE) relative to 2000_BULL although locus effects were moderately to highly correlated for most traits and the traits themselves were highly correlated between populations. Prediction of MBV accuracy was low and variable between populations. For growth traits, MBV accounted for up to 18% of genetic variation in a pooled, multibreed analysis and up to 28% in single breeds. For carcass traits, MBV explained up to 8% of genetic variation in a pooled, multibreed analysis and up to 42% in

  5. Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy

    ERIC Educational Resources Information Center

    Maris, Gunter; van der Maas, Han

    2012-01-01

    Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…

  6. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423

  7. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h data are requested for high precision results; however, real life situations do not always let us collect 24 h data. Thus repeated GPS surveys of 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently some research groups turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now usually regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of the NASA/JPL GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of a year, covering two years in succession, are used in the analysis. Our major conclusion is that a reformulation of the GPS positioning accuracy is necessary when taking into account the seasonal effects, and the typical one-term accuracy formulation is expanded to a two-term one.

  8. Adult Metacomprehension: Judgment Processes and Accuracy Constraints

    ERIC Educational Resources Information Center

    Zhao, Qin; Linderholm, Tracy

    2008-01-01

    The objective of this paper is to review and synthesize two interrelated topics in the adult metacomprehension literature: the bases of metacomprehension judgment and the constraints on metacomprehension accuracy. Our review shows that adult readers base their metacomprehension judgments on different types of information, including experiences…

  9. Task Speed and Accuracy Decrease When Multitasking

    ERIC Educational Resources Information Center

    Lin, Lin; Cockerham, Deborah; Chang, Zhengsi; Natividad, Gloria

    2016-01-01

    As new technologies increase the opportunities for multitasking, the need to understand human capacities for multitasking continues to grow stronger. Is multitasking helping us to be more efficient? This study investigated the multitasking abilities of 168 participants, ages 6-72, by measuring their task accuracy and completion time when they…

  10. Accuracy Assessment for AG500, Electromagnetic Articulograph

    ERIC Educational Resources Information Center

    Yunusova, Yana; Green, Jordan R.; Mefferd, Antje

    2009-01-01

    Purpose: The goal of this article was to evaluate the accuracy and reliability of the AG500 (Carstens Medizinelectronik, Lenglern, Germany), an electromagnetic device developed recently to register articulatory movements in three dimensions. This technology seems to have unprecedented capabilities to provide rich information about time-varying…

  11. Least squares polynomial fits and their accuracy

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1977-01-01

    Equations are presented which attempt to fit least squares polynomials to tables of data. It is concluded that much data are needed to reduce the measurement error standard deviation by a significant amount; however, at certain points great accuracy is attained.
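
    The point of the abstract can be illustrated numerically: the sketch below fits least-squares polynomials to noisy tables of increasing size and shows how slowly the standard error of a fitted value shrinks with the number of points. Degree, noise level, and evaluation point are assumptions.

```python
# Minimal sketch: standard error of a least-squares polynomial fit versus the
# number of tabulated points (illustrative values only).
import numpy as np

rng = np.random.default_rng(2)
sigma, degree = 0.1, 2
true_coef = [0.5, -1.0, 2.0]          # quadratic, highest power first

for n_points in (10, 100, 1000):
    x = np.linspace(0, 1, n_points)
    errs = []
    for _ in range(500):
        y = np.polyval(true_coef, x) + rng.normal(0, sigma, n_points)
        fit = np.polyfit(x, y, degree)
        errs.append(np.polyval(fit, 0.5) - np.polyval(true_coef, 0.5))
    print(f"n={n_points:5d}: SE of fitted value at x=0.5 is {np.std(errs):.4f}")
```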

  12. A microwave position sensor with submillimeter accuracy

    NASA Astrophysics Data System (ADS)

    Stelzer, A.; Diskus, C. G.; Lubke, K.; Thim, H. W.

    1999-12-01

    Design and characteristics of a prototype distance sensor are presented in this paper. The radar front-end operates at 35 GHz and applies six-port technology and direct frequency measurement. The sensor makes use of both frequency-modulated continuous wave and interferometer principles and is capable of measuring distance with a very high accuracy of ±0.1 mm.

  13. Vowel Space Characteristics and Vowel Identification Accuracy

    ERIC Educational Resources Information Center

    Neel, Amy T.

    2008-01-01

    Purpose: To examine the relation between vowel production characteristics and intelligibility. Method: Acoustic characteristics of 10 vowels produced by 45 men and 48 women from the J. M. Hillenbrand, L. A. Getty, M. J. Clark, and K. Wheeler (1995) study were examined and compared with identification accuracy. Global (mean f0, F1, and F2;…

  14. Statistical Parameters for Describing Model Accuracy

    DTIC Science & Technology

    1989-03-20

    mean and the standard deviation, approximately characterizes the accuracy of the model, since the width of the confidence interval whose center is at... Using a modified version of Chebyshev's inequality, a similar result is obtained for the upper bound of the confidence interval width for any

  15. High Accuracy Transistor Compact Model Calibrations

    SciTech Connect

    Hembree, Charles E.; Mar, Alan; Robertson, Perry J.

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  16. Direct Behavior Rating: Considerations for Rater Accuracy

    ERIC Educational Resources Information Center

    Harrison, Sayward E.; Riley-Tillman, T. Chris; Chafouleas, Sandra M.

    2014-01-01

    Direct behavior rating (DBR) offers users a flexible, feasible method for the collection of behavioral data. Previous research has supported the validity of using DBR to rate three target behaviors: academic engagement, disruptive behavior, and compliance. However, the effect of the base rate of behavior on rater accuracy has not been established.…

  17. Accuracy of Depth to Water Measurements

    EPA Pesticide Factsheets

    Accuracy of depth to water measurements is an issue identified by the Forum as a concern of Superfund decision-makers as they attempt to determine directions of ground-water flow, areas of recharge or discharge, the hydraulic characteristics of...

  18. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied on bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value for the precision by means of a confidence interval for the specific measurement.

  19. Bayesian Methods for Medical Test Accuracy

    PubMed Central

    Broemeling, Lyle D.

    2011-01-01

    Bayesian methods for medical test accuracy are presented, beginning with the basic measures for tests with binary scores: true positive fraction, false positive fraction, positive predictive values, and negative predictive value. The Bayesian approach is taken because of its efficient use of prior information, and the analysis is executed with a Bayesian software package WinBUGS®. The ROC (receiver operating characteristic) curve gives the intrinsic accuracy of medical tests that have ordinal or continuous scores, and the Bayesian approach is illustrated with many examples from cancer and other diseases. Medical tests include X-ray, mammography, ultrasound, computed tomography, magnetic resonance imaging, nuclear medicine and tests based on biomarkers, such as blood glucose values for diabetes. The presentation continues with more specialized methods suitable for measuring the accuracies of clinical studies that have verification bias, and medical tests without a gold standard. Lastly, the review is concluded with Bayesian methods for measuring the accuracy of the combination of two or more tests. PMID:26859485
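
    The review works in WinBUGS; the sketch below instead uses a plain conjugate Beta-Binomial calculation to show the basic Bayesian estimates of sensitivity and specificity from a 2x2 table, with invented counts and a uniform prior.

```python
# Minimal sketch (not from the review): conjugate Bayesian estimates of
# sensitivity and specificity for a binary test; counts and prior are assumed.
from scipy import stats

# Hypothetical 2x2 table: rows = disease status, columns = test result.
tp, fn = 45, 5      # diseased subjects: test positive / negative
tn, fp = 90, 10     # healthy subjects:  test negative / positive

prior_a, prior_b = 1.0, 1.0   # uniform Beta(1, 1) prior

sens_post = stats.beta(prior_a + tp, prior_b + fn)   # posterior for sensitivity
spec_post = stats.beta(prior_a + tn, prior_b + fp)   # posterior for specificity

for name, post in [("sensitivity", sens_post), ("specificity", spec_post)]:
    lo, hi = post.interval(0.95)
    print(f"{name}: posterior mean {post.mean():.3f}, "
          f"95% credible interval ({lo:.3f}, {hi:.3f})")
```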

  20. 47 CFR 65.306 - Calculation accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    47 CFR 65.306 (2010-10-01): Calculation accuracy. Federal Communications Commission (continued), Common Carrier Services (continued), Interstate Rate of Return Prescription Procedures and Methodologies, Exchange Carriers, § 65.306 Calculation...

  1. Navigation Accuracy Guidelines for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2004-01-01

    Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
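
    A minimal sketch of the kind of guideline the abstract describes, using the standard two-body result that a semi-major axis difference da produces an along-track drift of roughly 3*pi*da per orbit; the orbit altitude and navigation-error value below are illustrative assumptions, not the paper's numbers.

```python
# Minimal sketch: along-track drift implied by a semi-major axis error,
# two-body approximation only; all values are illustrative.
import math

MU_EARTH = 3.986004418e14          # m^3/s^2
a = 6_778_000.0                    # semi-major axis of a ~400 km altitude orbit, m
da = 5.0                           # assumed relative semi-major axis error, m

period = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
drift_per_orbit = 3 * math.pi * da                 # along-track drift, m per orbit
orbits_per_day = 86400.0 / period

print(f"period: {period/60:.1f} min")
print(f"along-track drift: {drift_per_orbit:.1f} m per orbit, "
      f"{drift_per_orbit * orbits_per_day:.0f} m per day")
```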

  2. Accuracy of References in Five Entomology Journals.

    ERIC Educational Resources Information Center

    Kristof, Cynthia

    ln this paper, the bibliographical references in five core entomology journals are examined for citation accuracy in order to determine if the error rates are similar. Every reference printed in each journal's first issue of 1992 was examined, and these were compared to the original (cited) publications, if possible, in order to determine the…

  3. Method for measuring centroid algorithm accuracy

    NASA Technical Reports Server (NTRS)

    Klein, S.; Liewer, K.

    2002-01-01

    This paper will describe such a method for measuring the accuracy of centroid algorithms using a relatively inexpensive setup consisting of a white light source, lenses, a CCD camera, an electro-strictive actuator, and a DAC (Digital-to-Analog Converter), and employing embedded PowerPC, VxWorks, and Solaris based software.
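
    A hedged sketch of the measurement idea (not the paper's optical bench): synthesize a Gaussian spot at a known sub-pixel position, add noise, and compare the computed centroid with the commanded displacement. Spot size, noise level, and grid size are assumptions.

```python
# Minimal sketch: centroid error versus a known sub-pixel displacement on a
# synthetic noisy spot image; all parameters are assumed.
import numpy as np

rng = np.random.default_rng(11)

def gaussian_spot(shape, center, sigma=2.0):
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((xx - center[1])**2 + (yy - center[0])**2) / (2 * sigma**2))

def centroid(image):
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    total = image.sum()
    return image.ravel() @ yy.ravel() / total, image.ravel() @ xx.ravel() / total

errors = []
for true_shift in np.linspace(0.0, 1.0, 11):          # known sub-pixel displacements
    img = gaussian_spot((32, 32), (16.0, 16.0 + true_shift))
    img += rng.normal(0, 0.01, img.shape)             # read-noise stand-in
    _, cx = centroid(img)
    errors.append(cx - (16.0 + true_shift))

print(f"centroid error: mean {np.mean(errors):+.4f} px, rms {np.std(errors):.4f} px")
```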

  4. High accuracy gaseous x-ray detectors

    SciTech Connect

    Smith, G.C.

    1983-11-01

    An outline is given of the design and operation of high accuracy position-sensitive x-ray detectors suitable for experiments using synchrotron radiation. They are based on the gas proportional detector, with position readout using a delay line; a detailed examination is made of factors which limit spatial resolution. Individual wire readout may be used for extremely high counting rates.

  5. Observed Consultation: Confidence and Accuracy of Assessors

    ERIC Educational Resources Information Center

    Tweed, Mike; Ingham, Christopher

    2010-01-01

    Judgments made by the assessors observing consultations are widely used in the assessment of medical students. The aim of this research was to study judgment accuracy and confidence and the relationship between these. Assessors watched recordings of consultations, scoring the students on: a checklist of items; attributes of consultation; a…

  6. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
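
    The error-matrix bookkeeping described above is easy to make concrete; in the sketch below the matrix values are invented, rows are the interpreted classes, columns are the verified (reference) classes, and commission/omission errors are read from the off-diagonal row and column sums.

```python
# Minimal sketch: overall accuracy plus commission/omission errors from an
# invented classification error matrix (rows = interpretation, cols = reference).
import numpy as np

classes = ["forest", "shrub", "grass"]
error_matrix = np.array([
    [50,  5,  3],
    [ 4, 40,  6],
    [ 2,  8, 30],
])

correct = np.trace(error_matrix)
total = error_matrix.sum()
print(f"overall accuracy: {correct / total:.2%}")

for i, name in enumerate(classes):
    commission = 1 - error_matrix[i, i] / error_matrix[i, :].sum()  # complement of user's accuracy
    omission = 1 - error_matrix[i, i] / error_matrix[:, i].sum()    # complement of producer's accuracy
    print(f"{name:7s} commission error {commission:.2%}, omission error {omission:.2%}")
```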

  7. Medial Patellofemoral Ligament Reconstruction Femoral Tunnel Accuracy

    PubMed Central

    Hiemstra, Laurie A.; Kerslake, Sarah; Lafave, Mark

    2017-01-01

    Background: Medial patellofemoral ligament (MPFL) reconstruction is a procedure aimed to reestablish the checkrein to lateral patellar translation in patients with symptomatic patellofemoral instability. Correct femoral tunnel position is thought to be crucial to successful MPFL reconstruction, but the accuracy of this statement in terms of patient outcomes has not been tested. Purpose: To assess the accuracy of femoral tunnel placement in an MPFL reconstruction cohort and to determine the correlation between tunnel accuracy and a validated disease-specific, patient-reported quality-of-life outcome measure. Study Design: Case series; Level of evidence, 4. Methods: Between June 2008 and February 2014, a total of 206 subjects underwent an MPFL reconstruction. Lateral radiographs were measured to determine the accuracy of the femoral tunnel by measuring the distance from the center of the femoral tunnel to the Schöttle point. Banff Patella Instability Instrument (BPII) scores were collected a mean 24 months postoperatively. Results: A total of 155 (79.5%) subjects had adequate postoperative lateral radiographs and complete BPII scores. The mean duration of follow-up (±SD) was 24.4 ± 8.2 months (range, 12-74 months). Measurement from the center of the femoral tunnel to the Schöttle point resulted in 143 (92.3%) tunnels being categorized as “good” or “ideal.” There were 8 failures in the cohort, none of which occurred in malpositioned tunnels. The mean distance from the center of the MPFL tunnel to the center of the Schöttle point was 5.9 ± 4.2 mm (range, 0.5-25.9 mm). The mean postoperative BPII score was 65.2 ± 22.5 (range, 9.2-100). Pearson r correlation demonstrated no statistically significant relationship between accuracy of femoral tunnel position and BPII score (r = –0.08; 95% CI, –0.24 to 0.08). Conclusion: There was no evidence of a correlation between the accuracy of MPFL reconstruction femoral tunnel in relation to the Schöttle point and

  8. Accuracy of genotype imputation in sheep breeds.

    PubMed

    Hayes, B J; Bowman, P J; Daetwyler, H D; Kijas, J W; van der Werf, J H J

    2012-02-01

    Although genomic selection offers the prospect of improving the rate of genetic gain in meat, wool and dairy sheep breeding programs, the key constraint is likely to be the cost of genotyping. Potentially, this constraint can be overcome by genotyping selection candidates for a low density (low cost) panel of SNPs with sparse genotype coverage, imputing a much higher density of SNP genotypes using a densely genotyped reference population. These imputed genotypes would then be used with a prediction equation to produce genomic estimated breeding values. In the future, it may also be desirable to impute very dense marker genotypes or even whole genome re-sequence data from moderate density SNP panels. Such a strategy could lead to an accurate prediction of genomic estimated breeding values across breeds, for example. We used genotypes from 48 640 (50K) SNPs genotyped in four sheep breeds to investigate both the accuracy of imputation of the 50K SNPs from low density SNP panels, as well as prospects for imputing very dense or whole genome re-sequence data from the 50K SNPs (by leaving out a small number of the 50K SNPs at random). Accuracy of imputation was low if the sparse panel had less than 5000 (5K) markers. Across breeds, it was clear that the accuracy of imputing from sparse marker panels to 50K was higher if the genetic diversity within a breed was lower, such that relationships among animals in that breed were higher. The accuracy of imputation from sparse genotypes to 50K genotypes was higher when the imputation was performed within breed rather than when pooling all the data, despite the fact that the pooled reference set was much larger. For Border Leicesters, Poll Dorsets and White Suffolks, 5K sparse genotypes were sufficient to impute 50K with 80% accuracy. For Merinos, the accuracy of imputing 50K from 5K was lower at 71%, despite a large number of animals with full genotypes (2215) being used as a reference. For all breeds, the relationship of

  9. Audiovisual biofeedback improves motion prediction accuracy

    PubMed Central

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-01-01

    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion

  10. Accuracy of distance measurements in biplane angiography

    NASA Astrophysics Data System (ADS)

    Toennies, Klaus D.; Oishi, Satoru; Koster, David; Schroth, Gerhard

    1997-05-01

    Distance measurements of the vascular system of the brain can be derived from biplanar digital subtraction angiography (2p-DSA). The measurements are used for planning of minimal invasive surgical procedures. Our 90 degree-fixed-angle G-ring angiography system has the potential of acquiring pairs of such images with high geometric accuracy. The sizes of vessels and aneurysms are estimated applying a fast and accurate extraction method in order to select an appropriate surgical strategy. Distance computation from 2p-DSA is carried out in three steps. First, the boundary of the structure to be measured is detected based on zero-crossings and closeness to user-specified end points. Subsequently, the 3D location of the center of the structure is computed from the centers of gravity of its two projections. This location is used to reverse the magnification factor caused by the cone-shaped projection of the x-rays. Since exact measurements of possibly very small structures are crucial to the usefulness in surgical planning, we identified mechanical and computational influences on the geometry which may have an impact on the measurement accuracy. A study with phantoms is presented distinguishing between the different effects and enabling the computation of an optimal overall exactness. Comparing this optimum with results of distance measurements on phantoms whose exact size and shape is known, we found that the measurement error for structures of size of 20 mm was less than 0.05 mm on average and 0.50 mm at maximum. The maximum achievable accuracy of 0.15 mm was in most cases exceeded by less than 0.15 mm. This accuracy surpasses by far the requirements for the above mentioned surgery application. The mechanic accuracy of the fixed-angle biplanar system meets the requirements for computing a 3D reconstruction of the small vessels of the brain. It also indicates that simple measurements will be possible on systems being less accurate.
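
    The magnification-reversal step described above follows from similar triangles in a cone-beam projection; the sketch below uses invented distances and a made-up measured size to show the correction.

```python
# Minimal sketch: reversing cone-beam magnification once the 3D centre of the
# structure is known; all distances and the measured size are invented.
d_source_detector = 1200.0        # mm, focal spot to image plane
d_source_object = 900.0           # mm, focal spot to the 3D centre of the structure
measured_size_on_detector = 26.7  # mm, distance measured in the projection image

magnification = d_source_detector / d_source_object
true_size = measured_size_on_detector / magnification
print(f"magnification {magnification:.3f} -> true size {true_size:.1f} mm")
```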

  11. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  12. ICan: an optimized ion-current-based quantification procedure with enhanced quantitative accuracy and sensitivity in biomarker discovery.

    PubMed

    Tu, Chengjian; Sheng, Quanhu; Li, Jun; Shen, Xiaomeng; Zhang, Ming; Shyr, Yu; Qu, Jun

    2014-12-05

    The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced the ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated, new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and lower false-positive rate for discovering altered proteins, over current popular pipelines. A spiked-in experiment was used to evaluate the performance of ICan to detect small changes. In this study E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP protein (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined as significantly altered proteins, with cutoff thresholds of ≥ 1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan can be broadly applicable to reliable and sensitive proteomic survey of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here such as normalization, protein ratio determination, and statistical analyses are also valuable for data analysis by isotope-labeling methods.

  13. Improvement in Rayleigh Scattering Measurement Accuracy

    NASA Technical Reports Server (NTRS)

    Fagan, Amy F.; Clem, Michelle M.; Elam, Kristie A.

    2012-01-01

    Spectroscopic Rayleigh scattering is an established flow diagnostic that has the ability to provide simultaneous velocity, density, and temperature measurements. The Fabry-Perot interferometer or etalon is a commonly employed instrument for resolving the spectrum of molecular Rayleigh scattered light for the purpose of evaluating these flow properties. This paper investigates the use of an acousto-optic frequency shifting device to improve measurement accuracy in Rayleigh scattering experiments at the NASA Glenn Research Center. The frequency shifting device is used as a means of shifting the incident or reference laser frequency by 1100 MHz to avoid overlap of the Rayleigh and reference signal peaks in the interference pattern used to obtain the velocity, density, and temperature measurements, and also to calibrate the free spectral range of the Fabry-Perot etalon. The measurement accuracy improvement is evaluated by comparison of Rayleigh scattering measurements acquired with and without shifting of the reference signal frequency in a 10 mm diameter subsonic nozzle flow.

  14. Response time accuracy in Apple Macintosh computers.

    PubMed

    Neath, Ian; Earle, Avery; Hallett, Darcy; Surprenant, Aimée M

    2011-06-01

    The accuracy and variability of response times (RTs) collected on stock Apple Macintosh computers using USB keyboards was assessed. A photodiode detected a change in the screen's luminosity and triggered a solenoid that pressed a key on the keyboard. The RTs collected in this way were reliable, but could be as much as 100 ms too long. The standard deviation of the measured RTs varied between 2.5 and 10 ms, and the distributions approximated a normal distribution. Surprisingly, two recent Apple-branded USB keyboards differed in their accuracy by as much as 20 ms. The most accurate RTs were collected when an external CRT was used to display the stimuli and Psychtoolbox was able to synchronize presentation with the screen refresh. We conclude that RTs collected on stock iMacs can detect a difference as small as 5-10 ms under realistic conditions, and this dictates which types of research should or should not use these systems.

  15. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft is presented. These entities involve transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes). In fact, the former is just a particular case of the latter. The accuracy model is later applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.
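
    The accuracy behaviour alluded to above follows from the fact that a Monte Carlo ray-tracing estimate of a configuration (view) factor is a binomial proportion, so its standard error falls as 1/sqrt(N). The sketch below demonstrates this for an assumed geometry of two coaxial parallel discs; the geometry and dimensions are not from the paper.

```python
# Minimal sketch: Monte Carlo configuration-factor estimate and its binomial
# standard error for two coaxial parallel discs (assumed geometry).
import numpy as np

rng = np.random.default_rng(9)
r1 = r2 = 0.5          # disc radii, m
h = 1.0                # separation, m

def hits(n_rays):
    """Shoot diffuse rays from random points on disc 1; count hits on disc 2."""
    # Emission point on disc 1 (uniform over area).
    rr = r1 * np.sqrt(rng.random(n_rays))
    th = 2 * np.pi * rng.random(n_rays)
    x0, y0 = rr * np.cos(th), rr * np.sin(th)
    # Lambertian direction: sin^2(theta) uniform, azimuth uniform.
    sin_t = np.sqrt(rng.random(n_rays))
    cos_t = np.sqrt(1 - sin_t**2)
    phi = 2 * np.pi * rng.random(n_rays)
    # Intersection with the plane of disc 2 at height h.
    x = x0 + h * sin_t * np.cos(phi) / cos_t
    y = y0 + h * sin_t * np.sin(phi) / cos_t
    return np.count_nonzero(x**2 + y**2 <= r2**2)

for n in (10_000, 100_000, 1_000_000):
    p = hits(n) / n
    stderr = np.sqrt(p * (1 - p) / n)          # binomial standard error ~ 1/sqrt(N)
    print(f"N={n:>9,d}: F12 ~ {p:.4f} +/- {stderr:.4f}")
```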

  16. Accuracy of forecasts in strategic intelligence.

    PubMed

    Mandel, David R; Barnes, Alan

    2014-07-29

    The accuracy of 1,514 strategic intelligence forecasts abstracted from intelligence reports was assessed. The results show that both discrimination and calibration of forecasts was very good. Discrimination was better for senior (versus junior) analysts and for easier (versus harder) forecasts. Miscalibration was mainly due to underconfidence such that analysts assigned more uncertainty than needed given their high level of discrimination. Underconfidence was more pronounced for harder (versus easier) forecasts and for forecasts deemed more (versus less) important for policy decision making. Despite the observed underconfidence, there was a paucity of forecasts in the least informative 0.4-0.6 probability range. Recalibrating the forecasts substantially reduced underconfidence. The findings offer cause for tempered optimism about the accuracy of strategic intelligence forecasts and indicate that intelligence producers aim to promote informativeness while avoiding overstatement.

  17. High Accuracy Fuel Flowmeter, Phase 1

    NASA Technical Reports Server (NTRS)

    Mayer, C.; Rose, L.; Chan, A.; Chin, B.; Gregory, W.

    1983-01-01

    Technology related to aircraft fuel mass-flowmeters was reviewed to determine what flowmeter types could provide 0.25%-of-point accuracy over a 50 to one range in flowrates. Three types were selected and were further analyzed to determine what problem areas prevented them from meeting the high accuracy requirement, and what the further development needs were for each. A dual-turbine volumetric flowmeter with densi-viscometer and microprocessor compensation was selected for its relative simplicity and fast response time. An angular momentum type with a motor-driven, spring-restrained turbine and viscosity shroud was selected for its direct mass-flow output. This concept also employed a turbine for fast response and a microcomputer for accurate viscosity compensation. The third concept employed a vortex precession volumetric flowmeter and was selected for its unobtrusive design. Like the turbine flowmeter, it uses a densi-viscometer and microprocessor for density correction and accurate viscosity compensation.

  18. Positional Accuracy Assessment of Googleearth in Riyadh

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

    Google Earth is a virtual globe, map and geographical information program that is controlled by Google corporation. It maps the Earth by the superimposition of images obtained from satellite imagery, aerial photography and GIS 3D globe. With millions of users all around the globe, GoogleEarth® has become the ultimate source of spatial data and information for private and public decision-support systems besides many types and forms of social interactions. Many users mostly in developing countries are also using it for surveying applications, the matter that raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of GoogleEarth® Imagery in Riyadh; capital of Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates respectively.

  19. Boresighting Issues for High Accuracy TSPI Sensors

    DTIC Science & Technology

    2015-04-29

    For a navigation sensor to accurately report the heading, roll, and pitch of an aircraft, the angular offset between the sensor inertial coordinate system and the aircraft coordinate system, the "boresight", must be measured. Errors in the boresight measurements directly affect the accuracy of the navigation

  20. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limitation of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios of these factors are evaluated according to the normal operating conditions of power grid measurement. Based on the evaluation and simulation, the errors of phase angle and frequency caused by each factor are calculated and discussed.

  1. Secure Fingerprint Identification of High Accuracy

    DTIC Science & Technology

    2014-01-01

    Previous work covers secure face recognition ([12], [29] and others), DNA matching ([35], [6], and others), and iris code comparisons ([9], [7]). In this work, we treat the problem of privacy-preserving matching of two fingerprints, which can be used for secure fingerprint authentication and

  2. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.

  3. Accuracy of Stokes integration for geoid computation

    NASA Astrophysics Data System (ADS)

    Ismail, Zahra; Jamet, Olivier; Altamimi, Zuheir

    2014-05-01

    Geoid determination by the remove-compute-restore (RCR) technique involves the application of Stokes's integral on reduced gravity anomalies. Reduced gravity anomalies are obtained through interpolation after removing the low-degree gravity signal given by a space-based spherical harmonic model and the high-frequency signal due to topographical effects, and they cover a spectrum ranging from about degree 150-200 upward. Stokes's integral is truncated to a limited region around the computation point, producing an error that can be reduced by a modification of Stokes's kernel. We study the accuracy of Stokes integration on synthetic signals of various frequency ranges, produced with EGM2008 spherical harmonic coefficients up to degree 2000. We analyse the integration error according to the frequency range of the signal, the resolution of the gravity anomaly grid and the radius of Stokes integration. The study shows that the behaviour of the relative errors is frequency independent. The standard Stokes kernel is nevertheless insufficient to produce 1 cm geoid accuracy without removal of the major part of the gravity signal up to degree 600. Integration over an area of radius greater than 3 degrees does not further improve the accuracy. The results are compared to a similar experiment using the modified Stokes kernel formula (Ellmann 2004, Sjöberg 2003). References: Ellmann, A. (2004) The geoid for the Baltic countries determined by least-squares modification of Stokes formula. Sjöberg, L.E. (2003) A general model of modifying Stokes formula and its least-squares solution. Journal of Geodesy, 77, 459-464.

  4. Speed versus accuracy in collective decision making.

    PubMed Central

    Franks, Nigel R; Dornhaus, Anna; Fitzsimmons, Jon P; Stevens, Martin

    2003-01-01

    We demonstrate a speed versus accuracy trade-off in collective decision making. House-hunting ant colonies choose a new nest more quickly in harsh conditions than in benign ones and are less discriminating. The errors that occur in a harsh environment are errors of judgement not errors of omission because the colonies have discovered all of the alternative nests before they initiate an emigration. Leptothorax albipennis ants use quorum sensing in their house hunting. They only accept a nest, and begin rapidly recruiting members of their colony, when they find within it a sufficient number of their nest-mates. Here we show that these ants can lower their quorum thresholds between benign and harsh conditions to adjust their speed-accuracy trade-off. Indeed, in harsh conditions these ants rely much more on individual decision making than collective decision making. Our findings show that these ants actively choose to take their time over judgements and employ collective decision making in benign conditions when accuracy is more important than speed. PMID:14667335

  5. Solving Nonlinear Euler Equations with Arbitrary Accuracy

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2005-01-01

    A computer program that efficiently solves the time-dependent, nonlinear Euler equations in two dimensions to an arbitrarily high order of accuracy has been developed. The program implements a modified form of a prior arbitrary-accuracy simulation algorithm that is a member of the class of algorithms known in the art as modified expansion solution approximation (MESA) schemes. Whereas millions of lines of code were needed to implement the prior MESA algorithm, it is possible to implement the present MESA algorithm by use of one or a few pages of Fortran code, the exact amount depending on the specific application. The ability to solve the Euler equations to arbitrarily high accuracy is especially beneficial in simulations of aeroacoustic effects in settings in which fully nonlinear behavior is expected - for example, at stagnation points of fan blades, where linearizing assumptions break down. At these locations, it is necessary to solve the full nonlinear Euler equations, and inasmuch as the acoustical energy is 4 to 5 orders of magnitude below that of the mean flow, it is necessary to achieve an overall fractional error of less than 10^-6 in order to faithfully simulate entropy, vortical, and acoustical waves.

  6. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

    Genomic selection is focused on the prediction of breeding values of selection candidates by means of a high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configuration of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared on simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
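
    The prediction scheme described above (ridge regression of phenotypes on markers in a training set, then correlation of predicted and true breeding values in a test set) can be sketched on simulated data as follows; the marker count, QTL effects, heritability, and ridge parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_train, n_test, n_markers, n_qtl = 300, 100, 2000, 50
X = rng.binomial(2, 0.5, size=(n_train + n_test, n_markers)).astype(float)
X -= X.mean(axis=0)                       # center marker genotypes

beta = np.zeros(n_markers)
qtl = rng.choice(n_markers, n_qtl, replace=False)
beta[qtl] = rng.normal(0.0, 1.0, n_qtl)   # true QTL effects
g = X @ beta                              # true breeding values
y = g + rng.normal(0.0, g.std(), size=g.shape)   # phenotypes (h^2 ~ 0.5)

Xtr, ytr = X[:n_train], y[:n_train]
Xte, gte = X[n_train:], g[n_train:]

# Ridge regression (RR-BLUP-like) estimate of marker effects on the training set
lam = 1000.0
beta_hat = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_markers), Xtr.T @ ytr)

# Accuracy = correlation between predicted and true breeding values (test set)
g_hat = Xte @ beta_hat
accuracy = np.corrcoef(g_hat, gte)[0, 1]
print(f"genomic selection accuracy: {accuracy:.3f}")
```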

  7. Ultra-wideband ranging precision and accuracy

    NASA Astrophysics Data System (ADS)

    MacGougan, Glenn; O'Keefe, Kyle; Klukas, Richard

    2009-09-01

    This paper provides an overview of ultra-wideband (UWB) in the context of ranging applications and assesses the precision and accuracy of UWB ranging from both a theoretical perspective and a practical perspective using real data. The paper begins with a brief history of UWB technology and the most current definition of what constitutes an UWB signal. The potential precision of UWB ranging is assessed using Cramer-Rao lower bound analysis. UWB ranging methods are described and potential error sources are discussed. Two types of commercially available UWB ranging radios are introduced which are used in testing. Actual ranging accuracy is assessed from line-of-sight testing under benign signal conditions by comparison to high-accuracy electronic distance measurements and to ranges derived from GPS real-time kinematic positioning. Range measurements obtained in outdoor testing with line-of-sight obstructions and strong reflection sources are compared to ranges derived from classically surveyed positions. The paper concludes with a discussion of the potential applications for UWB ranging.
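
    The Cramer-Rao analysis referred to above reduces, in its simplest time-of-arrival form, to a bound on ranging precision set by the effective signal bandwidth and the SNR. A back-of-the-envelope sketch follows; the bandwidth and SNR values are assumptions, not figures for the tested radios.

```python
import numpy as np

c = 299792458.0          # speed of light [m/s]

def crlb_range_std(snr_db, beta_hz):
    """CRLB on range std [m]: sigma_d >= c / (2*sqrt(2)*pi*sqrt(SNR)*beta)."""
    snr = 10.0 ** (snr_db / 10.0)
    return c / (2.0 * np.sqrt(2.0) * np.pi * np.sqrt(snr) * beta_hz)

for snr_db in (0, 10, 20, 30):
    # 500 MHz effective (rms) bandwidth, roughly UWB-class
    sigma_cm = 100.0 * crlb_range_std(snr_db, 500e6)
    print(f"SNR {snr_db:2d} dB -> range std >= {sigma_cm:.3f} cm")
```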

  8. Wind measurement accuracy for the NASA scatterometer

    NASA Astrophysics Data System (ADS)

    Long, David G.; Oliphant, Travis

    1997-09-01

    The NASA Scatterometer (NSCAT) is designed to make measurements of the normalized radar backscatter coefficient (σ0) of the ocean's surface. The measured σ0 is a function of the viewing geometry and the surface roughness due to wind-generated waves. By making multiple measurements of the same location from different azimuth angles it is possible to retrieve the near-surface wind speed and direction with the aid of a Geophysical Model Function (GMF) which relates wind and σ0. The wind is estimated from the noisy σ0 measurements using maximum likelihood techniques. The probability density of the measured σ0 is assumed to be Gaussian with a variance that depends on the true σ0, and therefore on the wind through the GMF; the measurements from different azimuth angles are assumed independent in estimating the wind. In order to estimate the accuracy of the retrieved wind, we derive the Cramer-Rao (CR) bound for wind estimation from scatterometer measurements. We show that the CR bound can be used as an error bar on the estimated wind. The role of geophysical modeling error in the GMF is considered and shown to play a significant role in the wind accuracy. Estimates of the accuracy of NSCAT measurements are given, along with results for other scatterometer geometries and types.

  9. Determination of GPS orbits to submeter accuracy

    NASA Technical Reports Server (NTRS)

    Bertiger, W. I.; Lichten, S. M.; Katsigris, E. C.

    1988-01-01

    Orbits for satellites of the Global Positioning System (GPS) were determined with submeter accuracy. Tests used to assess orbital accuracy include orbit comparisons from independent data sets, orbit prediction, ground baseline determination, and formal errors. One satellite tracked 8 hours each day shows rms error below 1 m even when predicted more than 3 days outside of a 1-week data arc. Differential tracking of the GPS satellites in high Earth orbit provides a powerful relative positioning capability, even when a relatively small continental U.S. fiducial tracking network is used with less than one-third of the full GPS constellation. To demonstrate this capability, baselines of up to 2000 km in North America were also determined with the GPS orbits. The 2000 km baselines show rms daily repeatability of 0.3 to 2 parts in 10^8 and agree with very long baseline interferometry (VLBI) solutions at the level of 1.5 parts in 10^8. This GPS demonstration provides an opportunity to test different techniques for high-accuracy orbit determination for high Earth orbiters. The best GPS orbit strategies included data arcs of at least 1 week, process noise models for tropospheric fluctuations, estimation of GPS solar pressure coefficients, and combined processing of GPS carrier phase and pseudorange data. For data arcs of 2 weeks, constrained process noise models for GPS dynamic parameters significantly improved the orbit solutions.

  10. 100% Classification Accuracy Considered Harmful: The Normalized Information Transfer Factor Explains the Accuracy Paradox

    PubMed Central

    Valverde-Albacete, Francisco J.; Peláez-Moreno, Carmen

    2014-01-01

    The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to “cheat” using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind-reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers. PMID:24427282
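
    The information-theoretic bookkeeping underlying this argument can be illustrated directly on a contingency (confusion) matrix: a classifier can reach high accuracy on a skewed input distribution while transferring almost no information from true to predicted classes. The sketch below computes marginal entropies and mutual information only; it is not the exact EMA/NIT definition from the paper, and the matrices are made-up examples.

```python
import numpy as np

def entropies(confusion):
    p = confusion / confusion.sum()           # joint distribution P(true, pred)
    px, py = p.sum(axis=1), p.sum(axis=0)     # marginals
    def H(q):
        q = q[q > 0]
        return -np.sum(q * np.log2(q))
    h_x, h_y, h_xy = H(px), H(py), H(p.ravel())
    mi = h_x + h_y - h_xy                     # mutual information I(true; pred)
    return h_x, h_y, mi

# A classifier that reaches high accuracy only by exploiting a skewed input
# distribution transfers no information about the minority class.
skewed = np.array([[98, 0],
                   [ 2, 0]])                  # always predicts the majority class
balanced = np.array([[49, 1],
                     [ 3, 47]])

for name, cm in (("majority-class", skewed), ("balanced", balanced)):
    h_x, _, mi = entropies(cm)
    acc = np.trace(cm) / cm.sum()
    print(f"{name}: accuracy={acc:.2f}, I(true;pred)={mi:.3f} bits "
          f"of H(true)={h_x:.3f} bits")
```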

  11. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox.

    PubMed

    Valverde-Albacete, Francisco J; Peláez-Moreno, Carmen

    2014-01-01

    The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind-reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers.

  12. Accuracy Assessment of Altimeter Derived Geostrophic Velocities

    NASA Astrophysics Data System (ADS)

    Leben, R. R.; Powell, B. S.; Born, G. H.; Guinasso, N. L.

    2002-12-01

    Along-track sea surface height anomaly gradients are proportional to cross-track geostrophic velocity anomalies, allowing satellite altimetry to provide much needed satellite observations of changes in the geostrophic component of surface ocean currents. Often, surface height gradients are computed from altimeter data archives that have been corrected to give the most accurate absolute sea level, a practice that may unnecessarily increase the error in the cross-track velocity anomalies and thereby require excessive smoothing to mitigate noise. Because differentiation along track acts as a high-pass filter, many of the path length corrections applied to altimeter data for absolute height accuracy are unnecessary for the corresponding gradient calculations. We report on a study to investigate appropriate altimetric corrections and processing techniques for improving geostrophic velocity accuracy. Accuracy is assessed by comparing cross-track current measurements from two moorings placed along the descending TOPEX/POSEIDON ground track number 52 in the Gulf of Mexico to the corresponding altimeter velocity estimates. The buoys are deployed and maintained by the Texas Automated Buoy System (TABS) under Interagency Contracts with Texas A&M University. The buoys telemeter observations in near real time via satellite to the TABS station located at the Geochemical and Environmental Research Group (GERG) at Texas A&M. Buoy M is located in shelf waters of 57 m depth, with a second, Buoy N, 38 km away on the shelf break at 105 m depth. Buoy N has been operational since the beginning of 2002 and has a current meter at 2 m depth providing in situ measurements of surface velocities coincident with Jason and TOPEX/POSEIDON altimeter overflights. This allows one of the first detailed comparisons of shallow-water near-surface current meter time series to coincident altimetry.
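
    The underlying calculation is simple: the cross-track geostrophic velocity anomaly is (g/f) times the along-track SSH anomaly gradient. A minimal sketch on synthetic along-track samples follows; the sample spacing, latitude, SSH signal, and smoothing choice are assumptions, not mission data.

```python
import numpy as np

g = 9.81                       # gravity [m/s^2]
omega = 7.2921e-5              # Earth rotation rate [rad/s]
lat = 27.0                     # nominal Gulf of Mexico latitude [deg]
f = 2.0 * omega * np.sin(np.radians(lat))   # Coriolis parameter [1/s]

ds = 6200.0                    # along-track spacing of 1-Hz altimeter samples [m]
s = np.arange(0, 40) * ds
ssh_anom = 0.10 * np.sin(2 * np.pi * s / 2.5e5)   # synthetic SSH anomaly [m]

# Along-track differentiation acts as a high-pass filter on SSH errors,
# so light smoothing is usually applied before differencing.
kernel = np.ones(3) / 3.0
ssh_smooth = np.convolve(ssh_anom, kernel, mode="same")

dhds = np.gradient(ssh_smooth, ds)          # along-track SSH gradient
v_cross = (g / f) * dhds                    # cross-track velocity anomaly [m/s]
print(f"peak cross-track velocity anomaly: {np.max(np.abs(v_cross)):.3f} m/s")
```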

  13. Improvement of focus accuracy on processed wafer

    NASA Astrophysics Data System (ADS)

    Higashibata, Satomi; Komine, Nobuhiro; Fukuhara, Kazuya; Koike, Takashi; Kato, Yoshimitsu; Hashimoto, Kohji

    2013-04-01

    As feature size shrinkage in semiconductor devices progresses, process fluctuations, especially in focus, strongly affect device performance. Because focus control is an ongoing challenge in optical lithography, various studies have sought to improve focus monitoring and control. Focus errors are due to wafers, exposure tools, reticles, QCs, and so on. Few studies have been performed to minimize the measurement errors of the auto focus (AF) sensors of exposure tools, especially when processed wafers are exposed. Among current focus measurement techniques, the phase shift grating (PSG) focus monitor 1) has already been proposed; its basic principle is that the intensity of the diffracted light of the mask pattern is made asymmetric by arranging a π/2 phase shift area on a reticle. The resist pattern exposed at a defocus position is shifted on the wafer, and the shifted pattern can easily be measured using an overlay inspection tool. However, it is difficult to measure the shifted pattern on a processed wafer because of interference caused by other patterns in the underlayer. In this paper, we therefore propose the "SEM-PSG" technique, in which the shift of the PSG resist mark is measured with a critical dimension scanning electron microscope (CD-SEM) to determine the focus error on the processed wafer. First, we evaluate the accuracy of the SEM-PSG technique. Second, by applying the SEM-PSG technique and feeding the results back to the exposure, we evaluate the focus accuracy on processed wafers. By applying SEM-PSG feedback, the focus accuracy on the processed wafer was improved from 40 to 29 nm in 3σ.

  14. Climate Change Accuracy: Requirements and Economic Value

    NASA Astrophysics Data System (ADS)

    Wielicki, B. A.; Cooke, R.; Mlynczak, M. G.; Lukashin, C.; Thome, K. J.; Baize, R. R.

    2014-12-01

    Higher than normal accuracy is required to rigorously observe decadal climate change. But what level is needed? How can this be quantified? This presentation will summarize a new, more rigorous and quantitative approach to determining the required accuracy for climate change observations (Wielicki et al., 2013, BAMS). Most current global satellite observations cannot meet this accuracy level. A proposed new satellite mission to resolve this challenge is CLARREO (Climate Absolute Radiance and Refractivity Observatory). CLARREO is designed to achieve advances of a factor of 10 for reflected solar spectra and a factor of 3 to 5 for thermal infrared spectra (Wielicki et al., Oct. 2013 BAMS). The CLARREO spectrometers are designed to serve as SI-traceable benchmarks for the Global Space-based Inter-Calibration System (GSICS) and to greatly improve the utility of a wide range of LEO and GEO infrared and reflected solar passive satellite sensors for climate change observations (e.g., CERES, MODIS, VIIRS, CrIS, IASI, Landsat, SPOT, etc.). Providing more accurate decadal change trends can in turn lead to more rapid narrowing of key climate science uncertainties such as cloud feedback and climate sensitivity. A study has been carried out to quantify the economic benefits of such an advance as part of a rigorous and complete climate observing system. The study concludes that the economic value is $12 trillion (U.S. dollars) in net present value for a nominal discount rate of 3% (Cooke et al. 2013, J. Env. Sys. Dec.). A brief summary of these two studies and their implications for the future of climate science will be presented.

  15. Accuracy requirements. [for monitoring of climate changes

    NASA Technical Reports Server (NTRS)

    Delgenio, Anthony

    1993-01-01

    Satellite and surface measurements, if they are to serve as a climate monitoring system, must be accurate enough to permit detection of changes of climate parameters on decadal time scales. The accuracy requirements are difficult to define a priori since they depend on unknown future changes of climate forcings and feedbacks. As a framework for evaluation of candidate Climsat instruments and orbits, we estimate the accuracies that would be needed to measure changes expected over two decades based on theoretical considerations including GCM simulations and on observational evidence in cases where data are available for rates of change. One major climate forcing known with reasonable accuracy is that caused by the anthropogenic homogeneously mixed greenhouse gases (CO2, CFC's, CH4 and N2O). Their net forcing since the industrial revolution began is about 2 W/sq m and it is presently increasing at a rate of about 1 W/sq m per 20 years. Thus for a competing forcing or feedback to be important, it needs to be of the order of 0.25 W/sq m or larger on this time scale. The significance of most climate feedbacks depends on their sensitivity to temperature change. Therefore we begin with an estimate of decadal temperature change. Presented are the transient temperature trends simulated by the GISS GCM when subjected to various scenarios of trace gas concentration increases. Scenario B, which represents the most plausible near-term emission rates and includes intermittent forcing by volcanic aerosols, yields a global mean surface air temperature increase Delta Ts = 0.7 degrees C over the time period 1995-2015. This is consistent with the IPCC projection of about 0.3 degrees C/decade global warming (IPCC, 1990). Several of our estimates below are based on this assumed rate of warming.

  16. [True color accuracy in digital forensic photography].

    PubMed

    Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A

    2016-01-01

    Forensic photographs not only need to be unaltered and authentic and capture context-relevant images, along with certain minimum requirements for image sharpness and information density, but color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but as a discrete property of an image, color in digital photos is also to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool should be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).

  17. Ultrasound-Guided Needle Technique Accuracy

    PubMed Central

    Johnson, Angela N.; Peiffer, Jeffery S.; Halmann, Nahi; Delaney, Luke; Owen, Cindy A.; Hersh, Jeff

    2017-01-01

    Background and Objectives Ultrasound-guided regional anesthesia facilitates an approach to sensitive targets such as nerve clusters without contact or inadvertent puncture. We compared accuracy of needle placement with a novel passive magnetic ultrasound needle guidance technology (NGT) versus conventional ultrasound (CU) with echogenic needles. Methods Sixteen anesthesiologists and 19 residents performed a series of 16 needle insertion tasks each, 8 using NGT (n = 280) and 8 using CU (n = 280), in high-fidelity porcine phantoms. Tasks were stratified based on aiming to contact (target-contact) or place in close proximity with (target-proximity) targets, needle gauge (no. 18/no. 22), and in-plane (IP) or out-of-plane (OOP) approach. Distance to the target, task completion by aim, number of passes, and number of tasks completed on the first pass were reported. Results Needle guidance technology significantly improved distance, task completion, number of passes, and completion on the first pass compared with CU for both IP and OOP approaches (P ≤ 0.001). Average NGT distance to target was lower by 57.1% overall (n = 560, 1.5 ± 2.4 vs 3.5 ± 3.7 mm), 38.5% IP (n = 140, 1.6 ± 2.6 vs 2.6 ± 2.8 mm), and 68.2% OOP (n = 140, 1.4 ± 2.2 vs 4.4 ± 4.3 mm) (all P ≤ 0.01). Subgroup analyses revealed accuracy gains were largest among target-proximity tasks performed by residents and for OOP approaches. Needle guidance technology improved first-pass completion from 214 (76.4%) per 280 to 249 (88.9%) per 280, a significant improvement of 16.4% (P = 0.001). Conclusions Passive magnetic NGT can improve accuracy of needle procedures, particularly among OOP procedures requiring close approach to sensitive targets, such as nerve blocks in anesthesiology practice. PMID:28079754

  18. On the Accuracy of IGS Orbits

    NASA Astrophysics Data System (ADS)

    Griffiths, J.; Ray, J.

    2007-12-01

    In order to explore the reliability of IGS internal orbit accuracy estimates, we have compared the geocentric satellite positions at the midnight epoch between consecutive days for the period since November 5, 2006, when the IGS changed its method of antenna calibration. For each pair of orbits, day "A" has been fitted to the extended CODE orbit model (three position and three velocity parameters plus nine nuisance solar radiation parameters), using the IGS05 Final orbits as pseudo-observations, and extrapolated to epoch 24:00 to compare with the 00:00 epoch from the IGS05 Final orbits of day "B". This yields a time series of orbit repeatability measures, analogous to the classical geodetic test for position determinations. To assess the error introduced by the fitting and extrapolation process, the same procedure has been applied to several days dropping the 23:45 epoch, fitting up to 23:30, extrapolating to 23:45, and comparing with the reported positions for 23:45. The test differences range between 0 and 10 mm (mean = 3 mm) per geocentric component with 3D differences of 3 to 10 mm (mean = 6 mm). So, the orbit fitting-extrapolation process nearly always adds insignificant noise to the day-boundary orbit comparisons. If we compare our average 1D position differences to the official IGS accuracy codes (derived from the internal agreement among combined orbit solutions), root-sum-squared for each pair of days, the actual discontinuities are not well correlated with the expected performance values. If instead the IGS RMS values from the Final combination long-arc analyses (which also use the extended CODE model) are taken as the measure of IGS accuracy, the actual orbit discontinuities are much better represented. This is despite the fact that our day-boundary offsets apply to a single epoch each day and the long-arc analyses consider variations over a day (compared to the satellite dynamics determined over the full week). Our method is not well suited

  19. Precision cosmology, Accuracy cosmology and Statistical cosmology

    NASA Astrophysics Data System (ADS)

    Verde, Licia

    2014-05-01

    The avalanche of data over the past 10-20 years has propelled cosmology into the ``precision era''. The next challenge cosmology has to meet is to enter the era of accuracy. Because of the intrinsic nature of studying the Cosmos and the sheer amount of data available now and coming soon, the only way to meet this challenge is by developing suitable and specific statistical techniques. The road from precision Cosmology to accurate Cosmology goes through statistical Cosmology. I will outline some open challenges and discuss some specific examples.

  20. Guidance accuracy considerations for realtime GPS interferometry

    NASA Technical Reports Server (NTRS)

    Braasch, Michael S.; Van Graas, Frank

    1991-01-01

    During April and May of 1991, the Avionics Engineering Center at Ohio University completed the first set of realtime flight tests of a GPS interferometric attitude and heading determination system. This technique has myriad applications for aircraft and spacecraft guidance and control. However, before these applications can be further developed, a number of guidance accuracy issues must be considered. Among these are: signal derogation due to multipath and shadowing, effects of structural flexures, and system robustness during loss of phase lock. This paper addresses these issues with special emphasis on the information content of the GPS signal, and characterization and mitigation of multipath encountered while in flight.

  1. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J. Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  2. High accuracy radiation efficiency measurement techniques

    NASA Technical Reports Server (NTRS)

    Kozakoff, D. J.; Schuchardt, J. M.

    1981-01-01

    The relatively large antenna subarrays (tens of meters) to be used in the Solar Power Satellite, and the desire to accurately quantify antenna performance, dictate the requirement for specialized measurement techniques. The error contributors associated with both far-field and near-field antenna measurement concepts were quantified. As a result, instrumentation configurations with measurement accuracy potential were identified. In every case, advances in the state of the art of associated electronics were found to be required. Relative cost trade-offs between a candidate far-field elevated antenna range and near-field facility were also performed.

  3. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    PubMed

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

    Data distributions at contaminated sites are usually severely skewed by the presence of hot spots, which causes difficulties for accurate geostatistical data transformation. Three typical normal-distribution transformation methods, the normal score, Johnson, and Box-Cox transformations, were applied to compare their effects on the spatial interpolation of benzo(b)fluoranthene data from a large-scale coking plant-contaminated site in north China. All three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging had a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The area with fewer sampling points and that with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy of remediation boundary determination.
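
    The normalisation step described above can be sketched with standard tools: transform a skewed sample toward normality and check skewness, kurtosis, and the Kolmogorov-Smirnov statistic before kriging. The lognormal sample below is a synthetic stand-in for the benzo(b)fluoranthene data, and the two transforms shown (Box-Cox and normal score) are only illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
conc = rng.lognormal(mean=1.0, sigma=1.2, size=200)    # skewed "hot spot" data

def describe(name, x):
    z = (x - x.mean()) / x.std(ddof=1)
    ks = stats.kstest(z, "norm")                        # Kolmogorov-Smirnov test
    print(f"{name:10s} skew={stats.skew(x):6.2f} "
          f"kurtosis={stats.kurtosis(x):6.2f} KS p={ks.pvalue:.3f}")

describe("raw", conc)

# Box-Cox transformation (lambda fitted by maximum likelihood)
bc, lam = stats.boxcox(conc)
describe("box-cox", bc)

# Normal-score transform: map empirical ranks onto standard normal quantiles
ranks = stats.rankdata(conc)
nscore = stats.norm.ppf((ranks - 0.5) / len(conc))
describe("nscore", nscore)
```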

  4. Doppler estimation accuracy of linear FM waveforms

    NASA Astrophysics Data System (ADS)

    Daum, F. E.

    The single-pulse Doppler estimation accuracy of an unweighted linear FM waveform is analyzed in detail. Simple formulas are derived that predict the one-sigma Doppler estimation error for realistic radar applications. The effects of multiple-target interference and nonlinearities in the radar measurements are considered. In addition, a practical method to estimate Doppler frequency is presented. This technique uses the phase data after pulse compression, and it limits the effect of multiple-target interference. In contrast, the available literature is based on the Cramer-Rao bound for Doppler accuracy, which ignores the effects of nonlinearities, multiple-target interference, and the question of practical implementation. A simple formula is derived that predicts the region of validity for the Cramer-Rao bound. This formula provides a criterion for minimum signal-to-noise ratio in terms of time-bandwidth product. Finally, an important concept demonstrated in this paper is that the bulk of the Doppler information in a linear FM pulse is encoded in the range sidelobes after pulse compression.
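
    As a point of reference for the bound discussed above, the generic Cramer-Rao lower bound for estimating the frequency of a single complex tone in white Gaussian noise (the Rife-Boorstyn form) is easy to evaluate. It is not the linear-FM-specific formula derived in the paper, and the pulse parameters below are illustrative assumptions.

```python
import numpy as np

def freq_crlb_std(snr_db, fs_hz, n_samples):
    """Lower bound on frequency std [Hz] for a single complex tone in AWGN."""
    snr = 10.0 ** (snr_db / 10.0)
    var_f = 6.0 / ((2.0 * np.pi) ** 2 * snr * (1.0 / fs_hz) ** 2
                   * n_samples * (n_samples ** 2 - 1))
    return np.sqrt(var_f)

fs = 10e6          # complex sample rate [Hz]
n = 1000           # samples across the (compressed) pulse, i.e. 100 us
for snr_db in (10, 20, 30):
    print(f"SNR {snr_db} dB -> Doppler std >= {freq_crlb_std(snr_db, fs, n):.1f} Hz")
```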

  5. High accuracy electronic material level sensor

    DOEpatents

    McEwan, T.E.

    1997-03-11

    The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference of the reflections at the launcher plate and at the material interface are used to determine the material level. Improved performance is obtained by the incorporation of: (1) a high accuracy time base that is referenced to a quartz crystal, (2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, (3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%. 4 figs.
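
    The level computation itself is a simple time-of-flight calculation: the delay between the launcher-plate reflection and the material-interface reflection gives the gap above the material, and the level is the remainder of the guide length. A minimal sketch with hypothetical numbers, assuming near-vacuum propagation speed on the wire above the material:

```python
# Hypothetical TDR level calculation; guide length, delays, and the assumed
# propagation speed are illustrative values, not specifications from the patent.
C = 0.299792458          # signal speed on the guide wire in air [m/ns], ~c

guide_length_m = 2.000   # total guide wire length (launcher plate to bottom)
t_launcher_ns = 1.20     # time of the launcher-plate reflection
t_interface_ns = 8.45    # time of the material-interface reflection

# Round-trip delay between the two reflections gives the air gap above the
# material; the material level is the remainder of the guide length.
air_gap_m = C * (t_interface_ns - t_launcher_ns) / 2.0
level_m = guide_length_m - air_gap_m
print(f"air gap: {air_gap_m:.3f} m, material level: {level_m:.3f} m")
```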

  6. High accuracy electronic material level sensor

    DOEpatents

    McEwan, Thomas E.

    1997-01-01

    The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference of the reflections at the launcher plate and at the material interface are used to determine the material level. Improved performance is obtained by the incorporation of: 1) a high accuracy time base that is referenced to a quartz crystal, 2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, 3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%.

  7. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer grade 3D printer using an additive print process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  8. Accuracy of flow hoods in residential applications

    SciTech Connect

    Wray, Craig P.; Walker, Iain S.; Sherman, Max H.

    2002-05-01

    To assess whether houses can meet performance expectations, the new practice of residential commissioning will likely use flow hoods to measure supply and return grille airflows in HVAC systems. Depending on hood accuracy, these measurements can be used to determine if individual rooms receive adequate airflow for heating and cooling, to determine flow imbalances between different building spaces, to estimate total air handler flow and supply/return imbalances, and to assess duct air leakage. This paper discusses these flow hood applications and the accuracy requirements in each case. Laboratory tests of several residential flow hoods showed that these hoods can be inadequate to measure flows in residential systems. Potential errors are about 20% to 30% of measured flow, due to poor calibrations, sensitivity to grille flow non-uniformities, and flow changes from added flow resistance. Active flow hoods equipped with measurement devices that are insensitive to grille airflow patterns have an order of magnitude less error, and are more reliable and consistent in most cases. Our tests also show that current calibration procedures for flow hoods do not account for field application problems. As a result, a new standard for flow hood calibration needs to be developed, along with a new measurement standard to address field use of flow hoods. Lastly, field evaluation of a selection of flow hoods showed that it is possible to obtain reasonable results using some flow hoods if the field tests are carefully done, the grilles are appropriate, and grille location does not restrict flow hood placement.

  9. Millimeter accuracy satellites for two color ranging

    NASA Technical Reports Server (NTRS)

    Degnan, John J.

    1993-01-01

    The principal technical challenge in designing a millimeter accuracy satellite to support two color observations at high altitudes is to provide high optical cross-section simultaneously with minimal pulse spreading. In order to address this issue, we provide a brief review of some fundamental properties of optical retroreflectors when used in spacecraft target arrays, develop a simple model for a spherical geodetic satellite, and use the model to determine some basic design criteria for a new generation of geodetic satellites capable of supporting millimeter accuracy two color laser ranging. We find that increasing the satellite diameter provides a larger surface area for mounting additional cubes, thereby leading to higher cross-sections, and makes the satellite surface a better match for the incoming planar phasefront of the laser beam. Restricting the retroreflector field of view (e.g., by recessing it in its holder) limits the target response to the fraction of the satellite surface which best matches the optical phasefront, thereby controlling the amount of pulse spreading. In surveying the arrays carried by existing satellites, we find that the European STARLETTE and ERS-1 satellites appear to be the best candidates for supporting near-term two color experiments in space.

  10. Pedometer accuracy in slow walking older adults

    PubMed Central

    Martin, Jessica B.; Krč, Katarina M.; Mitchell, Emily A.; Eng, Janice J.; Noble, Jeremy W.

    2013-01-01

    The purpose of this study was to determine pedometer accuracy during slow overground walking in older adults (mean age = 63.6 years). A total of 18 participants (6 males, 12 females) wore 5 different brands of pedometers over 3 pre-set cadences that elicited walking speeds between 0.3 and 0.9 m/s and one self-selected cadence over 80 meters of indoor track. Pedometer accuracy decreased with slower walking speeds, with mean percent errors across all devices combined of 56%, 40%, 19% and 9% at cadences of 50, 66, and 80 steps/min, and the self-selected cadence, respectively. Percent error ranged from 45.3% for the Omron HJ105 to 66.9% for the Yamax Digiwalker 200. Due to the high level of error across the slowest cadences of all 5 devices, the use of pedometers to monitor step counts in healthy older adults with slower gait speeds is problematic. Further research is required to develop pedometer mechanisms that accurately measure steps at slower walking speeds. PMID:24795762

  11. Dust trajectory sensor: accuracy and data analysis.

    PubMed

    Xie, J; Sternovsky, Z; Grün, E; Auer, S; Duncan, N; Drake, K; Le, H; Horanyi, M; Srama, R

    2011-10-01

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Grün, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008)] and a method of triggering was developed [S. Auer, G. Lawrence, E. Grün, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010)]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1° in direction.

  12. Dust trajectory sensor: Accuracy and data analysis

    SciTech Connect

    Xie, J.; Horanyi, M.; Sternovsky, Z.; Gruen, E.; Duncan, N.; Drake, K.; Le, H.; Auer, S.; Srama, R.

    2011-10-15

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Gruen, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008)] and a method of triggering was developed [S. Auer, G. Lawrence, E. Gruen, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010)]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1 deg. in direction.

  13. Improvements on the accuracy of beam bugs

    SciTech Connect

    Chen, Y J; Fessenden, T

    1998-09-02

    At LLNL, resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These monitors, known locally as "beam bugs", have been used throughout linear induction accelerators as essential diagnostics of beam current and location. The monitors used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug. This paper presents the analysis and experimental test of the beam bugs used for beam current and position measurements in and after the fast kicker. It concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.

  14. Improvements on the accuracy of beam bugs

    SciTech Connect

    Chen, Y.J.; Fessenden, T.

    1998-08-17

    At LLNL, resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These monitors, known locally as "beam bugs", have been used throughout linear induction accelerators as essential diagnostics of beam current and location. The monitors used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug. This paper presents the analysis and experimental test of the beam bugs used for beam current and position measurements in and after the fast kicker. It concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.

  15. Presentation accuracy of Web animation methods.

    PubMed

    Schmidt, W C

    2001-05-01

    Several Web animation methods were independently assessed on fast and slow systems running two popular Web browsers under MacOS and Windows. The methods assessed included those requiring programming (Authorware, Java, Javascript/Jscript), browser extensions (Flash and Authorware), or neither (animated GIF). The number of raster scans that an image in an animation was presented for was counted. This was used as an estimate of the minimum presentation time for the image when the software was set to update the animation as quickly as possible. In a second condition, the image was set to be displayed for 100 msec, and differences between observed and expected presentations were used to assess accuracy. In general, all the methods except Java deteriorated as a function of the speed of the computer system, with the poorest temporal resolutions and greatest variability occurring on slower systems. For some animation methods, poor performance was dependent on browser, operating system, system speed, or combinations of these.

  16. Accuracy of an earpiece face-bow.

    PubMed

    Palik, J F; Nelson, D R; White, J T

    1985-06-01

    The validity of the Hanau ear-bow to transfer an arbitrary hinge axis to a Hanau articulator was clinically compared with a Hanau kinematic face-bow. The study was conducted with 18 randomly selected patients. This investigation demonstrated a significant statistical difference between the arbitrary axis located with an ear-bow and the terminal hinge axis. This discrepancy was significant in the anteroposterior direction but not in the superior-inferior direction. Only 50% of the arbitrary hinge axes were within a 5 mm radius of the terminal hinge axis, while 89% were within a 6 mm radius. Furthermore, the ear-bow method was not repeatable statistically. Additional study is needed to determine the practical value of the arbitrary face-bow and to pursue modifications to improve its accuracy.

  17. High current high accuracy IGBT pulse generator

    SciTech Connect

    Nesterov, V.V.; Donaldson, A.R.

    1995-05-01

    A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBT). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse to pulse power losses. The rack mounted pulse generator contains a 525 µF capacitor bank. It can deliver 500 A at 900 V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles.

  18. Statistical fitting accuracy in photon correlation spectroscopy

    NASA Technical Reports Server (NTRS)

    Shaumeyer, J. N.; Briggs, Matthew E.; Gammon, Robert W.

    1993-01-01

    Continuing our experimental investigation of the fitting accuracy associated with photon correlation spectroscopy, we collect 150 correlograms of light scattered at 90 deg from a thermostated sample of 91-nm-diameter, polystyrene latex spheres in water. The correlograms are taken with two correlators: one with linearly spaced channels and one with geometrically spaced channels. Decay rates are extracted from the single-exponential correlograms with both nonlinear least-squares fits and second-order cumulant fits. We make several statistical comparisons between the two fitting techniques and verify an earlier result that there is no sample-time dependence in the decay rate errors. We find, however, that the two fitting techniques give decay rates that differ by 1 percent.
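
    The two fitting approaches compared above can be sketched on a synthetic single-exponential correlogram: a second-order cumulant fit of ln|g1(tau)| versus tau, and a nonlinear single-exponential least-squares fit. The decay rate, delay channels, and noise level below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

gamma_true = 2.0e3                       # decay rate [1/s]
tau = np.geomspace(1e-6, 2e-3, 64)       # geometrically spaced delay times [s]
g1 = np.exp(-gamma_true * tau)           # field correlation function
g1_noisy = g1 * (1.0 + rng.normal(0.0, 5e-3, tau.size))

# Second-order cumulant expansion: ln g1 = ln B - Gamma*tau + (mu2/2)*tau^2
coeffs = np.polyfit(tau, np.log(np.abs(g1_noisy)), deg=2)
mu2_half, neg_gamma, ln_b = coeffs       # highest-order coefficient first
gamma_cumulant = -neg_gamma

# Nonlinear single-exponential fit for comparison
popt, _ = curve_fit(lambda t, b, g: b * np.exp(-g * t), tau, g1_noisy,
                    p0=(1.0, 1.0e3))
print(f"cumulant fit:  Gamma = {gamma_cumulant:8.1f} 1/s")
print(f"nonlinear fit: Gamma = {popt[1]:8.1f} 1/s (true {gamma_true:.0f})")
```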

  19. Accuracy Estimation in Force Spectroscopy Experiments

    NASA Astrophysics Data System (ADS)

    Rankl, Christian; Kienberger, Ferry; Gruber, Hermann; Blaas, Dieter; Hinterdorfer, Peter

    2007-08-01

    Force spectroscopy is a useful tool for the investigation of molecular interactions. We here present a detailed analysis of parameter estimation in force spectroscopy experiments. It provides the values of the statistical errors of the kinetic off-rate constant koff and the energy length scale xβ to be considered using the single barrier model. As a biologically relevant experimental system we used the interaction between human rhinovirus serotype 2 and a recombinant derivative of the very-low-density lipoprotein receptor. The interaction forces of single virus-receptor pairs were measured at different loading rates and analysed according to the single barrier model. Accuracy estimates of koff and xβ were obtained by Monte Carlo simulation and bootstrapping. For this model of virus-receptor attachment, force spectroscopy experiments yielded xβ = (0.38 ± 0.07) nm and ln koff = −2.3 ± 1.0 (koff in s⁻¹).
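
    A minimal sketch of the single-barrier (Bell-Evans) analysis with a bootstrap error estimate follows. It fits the most probable rupture force F* against the logarithm of the loading rate r, F* = (kBT/xβ) ln(r·xβ/(koff·kBT)), on synthetic data; the loading rates, scatter, and bootstrap size are assumptions, with the "true" parameters set to the values reported above purely for illustration.

```python
import numpy as np

kBT = 4.11e-21                              # thermal energy at ~298 K [J]
rng = np.random.default_rng(4)

x_beta_true, koff_true = 0.38e-9, 0.10      # [m], [1/s] (illustrative "truth")
rates = np.geomspace(100e-12, 50e-9, 8)     # loading rates [N/s]
f_star = kBT / x_beta_true * np.log(rates * x_beta_true / (koff_true * kBT))
f_star += rng.normal(0.0, 2e-12, f_star.size)   # +/-2 pN scatter

def fit_bell_evans(r, f):
    """Linear fit of F* vs ln(r); slope gives x_beta, intercept gives koff."""
    slope, intercept = np.polyfit(np.log(r), f, 1)
    x_beta = kBT / slope
    koff = x_beta / (kBT * np.exp(intercept / slope))
    return x_beta, koff

# Bootstrap over the (rate, force) pairs to estimate parameter uncertainty
boot = []
for _ in range(2000):
    i = rng.integers(0, len(rates), len(rates))
    boot.append(fit_bell_evans(rates[i], f_star[i]))
boot = np.array(boot)

xb, ko = fit_bell_evans(rates, f_star)
print(f"x_beta  = {xb*1e9:.2f} +/- {boot[:, 0].std()*1e9:.2f} nm")
print(f"ln koff = {np.log(ko):.2f} +/- {np.log(boot[:, 1]).std():.2f}")
```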

  20. Quantitative code accuracy evaluation of ISP33

    SciTech Connect

    Kalli, H.; Miwrrin, A.; Purhonen, H.

    1995-09-01

    Aiming at quantifying code accuracy, a methodology based on the Fast Fourier Transform has been developed at the University of Pisa, Italy. The paper deals with a short presentation of the methodology and its application to pre-test and post-test calculations submitted to the International Standard Problem ISP33. This was a double-blind natural circulation exercise with a stepwise reduced primary coolant inventory, performed in the PACTEL facility in Finland. PACTEL is a 1/305 volumetrically scaled, full-height simulator of the Russian type VVER-440 pressurized water reactor, with horizontal steam generators and loop seals in both cold and hot legs. Fifteen foreign organizations participated in ISP33, submitting 21 blind calculations and 20 post-test calculations; altogether 10 different thermal-hydraulic codes and code versions were used. The results of the application of the methodology to nine selected measured quantities are summarized.
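
    The central figure of merit in this FFT-based approach is an "average amplitude" that compares the spectrum of the code-experiment discrepancy with the spectrum of the experimental signal (zero means perfect agreement). The sketch below uses that commonly cited form on synthetic time traces; it is an illustration of the idea, not a reproduction of the ISP33 analysis.

```python
import numpy as np

def average_amplitude(exp, calc):
    """Spectral discrepancy measure: sum|FFT(calc-exp)| / sum|FFT(exp)|."""
    err_spec = np.abs(np.fft.rfft(calc - exp))
    exp_spec = np.abs(np.fft.rfft(exp))
    return err_spec.sum() / exp_spec.sum()

t = np.linspace(0.0, 2000.0, 1024)                 # transient time [s]
exp = 7.0 - 3.0e-3 * t + 0.3 * np.sin(t / 40.0)    # "measured" pressure [MPa]
calc_good = exp + 0.05 * np.sin(t / 25.0)          # close calculation
calc_poor = 7.0 - 2.0e-3 * t                       # misses trend and oscillation

print(f"AA (good calculation): {average_amplitude(exp, calc_good):.3f}")
print(f"AA (poor calculation): {average_amplitude(exp, calc_poor):.3f}")
```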

  1. Accuracy of lineaments mapping from space

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M.

    1989-01-01

    The use of Landsat and other space imaging systems for lineament detection is analyzed in terms of their effectiveness in recognizing and mapping fractures and faults, and the results of several studies providing a quantitative assessment of lineament mapping accuracy are discussed. The cases under investigation include a Landsat image of the surface overlying a part of the Anadarko Basin of Oklahoma, Landsat images and selected radar imagery of major lineament systems distributed over much of the Canadian Shield, and space imagery covering a part of the East African Rift in Kenya. It is demonstrated that space imagery can detect a significant portion of a region's fracture pattern; however, significant fractions of faults and fractures recorded on a field-produced geological map are missing from the imagery, as is evident in the Kenya case.

  2. ACCURACY LIMITATIONS IN LONG TRACE PROFILOMETRY.

    SciTech Connect

    TAKACS,P.Z.; QIAN,S.

    2003-08-25

    As requirements for surface slope error quality of grazing incidence optics approach the 100 nanoradian level, it is necessary to improve the performance of the measuring instruments to achieve accurate and repeatable results at this level. We have identified a number of internal error sources in the Long Trace Profiler (LTP) that affect measurement quality at this level. The LTP is sensitive to phase shifts produced within the millimeter diameter of the pencil beam probe by optical path irregularities with scale lengths of a fraction of a millimeter. We examine the effects of mirror surface "macroroughness" and internal glass homogeneity on the accuracy of the LTP through experiment and theoretical modeling. We will place limits on the allowable surface "macroroughness" and glass homogeneity required to achieve accurate measurements in the nanoradian range.

  3. Stereotype accuracy of ballet and modern dancers.

    PubMed

    Clabaugh, Alison; Morling, Beth

    2004-02-01

    The authors recorded preprofessional ballet and modern dancers' perceptions of the personality traits of each type of dancer and self-reports of their own standing, to test the accuracy of the group stereotypes. Participants accurately stereotyped ballet dancers as scoring higher than modern dancers on Fear of Negative Evaluation and Personal Need for Structure and accurately viewed the groups as equal on Fitness Esteem. Participants inaccurately stereotyped ballet dancers as lower on Body Esteem; the groups actually scored the same. Sensitivity correlations across traits indicated that dancers were accurate about the relative magnitudes of trait differences in the two types of dancers. A group of nondancers reported stereotypes that were usually in the right direction although of inaccurate magnitude, and nondancers were sensitive to the relative sizes of group differences across traits.

  4. The empirical accuracy of uncertain inference models

    NASA Technical Reports Server (NTRS)

    Vaughan, David S.; Yadrick, Robert M.; Perrin, Bruce M.; Wise, Ben P.

    1987-01-01

    Uncertainty is a pervasive feature of the domains in which expert systems are designed to function. Research designed to test uncertain inference methods for accuracy and robustness, in accordance with standard engineering practice, is reviewed. Several studies were conducted to assess how well various methods perform on problems constructed so that correct answers are known, and to find out what underlying features of a problem cause strong or weak performance. For each method studied, situations were identified in which performance deteriorates dramatically. Over a broad range of problems, some well-known methods do only about as well as a simple linear regression model, and often much worse than a simple independence probability model. The results indicate that some commercially available expert system shells should be used with caution, because the uncertain inference models that they implement can yield rather inaccurate results.

  5. Positioning accuracy of the neurotron 1000

    SciTech Connect

    Cox, R.S.; Murphy, M.J.

    1995-12-31

    The Neurotron 1000 is a novel treatment machine under development for frameless stereotaxic radiosurgery that consists of a compact X-band accelerator mounted on a robotic arm. The therapy beam is guided to the lesion by an imaging system, which includes two diagnostic x-ray cameras that view the patient during treatment. Patient position and motion are measured by the imaging system and appropriate corrections are communicated in real time to the robotic arm for beam targeting and motion tracking. The three tests reported here measured the pointing accuracy of the therapy beam and the present capability of the imaging guidance system. The positioning and pointing test measured the ability of the robotic arm to direct the beam through a test isocenter from arbitrary arm positions. The test isocenter was marked by a small light-sensitive crystal and the beam axis was simulated by a laser.

  6. Accuracy of the vivofit activity tracker.

    PubMed

    Alsubheen, Sana'a A; George, Amanda M; Baker, Alicia; Rohr, Linda E; Basset, Fabien A

    2016-08-01

    The purpose of this study was to examine the accuracy of the vivofit activity tracker in assessing energy expenditure and step count. Thirteen participants wore the vivofit activity tracker for five days. Participants were required to independently perform 1 h of self-selected activity each day of the study. On day four, participants came to the lab to undergo BMR and a treadmill-walking task (TWT). On day five, participants completed 1 h of office-type activities. BMR values estimated by the vivofit were not significantly different from the values measured through indirect calorimetry (IC). The vivofit significantly underestimated EE for treadmill walking, but responded to the differences in the inclination. Vivofit underestimated step count for level walking but provided an accurate estimate for incline walking. There was a strong correlation between EE and the exercise intensity. The vivofit activity tracker is on par with similar devices and can be used to track physical activity.

  7. Moyamoya disease: diagnostic accuracy of MRI.

    PubMed

    Yamada, I; Suzuki, S; Matsushima, Y

    1995-07-01

    Our purpose was to evaluate the diagnostic accuracy of MRI in moyamoya disease. We studied 30 patients with this disease, comparing MRI and angiographic findings. The diagnostic value of MRI was evaluated for occlusive lesions, collateral vessels, and parenchymal lesions. In all patients bilateral occlusion or stenosis of the supraclinoid internal carotid artery and proximal anterior and middle cerebral arteries was clearly shown by MRI, and staging of the extent of occlusion agreed with angiographic staging in 44 (73%) of 60 arteries. MRI, particularly coronal images, clearly showed basal cerebral moyamoya vessels in 54 hemispheres, and 45 of a total of 71 large leptomeningeal and transdural collateral vessels were identified. MRI also showed parenchymal lesions in 48 (80%) hemispheres, and the extent of occlusion in the anterior and posterior circulations respectively correlated with white matter and cortical and/or subcortical infarcts.

  8. Accuracy of maxillary positioning in bimaxillary surgery.

    PubMed

    Kretschmer, W B; Zoder, W; Baciut, G; Bacuit, Mihaela; Wangerin, K

    2009-09-01

    The aim of the study was to investigate the accuracy of a modified pin system for the vertical control of maxillary repositioning in bimaxillary osteotomies. The preoperative cephalograms of 239 consecutive patients who were to have bimaxillary osteotomies were superimposed on the postoperative films. Planned and observed vertical and horizontal movements of the upper incisor were analysed statistically. The mean deviations of -0.07 mm (95% confidence intervals (CIs) -0.17 to 0.04 mm) for the vertical movement and 0.12 mm (95% CI -0.06 to 0.30 mm) for the horizontal movement did not differ significantly from zero. Comparison of the two variances between intrusion and extrusion of the maxilla did not differ significantly either (p=0.51). These results suggest that the modified pin system for vertical control combined with interocclusal splints provides accurate vertical positioning of the anterior maxilla in orthognathic surgery.

  9. Quantum mechanical calculations to chemical accuracy

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.

    1991-01-01

    The accuracy of current molecular-structure calculations is illustrated with examples of quantum mechanical solutions for chemical problems. Two approaches are considered: (1) the coupled-cluster singles and doubles (CCSD) with a perturbational estimate of the contribution of connected triple excitations, or CCSD(T); and (2) the multireference configuration-interaction (MRCI) approach to the correlation problem. The MRCI approach gains greater applicability by means of size-extensive modifications such as the averaged coupled-pair functional approach. The examples of solutions to chemical problems include those for C-H bond energies, the vibrational frequencies of O3, identifying the ground state of Al2 and Si2, and the Lewis-Rayleigh afterglow and the Hermann IR system of N2. Accurate molecular-wave functions can be derived from a combination of basis-set saturation studies and full configuration-interaction calculations.

  10. Accuracy of the Cloud Integrating Nephelometer

    NASA Technical Reports Server (NTRS)

    Gerber, Hermann E.

    2004-01-01

    Potential error sources for measurements with the Cloud Integrating Nephelometer (CIN) are discussed and analyzed, including systematic errors of the measurement approach, flow and particle-trajectory deviations at flight velocity, ice-crystal breakup on probe surfaces, and errors in calibration and developing scaling constants. It is concluded that errors are minimal, and that the accuracy of the CIN should be close to the systematic behavior of the CIN derived in Gerber et al (2000). Absolute calibration of the CIN with a transmissometer operating co-located in a mountain-top cloud shows that the earlier scaling constant for the optical extinction coefficient obtained by other means is within 5% of the absolute calibration value, and that the CIN measurements on the Citation aircraft flights during the CRYSTAL-FACE study are accurate.

  11. Dosimetric accuracy of a staged radiosurgery treatment

    NASA Astrophysics Data System (ADS)

    Cernica, George; de Boer, Steven F.; Diaz, Aidnag; Fenstermaker, Robert A.; Podgorsak, Matthew B.

    2005-05-01

    For large cerebral arteriovenous malformations (AVMs), the efficacy of radiosurgery is limited since the large doses necessary to produce obliteration may increase the risk of radiation necrosis to unacceptable levels. An alternative is to stage the radiosurgery procedure over multiple stages (usually two), effectively irradiating a smaller volume of the AVM nidus with a therapeutic dose during each session. The difference between coordinate systems defined by sequential stereotactic frame placements can be represented by a translation and a rotation. A unique transformation can be determined based on the coordinates of several fiducial markers fixed to the skull and imaged in each stereotactic coordinate system. Using this transformation matrix, isocentre coordinates from the first stage can be displayed in the coordinate system of subsequent stages allowing computation of a combined dose distribution covering the entire AVM. The accuracy of this approach was tested on an anthropomorphic head phantom and was verified dosimetrically. Subtle defects in the phantom were used as control points, and 2 mm diameter steel balls attached to the surface were used as fiducial markers and reference points. CT images (2 mm thick) were acquired. Using a transformation matrix developed with two frame placements, the predicted locations of control and reference points had an average error of 0.6 mm near the fiducial markers and 1.0 mm near the control points. Dose distributions in a staged treatment approach were accurately calculated using the transformation matrix. This approach is simple, fast and accurate. Errors were small and clinically acceptable for Gamma Knife radiosurgery. Accuracy can be improved by reducing the CT slice thickness.

  12. Combining Multiple Gyroscope Outputs for Increased Accuracy

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    2003-01-01

    A proposed method of processing the outputs of multiple gyroscopes to increase the accuracy of rate (that is, angular-velocity) readings has been developed theoretically and demonstrated by computer simulation. Although the method is applicable, in principle, to any gyroscopes, it is intended especially for application to gyroscopes that are parts of microelectromechanical systems (MEMS). The method is based on the concept that the collective performance of multiple, relatively inexpensive, nominally identical devices can be better than that of one of the devices considered by itself. The method would make it possible to synthesize the readings of a single, more accurate gyroscope (a virtual gyroscope) from the outputs of a large number of microscopic gyroscopes fabricated together on a single MEMS chip. The big advantage would be that the combination of the MEMS gyroscope array and the processing circuitry needed to implement the method would be smaller, lighter in weight, and less power-hungry, relative to a conventional gyroscope of equal accuracy. The method (see figure) is one of combining and filtering the digitized outputs of multiple gyroscopes to obtain minimum-variance estimates of rate. In the combining-and-filtering operations, measurement data from the gyroscopes would be weighted and smoothed with respect to each other according to the gain matrix of a minimum- variance filter. According to Kalman-filter theory, the gain matrix of the minimum-variance filter is uniquely specified by the filter covariance, which propagates according to a matrix Riccati equation. The present method incorporates an exact analytical solution of this equation.
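    The full minimum-variance filter described above propagates its gain through a matrix Riccati equation; as a much simpler hedge, the sketch below fuses simultaneous readings from several gyros with static inverse-variance weights, which conveys the basic idea that an array of noisy devices can outperform any single one. The function name, noise variances, and the static-weighting simplification are illustrative assumptions, not the author's Kalman-filter formulation.

    import numpy as np

    def fuse_gyro_rates(readings, noise_vars):
        """Combine simultaneous rate readings from several gyros into one estimate
        using static inverse-variance (minimum-variance) weighting.
        readings   : array (n_gyros,) -- rates at one time step
        noise_vars : array (n_gyros,) -- assumed noise variance of each gyro
        Returns the fused rate and its variance."""
        inv_vars = 1.0 / np.asarray(noise_vars, dtype=float)
        weights = inv_vars / inv_vars.sum()
        fused = float(np.dot(weights, readings))
        fused_var = 1.0 / inv_vars.sum()
        return fused, fused_var

    # Example: four nominally identical MEMS gyros observing the same 0.5 rad/s rate
    rng = np.random.default_rng(0)
    true_rate = 0.5
    noise_vars = np.array([1e-4, 1e-4, 4e-4, 4e-4])
    readings = true_rate + rng.normal(0.0, np.sqrt(noise_vars))
    rate, var = fuse_gyro_rates(readings, noise_vars)
    print(f"fused rate = {rate:.4f} rad/s (variance {var:.2e})")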

  13. Methodology for high accuracy contact angle measurement.

    PubMed

    Kalantarian, A; David, R; Neumann, A W

    2009-12-15

    A new version of axisymmetric drop shape analysis (ADSA) called ADSA-NA (ADSA-no apex) was developed for measuring interfacial properties for drop configurations without an apex. ADSA-NA facilitates contact angle measurements on drops with a capillary protruding into the drop. Thus a much simpler experimental setup, not involving formation of a complete drop from below through a hole in the test surface, may be used. The contact angles of long-chained alkanes on a commercial fluoropolymer, Teflon AF 1600, were measured using the new method. A new numerical scheme was incorporated into the image processing to improve the location of the contact points of the liquid meniscus with the solid substrate to subpixel resolution. The images acquired in the experiments were also analyzed by a different drop shape technique called theoretical image fitting analysis-axisymmetric interfaces (TIFA-AI). The results were compared with literature values obtained by means of the standard ADSA for sessile drops with the apex. Comparison of the results from ADSA-NA with those from TIFA-AI and ADSA reveals that, with different numerical strategies and experimental setups, contact angles can be measured with an accuracy of less than 0.2 degrees. Contact angles and surface tensions measured from drops with no apex, i.e., by means of ADSA-NA and TIFA-AI, were considerably less scattered than those from complete drops with apex. ADSA-NA was also used to explore sources of improvement in contact angle resolution. It was found that using an accurate value of surface tension as an input enhances the accuracy of contact angle measurements.

  14. Improving the accuracy of death certification

    PubMed Central

    Myers, K A; Farquhar, D R

    1998-01-01

    BACKGROUND: Population-based mortality statistics are derived from the information recorded on death certificates. This information is used for many important purposes, such as the development of public health programs and the allocation of health care resources. Although most physicians are confronted with the task of completing death certificates, many do not receive adequate training in this skill. Resulting inaccuracies in information undermine the quality of the data derived from death certificates. METHODS: An educational intervention was designed and implemented to improve internal medicine residents' accuracy in death certificate completion. A total of 229 death certificates (146 completed before and 83 completed after the intervention) were audited for major and minor errors, and the rates of errors before and after the intervention were compared. RESULTS: Major errors were identified on 32.9% of the death certificates completed before the intervention, a rate comparable to previously reported rates for internal medicine services in teaching hospitals. Following the intervention the major error rate decreased to 15.7% (p = 0.01). The reduction in the major error rate was accounted for by significant reductions in the rate of listing of mechanism of death without a legitimate underlying cause of death (15.8% v. 4.8%) (p = 0.01) and the rate of improper sequencing of death certificate information (15.8% v. 6.0%) (p = 0.03). INTERPRETATION: Errors are common in the completion of death certificates in the inpatient teaching hospital setting. The accuracy of death certification can be improved with the implementation of a simple educational intervention. PMID:9614825

  15. Matters of Accuracy and Conventionality: Prior Accuracy Guides Children's Evaluations of Others' Actions

    ERIC Educational Resources Information Center

    Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed

    2013-01-01

    Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clement, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and…

  16. 40 CFR 86.1338-2007 - Emission measurement accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Emission measurement accuracy. 86.1338... Procedures § 86.1338-2007 Emission measurement accuracy. (a) Minimum limit. (1) The minimum limit of an... measurement must be made to ensure the accuracy of the calibration curve to within ±2 percent of...

  17. 40 CFR 86.1338-2007 - Emission measurement accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Emission measurement accuracy. 86.1338... Procedures § 86.1338-2007 Emission measurement accuracy. (a) Minimum limit. (1) The minimum limit of an... measurement must be made to ensure the accuracy of the calibration curve to within ±2 percent of...

  18. Accuracy of GFR Estimation in Obese Patients

    PubMed Central

    Guebre-Egziabher, Fitsum; Sens, Florence; Nguyen-Tu, Marie-Sophie; Juillard, Laurent; Dubourg, Laurence; Hadj-Aissa, Aoumeur

    2014-01-01

    Background and objectives Adequate estimation of renal function in obese patients is essential for the classification of patients in CKD category as well as the dose adjustment of drugs. However, the body size descriptor for GFR indexation is still debatable, and formulas are not validated in patients with extreme variations of weight. Design, setting, participants, & measurements This study included 209 stages 1–5 CKD obese patients referred to the Department of Renal Function Study at the University Hospital in Lyon between 2010 and 2013 because of suspected renal dysfunction. GFR was estimated with the Chronic Kidney Disease and Epidemiology equation (CKD-EPI) and measured with a gold standard method (inulin or iohexol) not indexed (mGFR) or indexed to body surface area determined by the Dubois and Dubois formula with either real (mGFRr) or ideal (mGFRi) body weight. Mean bias (eGFR−mGFR), precision, and accuracy of mGFR were compared with the results obtained for nonobese participants (body mass index between 18.5 and 24.9) who had a GFR measurement during the same period of time. Results Mean mGFRr (51.6±24.2 ml/min per 1.73 m2) was significantly lower than mGFR, mGFRi, and eGFRCKD-EPI. eGFRCKD-EPI had less bias with mGFR (0.29; −1.7 to 2.3) and mGFRi (−1.62; −3.1 to 0.45) compared with mGFRr (8.7; 7 to 10). This result was confirmed with better accuracy for the whole cohort (78% for mGFR, 84% for mGFRi, and 72% for mGFRr) and participants with CKD stages 3–5. Moreover, the Bland Altman plot showed better agreement between mGFR and eGFRCKD-EPI. The bias between eGFRCKD-EPI and mGFRr was greater in obese than nonobese participants (8.7 versus 0.58, P<0.001). Conclusions This study shows that, in obese CKD patients, the performance of eGFRCKD-EPI is good for GFR≤60 ml/min per 1.73 m2. Indexation of mGFR with body surface area using ideal body weight gives less bias than mGFR scaled with body surface area using real body weight. PMID:24482068
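    A minimal sketch of the indexation step contrasted above, assuming the Du Bois body-surface-area formula and purely illustrative patient values; the 70 kg "ideal" weight is a placeholder and is not derived from any particular formula used in the study.

    def bsa_dubois(weight_kg, height_cm):
        """Du Bois & Du Bois body surface area (m^2)."""
        return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

    def indexed_gfr(mgfr_ml_min, weight_kg, height_cm):
        """Scale a measured GFR (mL/min) to the conventional 1.73 m^2 body surface area."""
        return mgfr_ml_min * 1.73 / bsa_dubois(weight_kg, height_cm)

    # Example: the same measured GFR indexed with real vs. an assumed ideal body weight
    mgfr = 80.0                                               # mL/min, not indexed
    real_weight, ideal_weight, height = 120.0, 70.0, 175.0    # kg, kg, cm (illustrative)
    print(f"indexed with real weight : {indexed_gfr(mgfr, real_weight, height):.1f} mL/min/1.73 m^2")
    print(f"indexed with ideal weight: {indexed_gfr(mgfr, ideal_weight, height):.1f} mL/min/1.73 m^2")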

  19. Time and position accuracy using codeless GPS

    NASA Technical Reports Server (NTRS)

    Dunn, C. E.; Jefferson, D. C.; Lichten, S. M.; Thomas, J. B.; Vigue, Y.; Young, L. E.

    1994-01-01

    The Global Positioning System has allowed scientists and engineers to make measurements having accuracy far beyond the original 15 meter goal of the system. Using global networks of P-Code capable receivers and extensive post-processing, geodesists have achieved baseline precision of a few parts per billion, and clock offsets have been measured at the nanosecond level over intercontinental distances. A cloud hangs over this picture, however. The Department of Defense plans to encrypt the P-Code (called Anti-Spoofing, or AS) in the fall of 1993. After this event, geodetic and time measurements will have to be made using codeless GPS receivers. However, there appears to be a silver lining to the cloud. In response to the anticipated encryption of the P-Code, the geodetic and GPS receiver community has developed some remarkably effective means of coping with AS without classified information. We will discuss various codeless techniques currently available and the data noise resulting from each. We will review some geodetic results obtained using only codeless data, and discuss the implications for time measurements. Finally, we will present the status of GPS research at JPL in relation to codeless clock measurements.

  20. Accuracy and robustness evaluation in stereo matching

    NASA Astrophysics Data System (ADS)

    Nguyen, Duc M.; Hanca, Jan; Lu, Shao-Ping; Schelkens, Peter; Munteanu, Adrian

    2016-09-01

    Stereo matching has received a lot of attention from the computer vision community, thanks to its wide range of applications. Despite the large variety of algorithms that have been proposed so far, it is not trivial to select suitable algorithms for the construction of practical systems. One of the main problems is that many algorithms lack sufficient robustness when employed in various operational conditions. This problem is due to the fact that most of the proposed methods in the literature are usually tested and tuned to perform well on one specific dataset. To alleviate this problem, an extensive evaluation in terms of accuracy and robustness of state-of-the-art stereo matching algorithms is presented. Three datasets (Middlebury, KITTI, and MPEG FTV) representing different operational conditions are employed. Based on the analysis, improvements over existing algorithms have been proposed. The experimental results show that our improved versions of cross-based and cost volume filtering algorithms outperform the original versions with large margins on Middlebury and KITTI datasets. In addition, the latter of the two proposed algorithms ranks among the best local stereo matching approaches on the KITTI benchmark. Under evaluations using specific settings for depth-image-based-rendering applications, our improved belief propagation algorithm is less complex than MPEG's FTV depth estimation reference software (DERS), while yielding similar depth estimation performance. Finally, several conclusions on stereo matching algorithms are also presented.

  1. [Accuracy of a pulse oximeter during hypoxia].

    PubMed

    Tachibana, C; Fukada, T; Hasegawa, R; Satoh, K; Furuya, Y; Ohe, Y

    1996-04-01

    The accuracy of the pulse oximeter was examined in hypoxic patients. We studied 11 cyanotic congenital heart disease patients during surgery, and compared the arterial oxygen saturation determined by both simultaneous blood gas analysis (CIBA-CORNING 288 BLOOD GAS SYSTEM, SaO2) and the pulse oximeter (DATEX SATELITE, with finger probe, SpO2). Ninety sets of data on SpO2 and SaO2 were obtained. The bias (SpO2-SaO2) was 1.7 +/- 6.9 (mean +/- SD) %. In cyanotic congenital heart disease patients, SpO2 values were significantly higher than SaO2. Although the reason is unknown, in constantly hypoxic patients, SpO2 values are possibly overestimated. In particular, pulse oximetry at low levels of saturation (SaO2 below 80%) was not as accurate as at a higher saturation level (SaO2 over 80%). There was a positive correlation between SpO2 and SaO2 (linear regression analysis yields the equation y = 0.68x + 26.0, r = 0.93). In conclusion, the pulse oximeter is useful to monitor oxygen saturation in constantly hypoxic patients, but the values thus obtained should be compared with values measured directly when hypoxemia is severe.

  2. Accuracy Improvement for Predicting Parkinson's Disease Progression.

    PubMed

    Nilashi, Mehrbakhsh; Ibrahim, Othman; Ahani, Ali

    2016-09-30

    Parkinson's disease (PD) is a member of a larger group of neuromotor diseases marked by the progressive death of dopamine-producing cells in the brain. Computational tools for Parkinson's disease built on medical data are highly desirable because they can help people discover their risk of the disease at an early stage. This paper proposes a new hybrid intelligent system for the prediction of PD progression using noise removal, clustering and prediction methods. Principal Component Analysis (PCA) and Expectation Maximization (EM) are employed, respectively, to address the multi-collinearity problems in the experimental datasets and to cluster the data. We then apply the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR) for prediction of PD progression. Experimental results on public Parkinson's datasets show that the proposed method remarkably improves the accuracy of prediction of PD progression. The hybrid intelligent system can assist medical practitioners in healthcare practice in the early detection of Parkinson's disease.
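    A minimal sketch of the general pipeline shape described above (noise removal with PCA, EM clustering, then a per-cluster regressor), using scikit-learn's GaussianMixture as the EM step and SVR only, since ANFIS has no standard scikit-learn implementation. The data, feature count, cluster count, and component count are synthetic placeholders, not the paper's settings.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 16))                 # stand-in for voice-measurement features
    y = X[:, 0] * 2.0 + rng.normal(size=200)       # stand-in for a progression score

    # 1) Noise/collinearity reduction with PCA
    X_pca = PCA(n_components=8).fit_transform(StandardScaler().fit_transform(X))

    # 2) EM clustering of the records (GaussianMixture is fitted by expectation-maximization)
    clusters = GaussianMixture(n_components=3, random_state=0).fit_predict(X_pca)

    # 3) One SVR per cluster (SVR stands in here for the paper's ANFIS/SVR predictors)
    models = {c: SVR().fit(X_pca[clusters == c], y[clusters == c]) for c in np.unique(clusters)}
    preds = np.array([models[c].predict(x.reshape(1, -1))[0] for x, c in zip(X_pca, clusters)])
    print(f"in-sample RMSE of the illustrative pipeline: {np.sqrt(np.mean((preds - y) ** 2)):.3f}")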

  3. Accuracy of clinical diagnosis in knee arthroscopy.

    PubMed Central

    Brooks, Stuart; Morgan, Mamdouh

    2002-01-01

    A prospective study of 238 patients was performed in a district general hospital to assess current diagnostic accuracy rates and to ascertain the use and the effectiveness of magnetic resonance imaging (MRI) scanning in reducing the number of negative arthroscopies. The pre-operative diagnosis of patients listed for knee arthroscopy was medial meniscus tear 94 (40%) and osteoarthritis 59 (25%). MRI scans were requested in 57 patients (24%) with medial meniscus tear representing 65% (37 patients). The correlation study was done between pre-operative diagnosis, MRI and arthroscopic diagnosis. Clinical diagnosis was as accurate as the MRI with 79% agreement between the preoperative diagnosis and arthroscopy compared to 77% agreement between MRI scan and arthroscopy. There was no evidence, in this study, that MRI scan can reduce the number of negative arthroscopies. Four normal MRI scans had positive arthroscopic diagnosis (two torn medial meniscus, one torn lateral meniscus and one chondromalacia patella). Out of 240 arthroscopies, there were only 10 normal knees (negative arthroscopy) representing 4% of the total number of knee arthroscopies; one patient of those 10 cases had MRI scan with ACL rupture diagnosis. PMID:12215031

  4. Firing temperature accuracy of four dental furnaces.

    PubMed

    Haag, Per; Ciber, Edina; Dérand, Tore

    2011-01-01

    In spite of using recommended firing and displayed temperatures, low-fired dental porcelain more often demonstrates unsatisfactory results after firing than porcelain fired at higher temperatures. It could therefore be anticipated that the temperatures shown on the display are incorrect, implying that the furnace does not render correct firing programs for low-fired porcelain. The purpose of this study is to investigate deviations from the real temperature during the firing process and also to illustrate the service and maintenance discipline of furnaces at dental laboratories. In total, 20 units of four different types of dental furnaces were selected for testing of temperature accuracy using a digital temperature measurement apparatus, Therma 1. In addition, the staff at 68 dental laboratories in Sweden were contacted for a telephone interview on furnace brand and on the service and maintenance program performed at their laboratories. None of the 20 different dental furnaces in the study could generate the firing temperatures shown on the display, indicating that the hypothesis was correct. The Multimat MCII had the least deviation of temperature compared with display figures. Sixty-two of the 68 invited dental laboratories chose to participate in the interviews, and the result was that very few laboratories had a service and maintenance program living up to quality standards. There is room for improving the precision of dental porcelain furnaces, as there are deviations between displayed and read temperatures during the different steps of the firing process.

  5. Classification accuracy of actuarial risk assessment instruments.

    PubMed

    Neller, Daniel J; Frederick, Richard I

    2013-01-01

    Users of commonly employed actuarial risk assessment instruments (ARAIs) hope to generate numerical probability statements about risk; however, ARAI manuals often do not explicitly report data that are essential for understanding the classification accuracy of the instruments. In addition, ARAI manuals often contain data that have the potential for misinterpretation. The authors of the present article address the accurate generation of probability statements. First, they illustrate how the reporting of numerical probability statements based on proportions rather than predictive values can mislead users of ARAIs. Next, they report essential test characteristics that, to date, have gone largely unreported in ARAI manuals. Then they discuss a graphing method that can enhance the practice of clinicians who communicate risk via numerical probability statements. After the authors review several strategies for selecting optimal cut-off scores, they show how the graphing method can be used to estimate positive predictive values for each cut-off score of commonly used ARAIs, across all possible base rates. They also show how the graphing method can be used to estimate base rates of violent recidivism in local samples.
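    The graphing method described above rests on Bayes' rule: for a given cut-off, the positive predictive value follows from sensitivity, specificity, and the local base rate. A minimal sketch, with operating characteristics that are illustrative rather than taken from any ARAI manual:

    def positive_predictive_value(sensitivity, specificity, base_rate):
        """PPV from test operating characteristics and the local base rate (Bayes' rule)."""
        true_pos = sensitivity * base_rate
        false_pos = (1.0 - specificity) * (1.0 - base_rate)
        return true_pos / (true_pos + false_pos)

    # Illustrative cut-off with sensitivity 0.70 and specificity 0.80,
    # evaluated across a range of plausible local base rates of recidivism.
    for base_rate in (0.05, 0.10, 0.20, 0.40):
        ppv = positive_predictive_value(0.70, 0.80, base_rate)
        print(f"base rate {base_rate:.2f} -> PPV {ppv:.2f}")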

  6. Analysis of initial orbit determination accuracy

    NASA Astrophysics Data System (ADS)

    Vananti, Alessandro; Schildknecht, Thomas

    The Astronomical Institute of the University of Bern (AIUB) is conducting several search campaigns for orbital debris. The debris objects are discovered during systematic survey observations. In general only a short observation arc, or tracklet, is available for most of these objects. From this discovery tracklet a first orbit determination is computed in order to be able to find the object again in subsequent follow-up observations. The additional observations are used in the orbit improvement process to obtain accurate orbits to be included in a catalogue. In this paper, the accuracy of the initial orbit determination is analyzed. This depends on a number of factors: tracklet length, number of observations, type of orbit, astrometric error, and observation geometry. The latter is characterized by both the position of the object along its orbit and the location of the observing station. Different positions involve different distances from the target object and a different observing angle with respect to its orbital plane and trajectory. The present analysis aims at optimizing the geometry of the discovery observations depending on the considered orbit.

  7. Fragmentation functions beyond fixed order accuracy

    NASA Astrophysics Data System (ADS)

    Anderle, Daniele P.; Kaufmann, Tom; Stratmann, Marco; Ringer, Felix

    2017-03-01

    We give a detailed account of the phenomenology of all-order resummations of logarithmically enhanced contributions at small momentum fraction of the observed hadron in semi-inclusive electron-positron annihilation and the timelike scale evolution of parton-to-hadron fragmentation functions. The formalism to perform resummations in Mellin moment space is briefly reviewed, and all relevant expressions up to next-to-next-to-leading logarithmic order are derived, including their explicit dependence on the factorization and renormalization scales. We discuss the details pertinent to a proper numerical implementation of the resummed results comprising an iterative solution to the timelike evolution equations, the matching to known fixed-order expressions, and the choice of the contour in the Mellin inverse transformation. First extractions of parton-to-pion fragmentation functions from semi-inclusive annihilation data are performed at different logarithmic orders of the resummations in order to estimate their phenomenological relevance. To this end, we compare our results to corresponding fits up to fixed, next-to-next-to-leading order accuracy and study the residual dependence on the factorization scale in each case.

  8. High accuracy in situ radiometric mapping.

    PubMed

    Tyler, Andrew N

    2004-01-01

    In situ and airborne gamma ray spectrometry have been shown to provide rapid and spatially representative estimates of environmental radioactivity across a range of landscapes. However, one of the principal limitations of this technique has been the influence of changes in the vertical distribution of the source (e.g. 137Cs) on the observed photon fluence resulting in a significant reduction in the accuracy of the in situ activity measurement. A flexible approach for single gamma photon emitting radionuclides is presented, which relies on the quantification of forward scattering (or valley region between the full energy peak and Compton edge) within the gamma ray spectrum to compensate for changes in the 137Cs vertical activity distribution. This novel in situ method lends itself to the mapping of activity concentrations in environments that exhibit systematic changes in the vertical activity distribution. The robustness of this approach has been demonstrated in a salt marsh environment on the Solway coast, SW Scotland, with both a 7.6 cm x 7.6 cm NaI(Tl) detector and a 35% n-type HPGe detector. Application to ploughed field environments has also been demonstrated using HPGe detector, including its application to the estimation of field moist bulk density and soil erosion measurement. Ongoing research work is also outlined.

  9. [History, accuracy and precision of SMBG devices].

    PubMed

    Dufaitre-Patouraux, L; Vague, P; Lassmann-Vague, V

    2003-04-01

    Self-monitoring of blood glucose started only fifty years ago. Until then, metabolic control was evaluated by means of qualitative urinary measurements, often of poor reliability. Reagent strips were the first semi-quantitative tests to monitor blood glucose, and in the late seventies meters were launched on the market. Initially the use of such devices was intended for medical staff, but thanks to improvements in handiness they became more and more suited to patients and are now a necessary tool for self-monitoring of blood glucose. Advances in technology allowed the development of photometric measurements and, more recently, electrochemical ones. In the nineties, improvements were made mainly in meter miniaturisation, reduction of reaction and reading times, and simplification of blood sampling and capillary blood application. Although accuracy and precision were at the heart of considerations from the beginning of self-monitoring of blood glucose, the recommendations of diabetology societies only appeared in the late eighties. Now the French drug agency, AFSSAPS, requires that meters be checked before any launch on the market. According to recent publications, very few meters meet the reliability criteria set up by diabetology societies in the late nineties. Finally, because devices may be handled by numerous persons in hospitals, the use of meters as a possible source of nosocomial infections has recently been questioned and is subject to very strict guidelines published by AFSSAPS.

  10. Accuracy of 3-D reconstruction with occlusions.

    PubMed

    Begon, Mickaël; Lacouture, Patrick

    2010-02-01

    A marker has to be seen by at least two cameras for its three-dimensional (3-D) reconstruction, and the accuracy can be improved with more cameras. However, a change in the set of cameras used in the reconstruction can alter the kinematics. The purpose of this study was to quantify the harmful effect of occlusions on two-dimensional (2-D) images and to make recommendations about the signal processing. A reference kinematics data set was collected for a three degree-of-freedom linkage with three cameras of a commercial motion analysis system without any occlusion on the 2-D images. In the 2-D images, some occlusions were artificially created based on trials of real cyclic motions. An interpolation of 2-D trajectories before the 3-D reconstruction and two filters (Savitzky-Golay and Butterworth filters) after reconstruction were successively applied to minimize the effect of the 2-D occlusions. The filter parameters were optimized by minimizing the root mean square error between the reference and the filtered data. The optimal parameters of the filters were marker dependent, whereas no filter was necessary after a 2-D interpolation. As the occlusions cause systematic error in the 3-D reconstruction, the interpolation of the 2-D trajectories is more appropriate than filtering the 3-D trajectories.
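    A minimal sketch contrasting the two remedies compared above: interpolating across an occlusion in a 2-D trajectory before reconstruction versus smoothing a reconstructed trajectory with a Savitzky-Golay filter. The trajectory, noise level, and filter window/order are illustrative, not the study's optimized, marker-dependent parameters.

    import numpy as np
    from scipy.signal import savgol_filter

    t = np.linspace(0.0, 2.0, 201)
    traj = np.sin(2 * np.pi * t)                       # stand-in for one 2-D marker coordinate
    occluded = traj.copy()
    occluded[60:80] = np.nan                           # simulated occlusion on the 2-D image

    # Remedy 1: interpolate across the occlusion before 3-D reconstruction
    valid = ~np.isnan(occluded)
    interpolated = np.interp(t, t[valid], occluded[valid])

    # Remedy 2: smooth a (noisy) reconstructed trajectory with a Savitzky-Golay filter
    noisy = traj + np.random.default_rng(0).normal(0.0, 0.02, size=t.size)
    smoothed = savgol_filter(noisy, window_length=21, polyorder=3)

    print(f"interpolation RMSE : {np.sqrt(np.mean((interpolated - traj) ** 2)):.4f}")
    print(f"Savitzky-Golay RMSE: {np.sqrt(np.mean((smoothed - traj) ** 2)):.4f}")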

  11. The accuracy of dynamic attitude propagation

    NASA Technical Reports Server (NTRS)

    Harvie, E.; Chu, D.; Woodard, M.

    1990-01-01

    Propagating attitude by integrating Euler's equation for rigid body motion has long been suggested for the Earth Radiation Budget Satellite (ERBS) but until now has not been implemented. Because of limited Sun visibility, propagation is necessary for yaw determination. With the deterioration of the gyros, dynamic propagation has become more attractive. Angular rates are derived from integrating Euler's equation with a stepsize of 1 second, using torques computed from telemetered control system data. The environmental torque model was quite basic. It included gravity gradient and unshadowed aerodynamic torques. Knowledge of control torques is critical to the accuracy of dynamic modeling. Due to their coarseness and sparsity, control actuator telemetry were smoothed before integration. The dynamic model was incorporated into existing ERBS attitude determination software. Modeled rates were then used for attitude propagation in the standard ERBS fine-attitude algorithm. In spite of the simplicity of the approach, the dynamically propagated attitude matched the attitude propagated with good gyros well for roll and yaw but diverged up to 3 degrees for pitch because of the very low resolution in pitch momentum wheel telemetry. When control anomalies significantly perturb the nominal attitude, the effect of telemetry granularity is reduced and the dynamically propagated attitudes are accurate on all three axes.
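    A minimal sketch of the propagation step described above: integrating Euler's rigid-body equation, I dω/dt = τ − ω × (Iω), with a fixed 1 s step. The inertia tensor, torque history, and initial rates are placeholders, not ERBS values, and a simple forward-Euler integrator stands in for whatever scheme the flight software used.

    import numpy as np

    def propagate_rates(omega0, inertia, torque_history, dt=1.0):
        """Integrate Euler's rigid-body equation  I*domega/dt = torque - omega x (I*omega)
        with a simple fixed-step (forward Euler) scheme."""
        inv_inertia = np.linalg.inv(inertia)
        omega = np.array(omega0, dtype=float)
        rates = [omega.copy()]
        for torque in torque_history:
            omega_dot = inv_inertia @ (torque - np.cross(omega, inertia @ omega))
            omega = omega + dt * omega_dot
            rates.append(omega.copy())
        return np.array(rates)

    # Illustrative inertia tensor (kg m^2) and a constant small control torque (N m)
    inertia = np.diag([1200.0, 1500.0, 1000.0])
    torques = [np.array([0.01, 0.0, -0.005])] * 60          # one minute of 1 s steps
    rates = propagate_rates([0.001, 0.0, 0.0011], inertia, torques)
    print("body rates after 60 s:", rates[-1])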

  12. Dimensional accuracy of thermoformed polymethyl methacrylate.

    PubMed

    Jagger, R G

    1996-12-01

    Thermoforming of polymethyl methacrylate sheet is used to produce a number of different types of dental appliances. The purpose of this study was to determine the dimensional accuracy of thermoformed polymethyl methacrylate specimens. Five blanks of the acrylic resin were thermoformed on stone casts prepared from a silicone mold of a brass master die. The distances between index marks were measured both on the cast and on the thermoformed blanks with an optical comparator. Measurements on the blanks were made again 24 hours after processing and then 1 week, 1 month, and 3 months after immersion in water. Linear shrinkage of less than 1% (range 0.37% to 0.52%) was observed 24 hours after removal of the blanks from the cast. Immersion of the thermoformed specimens in water resulted in an increase in measured dimensions, but after 3 months' immersion these increases were still less than those of the cast (range 0.07% to 0.18%). It was concluded that it is possible to thermoform Perspex polymethyl methacrylate accurately.

  13. Effect of data latency upon missile accuracy

    NASA Astrophysics Data System (ADS)

    Monroe, L. J.

    1983-12-01

    This study examined the effect of data latency upon air-to-air guided missile accuracy. This research was done by modeling a digital guided missile, inserting the model into a computer simulation and generating miss distance statistics. The digital guided missile was modeled after the DIS microcomputer architecture. The DIS (Digital Integrating Subsystem) approach involves a number of loosely coupled microprocessors which communicate over a serial multiplex bus. It was developed at the Air Force Armament Lab., Eglin AFB, FL. The missile simulation, Tactics IV, involves three degrees of freedom and is written in FORTRAN IV. It was developed by Science Applications, Inc. in conjunction with AFWAL/FIMB, Wright Patterson AFB, OH. The results of this study indicate that typical data latency values generate only small increases in miss distance. The maximum delays tested were .01 seconds and the average increase in miss distance was 2.12 feet. Additionally, it was discovered that the transmission rate of the DIS microcomputers greatly affected miss distance. Microcomputers transmitting at 10 HZ generated large miss distances, even without data latency present. The identical missile engagements using transmission rates of 100 HZ resulted in much smaller miss distances.

  14. Accuracy of schemes with nonuniform meshes for compressible fluid flows

    NASA Technical Reports Server (NTRS)

    Turkel, E.

    1985-01-01

    The accuracy of the space discretization for time-dependent problems when a nonuniform mesh is used is considered. Many schemes reduce to first-order accuracy while a popular finite volume scheme is even inconsistent for general grids. This accuracy is based on physical variables. However, when accuracy is measured in computational variables then second-order accuracy can be obtained. This is meaningful only if the mesh accurately reflects the properties of the solution. In addition, the stability properties of some improved accurate schemes are analyzed and it can be shown that they also allow for larger time steps when Runge-Kutta type methods are used to advance in time.

  15. Effect of atmospherics on beamforming accuracy

    NASA Technical Reports Server (NTRS)

    Alexander, Richard M.

    1990-01-01

    Two mathematical representations of noise due to atmospheric turbulence are presented. These representations are derived and used in computer simulations of the Bartlett Estimate implementation of beamforming. Beamforming is an array processing technique employing an array of acoustic sensors used to determine the bearing of an acoustic source. Atmospheric wind conditions introduce noise into the beamformer output. Consequently, the accuracy of the process is degraded and the bearing of the acoustic source is falsely indicated or impossible to determine. The two representations of noise presented here are intended to quantify the effects of mean wind passing over the array of sensors and to correct for these effects. The first noise model is an idealized case. The effect of the mean wind is incorporated as a change in the propagation velocity of the acoustic wave. This yields an effective phase shift applied to each term of the spatial correlation matrix in the Bartlett Estimate. The resultant error caused by this model can be corrected in closed form in the beamforming algorithm. The second noise model acts to change the true direction of propagation at the beginning of the beamforming process. A closed form correction for this model is not available. Efforts to derive effective means to reduce the contributions of the noise have not been successful. In either case, the maximum error introduced by the wind is a beam shift of approximately three degrees. That is, the bearing of the acoustic source is indicated at a point a few degrees from the true bearing location. These effects are not quite as pronounced as those seen in experimental results. Sidelobes are false indications of acoustic sources in the beamformer output away from the true bearing angle. The sidelobes that are observed in experimental results are not caused by these noise models. The effects of mean wind passing over the sensor array as modeled here do not alter the beamformer output as

  16. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but has presently a limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning methods like Neural Networks, Support Vector Machines and Random Forests can improve accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non parametric classifiers derived from data mining methods (Multilayer Perceptrons Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, Area under the ROC curve and Press'Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using the Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the larger overall classification accuracy (Median (Me) = 0.76) an area under the ROC (Me = 0.90). However this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forest ranked second in overall accuracy (Me = 0.73) with high area under the ROC (Me = 0.73) specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC (Me = 0.72) specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed
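    A minimal sketch of the comparison protocol described above, assuming synthetic data in place of the neuropsychological test scores and including only a few of the ten classifiers; scikit-learn defaults stand in for whatever tuning the study used.

    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    # Synthetic stand-in: 10 "neuropsychological test" predictors, binary conversion outcome
    X, y = make_classification(n_samples=400, n_features=10, n_informative=6, random_state=0)

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    classifiers = {
        "LDA": LinearDiscriminantAnalysis(),
        "Logistic regression": LogisticRegression(max_iter=1000),
        "SVM": SVC(),
        "Random forest": RandomForestClassifier(random_state=0),
        "MLP neural network": MLPClassifier(max_iter=2000, random_state=0),
    }
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
        print(f"{name:20s} accuracy = {scores.mean():.2f} +/- {scores.std():.2f}")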

  17. Diagnosis of diabetes diseases using an Artificial Immune Recognition System2 (AIRS2) with fuzzy K-nearest neighbor.

    PubMed

    Chikh, Mohamed Amine; Saidi, Meryem; Settouti, Nesma

    2012-10-01

    The use of expert systems and artificial intelligence techniques in disease diagnosis has been increasing gradually. The Artificial Immune Recognition System (AIRS) is one of the methods used in medical classification problems. AIRS2 is a more efficient version of the AIRS algorithm. In this paper, we used a modified AIRS2 called MAIRS2, in which we replace the K-nearest neighbors algorithm with fuzzy K-nearest neighbors to improve the diagnostic accuracy of diabetes diseases. The diabetes disease dataset used in our work is retrieved from the UCI machine learning repository. The performances of AIRS2 and MAIRS2 are evaluated regarding classification accuracy, sensitivity and specificity values. The highest classification accuracies obtained when applying AIRS2 and MAIRS2 using 10-fold cross-validation were, respectively, 82.69% and 89.10%.
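    Fuzzy K-nearest neighbors (and AIRS2 itself) has no standard scikit-learn implementation, so the sketch below only illustrates the evaluation protocol, 10-fold cross-validated accuracy, with a plain KNN classifier on a synthetic stand-in for the UCI diabetes data; all names and numbers are illustrative.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for the 8-feature Pima diabetes data from the UCI repository
    X, y = make_classification(n_samples=768, n_features=8, n_informative=5, random_state=0)

    # Plain KNN here; AIRS2/MAIRS2 and its fuzzy-KNN stage would need a custom implementation
    model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"10-fold cross-validation accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")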

  18. Multilevel ensemble model for prediction of IgA and IgG antibodies.

    PubMed

    Khanna, Divya; Rana, Prashant Singh

    2017-04-01

    Identification of the antigens that induce a specific class of antibody is a prime objective in peptide-based vaccine design, immunodiagnosis, and antibody production. There is an urgent need for a reliable prediction system with high accuracy and efficiency. In the present study, a novel multilevel ensemble model is developed for prediction of IgG and IgA antibodies. Epitope length is important in training the model, and it is efficient to use epitopes of variable length. In this ensemble approach, seven different machine learning models are combined to predict epitopes of variable length (4 to 50). The proposed model achieves an accuracy of 94.43% for IgG-specific epitopes and 97.56% for IgA-specific epitopes with repeated 10-fold cross-validation. The proposed model is compared with the existing system, the IgPred model, and improves on its results.

  19. Accuracy of quantitative visual soil assessment

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Maricke; Heuvelink, Gerard; Stoorvogel, Jetse; Wallinga, Jakob; de Boer, Imke; van Dam, Jos; van Essen, Everhard; Moolenaar, Simon; Verhoeven, Frank; Stoof, Cathelijne

    2016-04-01

    Visual soil assessment (VSA) is a method to assess soil quality visually, when standing in the field. VSA is increasingly used by farmers, farm organisations and companies, because it is rapid and cost-effective, and because looking at soil provides understanding about soil functioning. Often VSA is regarded as subjective, so there is a need to verify VSA. Also, many VSAs have not been fine-tuned for contrasting soil types. This could lead to wrong interpretation of soil quality and soil functioning when contrasting sites are compared to each other. We wanted to assess accuracy of VSA, while taking into account soil type. The first objective was to test whether quantitative visual field observations, which form the basis in many VSAs, could be validated with standardized field or laboratory measurements. The second objective was to assess whether quantitative visual field observations are reproducible, when used by observers with contrasting backgrounds. For the validation study, we made quantitative visual observations at 26 cattle farms. Farms were located at sand, clay and peat soils in the North Friesian Woodlands, the Netherlands. Quantitative visual observations evaluated were grass cover, number of biopores, number of roots, soil colour, soil structure, number of earthworms, number of gley mottles and soil compaction. Linear regression analysis showed that four out of eight quantitative visual observations could be well validated with standardized field or laboratory measurements. The following quantitative visual observations correlated well with standardized field or laboratory measurements: grass cover with classified images of surface cover; number of roots with root dry weight; amount of large structure elements with mean weight diameter; and soil colour with soil organic matter content. Correlation coefficients were greater than 0.3, from which half of the correlations were significant. For the reproducibility study, a group of 9 soil scientists and 7

  20. Resist development modeling for OPC accuracy improvement

    NASA Astrophysics Data System (ADS)

    Fan, Yongfa; Zavyalova, Lena; Zhang, Yunqiang; Zhang, Charlie; Lucas, Kevin; Falch, Brad; Croffie, Ebo; Li, Jianliang; Melvin, Lawrence; Ward, Brian

    2009-03-01

    in the same way that current model calibration is done. The method is validated with a rigorous lithography process simulation tool based on physical models that simulate and predict effects during the resist PEB and development process. Furthermore, an experimental lithographic process was modeled using this new methodology, showing significant improvement in modeling accuracy in comparison to a traditional model. Layout correction tests have shown that the new model form is equivalent to traditional model forms in terms of correction convergence and speed.

  1. Multisensor Arrays for Greater Reliability and Accuracy

    NASA Technical Reports Server (NTRS)

    Immer, Christopher; Eckhoff, Anthony; Lane, John; Perotti, Jose; Randazzo, John; Blalock, Norman; Ree, Jeff

    2004-01-01

    Arrays of multiple, nominally identical sensors with sensor-output-processing electronic hardware and software are being developed in order to obtain accuracy, reliability, and lifetime greater than those of single sensors. The conceptual basis of this development lies in the statistical behavior of multiple sensors and a multisensor-array (MSA) algorithm that exploits that behavior. In addition, advances in microelectromechanical systems (MEMS) and integrated circuits are exploited. A typical sensor unit according to this concept includes multiple MEMS sensors and sensor-readout circuitry fabricated together on a single chip and packaged compactly with a microprocessor that performs several functions, including execution of the MSA algorithm. In the MSA algorithm, the readings from all the sensors in an array at a given instant of time are compared and the reliability of each sensor is quantified. This comparison of readings and quantification of reliabilities involves the calculation of the ratio between every sensor reading and every other sensor reading, plus calculation of the sum of all such ratios. Then one output reading for the given instant of time is computed as a weighted average of the readings of all the sensors. In this computation, the weight for each sensor is the aforementioned value used to quantify its reliability. In an optional variant of the MSA algorithm that can be implemented easily, a running sum of the reliability value for each sensor at previous time steps as well as at the present time step is used as the weight of the sensor in calculating the weighted average at the present time step. In this variant, the weight of a sensor that continually fails gradually decreases, so that eventually, its influence over the output reading becomes minimal: In effect, the sensor system "learns" which sensors to trust and which not to trust. The MSA algorithm incorporates a criterion for deciding whether there remain enough sensor readings that
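    The record does not spell out the exact reliability formula used by the MSA algorithm, so the sketch below is only loosely patterned on the description: pairwise ratios score each sensor's agreement with the others, the output is a reliability-weighted average, and an optional running sum implements the "learning" variant. The scoring rule and all names here are assumptions for illustration, not the MSA algorithm itself.

    import numpy as np

    def msa_style_fuse(readings, prior_weights=None):
        """Illustrative sensor fusion: score each sensor's reliability by how well its
        reading agrees with the others (via pairwise ratios), then return a
        reliability-weighted average of the readings."""
        r = np.asarray(readings, dtype=float)
        # Pairwise ratios; a sensor that disagrees with the others has ratios far from 1
        ratios = r[:, None] / r[None, :]
        disagreement = np.abs(np.log(ratios)).sum(axis=1)
        reliability = 1.0 / (disagreement + 1e-9)
        if prior_weights is not None:                 # optional running-sum ("learning") variant
            reliability = reliability + np.asarray(prior_weights, dtype=float)
        weights = reliability / reliability.sum()
        return float(np.dot(weights, r)), reliability

    # Four nominally identical sensors; the last one is drifting badly
    fused, reliability = msa_style_fuse([10.1, 9.9, 10.0, 13.5])
    print(f"fused reading = {fused:.2f}, reliability scores = {np.round(reliability, 3)}")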

  2. Nationwide forestry applications program. Analysis of forest classification accuracy

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Mead, R. A.; Oderwald, R. G.; Heinen, J. (Principal Investigator)

    1981-01-01

    The development of LANDSAT classification accuracy assessment techniques, and of a computerized system for assessing wildlife habitat from land cover maps are considered. A literature review on accuracy assessment techniques and an explanation for the techniques development under both projects are included along with listings of the computer programs. The presentations and discussions at the National Working Conference on LANDSAT Classification Accuracy are summarized. Two symposium papers which were published on the results of this project are appended.

  3. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The accuracy of ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface was found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  4. Carcinogenicity prediction of noncongeneric chemicals by a support vector machine.

    PubMed

    Zhong, Min; Nie, Xianglei; Yan, Aixia; Yuan, Qipeng

    2013-05-20

    The ability to identify carcinogenic compounds is of fundamental importance to the safe application of chemicals. In this study, we generated an array of in silico models allowing the classification of compounds into carcinogenic and noncarcinogenic agents based on a data set of 852 noncongeneric chemicals collected from the Carcinogenic Potency Database (CPDBAS). Twenty-four molecular descriptors were selected by Pearson correlation, F-score, and stepwise regression analysis. These descriptors cover a range of physicochemical properties, including electrophilicity, geometry, molecular weight, size, and solubility. The descriptor mutagenic showed the highest correlation coefficient with carcinogenicity. On the basis of these descriptors, a support vector machine-based (SVM) classification model was developed and fine-tuned by a 10-fold cross-validation approach. Both the SVM model (Model A1) and the best model from the 10-fold cross-validation (Model B3) runs gave good results on the test set with prediction accuracy over 80%, sensitivity over 76%, and specificity over 82%. In addition, extended connectivity fingerprints (ECFPs) and the Toxtree software were used to analyze the functional groups and substructures linked to carcinogenicity. It was found that the results of both methods are in good agreement.
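    A minimal sketch of the modeling protocol described above, assuming synthetic descriptors in place of the CPDBAS data: F-score-based selection of 24 features (scikit-learn's f_classif) followed by an SVM, evaluated with 10-fold cross-validation. Sample counts, descriptor pool size, and kernel settings are placeholders, not the published model.

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for 852 compounds described by a larger pool of descriptors
    X, y = make_classification(n_samples=852, n_features=60, n_informative=24, random_state=0)

    # F-score selection of 24 descriptors, then an SVM classifier
    model = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=24), SVC(kernel="rbf"))
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"10-fold CV accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")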

  5. A fast RCS accuracy assessment method for passive radar calibrators

    NASA Astrophysics Data System (ADS)

    Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI

    2016-10-01

    In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is usually deformed during transportation and installation, or by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation is proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy was assessed. In the second step, the 3-D measuring instrument was selected and its measuring accuracy was evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument was satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector was obtained by the 3-D measuring instrument, and the RCSs of the obtained 3-D structure and the corresponding ideal structure were calculated based on the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can be applied outdoors easily, avoiding the correlation among plate edge-length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using a distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.

  6. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.
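
    For orientation, the overall and user's accuracies reported above are the standard agreement statistics derived from an error (confusion) matrix of map versus reference labels. The following is a minimal illustration with an invented 3-class matrix; the actual NLCD estimates are additionally weighted to account for the stratified sampling design.

    import numpy as np

    # rows = map class, columns = reference class (invented counts, 3 classes)
    error_matrix = np.array([[50,  2,  1],
                             [ 3, 40,  5],
                             [ 1,  4, 44]])

    overall_accuracy = np.trace(error_matrix) / error_matrix.sum()
    users_accuracy = np.diag(error_matrix) / error_matrix.sum(axis=1)      # per map class
    producers_accuracy = np.diag(error_matrix) / error_matrix.sum(axis=0)  # per reference class
    print(overall_accuracy, users_accuracy, producers_accuracy)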

  7. Thematic Accuracy Assessment of the 2011 National Land ...

    EPA Pesticide Factsheets

    Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment of agreement between map and reference labels for the three, single-date NLCD land cover products at Level II and Level I of the classification hierarchy, and agreement for 17 land cover change reporting themes based on Level I classes (e.g., forest loss; forest gain; forest, no change) for three change periods (2001–2006, 2006–2011, and 2001–2011). The single-date overall accuracies were 82%, 83%, and 83% at Level II and 88%, 89%, and 89% at Level I for 2011, 2006, and 2001, respectively. Many class-specific user's accuracies met or exceeded a previously established nominal accuracy benchmark of 85%. Overall accuracies for 2006 and 2001 land cover components of NLCD 2011 were approximately 4% higher (at Level II and Level I) than the overall accuracies for the same components of NLCD 2006. The high Level I overall, user's, and producer's accuracies for the single-date eras in NLCD 2011 did not translate into high class-specific user's and producer's accuracies for many of the 17 change reporting themes. User's accuracies were high for the no change reporting themes, commonly exceeding 85%, but were typically much lower for the reporting themes that represented change. Only forest l

  8. Sound source localization identification accuracy: Level and duration dependencies.

    PubMed

    Yost, William A

    2016-07-01

    Sound source localization accuracy for noises was measured for sources in the front azimuthal open field mainly as a function of overall noise level and duration. An identification procedure was used in which listeners identify which loudspeakers presented a sound. Noises were filtered and differed in bandwidth and center frequency. Sound source localization accuracy depended on the bandwidth of the stimuli, and for the narrow bandwidths, accuracy depended on the filter's center frequency. Sound source localization accuracy did not depend on overall level or duration.

  9. Thermocouple Calibration and Accuracy in a Materials Testing Laboratory

    NASA Technical Reports Server (NTRS)

    Lerch, B. A.; Nathal, M. V.; Keller, D. J.

    2002-01-01

    A consolidation of information has been provided that can be used to define procedures for enhancing and maintaining accuracy in temperature measurements in materials testing laboratories. These studies were restricted to type R and K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturer's tolerances, were all quantified in terms of statistical confidence intervals. By calibrating specific TCs, the benefit in accuracy can be as great as 6 °C, or a factor of five, compared to relying on manufacturer's tolerances. The results emphasize strict reliance on the defined testing protocol and the need to establish recalibration frequencies in order to maintain these levels of accuracy.

  10. Maximizing the quantitative accuracy and reproducibility of Förster resonance energy transfer measurement for screening by high throughput widefield microscopy

    PubMed Central

    Schaufele, Fred

    2013-01-01

    Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839
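
    As a rough illustration of the kind of per-pixel correction discussed above, the sketch below applies a generic dark-frame, flat-field, and background subtraction; the image arrays are assumed inputs, and the FRET-specific spectral bleed-through corrections used in the paper are not shown.

    import numpy as np

    def correct_image(raw, dark, flat, background):
        """Remove camera offset, normalize uneven illumination, subtract residual background."""
        flat_norm = (flat - dark) / np.mean(flat - dark)   # unit-mean flat field
        corrected = (raw - dark) / flat_norm
        return corrected - background                      # e.g., estimated media fluorescence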

  11. Optimized diagnostic model combination for improving diagnostic accuracy

    NASA Astrophysics Data System (ADS)

    Kunche, S.; Chen, C.; Pecht, M. G.

    Identifying the most suitable classifier for diagnostics is a challenging task. In addition to using domain expertise, a trial-and-error method has been widely used to identify the most suitable classifier. Classifier fusion can be used to overcome this challenge, and it is widely known to perform better than a single classifier. Classifier fusion helps in overcoming the error due to the inductive bias of the various classifiers. The combination rule also plays a vital role in classifier fusion, but which combination rules provide the best performance during fusion has not been well studied. Good combination rules will achieve good generalizability while taking advantage of the diversity of the classifiers. In this work, we develop an approach for ensemble learning consisting of an optimized combination rule. Generalizability is acknowledged to be a challenge when training a diverse set of classifiers, but in this paper it is achieved through an optimal balance between bias and variance errors in the combination rule. Generalizability implies the ability of a classifier to learn the underlying model from the training data and to predict unseen observations. In this paper, cross-validation is employed during the performance evaluation of each classifier to obtain an unbiased performance estimate. An objective function is constructed and optimized based on the performance evaluation to achieve the optimal bias-variance balance. This function can be solved as a constrained nonlinear optimization problem. Sequential Quadratic Programming-based optimization, with better convergence properties, is employed for the optimization. We have demonstrated the applicability of the algorithm using support vector machines and neural networks as classifiers, but the methodology is broadly applicable for combining other classifier algorithms as well. The method has been applied to the fault diagnosis of analog circuits. The performance of the proposed
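
    A hedged sketch of this idea, using scikit-learn and SciPy rather than the authors' implementation: out-of-fold probabilities from an SVM and a neural network are fused with non-negative weights summing to one, and the weights are chosen by SLSQP (a Sequential Quadratic Programming method) minimizing a smooth Brier-score objective instead of the paper's bias-variance objective.

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_predict
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=20, random_state=0)
    base = [SVC(kernel="rbf", probability=True, random_state=0),
            MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)]

    # Cross-validated (out-of-fold) probabilities give an unbiased basis for the weights.
    probs = np.column_stack([cross_val_predict(c, X, y, cv=5, method="predict_proba")[:, 1]
                             for c in base])

    def brier(w):                      # smooth surrogate for classification error
        return np.mean((probs @ w - y) ** 2)

    res = minimize(brier, x0=np.array([0.5, 0.5]), method="SLSQP",
                   bounds=[(0.0, 1.0)] * 2,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    fused_error = np.mean(((probs @ res.x) > 0.5).astype(int) != y)
    print("optimized weights:", res.x, "fused CV error:", fused_error)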

  12. Accuracy Test of Microsoft Kinect for Human Morphologic Measurements

    NASA Astrophysics Data System (ADS)

    Molnár, B.; Toth, C. K.; Detrekői, A.

    2012-08-01

    The Microsoft Kinect sensor, a popular gaming peripheral, is widely used in a large number of applications, including close-range 3D measurements. This low-end device is rather inexpensive compared to similar active imaging systems. The Kinect sensors include an RGB camera, an IR projector, an IR camera and an audio unit. Human morphologic measurements require high accuracy with a fast data acquisition rate. To achieve the highest accuracy, the depth sensor and the RGB camera should be calibrated and co-registered to obtain a high-quality 3D point cloud as well as optical imagery. Since this is a low-end sensor, developed for a different purpose, the accuracy could be critical for 3D measurement-based applications. Therefore, two types of accuracy tests are performed: (1) for describing the absolute accuracy, the ranging accuracy of the device in the range of 0.4 to 15 m should be estimated, and (2) the relative accuracy of points depending on the range should be characterized. For the accuracy investigation, a test field was created with two spheres, and the relative accuracy is described by the sphere-fitting performance and the distance estimation between the sphere center points. Some other factors can also be considered, such as the angle of incidence or the material used in these tests. The non-ambiguity range of the sensor is from 0.3 to 4 m, but, based on our experience, it can be extended up to 20 m. Obviously, this methodology raises some accuracy issues which make accuracy testing really important.
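
    As a sketch of the sphere-based checks mentioned above (assuming point clouds segmented from the Kinect depth data; none of the study's actual data or tools are used), a sphere can be fitted by linear least squares, with the fit residuals and the centre-to-centre distance serving as relative-accuracy indicators.

    import numpy as np

    def fit_sphere(points):
        """Least-squares sphere fit; returns (center, radius, RMS residual)."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones(len(points))])
        b = x**2 + y**2 + z**2
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center = sol[:3]
        radius = np.sqrt(sol[3] + center @ center)
        residuals = np.linalg.norm(points - center, axis=1) - radius
        return center, radius, np.sqrt(np.mean(residuals ** 2))

    # Relative accuracy: compare the fitted centre-to-centre distance of the two test
    # spheres against its known reference value (variable names here are hypothetical).
    # c1, _, _ = fit_sphere(cloud_sphere_1); c2, _, _ = fit_sphere(cloud_sphere_2)
    # distance_error = np.linalg.norm(c1 - c2) - reference_distance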

  13. Modeling individual differences in response time and accuracy in numeracy.

    PubMed

    Ratcliff, Roger; Thompson, Clarissa A; McKoon, Gail

    2015-04-01

    In the study of numeracy, some hypotheses have been based on response time (RT) as a dependent variable and some on accuracy, and considerable controversy has arisen about the presence or absence of correlations between RT and accuracy, between RT or accuracy and individual differences like IQ and math ability, and between various numeracy tasks. In this article, we show that an integration of the two dependent variables is required, which we accomplish with a theory-based model of decision making. We report data from four tasks: numerosity discrimination, number discrimination, memory for two-digit numbers, and memory for three-digit numbers. Accuracy correlated across tasks, as did RTs. However, the negative correlations that might be expected between RT and accuracy were not obtained; if a subject was accurate, it did not mean that they were fast (and vice versa). When the diffusion decision-making model was applied to the data (Ratcliff, 1978), we found significant correlations across the tasks between the quality of the numeracy information (drift rate) driving the decision process and between the speed/accuracy criterion settings, suggesting that similar numeracy skills and similar speed-accuracy settings are involved in the four tasks. In the model, accuracy is related to drift rate and RT is related to speed-accuracy criteria, but drift rate and criteria are not related to each other across subjects. This provides a theoretical basis for understanding why negative correlations were not obtained between accuracy and RT. We also manipulated criteria by instructing subjects to maximize either speed or accuracy, but still found correlations between the criteria settings between and within tasks, suggesting that the settings may represent an individual trait that can be modulated but not equated across subjects. Our results demonstrate that a decision-making model may provide a way to reconcile inconsistent and sometimes contradictory results in numeracy
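
    To make the mapping concrete, the toy simulation below (parameter values are arbitrary, not those estimated in the study) implements the basic diffusion decision process: evidence accumulates with a given drift rate until it reaches an upper or lower boundary, so drift rate mainly governs accuracy while boundary separation mainly governs response time.

    import numpy as np

    def simulate_ddm(drift, boundary, n_trials=1000, dt=0.001, noise=1.0, non_decision=0.3):
        rng = np.random.default_rng(1)
        rts, correct = [], []
        for _ in range(n_trials):
            evidence, t = 0.0, 0.0
            while abs(evidence) < boundary:             # accumulate to either boundary
                evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
                t += dt
            rts.append(t + non_decision)                # add non-decision time
            correct.append(evidence > 0)                # upper boundary = correct response
        return np.mean(correct), np.mean(rts)

    print(simulate_ddm(drift=1.5, boundary=1.0))   # higher drift -> more accurate, faster
    print(simulate_ddm(drift=1.5, boundary=2.0))   # wider boundaries -> slower, more accurate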

  14. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  15. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  16. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  17. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  18. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  19. Reliability and Accuracy of Surgical Resident Peer Ratings.

    ERIC Educational Resources Information Center

    Lutsky, Larry A.; And Others

    1993-01-01

    Reliability and accuracy of peer ratings by 32, 28, 33 general surgery residents over 3 years were examined. Peer ratings were found highly reliable, with high level of test-retest reliability replicated across three years. Halo effects appear to pose greatest threat to rater accuracy, though chief residents tended to exhibit less halo effect than…

  20. 10 CFR 55.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 55.9 Section 55.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) OPERATORS' LICENSES General Provisions § 55.9 Completeness and accuracy of information. Information provided to the Commission by an applicant for a...

  1. 27 CFR 19.185 - Testing scale tanks for accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Testing scale tanks for... Requirements Tank Requirements § 19.185 Testing scale tanks for accuracy. (a) A proprietor who uses a scale tank for tax determination must ensure the accuracy of the scale through periodic testing. Testing...

  2. 27 CFR 19.185 - Testing scale tanks for accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Testing scale tanks for... Requirements Tank Requirements § 19.185 Testing scale tanks for accuracy. (a) A proprietor who uses a scale tank for tax determination must ensure the accuracy of the scale through periodic testing. Testing...

  3. 27 CFR 19.185 - Testing scale tanks for accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Testing scale tanks for... Requirements Tank Requirements § 19.185 Testing scale tanks for accuracy. (a) A proprietor who uses a scale tank for tax determination must ensure the accuracy of the scale through periodic testing. Testing...

  4. 27 CFR 19.185 - Testing scale tanks for accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Testing scale tanks for... Requirements Tank Requirements § 19.185 Testing scale tanks for accuracy. (a) A proprietor who uses a scale tank for tax determination must ensure the accuracy of the scale through periodic testing. Testing...

  5. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Emission measurement accuracy. 92.127 Section 92.127 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample...

  6. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    ERIC Educational Resources Information Center

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  7. Understanding the Delayed-Keyword Effect on Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Dunlosky, John; Griffin, Thomas D.; Wiley, Jennifer

    2005-01-01

    The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M.…

  8. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments show a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive (independent, well-defined test points would have to be collected), but a quantitative analysis of relative positional error is feasible.

  9. Task-Based Variability in Children's Singing Accuracy

    ERIC Educational Resources Information Center

    Nichols, Bryan E.

    2013-01-01

    The purpose of this study was to explore task-based variability in children's singing accuracy performance. The research questions were: Does children's singing accuracy vary based on the nature of the singing assessment employed? Is there a hierarchy of difficulty and discrimination ability among singing assessment tasks? What is the…

  10. Exploring a Three-Level Model of Calibration Accuracy

    ERIC Educational Resources Information Center

    Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.

    2014-01-01

    We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…

  11. 30 CFR 74.8 - Measurement, accuracy, and reliability requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Measurement, accuracy, and reliability... Monitors § 74.8 Measurement, accuracy, and reliability requirements. (a) Breathing zone measurement... demonstrates the following: (1) For full-shift measurements of 8 hours or more, a 95 percent confidence...

  12. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Emission measurement accuracy. 92.127 Section 92.127 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample...

  13. 40 CFR 86.1338-84 - Emission measurement accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Emission measurement accuracy. 86.1338... Procedures § 86.1338-84 Emission measurement accuracy. (a) Measurement accuracy—Bag sampling. (1) Good... using the calibration data obtained with both calibration gases. (b) Measurement...

  14. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 7 2013-01-01 2013-01-01 false Accuracy of advertising. 740.2 Section 740.2... ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and...

  15. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Accuracy of advertising. 740.2 Section 740.2... ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and...

  16. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Accuracy of advertising. 740.2 Section 740.2... ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and...

  17. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Accuracy of advertising. 740.2 Section 740.2... ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and...

  18. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Accuracy of advertising. 740.2 Section 740.2... ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and...

  19. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity... shall identify any methodologies used and shall make explicit reference by footnote to the...

  20. Estimating Consistency and Accuracy Indices for Multiple Classifications

    ERIC Educational Resources Information Center

    Lee, Won-Chan; Hanson, Bradley A.; Brennan, Robert L.

    2002-01-01

    This article describes procedures for estimating various indices of classification consistency and accuracy for multiple category classifications using data from a single test administration. The estimates of the classification consistency and accuracy indices are compared under three different psychometric models: the two-parameter beta binomial,…

  1. EFFECTS OF LANDSCAPE CHARACTERISTICS ON LAND-COVER CLASS ACCURACY

    EPA Science Inventory



    Utilizing land-cover data gathered as part of the National Land-Cover Data (NLCD) set accuracy assessment, several logistic regression models were formulated to analyze the effects of patch size and land-cover heterogeneity on classification accuracy. Specific land-cover ...

  2. An overview of systematic reviews of diagnostic tests accuracy.

    PubMed

    Bae, Jong-Myon

    2014-01-01

    According to the Cochrane Collaboration, the Cochrane handbook for diagnostic test accuracy reviews (DTAR) is currently in development. This implies that the methodology of systematic reviews (SR) of diagnostic test accuracy is still a matter of debate. At this point, comparing the SR methodology for interventions with that for diagnostics would be helpful for understanding DTAR.

  3. Investigating the Accuracy of Teachers' Word Frequency Intuitions

    ERIC Educational Resources Information Center

    McCrostie, James

    2007-01-01

    Previous research has found that native English speakers can judge, with a relatively high degree of accuracy, the frequency of words in the English language. However, there has been little investigation of the ability to judge the frequency of high and middle frequency words. Similarly, the accuracy of EFL teachers' frequency judgements remains…

  4. Assessment Of Accuracies Of Remote-Sensing Maps

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1992-01-01

    Report describes study of accuracies of classifications of picture elements in map derived by digital processing of Landsat-multispectral-scanner imagery of coastal plain of Arctic National Wildlife Refuge. Accuracies of portions of map analyzed with help of statistical sampling procedure called "stratified plurality sampling", in which all picture elements in given cluster classified in stratum to which plurality of them belong.

  5. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  6. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  7. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  8. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  9. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  10. Accuracy of thick-target micro-PIXE analysis

    NASA Astrophysics Data System (ADS)

    Campbell, J. L.; Teesdale, W. J.; Wang, J.-X.

    1990-04-01

    The accuracy attainable in micro-PIXE analysis is assessed in terms of the X-ray production model and its assumptions, physical realities of the specimen, the necessary data base, and techniques of standardization. NTIS reference materials are analyzed to provide the experimental tests of accuracy.

  11. 10 CFR 55.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 55.9 Section 55.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) OPERATORS' LICENSES General Provisions § 55.9 Completeness and accuracy of information. Information provided to the Commission by an applicant for a...

  12. Effects of Prolonged Work on Data Entry Speed and Accuracy

    ERIC Educational Resources Information Center

    Healy, Alice F.; Kole, James A.; Buck-Gengle, Carolyn J.; Bourne, Lyle E.

    2004-01-01

    In 2 experiments, participants used a keyboard to enter 4-digit numbers presented on a computer monitor under conditions promoting fatigue. In Experiment 1, accuracy of data entry declined but response times improved over time, reflecting an increasing speed-accuracy trade-off. In Experiment 2, the (largely cognitive) time to enter the initial…

  13. Dissociating Appraisals of Accuracy and Recollection in Autobiographical Remembering

    ERIC Educational Resources Information Center

    Scoboria, Alan; Pascal, Lisa

    2016-01-01

    Recent studies of metamemory appraisals implicated in autobiographical remembering have established distinct roles for judgments of occurrence, recollection, and accuracy for past events. In studies involving everyday remembering, measures of recollection and accuracy correlate highly (>.85). Thus although their measures are structurally…

  14. Camera Calibration Accuracy at Different Uav Flying Heights

    NASA Astrophysics Data System (ADS)

    Yusoff, A. R.; Ariff, M. F. M.; Idris, K. M.; Majid, Z.; Chong, A. K.

    2017-02-01

    Unmanned Aerial Vehicles (UAVs) can be used to acquire highly accurate data in deformation surveys, whereby low-cost digital cameras are commonly used in UAV mapping. Thus, camera calibration is considered important for obtaining high-accuracy UAV mapping with low-cost digital cameras. The main focus of this study was to calibrate the UAV camera at different camera distances and check the measurement accuracy. The scope of this study included camera calibration in the laboratory and in the field, and the UAV image mapping accuracy assessment used calibration parameters from different camera distances. The camera distances used for the calibration image acquisition and mapping accuracy assessment were 1.5 metres in the laboratory, and 15 and 25 metres in the field, using a Sony NEX6 digital camera. A large calibration field and a portable calibration frame were used as the tools for the camera calibration and for checking the accuracy of the measurement at different camera distances. The bundle adjustment concept was applied in the Australis software to perform the camera calibration and accuracy assessment. The results showed that a camera distance of 25 metres is the optimum object distance, as this gave the best accuracy in both the laboratory and the outdoor mapping. In conclusion, camera calibration at several camera distances should be applied to acquire better accuracy in mapping, and the best camera parameters for the UAV image mapping should be selected for highly accurate mapping measurement.

  15. Students' Accuracy of Measurement Estimation: Context, Units, and Logical Thinking

    ERIC Educational Resources Information Center

    Jones, M. Gail; Gardner, Grant E.; Taylor, Amy R.; Forrester, Jennifer H.; Andre, Thomas

    2012-01-01

    This study examined students' accuracy of measurement estimation for linear distances, different units of measure, task context, and the relationship between accuracy estimation and logical thinking. Middle school students completed a series of tasks that included estimating the length of various objects in different contexts and completed a test…

  16. Prediction of Rate Constants for Catalytic Reactions with Chemical Accuracy.

    PubMed

    Catlow, C Richard A

    2016-08-01

    Ex machina: A computational method for predicting rate constants for reactions within microporous zeolite catalysts with chemical accuracy has recently been reported. A key feature of this method is a stepwise QM/MM approach that allows accuracy to be achieved while using realistic models with accessible computer resources.

  17. A consensus on protein structure accuracy in NMR?

    PubMed

    Billeter, Martin

    2015-02-03

    The precision of an NMR structure may be manipulated by calculation parameters such as calibration factors. Its accuracy is, however, a different issue. In this issue of Structure, Buchner and Güntert present "consensus structure bundles," where precision analysis allows estimation of accuracy.

  18. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  19. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  20. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  1. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  2. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  3. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  4. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  5. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  6. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  7. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  8. Developing a Weighted Measure of Speech Sound Accuracy

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2011-01-01

    Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…

  9. A Probability Model of Accuracy in Deception Detection Experiments.

    ERIC Educational Resources Information Center

    Park, Hee Sun; Levine, Timothy R.

    2001-01-01

    Extends the recent work on the veracity effect in deception detection. Explains the probabilistic nature of a receiver's accuracy in detecting deception and analyzes a receiver's detection of deception in terms of set theory and conditional probability. Finds that accuracy is shown to be a function of the relevant conditional probability and the…
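
    The basic relationship can be shown in a few lines: overall accuracy is the base-rate-weighted sum of the conditional accuracies for truths and lies, so a truth-biased receiver appears more accurate when more of the messages are truthful. The probabilities below are invented for illustration only.

    def detection_accuracy(p_truth, acc_given_truth, acc_given_lie):
        # Overall accuracy as a function of the truth base rate and conditional accuracies.
        return p_truth * acc_given_truth + (1 - p_truth) * acc_given_lie

    # A truth-biased receiver: good at recognizing truths, poor at detecting lies.
    for p_truth in (0.25, 0.50, 0.75):
        print(p_truth, detection_accuracy(p_truth, acc_given_truth=0.8, acc_given_lie=0.4))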

  10. Accuracy of References in Ten Library Science Journals.

    ERIC Educational Resources Information Center

    Pope, Nancy N.

    1992-01-01

    A study of 100 article citations from 11 library science journals showed only 45 article citations that were completely free of errors, while 11 had major errors--i.e., errors preventing or hindering location of the reference--and the remaining 44 had minor errors. Citation accuracy in library science journals appears similar to accuracy in other…

  11. High accuracy autonomous navigation using the global positioning system (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.

  12. Improving metacomprehension accuracy in an undergraduate course context.

    PubMed

    Wiley, Jennifer; Griffin, Thomas D; Jaeger, Allison J; Jarosz, Andrew F; Cushen, Patrick J; Thiede, Keith W

    2016-12-01

    Students tend to have poor metacomprehension when learning from text, meaning they are not able to distinguish between what they have understood well and what they have not. Although there are a good number of studies that have explored comprehension monitoring accuracy in laboratory experiments, fewer studies have explored this in authentic course contexts. This study investigated the effect of an instructional condition that encouraged comprehension-test-expectancy and self-explanation during study on metacomprehension accuracy in the context of an undergraduate course in research methods. Results indicated that when students received this instructional condition, relative metacomprehension accuracy was better than in a comparison condition. In addition, differences were also seen in absolute metacomprehension accuracy measures, strategic study behaviors, and learning outcomes. The results of the current study demonstrate that a condition that has improved relative metacomprehension accuracy in laboratory contexts may have value in real classroom contexts as well. (PsycINFO Database Record

  13. Feasibility and accuracy of medication checks via Internet video.

    PubMed

    Bradford, Natalie; Armfield, Nigel R; Young, Jeanine; Smith, Anthony C

    2012-04-01

    We investigated the feasibility and accuracy of using Internet-based videoconferencing for double-checking medications. Ten participants checked 30 different medications using a desktop PC and a webcam. The accuracy of the video-based checks was compared with 'face-to-vial' checks. The checks included the drug name, dosage and expiry dates of ampoules, vials and tablets, as well as graduations on syringes. There was 100% accuracy for drug name, dosage, and graduations on syringes greater than 1 unit. The expiry dates proved more difficult to read, and accuracy was only 63%. The mean overall accuracy was 91% for all items. Internet video-based medication double-checks may have a useful role to play in processes to ensure the safe use of medications in home care.

  14. Measuring the spatial accuracy of the spatial scan statistic.

    PubMed

    Read, Simon; Bath, Peter; Willett, Peter; Maheswaran, Ravi

    2011-06-01

    The spatial scan statistic is well established in spatial epidemiology. However, studies of its spatial accuracy are infrequent and vary in approach, often using multiple measures which complicate the objective ranking of different implementations of the statistic. We address this with three novel contributions. Firstly, a modular framework into which different definitions of spatial accuracy can be compared and hybridised. Secondly, we derive a new single measure, Ω, which takes account of all true and detected clusters, without the need for arbitrary weightings and irrespective of any chosen significance threshold. Thirdly, we demonstrate the new measure, alongside existing ones, in a study of the six output filter options provided by SaTScan™. The study suggests filtering overlapping detected clusters tends to reduce spatial accuracy, and visualising overlapping clusters may be better than filtering them out. Although we only address spatial accuracy, the framework and Ω may be extendible to spatio-temporal accuracy.

  15. Accuracy testing of electric groundwater-level measurement tapes

    USGS Publications Warehouse

    Jelinski, Jim; Clayton, Christopher S.; Fulford, Janice M.

    2015-01-01

    The accuracy tests demonstrated that none of the electric-tape models tested consistently met the suggested USGS accuracy of ±0.01 ft. The test data show that the tape models in the study should give a water-level measurement that is accurate to roughly ±0.05 ft per 100 ft without additional calibration. To meet USGS accuracy guidelines, the electric-tape models tested will need to be individually calibrated. Specific conductance also plays a part in tape accuracy. The probes will not work in water with specific conductance values near zero, and the accuracy of one probe was unreliable in very high conductivity water (10,000 microsiemens per centimeter).

  16. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
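
    A small sketch of the accuracy figure as defined above, assuming a hypothetical time series of measured-minus-truth field residuals in nanoteslas: the metric is the absolute mean error plus three standard deviations during quiet conditions, or plus two standard deviations during storms.

    import numpy as np

    def accuracy_metric(residuals_nt, storm=False):
        k = 2 if storm else 3
        return abs(np.mean(residuals_nt)) + k * np.std(residuals_nt)

    residuals = np.random.default_rng(2).normal(loc=0.2, scale=0.4, size=10000)  # nT, invented
    print(accuracy_metric(residuals), "nT versus the 1.7 nT requirement")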

  17. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.

  18. Accuracy of endoscopic ultrasonography for diagnosing ulcerative early gastric cancers

    PubMed Central

    Park, Jin-Seok; Kim, Hyungkil; Bang, Byongwook; Kwon, Kyesook; Shin, Youngwoon

    2016-01-01

    Although endoscopic ultrasonography (EUS) is the first-choice imaging modality for predicting the invasion depth of early gastric cancer (EGC), the prediction accuracy of EUS is significantly decreased when EGC is combined with ulceration. The aim of the present study was to compare the accuracy of EUS and conventional endoscopy (CE) for determining the depth of EGC. In addition, the various clinicopathologic factors affecting the diagnostic accuracy of EUS, with a particular focus on endoscopic ulcer shapes, were evaluated. We retrospectively reviewed data from 236 consecutive patients with ulcerative EGC. All patients underwent EUS for estimating tumor invasion depth, followed by either curative surgery or endoscopic treatment. The diagnostic accuracy of EUS and CE was evaluated by comparison with the final histologic result of the resected specimen. The correlation between the accuracy of EUS and the characteristics of EGC (tumor size, histology, location in the stomach, tumor invasion depth, and endoscopic ulcer shapes) was analyzed. Endoscopic ulcer shapes were classified into 3 groups: definite ulcer, superficial ulcer, and ill-defined ulcer. The overall accuracy of EUS and CE for predicting the invasion depth in ulcerative EGC was 68.6% and 55.5%, respectively. Of the 236 patients, 36 were classified as having definite ulcers, 98 superficial ulcers, and 102 ill-defined ulcers. In univariate analysis, EUS accuracy was associated with invasion depth (P = 0.023), tumor size (P = 0.034), and endoscopic ulcer shape (P = 0.001). In multivariate analysis, there was a significant association between superficial ulcer on CE and EUS accuracy (odds ratio: 2.977; 95% confidence interval: 1.255–7.064; P = 0.013). The accuracy of EUS for determining tumor invasion depth in ulcerative EGC was superior to that of CE. In addition, ulcer shape was an important factor that affected EUS accuracy. PMID:27472672

  19. 40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... measurement accuracy, coefficient of variability measurement accuracy, and the flow rate cut-off function. The... flow measurements are made at intervals not to exceed 5 minutes. The flow rate cut-off test, conducted... definitions. (1) Sample flow rate means the quantitative volumetric flow rate of the air stream caused by...

  20. 40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... measurement accuracy, coefficient of variability measurement accuracy, and the flow rate cut-off function. The... flow measurements are made at intervals not to exceed 5 minutes. The flow rate cut-off test, conducted... definitions. (1) Sample flow rate means the quantitative volumetric flow rate of the air stream caused by...

  1. 40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... measurement accuracy, coefficient of variability measurement accuracy, and the flow rate cut-off function. The... flow measurements are made at intervals not to exceed 5 minutes. The flow rate cut-off test, conducted... definitions. (1) Sample flow rate means the quantitative volumetric flow rate of the air stream caused by...

  2. 40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... measurement accuracy, coefficient of variability measurement accuracy, and the flow rate cut-off function. The... flow measurements are made at intervals not to exceed 5 minutes. The flow rate cut-off test, conducted... definitions. (1) Sample flow rate means the quantitative volumetric flow rate of the air stream caused by...

  3. 40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... measurement accuracy, coefficient of variability measurement accuracy, and the flow rate cut-off function. The... flow measurements are made at intervals not to exceed 5 minutes. The flow rate cut-off test, conducted... definitions. (1) Sample flow rate means the quantitative volumetric flow rate of the air stream caused by...

  4. The H50Q Mutation Induces a 10-fold Decrease in the Solubility of α-Synuclein*

    PubMed Central

    Porcari, Riccardo; Proukakis, Christos; Waudby, Christopher A.; Bolognesi, Benedetta; Mangione, P. Patrizia; Paton, Jack F. S.; Mullin, Stephen; Cabrita, Lisa D.; Penco, Amanda; Relini, Annalisa; Verona, Guglielmo; Vendruscolo, Michele; Stoppini, Monica; Tartaglia, Gian Gaetano; Camilloni, Carlo; Christodoulou, John; Schapira, Anthony H. V.; Bellotti, Vittorio

    2015-01-01

    The conversion of α-synuclein from its intrinsically disordered monomeric state into the fibrillar cross-β aggregates characteristically present in Lewy bodies is largely unknown. The investigation of α-synuclein variants causative of familial forms of Parkinson disease can provide unique insights into the conditions that promote or inhibit aggregate formation. It has been shown recently that a newly identified pathogenic mutation of α-synuclein, H50Q, aggregates faster than the wild-type. We investigate here its aggregation propensity by using a sequence-based prediction algorithm, NMR chemical shift analysis of secondary structure populations in the monomeric state, and determination of thermodynamic stability of the fibrils. Our data show that the H50Q mutation induces only a small increment in polyproline II structure around the site of the mutation and a slight increase in the overall aggregation propensity. We also find, however, that the H50Q mutation strongly stabilizes α-synuclein fibrils by 5.0 ± 1.0 kJ mol−1, thus increasing the supersaturation of monomeric α-synuclein within the cell, and strongly favors its aggregation process. We further show that wild-type α-synuclein can decelerate the aggregation kinetics of the H50Q variant in a dose-dependent manner when coaggregating with it. These last findings suggest that the precise balance of α-synuclein synthesized from the wild-type and mutant alleles may influence the natural history and heterogeneous clinical phenotype of Parkinson disease. PMID:25505181

  5. Prediction of 10-fold coordinated TiO2 and SiO2 structures at multimegabar pressures

    PubMed Central

    Lyle, Matthew J.; Pickard, Chris J.; Needs, Richard J.

    2015-01-01

    We predict by first-principles methods a phase transition in TiO2 at 6.5 Mbar from the Fe2P-type polymorph to a ten-coordinated structure with space group I4/mmm. This is the first report, to our knowledge, of the pressure-induced phase transition to the I4/mmm structure among all dioxide compounds. The I4/mmm structure was found to be up to 3.3% denser across all pressures investigated. Significant differences were found in the electronic properties of the two structures, and the metallization of TiO2 was calculated to occur concomitantly with the phase transition to I4/mmm. The implications of our findings were extended to SiO2, and an analogous Fe2P-type to I4/mmm transition was found to occur at 10 TPa. This is consistent with the lower-pressure phase transitions of TiO2, which are well-established models for the phase transitions in other AX2 compounds, including SiO2. As in TiO2, the transition to I4/mmm corresponds to the metallization of SiO2. This transformation is in the pressure range reached in the interiors of recently discovered extrasolar planets and calls for a reformulation of the equations of state used to model them. PMID:25991859

  6. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10(-6)°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.

  7. Dissociating appraisals of accuracy and recollection in autobiographical remembering.

    PubMed

    Scoboria, Alan; Pascal, Lisa

    2016-07-01

    Recent studies of metamemory appraisals implicated in autobiographical remembering have established distinct roles for judgments of occurrence, recollection, and accuracy for past events. In studies involving everyday remembering, measures of recollection and accuracy correlate highly (>.85). Thus although their measures are structurally distinct, such high correspondence might suggest conceptual redundancy. This article examines whether recollection and accuracy dissociate when studying different types of autobiographical event representations. In Study 1, 278 participants described a believed memory, a nonbelieved memory, and a believed-not-remembered event and rated each on occurrence, recollection, accuracy, and related covariates. In Study 2, 876 individuals described and rated 1 of these events, as well as an event about which they were uncertain about their memory. Confirmatory structural equation modeling indicated that the measurement dissociation between occurrence, recollection and accuracy held across all types of events examined. Relative to believed memories, the relationship between recollection and belief in accuracy was meaningfully lower for the other event types. These findings support the claim that recollection and accuracy arise from distinct underlying mechanisms.

  8. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution that enables data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate the DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and a photogrammetric process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy was then estimated by comparison with GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank does not allow the accuracy to be improved when the spatial resolution of the images is decreased.
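    A minimal sketch of the accuracy check described above: elevations sampled from the DSM at check points are compared with GNSS elevations to give mean error, standard deviation, and RMSE. The function name and the toy numbers are hypothetical, not values from the survey.

```python
import numpy as np

def vertical_accuracy(dsm_z, gnss_z):
    """Vertical error statistics of DSM elevations against GNSS check points (metres)."""
    diff = np.asarray(dsm_z, dtype=float) - np.asarray(gnss_z, dtype=float)
    return {
        "mean_error": float(diff.mean()),
        "std": float(diff.std(ddof=1)),
        "rmse": float(np.sqrt(np.mean(diff ** 2))),
    }

# Toy check points with centimetre-level differences (hypothetical values)
print(vertical_accuracy([2.31, 1.98, 2.75, 3.10], [2.28, 2.02, 2.70, 3.17]))
```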

  9. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    PubMed Central

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-01-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy of a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs. PMID:27338408

  10. Prediction of the disulfide-bonding state of cysteines in proteins at 88% accuracy

    PubMed Central

    Martelli, Pier Luigi; Fariselli, Piero; Malaguti, Luca; Casadio, Rita

    2002-01-01

    The task of predicting the cysteine-bonding state in proteins starting from the residue chain is addressed by implementing a new hybrid system that combines a neural network and a hidden Markov model (hidden neural network). Training is performed using 4136 cysteine-containing segments extracted from 969 nonhomologous proteins of well-resolved three-dimensional structure. After a 20-fold cross-validation procedure, the prediction efficiency scores as high as 88% and 84% when measured on a per-cysteine and per-protein basis, respectively. These results outperform previously described methods for the same task. PMID:12381855

  11. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  12. A study of laseruler accuracy and precision (1986-1987)

    SciTech Connect

    Ramachandran, R.S.; Armstrong, K.P.

    1989-06-22

    A study was conducted to investigate Laserruler accuracy and precision. Tests were performed on 0.050 in., 0.100 in., and 0.120 in. gauge block standards. Results showed an accuracy of 3.7 µin. for the 0.120 in. standard, with higher accuracies for the two thinner blocks. The Laserruler precision was 4.83 µin. for the 0.120 in. standard, 3.83 µin. for the 0.100 in. standard, and 4.2 µin. for the 0.050 in. standard.

  13. [Design and accuracy analysis of upper slicing system of MSCT].

    PubMed

    Jiang, Rongjian

    2013-05-01

    The upper slicing system is the main component of the optical system in MSCT. This paper focuses on the design of the upper slicing system and its accuracy analysis, with the aim of improving imaging accuracy. The errors in slice thickness and ray center introduced by the bearings, screw, and control system were analyzed and tested. In fact, the measured accumulated error is less than 1 µm, and the measured absolute error is less than 10 µm. Improving the accuracy of the upper slicing system contributes to appropriate treatment methods and a higher treatment success rate.

  14. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is presented, along with a listing of the computer program written to implement these techniques. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is described, and the matrices resulting from the mapping effort of the San Juan National Forest are reported. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is provided, together with a proposed method for determining the reliability of change detection between two maps of the same area produced at different times.
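    The kind of discrete multivariate accuracy assessment referred to here typically starts from a thematic error (confusion) matrix. The sketch below computes overall accuracy and Cohen's kappa from such a matrix; the 3×3 matrix and the row/column convention are illustrative assumptions, not data from the San Juan National Forest mapping.

```python
import numpy as np

def accuracy_and_kappa(error_matrix):
    """Overall accuracy and Cohen's kappa from a thematic error (confusion) matrix.

    Rows are taken as map classes and columns as reference classes; the values
    passed below are illustrative counts only.
    """
    m = np.asarray(error_matrix, dtype=float)
    n = m.sum()
    overall = np.trace(m) / n
    chance = (m.sum(axis=1) * m.sum(axis=0)).sum() / n ** 2   # chance agreement
    kappa = (overall - chance) / (1.0 - chance)
    return overall, kappa

print(accuracy_and_kappa([[50, 3, 2],
                          [4, 45, 6],
                          [1, 5, 40]]))
```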

  15. Air traffic control surveillance accuracy and update rate study

    NASA Technical Reports Server (NTRS)

    Craigie, J. H.; Morrison, D. D.; Zipper, I.

    1973-01-01

    The results of an air traffic control surveillance accuracy and update rate study are presented. The objective of the study was to establish quantitative relationships between the surveillance accuracies, update rates, and the communication load associated with the tactical control of aircraft for conflict resolution. The relationships are established for typical types of aircraft, phases of flight, and types of airspace. Specific cases are analyzed to determine the surveillance accuracies and update rates required to prevent two aircraft from approaching each other too closely.

  16. Fruit bruise detection based on 3D meshes and machine learning technologies

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Zhang, Ping

    2016-05-01

    This paper studies bruise detection in apples using 3-D imaging. Bruise detection based on 3-D imaging overcomes many limitations of bruise detection based on 2-D imaging, such as low accuracy and sensitivity to lighting conditions. In this paper, apple bruise detection is divided into two parts: feature extraction and classification. For feature extraction, we use a framework that can directly extract local binary patterns from mesh data. For classification, we study support vector machines. Bruise detection using 3-D imaging is compared with bruise detection using 2-D imaging, and 10-fold cross-validation is used to evaluate the performance of the two systems. Experimental results show that bruise detection using 3-D imaging achieves better classification accuracy than bruise detection based on 2-D imaging.
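    A minimal sketch of the classification and evaluation step described above: a support vector machine scored with 10-fold cross-validation. The feature matrix merely stands in for the mesh-based local binary pattern descriptors and is purely synthetic.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for mesh-derived local binary pattern features:
# one row per apple, label 0 = sound, 1 = bruised.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 59))
y = rng.integers(0, 2, size=120)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=10)          # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```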

  17. Personalized recommendation based on preferential bidirectional mass diffusion

    NASA Astrophysics Data System (ADS)

    Chen, Guilin; Gao, Tianrun; Zhu, Xuzhen; Tian, Hui; Yang, Zhao

    2017-03-01

    Recommendation systems provide a promising way to alleviate the dilemma of information overload. In physical dynamics, mass diffusion has been used to design effective recommendation algorithms on bipartite networks. However, most previous studies focus overwhelmingly on unidirectional mass diffusion from collected objects to uncollected objects while overlooking the opposite direction, leading to the risk of deviations in similarity estimation and performance degradation. In addition, they are biased towards recommending popular objects, which does not necessarily improve accuracy but makes the recommendations lack the diversity and novelty that contribute to the vitality of the system. To overcome these disadvantages, we propose a preferential bidirectional mass diffusion (PBMD) algorithm that penalizes the weight of popular objects in the bidirectional diffusion. Experiments are evaluated on three benchmark datasets (Movielens, Netflix and Amazon) by 10-fold cross-validation, and the results indicate that PBMD remarkably outperforms mainstream methods in accuracy, diversity and novelty.
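    For context, the sketch below implements plain unidirectional mass diffusion on a user-object bipartite network with an optional popularity penalty on object degrees. It is a simplified stand-in, not the paper's preferential bidirectional algorithm, and the small adjacency matrix is made up.

```python
import numpy as np

def mass_diffusion_scores(A, user, theta=0.0):
    """Unidirectional mass diffusion on a user-by-object adjacency matrix A,
    with an optional popularity penalty k_o**(-theta) on object degrees
    (a simplified stand-in for the bidirectional scheme in the paper)."""
    A = np.asarray(A, dtype=float)
    k_obj = np.where(A.sum(axis=0) > 0, A.sum(axis=0), 1.0)   # object degrees
    k_usr = np.where(A.sum(axis=1) > 0, A.sum(axis=1), 1.0)   # user degrees

    # Step 1: collected objects send resource to their users; popular objects penalised
    f0 = A[user] * k_obj ** (-theta)
    on_users = A @ (f0 / k_obj)
    # Step 2: users redistribute their resource evenly over the objects they collected
    scores = A.T @ (on_users / k_usr)

    scores[A[user] > 0] = -np.inf      # never re-recommend already collected objects
    return scores

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
print(mass_diffusion_scores(A, user=0, theta=0.5))
```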

  18. Gesture recognition for smart home applications using portable radar sensors.

    PubMed

    Wan, Qian; Li, Yiran; Li, Changzhi; Pal, Ranadip

    2014-01-01

    In this article, we consider the design of a human gesture recognition system based on pattern recognition of signatures from a portable smart radar sensor. Powered by AAA batteries, the smart radar sensor operates in the 2.4 GHz industrial, scientific and medical (ISM) band. We analyzed the feature space using principal components and application-specific time- and frequency-domain features extracted from radar signals for two different sets of gestures. We illustrate that a nearest-neighbor-based classifier can achieve greater than 95% accuracy for multi-class classification using 10-fold cross-validation when features are extracted based on magnitude differences and Doppler shifts, as compared to features extracted through orthogonal transformations. The reported results illustrate the potential of intelligent radars integrated with a pattern recognition system for high-accuracy smart home and health monitoring purposes.
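    A minimal sketch of the evaluation protocol described above: a nearest-neighbor classifier assessed with 10-fold cross-validated predictions and a confusion matrix. The synthetic features merely stand in for the magnitude-difference and Doppler-shift descriptors.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for magnitude-difference / Doppler-shift features,
# four gesture classes with 40 examples each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(40, 6)) for c in range(4)])
y = np.repeat(np.arange(4), 40)

knn = KNeighborsClassifier(n_neighbors=3)
y_hat = cross_val_predict(knn, X, y, cv=10)     # 10-fold cross-validated predictions
print("accuracy:", accuracy_score(y, y_hat))
print(confusion_matrix(y, y_hat))
```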

  19. Application of intrinsic time-scale decomposition (ITD) to EEG signals for automated seizure prediction.

    PubMed

    Martis, Roshan Joy; Acharya, U Rajendra; Tan, Jen Hong; Petznick, Andrea; Tong, Louis; Chua, Chua Kuang; Ng, Eddie Yin Kwee

    2013-10-01

    Intrinsic time-scale decomposition (ITD) is a new nonlinear method of time-frequency representation which can decipher the minute changes in nonlinear EEG signals. In this work, we automatically classified normal, interictal and ictal EEG signals using features derived from the ITD representation. The energy, fractal dimension and sample entropy features computed on the ITD representation, coupled with a decision tree classifier, yielded an average classification accuracy of 95.67%, and a sensitivity and specificity of 99% and 99.5%, respectively, using a 10-fold cross-validation scheme. With the application of the nonlinear ITD representation, along with conceptual advancement and improvement of the accuracy, the developed system is clinically ready for mass screening in resource-constrained and emerging economy scenarios.
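    The sketch below illustrates two of the ingredients named above, a sample-entropy feature and a decision tree evaluated with 10-fold cross-validation, applied to synthetic signals rather than EEG and without the ITD decomposition step.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D signal (here applied to raw segments for brevity,
    not to ITD components as in the paper)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()
    def matches(length):
        t = np.array([x[i:i + length] for i in range(n - m)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)    # Chebyshev distances
        return (np.sum(d <= r) - len(t)) / 2.0                 # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Synthetic segments: class 0 = noise, class 1 = noisy rhythm (stand-ins for EEG)
rng = np.random.default_rng(2)
segs = [rng.normal(size=256) for _ in range(30)] + \
       [np.sin(np.linspace(0, 20 * np.pi, 256)) + 0.3 * rng.normal(size=256)
        for _ in range(30)]
X = np.array([[np.sum(s ** 2), sample_entropy(s)] for s in segs])   # energy + SampEn
y = np.array([0] * 30 + [1] * 30)

scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
print("10-fold CV accuracy:", scores.mean())
```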

  20. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    PubMed Central

    Yilmaz, Nihat; Inan, Onur

    2013-01-01

    This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines for classification. The purpose of this paper is to test the effect of eliminating unimportant and obsolete features of the datasets on the success of the classification, using the SVM classifier. The approach is applied to the diagnosis of liver diseases and diabetes, which are commonly observed conditions that reduce quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained with the help of the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other results attained and seems very promising for pattern recognition applications. PMID:23983632
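    A much-simplified sketch of the wrapper idea behind ABC plus SVM feature selection: a candidate feature mask is perturbed and kept only if the 10-fold cross-validated SVM accuracy improves. The full artificial bee colony algorithm (employed, onlooker and scout bees) is not reproduced here, and the breast-cancer dataset is a stand-in for the UCI hepatitis, liver disorders and diabetes sets.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)      # stand-in dataset
rng = np.random.default_rng(0)

def fitness(mask):
    """10-fold cross-validated SVM accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=10).mean()

mask = rng.random(X.shape[1]) < 0.5              # random initial feature subset
best = fitness(mask)
for _ in range(20):                              # greedy one-bit neighbourhood moves
    cand = mask.copy()
    j = rng.integers(X.shape[1])
    cand[j] = not cand[j]
    score = fitness(cand)
    if score > best:
        mask, best = cand, score

print(f"{int(mask.sum())} features kept, 10-fold CV accuracy {best:.3f}")
```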

  1. Glioma Grading Using Cell Nuclei Morphologic Features in Digital Pathology Images

    PubMed Central

    Reza, Syed M. S.; Iftekharuddin, Khan M.

    2016-01-01

    This work proposes a computationally efficient cell nuclei morphologic feature analysis technique to characterize brain gliomas in tissue slide images. Our contributions are two-fold: 1) we obtain an optimized cell nuclei segmentation method based on the pros and cons of the existing techniques in the literature, and 2) we extract representative features by k-means clustering of nuclei morphologic features, including area, perimeter, eccentricity, and major axis length. This clustering-based representative feature extraction avoids shortcomings of extensive tile [1] [2] and nuclear score [3] based methods for brain glioma grading in pathology images. A multilayer perceptron (MLP) is used to classify the extracted features into two tumor types: glioblastoma multiforme (GBM) and low grade glioma (LGG). Quantitative scores such as precision, recall, and accuracy are obtained using 66 clinical patients' images from The Cancer Genome Atlas (TCGA) [4] dataset. An average accuracy of ~94% from 10-fold cross-validation confirms the efficacy of the proposed method. PMID:27942094
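    A minimal sketch of the representative-feature idea: for each image, the nuclei morphologic features are clustered with k-means and the cluster centres form the image descriptor fed to an MLP, scored with 10-fold cross-validation. All nuclei measurements and labels below are synthetic placeholders, not TCGA data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def image_descriptor(nuclei_features, k=3):
    """Cluster per-image nuclei features (area, perimeter, eccentricity, major
    axis length) with k-means and use the cluster centres as the descriptor."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(nuclei_features)
    # Sort centres column-wise so the descriptor does not depend on label order
    return np.sort(km.cluster_centers_, axis=0).ravel()

# 40 synthetic "slides", each with a variable number of nuclei and 4 features
images = [rng.normal(loc=c, scale=1.0, size=(rng.integers(50, 150), 4))
          for c in (0.0, 0.0, 2.0, 2.0) * 10]
X = np.array([image_descriptor(f) for f in images])
y = np.array([0, 0, 1, 1] * 10)                  # toy labels: 0 = LGG, 1 = GBM

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print("10-fold CV accuracy:", cross_val_score(mlp, X, y, cv=10).mean())
```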

  2. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2-minute motion trials (2MT) and 12-minute multiple dynamic phase motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for the initial inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same inertial frame. Conclusions Mean accuracies obtained under the Gimbal table sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvements in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their full potential.

  3. Portable, high intensity isotopic neutron source provides increased experimental accuracy

    NASA Technical Reports Server (NTRS)

    Mohr, W. C.; Stewart, D. C.; Wahlgren, M. A.

    1968-01-01

    A small, portable, high-intensity isotopic neutron source combines twelve curium-americium-beryllium sources. This high intensity of neutrons, with a flux that decreases slowly at a known rate, provides for increased experimental accuracy.

  4. 40 CFR 91.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... specifications. (1) The dynamometer test stand and other instruments for measurement of engine speed and torque... accuracy. (1) The dynamometer test stand and other instruments for measurement of engine torque and...

  5. 40 CFR 91.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... specifications. (1) The dynamometer test stand and other instruments for measurement of engine speed and torque... accuracy. (1) The dynamometer test stand and other instruments for measurement of engine torque and...

  6. 40 CFR 91.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... specifications. (1) The dynamometer test stand and other instruments for measurement of engine speed and torque... accuracy. (1) The dynamometer test stand and other instruments for measurement of engine torque and...

  7. 40 CFR 91.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... specifications. (1) The dynamometer test stand and other instruments for measurement of engine speed and torque... accuracy. (1) The dynamometer test stand and other instruments for measurement of engine torque and...

  8. What do we mean by accuracy in geomagnetic measurements?

    USGS Publications Warehouse

    Green, A.W.

    1990-01-01

    High accuracy is what distinguishes measurements made at the world's magnetic observatories from other types of geomagnetic measurements. High accuracy in determining the absolute values of the components of the Earth's magnetic field is essential to studying geomagnetic secular variation and processes at the core mantle boundary, as well as some magnetospheric processes. In some applications of geomagnetic data, precision (or resolution) of measurements may also be important. In addition to accuracy and resolution in the amplitude domain, it is necessary to consider these same quantities in the frequency and space domains. New developments in geomagnetic instruments and communications make real-time, high accuracy, global geomagnetic observatory data sets a real possibility. There is a growing realization in the scientific community of the unique relevance of geomagnetic observatory data to the principal contemporary problems in solid Earth and space physics. Together, these factors provide the promise of a 'renaissance' of the world's geomagnetic observatory system. © 1990.

  9. Understanding the delayed-keyword effect on metacomprehension accuracy.

    PubMed

    Thiede, Keith W; Dunlosky, John; Griffin, Thomas D; Wiley, Jennifer

    2005-11-01

    The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M. Anderson, 2003; K. W. Thiede, M. C. M. Anderson, & D. Therriault, 2003). The delayed and immediate conditions in these studies confounded the delay between reading and generation tasks with other task lags, including the lag between multiple generation tasks and the lag between generation tasks and judgments. The first 2 experiments disentangle these confounded manipulations and provide clear evidence that the delay between reading and keyword generation is the only lag critical to improving metacomprehension accuracy. The 3rd and 4th experiments show that not all delayed tasks produce improvements and suggest that delayed generative tasks provide necessary diagnostic cues about comprehension for improving metacomprehension accuracy.

  10. Assessing and ensuring GOES-R magnetometer accuracy

    NASA Astrophysics Data System (ADS)

    Carter, Delano; Todirita, Monica; Kronenwetter, Jeffrey; Dahya, Melissa; Chu, Donald

    2016-05-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma error per axis. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma error per axis. Error comes both from outside the magnetometers (e.g., spacecraft fields and misalignments) and from inside (e.g., zero offset and scale factor errors). Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. With the proposed calibration regimen, both suggest that the magnetometer subsystem will meet its accuracy requirements.
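    The quoted accuracy definition can be computed directly from a set of per-axis error samples as |mean| + n·sigma. The sketch below does this for the 3-sigma (quiet-time) and 2-sigma (storm-time) cases using synthetic error samples; the function name and numbers are illustrative assumptions.

```python
import numpy as np

def per_axis_accuracy(errors_nT, n_sigma=3):
    """Accuracy per axis as |mean| + n_sigma * standard deviation, following the
    quiet-time (3 sigma) / storm-time (2 sigma) definitions quoted above."""
    e = np.asarray(errors_nT, dtype=float)
    return np.abs(e.mean(axis=0)) + n_sigma * e.std(axis=0, ddof=1)

rng = np.random.default_rng(4)
errors = rng.normal(loc=0.1, scale=0.4, size=(1000, 3))   # synthetic errors, nT
print("quiet-time accuracy per axis (nT):", per_axis_accuracy(errors, n_sigma=3))
print("storm-time accuracy per axis (nT):", per_axis_accuracy(errors, n_sigma=2))
```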

  11. Precision and Accuracy of Topography Measurements on Europa

    NASA Astrophysics Data System (ADS)

    Greenberg, R.; Hurford, T. A.; Foley, M. A.; Varland, K.

    2007-03-01

    Reports of the death of the melt-through model for chaotic terrain on Europa have been greatly exaggerated, to paraphrase Mark Twain. They are based on topographic maps of insufficient quantitative accuracy and precision.

  12. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires the precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy or composition precision). The spatial accuracy of this microscope is evaluated in this paper in the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure, which is composed of an oxide layer of high electrical permittivity (high-k dielectric constant) that separates the metal gate and the semiconductor channel of a field-effect transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortions. An analytic model is used to interpret inaccuracy in the depth reconstruction of these devices in APT.

  13. Accuracy of Reduced and Extended Thin-Wire Kernels

    SciTech Connect

    Burke, G J

    2008-11-24

    Some results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function and results are shown for simple wire structures.

  14. Improving classification accuracy and causal knowledge for better credit decisions.

    PubMed

    Wu, Wei-Wen

    2011-08-01

    Numerous studies have contributed to efforts to boost the accuracy of the credit scoring model. Especially interesting are recent studies which have successfully developed the hybrid approach, which advances classification accuracy by combining different machine learning techniques. However, to achieve better credit decisions, it is not enough merely to increase the accuracy of the credit scoring model. It is necessary to conduct meaningful supplementary analyses in order to obtain knowledge of causal relations, particularly in terms of significant conceptual patterns or structures involving attributes used in the credit scoring model. This paper proposes a solution that integrates data preprocessing strategies and the Bayesian network classifier with the tree-augmented naïve Bayes search algorithm, in order to improve classification accuracy and to obtain improved knowledge of causal patterns, thus enhancing the validity of credit decisions.
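    A hedged sketch of the "preprocessing plus Bayesian classifier" pipeline: imputation and scaling followed by plain Gaussian naive Bayes, used here as a stand-in because scikit-learn does not ship a tree-augmented naive Bayes (TAN) learner. The applicant attributes and labels are synthetic.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic applicant attributes with some missing values; labels are made up.
rng = np.random.default_rng(5)
X = rng.normal(size=(500, 8))
X[rng.random(X.shape) < 0.05] = np.nan
y = (np.nan_to_num(X[:, 0]) + 0.5 * np.nan_to_num(X[:, 1])
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # preprocessing: fill missing values
    ("scale", StandardScaler()),
    ("nb", GaussianNB()),                           # stand-in for a TAN Bayesian classifier
])
print("10-fold CV accuracy:", cross_val_score(model, X, y, cv=10).mean())
```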

  15. Potential accuracy of measuring the angular coordinates of signal sources and accuracy of measuring them using optimal spatial filtration

    NASA Astrophysics Data System (ADS)

    Kalenov, E. N.

    2015-03-01

    The paper investigates the potential accuracy of measuring the angular coordinates of a signal source in the presence of interference sources, as well as the accuracy of measuring these coordinates via the formation of a signal's spatial spectrum using optimal spatial filtration. For a linear equidistant array, analytical solutions are obtained that determine the dependence of the accuracies in measuring the angular coordinates on the array parameters, the angular distance to the noise source, and spectral power densities of a signal, noise, and an interference source.

  16. Increase in error threshold for quasispecies by heterogeneous replication accuracy

    NASA Astrophysics Data System (ADS)

    Aoki, Kazuhiro; Furusawa, Mitsuru

    2003-09-01

    In this paper we investigate the error threshold for quasispecies with heterogeneous replication accuracy. We show that the coexistence of error-free and error-prone polymerases can greatly increase the error threshold without a catastrophic loss of genetic information. We also show that the error threshold is influenced by the number of replicores. Our research suggests that quasispecies with heterogeneous replication accuracy can reduce the genetic cost of selective evolution while still producing a variety of mutants.

  17. Assessment of the Accuracy of Close Distance Photogrammetric JRC Data

    NASA Astrophysics Data System (ADS)

    Kim, Dong Hyun; Poropat, George; Gratchev, Ivan; Balasubramaniam, Arumugam

    2016-11-01

    By using close range photogrammetry, this article investigates the accuracy of the photogrammetric estimation of rock joint roughness coefficients (JRC), a measure of the degree of roughness of rock joint surfaces. This methodology has proven to be convenient both in laboratory and in field conditions. However, the accuracy and precision of roughness profiles obtained from photogrammetric 3D images have not been properly established due to the variances caused by factors such as measurement errors and systematic errors in photogrammetry. In this study, the influences of camera-to-object distance, focal length and profile orientation on the accuracy of JRC values are investigated using several photogrammetry field surveys. Directional photogrammetric JRC data are compared with data derived from the measured profiles, so as to determine their accuracy. The extent of the accuracy of JRC values was examined based on the error models which were previously developed from laboratory tests and revised for better estimation in this study. The results show that high-resolution 3D images (point interval ≤1 mm) can reduce the JRC errors obtained from field photogrammetric surveys. Using the high-resolution images, the photogrammetric JRC values in the range of high oblique camera angles are highly consistent with the revised error models. Therefore, the analysis indicates that the revised error models facilitate the verification of the accuracy of photogrammetric JRC values.

  18. Numerical accuracy of mean-field calculations in coordinate space

    NASA Astrophysics Data System (ADS)

    Ryssens, W.; Heenen, P.-H.; Bender, M.

    2015-12-01

    Background: Mean-field methods based on an energy density functional (EDF) are powerful tools used to describe many properties of nuclei in the entirety of the nuclear chart. The accuracy required of energies for nuclear physics and astrophysics applications is of the order of 500 keV and much effort is undertaken to build EDFs that meet this requirement. Purpose: Mean-field calculations have to be accurate enough to preserve the accuracy of the EDF. We study this numerical accuracy in detail for a specific numerical choice of representation for mean-field equations that can accommodate any kind of symmetry breaking. Method: The method that we use is a particular implementation of three-dimensional mesh calculations. Its numerical accuracy is governed by three main factors: the size of the box in which the nucleus is confined, the way numerical derivatives are calculated, and the distance between the points on the mesh. Results: We examine the dependence of the results on these three factors for spherical doubly magic nuclei, neutron-rich 34Ne , the fission barrier of 240Pu , and isotopic chains around Z =50 . Conclusions: Mesh calculations offer the user extensive control over the numerical accuracy of the solution scheme. When appropriate choices for the numerical scheme are made the achievable accuracy is well below the model uncertainties of mean-field methods.

  19. Acquisition of decision making criteria: Reward rate ultimately beats accuracy

    PubMed Central

    Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A.; Holmes, Philip; Cohen, Jonathan D.

    2012-01-01

    Speed-accuracy tradeoffs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy tradeoffs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy tradeoffs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: a) favoring accuracy over reward rate and b) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation. PMID:21264716

  20. Throwing speed and accuracy in baseball and cricket players.

    PubMed

    Freeston, Jonathan; Rooney, Kieron

    2014-06-01

    Throwing speed and accuracy are both critical to sports performance but cannot be optimized simultaneously. This speed-accuracy trade-off (SATO) is evident across a number of throwing groups but remains poorly understood. The goal was to describe the SATO in baseball and cricket players and determine the speed that optimizes accuracy. 20 grade-level baseball and cricket players performed 10 throws at 80% and 100% of maximal throwing speed (MTS) toward a cricket stump. Baseball players then performed a further 10 throws at 70%, 80%, 90%, and 100% of MTS toward a circular target. Baseball players threw faster with greater accuracy than cricket players at both speeds. Both groups demonstrated a significant SATO as vertical error increased with increases in speed; the trade-off was worse for cricketers than baseball players. Accuracy was optimized at 70% of MTS for baseballers. Throwing athletes should decrease speed when accuracy is critical. Cricket players could adopt baseball-training practices to improve throwing performance.