Science.gov

Sample records for 10-fold cross-validation accuracy

  1. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    PubMed Central

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not appropriate. In contrast, the permutation test was unaffected by the cross-validation scheme. The influence of cross-validation was further illustrated on real data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy under binomial testing, but only one showed significant accuracy under permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false-positive or false-negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation. PMID:24936420

  2. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    PubMed

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not appropriate. In contrast, the permutation test was unaffected by the cross-validation scheme. The influence of cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy under binomial testing, but only one showed significant accuracy under permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false-positive or false-negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation. PMID:24936420
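
    The permutation scheme recommended here is straightforward to implement. The sketch below (ours, not the authors' code) re-runs the full cross-validation on many label shuffles and takes the p-value as the fraction of shuffled runs whose mean accuracy reaches the observed one; the classifier, fold count, and sample sizes are arbitrary stand-ins. scikit-learn's permutation_test_score packages the same loop.

      # Minimal sketch of a permutation test for cross-validated accuracy.
      import numpy as np
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(40, 10))      # random features, as in the simulations
      y = np.repeat([0, 1], 20)          # two balanced classes

      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      observed = cross_val_score(SVC(), X, y, cv=cv).mean()

      n_perm = 1000
      perm = np.empty(n_perm)
      for i in range(n_perm):
          y_shuffled = rng.permutation(y)      # break any label-feature link
          perm[i] = cross_val_score(SVC(), X, y_shuffled, cv=cv).mean()

      # add-one correction keeps the estimated p-value away from exactly zero
      p_value = (np.sum(perm >= observed) + 1) / (n_perm + 1)
      print(f"accuracy={observed:.2f}, permutation p={p_value:.3f}")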

  3. How Nonrecidivism Affects Predictive Accuracy: Evidence from a Cross-Validation of the Ontario Domestic Assault Risk Assessment (ODARA)

    ERIC Educational Resources Information Center

    Hilton, N. Zoe; Harris, Grant T.

    2009-01-01

    Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…

  4. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Études Spatiales (CNES), the French space agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and it remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits are discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  5. Cross-Validation.

    ERIC Educational Resources Information Center

    Langmuir, Charles R.

    1954-01-01

    Cross-validation in relation to choosing the best tests and selecting the best items in tests is discussed. Cross-validation demonstrates whether a decision derived from one set of data is truly effective when the decision is applied to another independent, but relevant, sample of people. Cross-validation is particularly important after…

  6. Improving the accuracy of NMR structures of RNA by means of conformational database potentials of mean force as assessed by complete dipolar coupling cross-validation.

    PubMed

    Clore, G Marius; Kuszewski, John

    2003-02-12

    The description of the nonbonded contact terms used in simulated annealing refinement can have a major impact on nucleic acid structures generated from NMR data. Using complete dipolar coupling cross-validation, we demonstrate that substantial improvements in coordinate accuracy of NMR structures of RNA can be obtained by making use of two conformational database potentials of mean force: a nucleic acid torsion angle database potential consisting of various multidimensional torsion angle correlations; and an RNA-specific base-base positioning potential that provides a simple geometric, statistically based description of sequential and nonsequential base-base interactions. The former is based on 416 nucleic acid crystal structures solved at a resolution of

  7. Accuracy of Population Validity and Cross-Validity Estimation: An Empirical Comparison of Formula-Based, Traditional Empirical, and Equal Weights Procedures.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.

    1999-01-01

    Performed an empirical Monte Carlo study using predictor and criterion data from 84,808 U.S. Air Force enlistees. Compared formula-based, traditional empirical, and equal-weights procedures. Discusses issues for basic research on validation and cross-validation. (SLD)

  8. Cross-Validated Bagged Learning

    PubMed Central

    Petersen, Maya L.; Molinaro, Annette M.; Sinisi, Sandra E.; van der Laan, Mark J.

    2007-01-01

    Many applications aim to learn a high dimensional parameter of a data generating distribution based on a sample of independent and identically distributed observations. For example, the goal might be to estimate the conditional mean of an outcome given a list of input variables. In this prediction context, bootstrap aggregating (bagging) has been introduced as a method to reduce the variance of a given estimator at little cost to bias. Bagging involves applying an estimator to multiple bootstrap samples, and averaging the result across bootstrap samples. In order to address the curse of dimensionality, a common practice has been to apply bagging to estimators which themselves use cross-validation, thereby using cross-validation within a bootstrap sample to select fine-tuning parameters trading off bias and variance of the bootstrap sample-specific candidate estimators. In this article, we point out that in order to achieve the correct bias-variance trade-off for the parameter of interest, one should apply the cross-validation selector externally to candidate bagged estimators indexed by these fine-tuning parameters. We use three simulations to compare the new cross-validated bagging method with bagging of cross-validated estimators and bagging of non-cross-validated estimators. PMID:19255599
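
    The distinction drawn here is easy to miss, so below is a hedged sketch of the external scheme the article advocates: cross-validation scores whole bagged estimators, one per candidate fine-tuning value, rather than tuning inside each bootstrap sample. The regressor and the max_depth grid are illustrative choices, not the paper's.

      # External cross-validation over candidate bagged estimators.
      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.ensemble import BaggingRegressor
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeRegressor

      X, y = make_regression(n_samples=200, n_features=20, noise=1.0,
                             random_state=0)

      best_depth, best_score = None, -np.inf
      for depth in [1, 2, 4, 8]:               # candidate fine-tuning parameters
          bagged = BaggingRegressor(DecisionTreeRegressor(max_depth=depth),
                                    n_estimators=50, random_state=0)
          score = cross_val_score(bagged, X, y, cv=5).mean()  # external CV
          if score > best_score:
              best_depth, best_score = depth, score

      print(f"selected max_depth={best_depth} (CV R^2={best_score:.2f})")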

  9. Cost-Benefit Considerations in Choosing among Cross-Validation Methods.

    ERIC Educational Resources Information Center

    Murphy, Kevin R.

    There are two general methods of cross-validation: empirical estimation and formula estimation. In choosing a specific cross-validation procedure, one should consider both costs (e.g., inefficient use of available data in estimating regression parameters) and benefits (e.g., accuracy in estimating population cross-validity). Empirical…

  10. Cost-Benefit Considerations in Choosing among Cross-Validation Methods.

    ERIC Educational Resources Information Center

    Murphy, Kevin R.

    1984-01-01

    Outlines costs and benefits associated with different cross-validation strategies, in particular the way in which the study design affects the costs and benefits of different types of cross-validation. Suggests that the choice between empirical estimation methods and formula estimates involves a trade-off between accuracy and simplicity. (JAC)

  11. Cross-Validation, Shrinkage, and Multiple Regression.

    ERIC Educational Resources Information Center

    Hynes, Kevin

    One aspect of multiple regression--the shrinkage of the multiple correlation coefficient on cross-validation--is reviewed. The paper consists of four sections. In section one, the distinction between a fixed and a random multiple regression model is made explicit. In section two, the cross-validation paradigm and an explanation for the occurrence…

  12. Cross-Validation of the Risk Matrix 2000 Sexual and Violent Scales

    ERIC Educational Resources Information Center

    Craig, Leam A.; Beech, Anthony; Browne, Kevin D.

    2006-01-01

    The predictive accuracy of the newly developed actuarial risk measures Risk Matrix 2000 Sexual/Violence (RMS, RMV) was cross-validated and compared with that of two risk assessment measures (SVR-20 and Static-99) in a sample of sexual (n = 85) and nonsexual violent (n = 46) offenders. The sexual offense reconviction rate for the sex offender group was 18%…

  13. Cross validation in LASSO and its acceleration

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Kabashima, Yoshiyuki

    2016-05-01

    We investigate leave-one-out cross validation (CV) as a means of determining the weight of the penalty term in the least absolute shrinkage and selection operator (LASSO). First, on the basis of the message passing algorithm and a perturbative argument assuming that the number of observations is sufficiently large, we provide simple formulas for approximately assessing two types of CV errors, which enable us to significantly reduce the computational cost. These formulas also provide a simple connection between the CV errors and the residual sums of squares between the reconstructed and the given measurements. Second, on the basis of this finding, we analytically evaluate the CV errors when the design matrix is given as a simple random matrix in the large size limit by using the replica method. Finally, these results are compared with those of numerical simulations on finite-size systems and are confirmed to be correct. We also apply the simple formulas for the first type of CV error to an actual supernova dataset.
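
    For orientation, the quantity those formulas approximate is the exact leave-one-out error, which is trivial to state in code but costs n refits per candidate penalty; the brute-force baseline below is exactly what the paper's message-passing approximation avoids. The data and penalty grid are toy stand-ins.

      # Brute-force leave-one-out CV for the LASSO (the baseline being accelerated).
      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.model_selection import LeaveOneOut

      rng = np.random.default_rng(1)
      n, p = 50, 100
      X = rng.normal(size=(n, p))              # random design matrix
      beta = np.zeros(p); beta[:5] = 1.0       # sparse true signal
      y = X @ beta + 0.1 * rng.normal(size=n)

      for lam in [0.01, 0.05, 0.1]:
          errs = []
          for tr, te in LeaveOneOut().split(X):
              model = Lasso(alpha=lam, max_iter=5000).fit(X[tr], y[tr])
              errs.append(((y[te] - model.predict(X[te])) ** 2).item())
          print(f"lambda={lam}: LOO-CV error={np.mean(errs):.4f}")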

  14. Formula Estimation of Cross-Validated Multiple Correlation.

    ERIC Educational Resources Information Center

    Schmitt, Neal

    A review of cross-validation shrinkage formulas is presented which focuses on the theoretical and practical problems in the use of various formulas. Practical guidelines for use of both formulas and empirical cross-validation are provided. A comparison of results using these formulas in a range of situations is then presented. The result of these…

  15. A K-fold Averaging Cross-validation Procedure

    PubMed Central

    Jung, Yoonsuh; Hu, Jianhua

    2015-01-01

    Cross-validation-type methods have been widely used to facilitate model estimation and variable selection. In this work, we suggest a new K-fold cross validation procedure to select a candidate ‘optimal’ model from each hold-out fold and average the K candidate ‘optimal’ models to obtain the ultimate model. Due to the averaging effect, the variance of the proposed estimates can be significantly reduced. This new procedure results in more stable and efficient parameter estimation than the classical K-fold cross validation procedure. In addition, we show the asymptotic equivalence between the proposed and classical cross validation procedures in the linear regression setting. We also demonstrate the broad applicability of the proposed procedure via two examples of parameter sparsity regularization and quantile smoothing splines modeling. We illustrate the promise of the proposed method through simulations and a real data example.
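
    A rough sketch of the averaging idea, using a LASSO working model (our choice; the paper treats more general settings): each fold serves once as the hold-out set used to pick that fold's 'optimal' penalty, the model is refit on the complementary training part, and the K fitted coefficient vectors are averaged into the final estimate.

      # K-fold averaging CV: average the K fold-wise 'optimal' models.
      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(2)
      X = rng.normal(size=(100, 10))
      coef_true = np.array([2.0, -1.0] + [0.0] * 8)
      y = X @ coef_true + rng.normal(size=100)

      alphas = np.logspace(-3, 0, 20)
      K = 5
      avg_coef = np.zeros(X.shape[1])
      for train, hold in KFold(K, shuffle=True, random_state=0).split(X):
          # penalty that predicts this hold-out fold best
          errs = [np.mean((y[hold] - Lasso(alpha=a).fit(X[train], y[train])
                           .predict(X[hold])) ** 2) for a in alphas]
          best = alphas[int(np.argmin(errs))]
          # this fold's candidate 'optimal' model, averaged into the result
          avg_coef += Lasso(alpha=best).fit(X[train], y[train]).coef_ / K

      print(np.round(avg_coef, 2))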

  16. A cross-validation package driving Netica with python

    USGS Publications Warehouse

    Fienen, Michael N.; Plant, Nathaniel G.

    2014-01-01

    Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross validation is a technique to avoid overfitting resulting from overly complex BNs. Overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented, due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and read, rebuild, and learn BNs from data. Insights gained from cross-validation, and implications for prediction versus description, are illustrated with a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, and that overfitting incurs computational costs as well as a reduction in prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).

  17. A Cross-Validation of Paulson's Discriminant Function-Derived Scales for Identifying "At Risk" Child-Abusive Parents.

    ERIC Educational Resources Information Center

    Beal, Don; And Others

    1984-01-01

    When the six scales were cross-validated on an independent sample from the population of child-abusing parents, significant shrinkage in the accuracy of prediction was found. The use of the special subscales for identifying "at risk" parents in prenatal clinics, pediatric clinics, and mental health centers as originally suggested by Paulson and…

  18. Estimating the Coefficient of Cross-validity in Multiple Regression: A Comparison of Analytical and Empirical Methods.

    ERIC Educational Resources Information Center

    Kromrey, Jeffrey D.; Hines, Constance V.

    1996-01-01

    The accuracy of three analytical formulas for shrinkage estimation and four empirical techniques were investigated in a Monte Carlo study of the coefficient of cross-validity in multiple regression. Substantial statistical bias was evident for all techniques except the formula of M. W. Browne (1975) and multicross-validation. (SLD)

  19. Comprehensive Assessment of Emotional Disturbance: A Cross-Validation Approach

    ERIC Educational Resources Information Center

    Fisher, Emily S.; Doyon, Katie E.; Saldana, Enrique; Allen, Megan Redding

    2007-01-01

    Assessing a student for emotional disturbance is a serious and complex task given the stigma of the label and the ambiguities of the federal definition. One way that school psychologists can be more confident in their assessment results is to cross validate data from different sources using the RIOT approach (Review, Interview, Observe, Test).…

  20. Attrition from an Adolescent Addiction Treatment Program: A Cross Validation.

    ERIC Educational Resources Information Center

    Mathisen, Kenneth S.; Meyers, Kathleen

    Treatment attrition is a major problem for programs treating adolescent substance abusers. To isolate and cross validate factors which are predictive of addiction treatment attrition among adolescent substance abusers, screening interview and diagnostic variables from 119 adolescent in-patients were submitted to a discriminant equation analysis.…

  1. The Cross Validation of the Attitudes toward Mainstreaming Scale (ATMS).

    ERIC Educational Resources Information Center

    Berryman, Joan D.; Neal, W. R. Jr.

    1980-01-01

    The reliability and factorial validity of the Attitudes Toward Mainstreaming Scale were supported in a cross-validation study with teachers. Three factors emerged: learning capability, general mainstreaming, and traditional limiting disabilities. Factor intercorrelations varied from .42 to .55; correlations between total scores and individual factors…

  2. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    SciTech Connect

    Pražnikar, Jure; Turk, Dušan

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement by simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement, as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps, and may use a smaller portion of data for the test set for the calculation of Rfree, or may leave it out completely.

  3. Benchmarking protein classification algorithms via supervised cross-validation.

    PubMed

    Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor

    2008-04-24

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database, has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic, estimates of the classifier performance than do random cross-validation schemes. A combination of supervised and
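
    The splitting principle is what matters here, and it can be reproduced with standard tools: if each sequence carries a known subtype label within its class, grouping the folds by subtype keeps whole subtypes out of training, so the test score measures generalization to distantly related subtypes rather than to near-duplicates. The sketch below uses synthetic data and GroupKFold as a stand-in for the paper's database-specific splits.

      # Supervised (subtype-aware) cross-validation via grouped folds.
      import numpy as np
      from sklearn.model_selection import GroupKFold
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(3)
      X = rng.normal(size=(120, 8))                # stand-in sequence features
      y = np.repeat([0, 1], 60)                    # two protein classes
      subtype = np.repeat(np.arange(12), 10)       # known subtypes, 6 per class

      scores = []
      for tr, te in GroupKFold(n_splits=4).split(X, y, groups=subtype):
          # whole subtypes are held out, never split across train and test
          assert set(subtype[tr]).isdisjoint(subtype[te])
          scores.append(KNeighborsClassifier().fit(X[tr], y[tr])
                        .score(X[te], y[te]))
      print(np.round(scores, 2))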

  4. The Proximal Trajectory Algorithm in SVM Cross Validation.

    PubMed

    Astorino, Annabella; Fuduli, Antonio

    2016-05-01

    We propose a bilevel cross-validation scheme for support vector machine (SVM) model selection based on the construction of the entire regularization path. Since such a path is a particular case of the more general proximal trajectory concept from nonsmooth optimization, we propose for its construction an algorithm based on solving a finite number of structured linear programs. Our methodology, differently from other approaches, works directly on the primal form of the SVM. Numerical results are presented on binary datasets drawn from the literature. PMID:27101080

  5. Cross-Validation of Aerobic Capacity Prediction Models in Adolescents.

    PubMed

    Burns, Ryan Donald; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Saint-Maurice, Pedro F; Welk, Greg J; Mahar, Matthew T

    2015-08-01

    Cardiorespiratory endurance is a component of health-related fitness. FITNESSGRAM recommends the Progressive Aerobic Cardiovascular Endurance Run (PACER) or One-mile Run/Walk (1MRW) to assess cardiorespiratory endurance by estimating VO2 Peak. No research has cross-validated prediction models from both PACER and 1MRW, including the New PACER Model and PACER-Mile Equivalent (PACER-MEQ), using current standards. The purpose of this study was to cross-validate prediction models from PACER and 1MRW against measured VO2 Peak in adolescents. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years (Mean = 14.7 ± 1.3 years; 32 girls, 52 boys) who completed the PACER and 1MRW in addition to a laboratory maximal treadmill test to measure VO2 Peak. Multiple correlations among various models with measured VO2 Peak were considered moderately strong (R = 0.74-0.78), and prediction error (RMSE) ranged from 5.95 to 8.27 ml·kg⁻¹·min⁻¹. Criterion-referenced agreement with FITNESSGRAM's Healthy Fitness Zones was considered fair-to-good among models (Kappa = 0.31-0.62; Agreement = 75.5-89.9%; F = 0.08-0.65). In conclusion, prediction models demonstrated moderately strong linear relationships with measured VO2 Peak, fair prediction error, and fair-to-good criterion-referenced agreement with measured VO2 Peak classification into FITNESSGRAM's Healthy Fitness Zones. PMID:26186536
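
    The two kinds of agreement reported here can be computed in a few lines; the sketch below uses made-up data and a placeholder Healthy Fitness Zone cutoff (the real FITNESSGRAM cutoffs are age- and sex-specific): Pearson R and RMSE for the continuous comparison, then Cohen's kappa after both values are dichotomized at the cutoff.

      # Linear agreement (R, RMSE) plus criterion-referenced agreement (kappa).
      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      rng = np.random.default_rng(4)
      measured = rng.normal(45, 7, size=90)              # VO2 Peak, ml/kg/min
      predicted = measured + rng.normal(0, 6, size=90)   # an imperfect model

      r = np.corrcoef(measured, predicted)[0, 1]
      rmse = np.sqrt(np.mean((measured - predicted) ** 2))
      cutoff = 42.0                                      # placeholder HFZ cutoff
      kappa = cohen_kappa_score(measured >= cutoff, predicted >= cutoff)
      print(f"R={r:.2f}  RMSE={rmse:.2f}  kappa={kappa:.2f}")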

  6. International cross-validation of a BOD5 surrogate.

    PubMed

    Muller, Mathieu; Bouguelia, Sihem; Goy, Romy-Alice; Yoris, Alison; Berlin, Jeanne; Meche, Perrine; Rocher, Vincent; Mertens, Sharon; Dudal, Yves

    2014-12-01

    BOD5 dates back to 1912, when the Royal Commission decided to use the mean residence time of water in the rivers of England, 5 days, as a standard to measure the biochemical oxygen demand. Initially designed to protect the quality of river waters from extensive sewage discharge, the use of BOD5 was quickly extended to wastewater treatment plants (WWTPs) to monitor their efficiency on a daily basis. The measurement has been automated but remains a tedious, time- and resource-consuming analysis. We have cross-validated a surrogate BOD5 method at two sites, in France and in the USA, with a total of 109 samples. This method uses a fluorescent redox indicator on a 96-well microplate to measure microbial catabolic activity for a large number of samples simultaneously. Three statistical tests were used to compare the surrogate and reference methods and showed robust equivalence. PMID:24946712

  7. Cross-validating factors associated with discharges against medical advice.

    PubMed

    Dalrymple, A J; Fata, M

    1993-05-01

    Between 6% and 35% of psychiatric patients discharge themselves from hospital against medical advice (AMA). These discharges may prevent patients from deriving the full benefit of hospitalization and may result in rapid rehospitalization. We examined sociodemographic and clinical characteristics of 195 irregular discharges from a 237-bed psychiatric hospital over a five-year period and found that AMA discharges increased over the study period to a peak of 25% in 1986. There was a strong negative correlation between AMA discharge rates and the willingness of physicians to commit patients involuntarily. Multiple discriminant analysis revealed a set of nine variables that accurately classified 78% of cases into regular or irregular discharge categories. Further analysis revealed that there are two distinct subgroups of patients who discharge themselves AMA: those who repeatedly left the hospital AMA in a regular "revolving back door" pattern and those who left AMA only once. The repeat group exceeded the one-time group in terms of prior admissions, appearances before review boards, and percentage of Natives. The repeat group also spent twice as long in hospital, and 27% were readmitted within one week of the index AMA discharge. Less than 3% of the one-time AMA group was readmitted within a week. These results were cross-validated on a new sample of irregular discharges and matched controls. PMID:8518982

  8. The Importance of Evaluating Whether Results Will Generalize: Application of Cross-Validation in Discriminant Analysis.

    ERIC Educational Resources Information Center

    Loftin, Lynn B.

    Cross-validation, an economical method for assessing whether sample results will generalize, is discussed in this paper. Cross-validation is an invariance technique that uses two subsets of the data sample to derive discriminant function coefficients. The two sets of coefficients are then used with each data subset to derive discriminant function…

  9. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…

  10. Splenectomy Causes 10-Fold Increased Risk of Portal Venous System Thrombosis in Liver Cirrhosis Patients

    PubMed Central

    Qi, Xingshun; Han, Guohong; Ye, Chun; Zhang, Yongguo; Dai, Junna; Peng, Ying; Deng, Han; Li, Jing; Hou, Feifei; Ning, Zheng; Zhao, Jiancheng; Zhang, Xintong; Wang, Ran; Guo, Xiaozhong

    2016-01-01

    Background: Portal venous system thrombosis (PVST) is a life-threatening complication of liver cirrhosis. We conducted a retrospective study to comprehensively analyze the prevalence and risk factors of PVST in liver cirrhosis. Material/Methods: All cirrhotic patients without malignancy admitted between June 2012 and December 2013 were eligible if they underwent contrast-enhanced CT or MRI scans. Independent predictors of PVST in liver cirrhosis were calculated in multivariate analyses. Subgroup analyses were performed according to the severity of PVST (any PVST, main portal vein [MPV] thrombosis >50%, and clinically significant PVST) and splenectomy. Odds ratios (ORs) and 95% confidence intervals (CIs) were reported. Results: Overall, 113 cirrhotic patients were enrolled. The prevalence of PVST was 16.8% (19/113). Splenectomy (any PVST: OR=11.494, 95%CI=2.152–61.395; MPV thrombosis >50%: OR=29.987, 95%CI=3.247–276.949; clinically significant PVST: OR=40.415, 95%CI=3.895–419.295) and higher hemoglobin (any PVST: OR=0.974, 95%CI=0.953–0.996; MPV thrombosis >50%: OR=0.936, 95%CI=0.895–0.980; clinically significant PVST: OR=0.935, 95%CI=0.891–0.982) were the independent predictors of PVST. The prevalence of PVST was 13.3% (14/105) after excluding splenectomy. Higher hemoglobin was the only independent predictor of MPV thrombosis >50% (OR=0.952, 95%CI=0.909–0.997). No independent predictors of any PVST or clinically significant PVST were identified in multivariate analyses. Additionally, PVST patients who underwent splenectomy had a significantly higher proportion of clinically significant PVST but a lower MELD score than those who did not undergo splenectomy. In all analyses, in-hospital mortality was not significantly different between cirrhotic patients with and without PVST. Conclusions: Splenectomy may increase the risk of PVST in liver cirrhosis at least 10-fold, independent of the severity of liver dysfunction. PMID:27432511

  11. Development and cross-validation of prediction equations for estimating resting energy expenditure in severely obese Caucasian children and adolescents.

    PubMed

    Lazzer, Stefano; Agosti, Fiorenza; De Col, Alessandra; Sartorio, Alessandro

    2006-11-01

    The objectives of the present study were to develop and cross-validate new equations for predicting resting energy expenditure (REE) in severely obese children and adolescents, and to determine the accuracy of the new equations using the Bland-Altman method. The subjects of the study were 574 obese Caucasian children and adolescents (mean BMI z-score 3.3). REE was determined by indirect calorimetry and body composition by bioelectrical impedance analysis. Equations were derived by stepwise multiple regression analysis using a calibration cohort of 287 subjects and the equations were cross-validated in the remaining 287 subjects. Two new specific equations based on anthropometric parameters were generated: (1) REE = (Sex × 892.68) - (Age × 115.93) + (Weight × 54.96) + (Stature × 1816.23) + 1484.50 (R² = 0.66; SE = 1028.97 kJ); (2) REE = (Sex × 909.12) - (Age × 107.48) + (fat-free mass × 68.39) + (fat mass × 55.19) + 3631.23 (R² = 0.66; SE = 1034.28 kJ). In the cross-validation group, mean predicted REE values were not significantly different from mean measured REE for all children and adolescents, as well as for boys and for girls (difference <2%), and the limits of agreement (±2 SD) were +2.06 and -1.77 MJ/d (NS). The new prediction equations allow an accurate estimation of REE in groups of severely obese children and adolescents. These equations might be useful for health care professionals and researchers when estimating REE in severely obese children and adolescents. PMID:17092390
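
    Equation (1) transcribes directly into code. Two details are not stated in the excerpt and are therefore assumptions here: the sex coding (taken as 1 = boy, 0 = girl) and the stature unit (taken as metres, which puts the output in a plausible kJ/day range).

      # Equation (1) from the abstract; sex coding and stature unit are assumed.
      def ree_kj_per_day(sex, age, weight_kg, stature_m):
          """Predicted resting energy expenditure (kJ/day), equation (1)."""
          return (sex * 892.68) - (age * 115.93) + (weight_kg * 54.96) \
                 + (stature_m * 1816.23) + 1484.50

      print(f"{ree_kj_per_day(sex=1, age=14, weight_kg=95, stature_m=1.65):.0f} kJ/day")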

  12. The Performance of Cross-Validation Indices Used to Select among Competing Covariance Structure Models under Multivariate Nonnormality Conditions

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Stapleton, Laura M.

    2006-01-01

    Cudeck and Browne (1983) proposed using cross-validation as a model selection technique in structural equation modeling. The purpose of this study is to examine the performance of eight cross-validation indices under conditions not yet examined in the relevant literature, such as nonnormality and cross-validation design. The performance of each…

  13. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging.

    PubMed

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J; Lu, Yang; Sellke, Eric W; Fan, Wensheng; DiMaio, J Michael; Thatcher, Jeffrey E

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm’s burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm’s accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities. PMID:26305321

  14. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J.; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm's burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
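
    The recipe is generic enough to sketch with standard tools (this is not the authors' multistage pipeline): flag training rows whose per-class z-scores are extreme, drop them, and compare 10-fold cross-validated accuracy before and after. For brevity the filter is applied to the whole set here; the study removed outliers from the training data only.

      # Z-score outlier removal followed by 10-fold cross-validation.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=500, n_features=8, random_state=0)
      X[::50] += 12                            # inject a few gross outliers

      def inlier_mask(X, y, thresh=3.0):
          keep = np.ones(len(X), dtype=bool)
          for c in np.unique(y):               # z-scores computed per class
              idx = np.where(y == c)[0]
              z = (X[idx] - X[idx].mean(0)) / X[idx].std(0)
              keep[idx] = np.all(np.abs(z) < thresh, axis=1)
          return keep

      clf = LogisticRegression(max_iter=1000)
      raw = cross_val_score(clf, X, y, cv=10).mean()
      m = inlier_mask(X, y)
      clean = cross_val_score(clf, X[m], y[m], cv=10).mean()
      print(f"with outliers: {raw:.2f}   after removal: {clean:.2f}")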

  15. Airborne environmental endotoxin: a cross-validation of sampling and analysis techniques.

    PubMed Central

    Walters, M; Milton, D; Larsson, L; Ford, T

    1994-01-01

    A standard method for measurement of airborne environmental endotoxin was developed and field-tested in a fiberglass insulation-manufacturing facility. This method involved sampling with a capillary-pore membrane filter, extraction in buffer using a sonication bath, and analysis by the kinetic-Limulus assay with resistant-parallel-line estimation (KLARE). Cross-validation of the extraction and assay method was performed by comparison with methanolysis of samples followed by 3-hydroxy fatty acid (3-OHFA) analysis by gas chromatography-mass spectrometry. Direct methanolysis of filter samples and methanolysis of buffer extracts of the filters yielded similar 3-OHFA content (P = 0.72); the average difference was 2.1%. Analysis of buffer extracts for endotoxin content by the KLARE method and by gas chromatography-mass spectrometry for 3-OHFA content produced similar results (P = 0.23); the average difference was 0.88%. The source of endotoxin was gram-negative bacteria growing in recycled washwater used to clean the insulation-manufacturing equipment. The endotoxin and bacteria become airborne during spray cleaning operations. The types of 3-OHFAs in bacteria cultured from the washwater, present in the washwater, and in the air were similar. Virtually all of the bacteria cultured from air and water were gram-negative, composed mostly of two species, Deleya aesta and Acinetobacter johnsonii. Airborne countable bacteria correlated well with endotoxin (r² = 0.64). Replicate sampling showed that results with the standard sampling, extraction, and Limulus assay by the KLARE method were highly reproducible (95% confidence interval for endotoxin measurement ±0.28 log10). These results demonstrate the accuracy, precision, and sensitivity of the standard procedure proposed for airborne environmental endotoxin. PMID:8161191

  16. Application of Discriminant Analysis and Cross-Validation on Proteomics Data.

    PubMed

    Kuligowski, Julia; Pérez-Guaita, David; Quintás, Guillermo

    2016-01-01

    High-throughput proteomic experiments have raised the importance and complexity of the bioinformatic analysis needed to extract useful information from raw data. Discriminant analysis is frequently used to identify differences among test groups of individuals or to describe combinations of discriminant variables. However, even in relatively large studies, the number of detected variables typically far exceeds the number of samples, and the classifiers should be thoroughly validated to assess their performance on new samples. Cross-validation is a widely used approach when an external validation set is not available. In this chapter, different approaches for cross-validation are presented, including relevant aspects that should be taken into account to avoid overly optimistic results, and the assessment of the statistical significance of cross-validated figures of merit. PMID:26519177

  17. Cross-validation of component models: a critical look at current methods.

    PubMed

    Bro, R; Kjeldahl, K; Smilde, A K; Kiers, H A L

    2008-03-01

    In regression, cross-validation is an effective and popular approach that is used to decide, for example, the number of underlying features, and to estimate the average prediction error. The basic principle of cross-validation is to leave out part of the data, build a model, and then predict the left-out samples. While such an approach can also be envisioned for component models such as principal component analysis (PCA), most current implementations do not comply with the essential requirement that the predictions should be independent of the entity being predicted. Further, these methods have not been properly reviewed in the literature. In this paper, we review the most commonly used generic PCA cross-validation schemes and assess how well they work in various scenarios. PMID:18214448
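
    The failure mode this review targets is easy to reproduce: in the naive scheme, a held-out row is projected onto the training components and reconstructed from its own values, so the apparent error keeps falling as components are added and never signals the correct rank. The synthetic rank-2 data below shows this; the corrected schemes the paper compares (which leave out individual entries rather than whole rows) are more involved and not reproduced here.

      # Naive row-wise PCA cross-validation: error decreases monotonically.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(8)
      X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 10))  # rank-2 signal
      X += 0.3 * rng.normal(size=X.shape)                       # plus noise

      for k in range(1, 6):
          errs = []
          for tr, te in KFold(5, shuffle=True, random_state=0).split(X):
              pca = PCA(n_components=k).fit(X[tr])
              recon = pca.inverse_transform(pca.transform(X[te]))
              errs.append(np.mean((X[te] - recon) ** 2))  # test rows predict themselves
          print(k, round(float(np.mean(errs)), 3))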

  18. Improving the Accuracy of Daily PM2.5 Distributions Derived from the Fusion of Ground-Level Measurements with Aerosol Optical Depth Observations, a Case Study in North China.

    PubMed

    Lv, Baolei; Hu, Yongtao; Chang, Howard H; Russell, Armistead G; Bai, Yuqi

    2016-05-01

    The accuracy of estimated fine particulate matter concentrations (PM2.5), obtained by fusing station-based measurements with satellite-based aerosol optical depth (AOD), is often reduced without accounting for the spatial and temporal variations in PM2.5 and missing AOD observations. In this study, a city-specific linear regression model was first developed to fill in missing AOD data. A novel interpolation-based variable, the PM2.5 spatial interpolator (PMSI2.5), was also introduced to account for the spatial dependence in PM2.5 across grid cells. A Bayesian hierarchical model was then developed to estimate spatiotemporal relationships between AOD and PM2.5. These methods were evaluated through a city-specific 10-fold cross-validation procedure in a case study in North China in 2014. The cross-validation R² was 0.61 when PMSI2.5 was included and 0.48 when PMSI2.5 was excluded. The gap-filled AOD values also effectively improved predicted PM2.5 concentrations, with an R² = 0.78. Daily ground-level PM2.5 concentration fields at a 12 km resolution were predicted with complete spatial and temporal coverage. This study also indicates that model prediction performance should be assessed by accounting for monitor clustering, because model accuracy in spatial prediction can be misinterpreted when validation monitors are randomly selected. PMID:27043852
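
    The closing caution about monitor clustering can be demonstrated in a few lines: when records from the same site appear in both training and test folds, a spatial interpolator looks far better under random folds than under site-grouped folds. Everything below (a k-nearest-neighbour interpolator on synthetic site data) is a toy stand-in for the Bayesian hierarchical model actually used.

      # Random 10-fold CV vs. site-grouped CV for a spatial interpolator.
      import numpy as np
      from sklearn.model_selection import GroupKFold, KFold, cross_val_score
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(5)
      n_sites, per_site = 30, 20
      site = np.repeat(np.arange(n_sites), per_site)
      coords = rng.uniform(0, 100, size=(n_sites, 2))[site]   # monitor locations
      pm25 = (rng.normal(0, 2, n_sites)[site]                 # shared site signal
              + rng.normal(0, 1, n_sites * per_site))         # plus daily noise

      knn = KNeighborsRegressor(n_neighbors=5)
      r2_random = cross_val_score(knn, coords, pm25, scoring="r2",
                                  cv=KFold(10, shuffle=True, random_state=0)).mean()
      r2_grouped = cross_val_score(knn, coords, pm25, scoring="r2",
                                   groups=site, cv=GroupKFold(10)).mean()
      print(f"random folds R2={r2_random:.2f}   site-held-out R2={r2_grouped:.2f}")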

  19. Development and cross-validation of prognostic models to assess the treatment effect of cisplatin/pemetrexed chemotherapy in lung adenocarcinoma patients.

    PubMed

    Mou, Wenjun; Liu, Zhaoqi; Luo, Yuan; Zou, Meng; Ren, Chao; Zhang, Chunyan; Wen, Xinyu; Wang, Yong; Tian, Yaping

    2014-09-01

    Better understanding of the treatment effect of cisplatin/pemetrexed chemotherapy on lung adenocarcinoma patients is needed to facilitate chemotherapy planning and patient care. In this retrospective study, we developed prognostic models by the cross-validation method using clinical and serum factors to predict outcomes of cisplatin/pemetrexed chemotherapy in lung adenocarcinoma patients. Lung adenocarcinoma patients admitted between 2008 and 2013 were enrolled. Twenty-nine serum parameters from laboratory tests and 14 clinical factors were analyzed to develop the prognostic models. First, stepwise selection and five-fold cross-validation were performed to identify candidate prognostic factors. Then a classification of all patients based on the number of metastatic sites resulted in four distinct subsets. In each subset, a prognostic model was fitted with the most accurate prognostic factors from the candidate prognostic factors. Categorical survival prediction was estimated using a log-rank test and visualized with the Kaplan-Meier method. 227 lung adenocarcinoma patients were enrolled. Twenty candidate prognostic factors evaluated using the five-fold cross-validation method were total protein, total bilirubin, direct bilirubin, creatine kinase, age, smoking index, neuron-specific enolase, bone metastasis, total triglyceride, albumin, gender, uric acid, CYFRA21-1, lymph node metastasis, liver metastasis, lactate dehydrogenase, CA153, peritoneal metastasis, CA125, and CA199. From these 20 candidate prognostic factors, the multivariate Cox proportional hazard model with the highest prognostic accuracy in each subset was identified by the stepwise forward selection method, which generated significant prognostic stratifications in Kaplan-Meier survival analyses (all log-rank p < 0.01). Generally, the prognostic models using five-fold cross-validation achieve a good prediction performance. The prognostic models can be administered safely to lung adenocarcinoma patients treated
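
    The selection loop at the core of this design is compact enough to sketch. Since scikit-learn has no Cox model, a logistic working model stands in below; the point is the structure: stepwise forward selection where every candidate addition is scored by five-fold cross-validation, stopping when no addition improves the score.

      # Stepwise forward selection scored by five-fold cross-validation.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=227, n_features=20, n_informative=5,
                                 random_state=0)

      selected, remaining, best_overall = [], list(range(X.shape[1])), -np.inf
      while remaining:
          scores = {j: cross_val_score(LogisticRegression(max_iter=1000),
                                       X[:, selected + [j]], y, cv=5).mean()
                    for j in remaining}
          j_best = max(scores, key=scores.get)
          if scores[j_best] <= best_overall:   # stop when no candidate helps
              break
          best_overall = scores[j_best]
          selected.append(j_best)
          remaining.remove(j_best)

      print("selected feature indices:", selected)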

  20. Cross-Validation and Extension of the MMPI-A IMM Scale.

    ERIC Educational Resources Information Center

    Zinn, Sandra; McCumber, Stacey; Dahlstrom, W. Grant

    1999-01-01

    Cross-validated the IMM scale of the Minnesota Multiphasic Personality Inventory-Adolescents (MMPI-A), a measure of ego level, with 151 college students. Means and standard deviations were obtained on the IMM scale from the MMPI-A and another MMPI version for males and females. (SLD)

  1. Cross-Validation of the Quick Word Test as an Estimator of Adult Mental Ability

    ERIC Educational Resources Information Center

    Grotelueschen, Arden; McQuarrie, Duncan

    1970-01-01

    This report provides additional evidence that the Quick Word Test (Level 2, Form AM) is valid for estimating adult mental ability as defined by the Wechsler Adult Intelligence Scale. The validation sample is also described to facilitate use of the conversion table developed in the cross-validation analysis. (Author/LY)

  2. Reliable Digit Span: A Systematic Review and Cross-Validation Study

    ERIC Educational Resources Information Center

    Schroeder, Ryan W.; Twumasi-Ankrah, Philip; Baade, Lyle E.; Marshall, Paul S.

    2012-01-01

    Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these…

  3. Cross Validation and Discriminative Analysis Techniques in a College Student Attrition Application.

    ERIC Educational Resources Information Center

    Smith, Alan D.

    1982-01-01

    Used a current attrition study to show the usefulness of discriminative analysis and a cross validation technique applied to student nonpersister questionnaire respondents and nonrespondents. Results of the techniques allowed delineation of several areas of sample under-representation and established the instability of the regression weights…

  4. Validity Evidence in Scale Development: The Application of Cross Validation and Classification-Sequencing Validation

    ERIC Educational Resources Information Center

    Acar, Tülin

    2014-01-01

    In the literature, it has been observed that many enhanced criteria are limited by factor analysis techniques. Besides examinations of statistical structure and/or psychological structure, validity studies such as cross validation and classification-sequencing studies should be performed frequently. The purpose of this study is to examine cross…

  5. Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2009-01-01

    In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…

  6. Cross-validation and calibration of Jackson-Pollock equations with DXA: the TIGER study

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Jackson-Pollock (J&P) body composition equations were developed primarily from data on white men and women using hydrostatically determined body density (BD) as the criterion measure. This study cross-validated the J&P equations with ethnically diverse subjects and percent fat (%fat) determined ...

  7. Cross-Validation of FITNESSGRAM® Health-Related Fitness Standards in Hungarian Youth

    ERIC Educational Resources Information Center

    Laurson, Kelly R.; Saint-Maurice, Pedro F.; Karsai, István; Csányi, Tamás

    2015-01-01

    Purpose: The purpose of this study was to cross-validate FITNESSGRAM® aerobic and body composition standards in a representative sample of Hungarian youth. Method: A nationally representative sample (N = 405) of Hungarian adolescents from the Hungarian National Youth Fitness Study (ages 12-18.9 years) participated in an aerobic capacity assessment…

  8. Methodology Review: Estimation of Population Validity and Cross-Validity, and the Use of Equal Weights in Prediction.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.

    1997-01-01

    This review finds that formula-based procedures can be used in place of empirical validation for estimating population validity or in place of empirical cross-validation for estimating population cross-validity. Discusses conditions under which the equal weights procedure is a viable alternative. (SLD)

  9. Cross-Validation of easyCBM Reading Cut Scores in Oregon: 2009-2010. Technical Report #1108

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    This technical report presents results from a cross-validation study designed to identify optimal cut scores when using easyCBM[R] reading tests in Oregon. The cross-validation study analyzes data from the 2009-2010 academic year for easyCBM[R] reading measures. A sample of approximately 2,000 students per grade, randomly split into two groups of…

  10. A cross-validation scheme for machine learning algorithms in shotgun proteomics

    PubMed Central

    2012-01-01

    Peptides are routinely identified from mass spectrometry-based proteomics experiments by matching observed spectra to peptides derived from protein databases. The error rates of these identifications can be estimated by target-decoy analysis, which involves matching spectra to shuffled or reversed peptides. Besides estimating error rates, decoy searches can be used by semi-supervised machine learning algorithms to increase the number of confidently identified peptides. As for all machine learning algorithms, however, the results must be validated to avoid issues such as overfitting or biased learning, which would produce unreliable peptide identifications. Here, we discuss how the target-decoy method is employed in machine learning for shotgun proteomics, focusing on how the results can be validated by cross-validation, a frequently used validation scheme in machine learning. We also use simulated data to demonstrate the proposed cross-validation scheme's ability to detect overfitting. PMID:23176259
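
    The essential constraint is that no spectrum is rescored by a model that saw it during training. Below is a schematic version with made-up features, in the spirit of the three-fold setup used by tools such as Percolator: a classifier separating targets from decoys is trained on two folds and only ever scores the held-out third.

      # Out-of-fold rescoring of target/decoy matches.
      import numpy as np
      from sklearn.model_selection import KFold
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(6)
      n = 300
      is_target = rng.random(n) < 0.5               # target vs decoy labels
      features = rng.normal(size=(n, 5)) + is_target[:, None] * 0.8

      rescored = np.empty(n)
      for tr, te in KFold(n_splits=3, shuffle=True, random_state=0).split(features):
          clf = LinearSVC().fit(features[tr], is_target[tr])
          rescored[te] = clf.decision_function(features[te])  # held-out scores only
      print(rescored[:5].round(2))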

  11. Cross-validation of interferometric synthetic aperture microscopy and optical coherence tomography.

    PubMed

    Ralston, Tyler S; Adie, Steven G; Marks, Daniel L; Boppart, Stephen A; Carney, P Scott

    2010-05-15

    Computationally reconstructed interferometric synthetic aperture microscopy (ISAM) is coregistered with optical coherence tomography (OCT) focal plane data to provide quantitative cross validation with OCT. This is accomplished through a qualitative comparison of images and a quantitative analysis of the width of the point-spread function in simulation and experiment. The width of the ISAM point-spread function is seen to be independent of depth, in contrast to OCT. PMID:20479849

  12. A universal approximate cross-validation criterion for regular risk functions.

    PubMed

    Commenges, Daniel; Proust-Lima, Cécile; Samieri, Cécilia; Liquet, Benoit

    2015-05-01

    Selection of estimators is an essential task in modeling. A general framework is that the estimators of a distribution are obtained by minimizing a function (the estimating function) and assessed using another function (the assessment function). A classical case is that both functions estimate an information risk (specifically cross-entropy); this corresponds to using maximum likelihood estimators and assessing them by the Akaike information criterion (AIC). In more general cases, the assessment risk can be estimated by leave-one-out cross-validation. Since leave-one-out cross-validation is computationally very demanding, we propose in this paper a universal approximate cross-validation criterion under regularity conditions (UACVR). This criterion can be adapted to different types of estimators, including penalized likelihood and maximum a posteriori estimators, and also to different assessment risk functions, including information risk functions and the continuous rank probability score (CRPS). UACVR reduces to the Takeuchi information criterion (TIC) when cross-entropy is the risk for both estimation and assessment. We provide the asymptotic distributions of UACVR and of a difference of UACVR values for two estimators. We validate UACVR using simulations and provide an illustration on real data in the psychometric context, where estimators of the distributions of ordered categorical data derived from threshold models and from models based on continuous approximations are compared. PMID:25849800
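
    In the abstract's two-function framework, the leave-one-out criterion that UACVR approximates can be written as below, where g is the estimating function and a the assessment function; this transcription is ours, following the abstract, not the paper's notation.

      % Leave-one-out cross-validation in the estimating/assessment framework
      \mathrm{CV} = \frac{1}{n}\sum_{i=1}^{n} a\!\left(\hat{\theta}_{-i},\, X_i\right),
      \qquad
      \hat{\theta}_{-i} = \arg\min_{\theta} \sum_{j \neq i} g(\theta, X_j)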

  13. Novel method for quantifying radiation-induced single-strand-break yields in plasmid DNA highlights 10-fold discrepancy.

    PubMed

    Balagurumoorthy, Pichumani; Adelstein, S James; Kassis, Amin I

    2011-10-15

    The widely used agarose gel electrophoresis method for assessing radiation-induced single-strand-break (SSB) yield in plasmid DNA involves measurement of the fraction of relaxed-circular (C) form that migrates independently from the intact supercoiled (SC) form. We rationalized that this method may underestimate the SSB yield, since the position of the relaxed-circular form is not altered when the number of SSBs per DNA molecule is >1. To overcome this limitation, we have developed a novel method that directly probes and quantifies SSBs. Supercoiled ³H-pUC19 plasmid samples were irradiated with γ-rays, alkali-denatured, dephosphorylated, and kinated with γ-[³²P]ATP, and the DNA-incorporated ³²P activities were used to quantify the SSB yields per DNA molecule, employing a standard curve generated using DNA molecules containing a known number of SSBs. The same irradiated samples were analyzed by agarose gel and SSB yields were determined by conventional methods. Comparison of the data demonstrated that the mean SSB yield per plasmid DNA molecule of (21.2 ± 0.59) × 10⁻² Gy⁻¹ as measured by direct probing is ~10-fold higher than that obtained from conventional gel-based methods. These findings imply that the SSB yields inferred from agarose gels need reevaluation, especially where they have been used in the determination of radiation risk. PMID:21741945

  14. Remote sensing and GIS-based landslide hazard analysis and cross-validation using multivariate logistic regression model on three test areas in Malaysia

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet

    2010-05-01

    This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis in the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map land cover and vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, land cover, rainfall precipitation, and normalized difference vegetation index (NDVI), were extracted from the spatial database, and the logistic regression coefficient of each factor was computed. The landslide hazard was then analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also using the logistic regression coefficients calculated from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification of the model, the results of the analyses were compared with the field-verified landslide locations. Among the three cases of applying the logistic regression coefficients in the same study area, the case of Selangor based on the Selangor logistic regression coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross application of logistic regression coefficients to the other two areas, the case of Selangor based on the logistic coefficients of Cameron showed the highest prediction accuracy (90%), whereas the case of Penang based on the Selangor logistic regression coefficients showed the lowest accuracy (79%). Qualitatively, the cross
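
    The cross-application step is simple to express in code: coefficients are fitted on one area's factor grid and then scored, unchanged, on another area. The sketch below uses synthetic stand-ins for the ten map factors; the area names are kept only to mirror the study design.

      # Fit logistic hazard coefficients on one area, apply to another.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)

      def make_area(shift):
          X = rng.normal(shift, 1.0, size=(500, 10))   # slope, aspect, NDVI, ...
          logit = X @ np.linspace(1.0, -1.0, 10)
          y = rng.random(500) < 1.0 / (1.0 + np.exp(-logit))
          return X, y.astype(int)

      X_selangor, y_selangor = make_area(0.0)
      X_penang, y_penang = make_area(0.3)              # related but shifted area

      model = LogisticRegression(max_iter=1000).fit(X_selangor, y_selangor)
      print("same-area accuracy: ", round(model.score(X_selangor, y_selangor), 2))
      print("cross-area accuracy:", round(model.score(X_penang, y_penang), 2))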

  15. Burn injury diagnostic imaging device's accuracy improved by outlier detection and removal

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Lu, Yang; Squiers, John J.; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffery E.

    2015-05-01

    Multispectral imaging (MSI) was implemented to develop a burn diagnostic device that will assist burn surgeons in planning and performing burn debridement surgery by classifying burn tissue. In order to build a burn classification model, training data that accurately represent the burn tissue are needed. Acquiring accurate training data is difficult, in part because labeling raw MSI data with the appropriate tissue classes is prone to errors. We hypothesized that these difficulties could be surmounted by removing outliers from the training dataset, leading to an improvement in classification accuracy. A swine burn model was developed to build an initial MSI training database and study an algorithm's ability to classify clinically important tissues present in a burn injury. Once the ground-truth database was generated from the swine images, we developed a multi-stage method based on a Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross-validation, we compared the algorithm's accuracy when trained with and without the outliers. The outlier detection and removal method reduced the variance of the training data in wavelength space, and test accuracy improved from 63% to 76%. This simple conditioning of the training data improved the accuracy of the algorithm to match the current standard of care in burn injury assessment. Given that there are few burn surgeons and burn care facilities in the United States, this technology is expected to improve the standard of burn care for patients with less access to specialized facilities.
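
    The following is a hedged sketch of the conditioning step, under the assumption (not stated in the abstract) that a per-class z-score threshold is what the Z-test stage amounts to; the spectra X and tissue labels y are simulated, and the paper's exact multi-stage procedure is not reproduced.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        def remove_outliers(X, y, z_max=3.0):
            # Drop any sample with an extreme z-score in some band, per class
            keep = np.ones(len(y), dtype=bool)
            for c in np.unique(y):
                idx = np.where(y == c)[0]
                z = np.abs((X[idx] - X[idx].mean(0)) / (X[idx].std(0) + 1e-12))
                keep[idx] = z.max(axis=1) <= z_max
            return X[keep], y[keep]

        rng = np.random.default_rng(1)
        y = rng.integers(0, 3, 400)                       # three tissue classes
        X = rng.normal(size=(400, 8)) + 0.8 * y[:, None]  # eight spectral bands
        bad = rng.choice(400, 40, replace=False)
        X[bad] += rng.normal(0, 8, size=(40, 8))          # corrupt 10% of the spectra

        clf = LinearDiscriminantAnalysis()
        print("raw 10-fold accuracy:     ", cross_val_score(clf, X, y, cv=10).mean())
        Xc, yc = remove_outliers(X, y)
        print("filtered 10-fold accuracy:", cross_val_score(clf, Xc, yc, cv=10).mean())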

  16. The German Bight: Preparing for Sentinel-3 with a Cross Validation of SAR and PLRM CryoSat-2 Altimeter Data

    NASA Astrophysics Data System (ADS)

    Fenoglio-Marc, L.; Buchhaupt, C.; Dinardo, S.; Scharroo, R.; Benveniste, J.; Becker, M.

    2015-12-01

    As preparatory work for Sentinel-3, we retrieve three geophysical parameters: sea surface height (SSH), significant wave height (SWH) and wind speed at 10 m height (U10) from CryoSat-2 data in our validation region in the North Sea. The CryoSat-2 SAR echoes are processed with a coherent and an incoherent processing scheme to generate SAR and PLRM data, respectively. We derive precision and accuracy at 1 Hz in the open ocean, at distances larger than 10 kilometres from the coast. A cross-validation of the SAR and PLRM altimeter data is performed to investigate the differences between the products. Look-Up Tables (LUT) are applied in both schemes to correct for approximations made in the two retracking procedures. Additionally, a numerical retracker is used in PLRM. The results are validated against in-situ and model data. The analysis is performed for a period of four years, from July 2010 to May 2014. The regional cross-validation analysis confirms the good consistency between PLRM and SAR data. With the LUTs, the agreement for significant wave height improves by 10%.

  17. Variational cross-validation of slow dynamical modes in molecular kinetics

    PubMed Central

    Pande, Vijay S.

    2015-01-01

    Markov state models are a widely used method for approximating the eigenspectrum of the molecular dynamics propagator, yielding insight into the long-timescale statistical kinetics and slow dynamical modes of biomolecular systems. However, the lack of a unified theoretical framework for choosing between alternative models has hampered progress, especially for non-experts applying these methods to novel biological systems. Here, we consider cross-validation with a new objective function for estimators of these slow dynamical modes, a generalized matrix Rayleigh quotient (GMRQ), which measures the ability of a rank-m projection operator to capture the slow subspace of the system. It is shown that a variational theorem bounds the GMRQ from above by the sum of the first m eigenvalues of the system's propagator, but that this bound can be violated when the requisite matrix elements are estimated subject to statistical uncertainty. This overfitting can be detected and avoided through cross-validation. These results make it possible to construct Markov state models for protein dynamics in a way that appropriately captures the tradeoff between systematic and statistical errors. PMID:25833563
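
    A bare-bones numpy illustration of the GMRQ idea, not the authors' implementation: the top-m generalized eigenvectors are estimated on a training split, and the spanned subspace is scored on a held-out split via R(A) = tr[(AᵀCA)(AᵀSA)⁻¹], where C is a time-lagged correlation matrix and S the instantaneous overlap matrix. The trajectory is synthetic.

        import numpy as np
        from scipy.linalg import eigh

        def corr_matrices(traj, lag=1):
            a, b = traj[:-lag], traj[lag:]
            S = (a.T @ a + b.T @ b) / (2 * len(a))   # overlap (instantaneous) matrix
            C = (a.T @ b + b.T @ a) / (2 * len(a))   # symmetrized lagged correlation
            return C, S

        def gmrq(A, C, S):
            return np.trace(A.T @ C @ A @ np.linalg.inv(A.T @ S @ A))

        rng = np.random.default_rng(2)
        traj = rng.normal(size=(5000, 6)).cumsum(axis=0) * 0.01 + rng.normal(size=(5000, 6))
        train, test = traj[:2500], traj[2500:]

        C_tr, S_tr = corr_matrices(train)
        C_te, S_te = corr_matrices(test)
        m = 3
        vals, vecs = eigh(C_tr, S_tr)                # generalized eigenproblem
        A = vecs[:, np.argsort(vals)[::-1][:m]]      # top-m estimated slow modes
        print("train GMRQ:", gmrq(A, C_tr, S_tr))
        print("test  GMRQ:", gmrq(A, C_te, S_te))    # a large drop signals overfitting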

  18. Predicting IQ change from brain structure: A cross-validation study

    PubMed Central

    Price, C.J.; Ramsden, S.; Hope, T.M.H.; Friston, K.J.; Seghier, M.L.

    2013-01-01

    Procedures that can predict cognitive abilities from brain imaging data are potentially relevant to educational assessments and studies of functional anatomy in the developing brain. Our aim in this work was to quantify the degree to which IQ change in the teenage years could be predicted from structural brain changes. Two well-known k-fold cross-validation analyses were applied to data acquired from 33 healthy teenagers – each tested at Time 1 and Time 2 with a 3.5 year interval. One approach, a Leave-One-Out procedure, predicted IQ change for each subject on the basis of structural change in a brain region that was identified from all other subjects (i.e., independent data). This approach predicted 53% of verbal IQ change and 14% of performance IQ change. The other approach used half the sample, to identify regions for predicting IQ change in the other half (i.e., a Split half approach); however – unlike the Leave-One-Out procedure – regions identified using half the sample were not significant. We discuss how these out-of-sample estimates compare to in-sample estimates; and draw some recommendations for k-fold cross-validation procedures when dealing with small datasets that are typical in the neuroimaging literature. PMID:23567505
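
    As a sketch of why the Leave-One-Out scheme gives honest estimates, the snippet below re-selects the predictive brain region inside every fold, using only the n-1 training subjects; all data are simulated and the feature names are hypothetical, not the study's imaging data.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        n, n_regions = 33, 50
        X = rng.normal(size=(n, n_regions))        # structural change per region
        y = 2.0 * X[:, 7] + rng.normal(size=n)     # IQ change driven by one region

        preds = np.empty(n)
        for i in range(n):
            tr = np.delete(np.arange(n), i)
            # region most correlated with IQ change, chosen from training subjects only
            r = [np.corrcoef(X[tr, j], y[tr])[0, 1] for j in range(n_regions)]
            best = int(np.argmax(np.abs(r)))
            model = LinearRegression().fit(X[tr, best:best + 1], y[tr])
            preds[i] = model.predict(X[i, best:best + 1].reshape(1, -1))[0]

        print("out-of-sample variance explained:", np.corrcoef(preds, y)[0, 1] ** 2)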

  19. Cross-validation analysis of bias models in Bayesian multi-model projections of climate

    NASA Astrophysics Data System (ADS)

    Huttunen, J. M. J.; Räisänen, J.; Nissinen, A.; Lipponen, A.; Kolehmainen, V.

    2016-05-01

    Climate change projections are commonly based on multi-model ensembles of climate simulations. In this paper we consider the choice of bias models in Bayesian multi-model predictions. Buser et al. (Clim Res 44(2-3):227-241, 2010a) introduced a hybrid bias model which combines the commonly used constant-bias and constant-relation bias assumptions. The hybrid model includes a weighting parameter which balances these two bias models. In this study, we use a cross-validation approach to study which bias model or bias parameter leads to, in a specific sense, optimal climate change projections. The analysis is carried out for summer and winter season means of 2 m temperatures spatially averaged over the IPCC SREX regions, using 19 model runs from the CMIP5 data set. The cross-validation approach is applied to calculate optimal bias parameters (in the specific sense) for projecting the temperature change from the control period (1961-2005) to the scenario period (2046-2090). The results are compared to those of the Buser et al. (2010a) method, which includes the bias parameter as one of the unknown parameters to be estimated from the data.

  20. A cross-validation of two differing measures of hypnotic depth.

    PubMed

    Pekala, Ronald J; Maurer, Ronald L

    2013-01-01

    Several sets of regression analyses were completed, attempting to predict two measures of hypnotic depth: the self-reported hypnotic depth score and the hypnoidal state score from variables of the Phenomenology of Consciousness Inventory: Hypnotic Assessment Procedure (PCI-HAP). When attempting to predict self-reported hypnotic depth, an R of .78 with Study 1 participants shrank to an r of .72 with Study 2 participants, suggesting mild shrinkage for this more attributional measure of hypnotic depth. Attempting to predict hypnoidal state (an estimate of trance) using the same procedure yielded an R of .56, which upon cross-validation shrank to an r of .48. These and other results suggest that, although there is some variance in common, the self-reported hypnotic depth score appears to be tapping a different construct from the hypnoidal state score. PMID:23153387

  1. Statistical analysis of GeneMark performance by cross-validation.

    PubMed

    Kleffe, J; Hermann, K; Borodovsky, M

    1996-03-01

    We have explored the performance of the GeneMark gene identification method using cross-validation over learning samples of E. coli DNA sequences. The computations gave more accurate estimates of the error rates than previous results, in which the sample of non-coding regions had been derived from GenBank sequences with many true coding regions left unannotated. The error rate components have been classified and delineated. It was shown that the method performs differently on class I, II and III genes. The most frequent errors come from misinterpreting the coding potential of the complementary sequence in the same frame. The effects of stop codons present in alternative frames were also studied to better understand the main factors contributing to GeneMark performance. PMID:16749185

  2. Error criteria for cross validation in the context of chaotic time series prediction

    NASA Astrophysics Data System (ADS)

    Lim, Teck Por; Puthusserypady, Sadasivan

    2006-03-01

    The prediction of a chaotic time series over a long horizon is commonly done by iterating one-step-ahead predictions. Prediction can be implemented using machine learning methods, such as radial basis function networks. Typically, cross validation is used to select prediction models based on mean squared error. The bias-variance dilemma dictates that there is an inevitable tradeoff between bias and variance. However, invariants of chaotic systems are unchanged by linear transformations; thus, the bias component may be irrelevant to model selection in the context of chaotic time series prediction. Hence, the use of error variance for model selection, instead of mean squared error, is examined. Clipping is introduced as a simple way to stabilize iterated predictions. It is shown that using the error variance for model selection, in combination with clipping, may result in better models.
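
    A toy illustration of the two ingredients discussed above, under simplifying assumptions (a logistic map stands in for the chaotic series, and the candidate model differs from the truth only by a constant bias): the bias inflates the mean squared error but leaves the error variance untouched, and clipping keeps iterated predictions inside the map's invariant interval.

        import numpy as np

        logistic = lambda x: 3.9 * x * (1 - x)         # "true" chaotic map
        biased = lambda x: 3.9 * x * (1 - x) + 0.02    # candidate model, constant bias

        x = np.empty(500); x[0] = 0.3
        for t in range(499):
            x[t + 1] = logistic(x[t])

        err = biased(x[:-1]) - x[1:]                   # one-step-ahead errors
        print("MSE:           ", np.mean(err ** 2))    # penalizes the (irrelevant) bias
        print("error variance:", np.var(err))          # ignores it

        # Iterated prediction, clipped to [0, 1] so the trajectory cannot diverge
        pred = np.empty(500); pred[0] = 0.3
        for t in range(499):
            pred[t + 1] = np.clip(biased(pred[t]), 0.0, 1.0)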

  3. Multisample cross-validation of a model of childhood posttraumatic stress disorder symptomatology.

    PubMed

    Anthony, Jason L; Lonigan, Christopher J; Vernberg, Eric M; Greca, Annette M La; Silverman, Wendy K; Prinstein, Mitchell J

    2005-12-01

    This study is the latest advancement of our research aimed at best characterizing children's posttraumatic stress reactions. In a previous study, we compared existing nosologic and empirical models of PTSD dimensionality and determined the superior model was a hierarchical one with three symptom clusters (Intrusion/Active Avoidance, Numbing/Passive Avoidance, and Arousal; Anthony, Lonigan, & Hecht, 1999). In this study, we cross-validate this model in two populations. Participants were 396 fifth graders who were exposed to either Hurricane Andrew or Hurricane Hugo. Multisample confirmatory factor analysis demonstrated the model's factorial invariance across populations who experienced traumatic events that differed in severity. These results show the model's robustness to characterize children's posttraumatic stress reactions. Implications for diagnosis, classification criteria, and an empirically supported theory of PTSD are discussed. PMID:16382435

  4. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method. PMID:18255545
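
    Below is a minimal numpy sketch of GCV for a Tikhonov-regularized deblurring problem; it is not the Lanczos/Gauss-quadrature accelerated version the paper proposes, just the quantity that version approximates, GCV(λ) = ||(I - A(λ))y||² / tr(I - A(λ))², evaluated cheaply from the SVD of a synthetic blurring matrix H.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 64
        i = np.arange(n)
        H = np.exp(-0.5 * ((i[:, None] - i) / 2.0) ** 2)   # Gaussian blur matrix (PSF)
        x_true = (i > n // 2).astype(float)                # sharp edge to recover
        y = H @ x_true + 0.01 * rng.normal(size=n)

        U, s, Vt = np.linalg.svd(H)
        b = U.T @ y

        def gcv(lam):
            f = s**2 / (s**2 + lam)                        # Tikhonov filter factors
            resid = np.sum(((1 - f) * b) ** 2)             # ||(I - A(lam)) y||^2
            return resid / (n - f.sum()) ** 2              # / tr(I - A(lam))^2

        lams = np.logspace(-8, 0, 50)
        print("GCV-optimal lambda:", lams[np.argmin([gcv(l) for l in lams])])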

  5. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, the Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and the parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and the observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation, and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
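
    For the computationally efficient end of the spectrum, a small sketch of how the information criteria rank alternative models from their residuals alone; the model names, residual sums of squares, and parameter counts below are made up for illustration.

        import numpy as np

        def aicc_bic(sse, n, k):
            # Gaussian log-likelihood up to an additive constant; k = fitted parameters
            aic = n * np.log(sse / n) + 2 * k
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)     # small-sample correction
            bic = n * np.log(sse / n) + k * np.log(n)
            return aicc, bic

        # Hypothetical alternative hydraulic-conductivity zonations, n = 100 observations
        for name, sse, k in [("1 K zone ", 42.0, 3),
                             ("3 K zones", 30.5, 5),
                             ("5 K zones", 29.8, 7)]:
            aicc, bic = aicc_bic(sse, 100, k)
            print(f"{name}: AICc = {aicc:6.1f}, BIC = {bic:6.1f}")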

  6. Evaluating Processes, Parameters and Observations Using Cross Validation and Computationally Frugal Sensitivity Analysis Methods

    NASA Astrophysics Data System (ADS)

    Foglia, L.; Mehl, S.; Hill, M. C.

    2013-12-01

    Sensitivity analysis methods are used to identify the measurements most likely to provide important information for model development and predictions, and therefore to identify critical processes. Methods range from computationally demanding Monte Carlo and cross-validation methods to very computationally efficient linear methods. The methods are able to account for interrelations between parameters, but some argue that because linear methods neglect the effects of model nonlinearity, they are not worth considering when examining complex, nonlinear models of environmental systems. However, when faced with computationally demanding models needed to simulate, for example, climate change, the chance of obtaining fundamental insights (such as important parameters and relationships between predictions and parameters) with few model runs is tempting. In the first part of this work, comparisons of local sensitivity analysis and cross-validation are conducted using a nonlinear groundwater model of the Maggia Valley, Southern Switzerland; sensitivity analyses are then applied to an integrated hydrological model of the same system, where the impact of more processes and of using different sets of observations on the model results is considered; applicability to models of a variety of situations (climate, water quality, water management) is inferred. Results show that the frugal linear methods produced about 70% of the insight from about 2% of the model runs required by the computationally demanding methods. Regarding important observations, linear methods were not always able to distinguish between moderately important and unimportant observations. However, they consistently identified the most important observations, which are critical to characterize relationships between parameters and to assess the worth of potential new data collection efforts. Importance both for estimating parameters and for predictions of interest was readily identified. The results suggest that it can be advantageous to consider local

  7. Fit-for-purpose bioanalytical cross-validation for LC-MS/MS assays in clinical studies.

    PubMed

    Xu, Xiaohui; Ji, Qin C; Jemal, Mohammed; Gleason, Carol; Shen, Jim X; Stouffer, Bruce; Arnold, Mark E

    2013-01-01

    The paradigm shift of globalized research and conducting clinical studies at different geographic locations worldwide to access broader patient populations has resulted in an increased need to correlate bioanalytical results generated in multiple laboratories, often across national borders. Cross-validations of bioanalytical methods are often implemented to demonstrate that the bioanalytical results are equivalent. Regulatory agencies, such as the US FDA and the European Medicines Agency, have included the requirement of cross-validations in their respective bioanalytical validation guidance and guidelines. While those documents provide high-level expectations, the detailed implementation is at the discretion of each individual organization. At Bristol-Myers Squibb, we practice a fit-for-purpose approach for conducting cross-validations for small-molecule bioanalytical methods using LC-MS/MS. A step-by-step proposal on the overall strategy, procedures and technical details for conducting a successful cross-validation is presented herein. A case study utilizing the proposed cross-validation approach to rule out method variability as the potential cause of the high variance observed in PK studies is also presented. PMID:23256474

  8. Credible Intervals for Precision and Recall Based on a K-Fold Cross-Validated Beta Distribution.

    PubMed

    Wang, Yu; Li, Jihong

    2016-08-01

    In typical machine learning applications such as information retrieval, precision and recall are two commonly used measures for assessing an algorithm's performance. Symmetrical confidence intervals based on K-fold cross-validated t distributions are widely used for the inference of precision and recall measures. As we confirmed through simulated experiments, however, these confidence intervals often exhibit lower degrees of confidence, which may easily lead to liberal inference results. Thus, it is crucial to construct faithful confidence (credible) intervals for precision and recall with a high degree of confidence and a short interval length. In this study, we propose two posterior credible intervals for precision and recall based on K-fold cross-validated beta distributions. The first credible interval for precision (or recall) is constructed based on the beta posterior distribution inferred by all K data sets corresponding to K confusion matrices from a K-fold cross-validation. Second, considering that each data set corresponding to a confusion matrix from a K-fold cross-validation can be used to infer a beta posterior distribution of precision (or recall), the second proposed credible interval for precision (or recall) is constructed based on the average of K beta posterior distributions. Experimental results on simulated and real data sets demonstrate that the first credible interval proposed in this study almost always resulted in degrees of confidence greater than 95%. With an acceptable degree of confidence, both of our two proposed credible intervals have shorter interval lengths than those based on a corrected K-fold cross-validated t distribution. Meanwhile, the average ranks of these two credible intervals are superior to that of the confidence interval based on a K-fold cross-validated t distribution for the degree of confidence and are superior to that of the confidence interval based on a corrected K-fold cross-validated t distribution for the
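
    One simple reading of the first proposed interval, sketched below with scipy and simulated fold counts: pool the true-positive and false-positive counts from the K confusion matrices and read a 95% credible interval for precision off the resulting Beta posterior. A uniform Beta(1, 1) prior is assumed here; the paper's exact construction may differ.

        import numpy as np
        from scipy.stats import beta

        rng = np.random.default_rng(5)
        tp = rng.integers(40, 60, size=10)   # true positives per fold (hypothetical)
        fp = rng.integers(5, 15, size=10)    # false positives per fold

        a, b = 1 + tp.sum(), 1 + fp.sum()    # Beta posterior for precision
        lo, hi = beta.ppf([0.025, 0.975], a, b)
        point = tp.sum() / (tp.sum() + fp.sum())
        print(f"precision = {point:.3f}, 95% credible interval [{lo:.3f}, {hi:.3f}]")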

  9. Inversion of velocity map ion images using iterative regularization and cross validation

    NASA Astrophysics Data System (ADS)

    Renth, F.; Riedel, J.; Temps, F.

    2006-03-01

    Two methods for improved inversion of velocity map images are presented. Both schemes use two-dimensional basis functions to perform the iteratively regularized inversion of the imaging equation in matrix form. The quality of the reconstructions is improved by taking into account constraints derived from prior knowledge about the experimental data, such as non-negativity and noise statistics, using (i) the projected Landweber [Am. J. Math. 73, 615 (1951)] and (ii) the Richardson-Lucy [J. Opt. Soc. Am. 62, 55 (1972); Astron. J. 79, 745 (1974)] algorithms. It is shown that the optimum iteration count, which plays the role of a regularization parameter, can be determined by partitioning the image into quarters or halves and a subsequent cross validation of the inversion results. The methods are tested with various synthetic velocity map images and with velocity map images of the H-atom fragments produced in the photodissociation of HBr at λ = 243.1 nm using a (2+1) resonantly enhanced multiphoton ionization (REMPI) detection scheme. The versatility of the method, which is determined only by the choice of basis functions, is exploited to take into account the photoelectron recoil that leads to a splitting and broadening of the velocity distribution in the two product channels, and to successfully reconstruct the deconvolved velocity distribution. The methods can also be applied to cases where higher-order terms in the Legendre expansion of the angular distribution are present.

  10. Some psychometric properties of the Chinese version of the Modified Dental Anxiety Scale with cross validation

    PubMed Central

    Yuan, Siyang; Freeman, Ruth; Lahti, Satu; Lloyd-Williams, Ffion; Humphris, Gerry

    2008-01-01

    Objective To assess the factorial structure and construct validity of the Chinese version of the Modified Dental Anxiety Scale (MDAS). Materials and methods A cross-sectional survey was conducted in March 2006 among adults in the Beijing area. The questionnaire consisted of sections assessing participants' demographic profile and dental attendance patterns, the Chinese MDAS, and the anxiety items from the Hospital Anxiety and Depression Scale (HADS). The analysis was conducted in two stages using confirmatory factor analysis and structural equation modelling. Cross validation was tested with a comparison sample from the North West of England. Results A total of 783 questionnaires were successfully completed in Beijing and 468 in England. The Chinese MDAS consisted of two factors: anticipatory dental anxiety (ADA) and treatment dental anxiety (TDA). Internal consistency coefficients (tau non-equivalent) were 0.74 and 0.86, respectively. Measurement properties were virtually identical for male and female respondents. Relationships of the Chinese MDAS with gender, age and dental attendance supported predictions. Significant structural parameters between the two sub-scales (negative affectivity and autonomic anxiety) of the HADS anxiety items and the two newly identified factors of the MDAS were confirmed and duplicated in the comparison sample. Conclusion The Chinese version of the MDAS has good psychometric properties and the ability to assess, briefly, overall dental anxiety and two correlated but distinct aspects. PMID:18364045

  11. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    SciTech Connect

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards; New, Joshua Ryan

    2013-01-01

    Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on the computational and memory requirements of working with a large kernel matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n³) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real-world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
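
    For context, the exact LOO shortcut that such approximations accelerate further can be written in a few lines: for kernel ridge regression (and LS-SVM up to the bias term), the leave-one-out residual is the training residual divided by 1 - h_ii, with h_ii the hat-matrix diagonal, so no model is ever refit. The data below are simulated and a plain linear kernel is assumed.

        import numpy as np

        rng = np.random.default_rng(6)
        X = rng.normal(size=(200, 5))
        y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

        def loo_mse(X, y, lam):
            K = X @ X.T                                        # linear kernel matrix
            Hat = K @ np.linalg.inv(K + lam * np.eye(len(y)))  # hat matrix of KRR
            resid = y - Hat @ y
            return np.mean((resid / (1 - np.diag(Hat))) ** 2)  # exact LOO identity

        for lam in [0.01, 0.1, 1.0, 10.0]:
            print(f"lambda = {lam:5}: LOO MSE = {loo_mse(X, y, lam):.4f}")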

  12. A Cross-Validation of easyCBM[R] Mathematics Cut Scores in Oregon: 2009-2010. Technical Report #1104

    ERIC Educational Resources Information Center

    Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    In this technical report, we document the results of a cross-validation study designed to identify optimal cut-scores for the use of the easyCBM[R] mathematics test in Oregon. A large sample, randomly split into two groups of roughly equal size, was used for this study. Students' performance classification on the Oregon state test was used as the…

  13. Cross-Validating Measures of Technology Integration: A First Step toward Examining Potential Relationships between Technology Integration and Student Achievement

    ERIC Educational Resources Information Center

    Hancock, Robert; Knezek, Gerald; Christensen, Rhonda

    2007-01-01

    The use of proper measurements of diffusion of information technology as an innovation are essential to determining if progress is being made in state, regional, and national level programs. This project provides a national level cross validation study of several instruments commonly used to assess the effectiveness of technology integration in…

  14. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

  15. The brief cognitive assessment tool (BCAT): cross-validation in a community dwelling older adult sample.

    PubMed

    MacDougall, Elizabeth E; Mansbach, William E; Clark, Kristen; Mace, Ryan A

    2014-08-13

    Background: Cognitive impairment is underrecognized and misdiagnosed among community-dwelling older adults. At present, there is no consensus about which cognitive screening tool represents the "gold standard." However, one tool that shows promise is the Brief Cognitive Assessment Tool (BCAT), which was originally validated in an assisted living sample and contains a multi-level memory component (e.g. word lists and story recall items) and complex executive functions features (e.g. judgment, set-shifting, and problem-solving). Methods: The present study cross-validated the BCAT in a sample of 75 community-dwelling older adults. Participants completed a short battery of several individually administered cognitive tests, including the BCAT and the Montreal Cognitive Assessment (MoCA). Using a very conservative MoCA cut score of <26, the base rate of cognitive impairment in this sample was 35%. Results: Adequate internal consistency and strong evidence of construct validity were found. A receiver operating characteristic (ROC) curve was calculated from sensitivity and 1-specificity values for the classification of cognitively impaired versus cognitively unimpaired. The area under the ROC curve (AUC) for the BCAT was .90, p < 0.001, 95% CI [0.83, 0.97]. A BCAT cut-score of 45 (scores below 45 suggesting cognitive impairment) resulted in the best balance between sensitivity (0.81) and specificity (0.80). Conclusions: A BCAT cut-score can be used for identifying persons to be referred to appropriate healthcare professionals for more comprehensive cognitive assessment. In addition, guidelines are provided for clinicians to interpret separate BCAT memory and executive dysfunction component scores. PMID:25115580
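
    A hedged sketch of how such a cut-score falls out of the ROC analysis, using simulated scores and labels rather than BCAT data: the threshold maximizing Youden's J (sensitivity + specificity - 1) balances sensitivity and specificity in the way reported above.

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(7)
        impaired = rng.integers(0, 2, 300)                  # hypothetical labels
        score = 45 - 8 * impaired + rng.normal(0, 5, 300)   # lower score = impaired

        # roc_curve expects "higher = positive", so flip the score's sign
        fpr, tpr, thr = roc_curve(impaired, -score)
        j = tpr - fpr                                       # Youden's J per threshold
        cut = -thr[np.argmax(j)]
        print("AUC:", round(roc_auc_score(impaired, -score), 2))
        print("cut-score:", round(cut, 1), "(scores below suggest impairment)")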

  16. Cross-validation of a composite pain scale for preschool children within 24 hours of surgery.

    PubMed

    Suraseranivongse, S; Santawat, U; Kraiprasit, K; Petcharatana, S; Prakkamodom, S; Muntraporn, N

    2001-09-01

    This study was designed to cross-validate a composite measure of the pain scales CHEOPS (Children's Hospital of Eastern Ontario Pain Scale), OPS (Objective Pain Scale, simplified for parent use by replacing blood pressure measurement with observation of body language or posture), TPPPS (Toddler Preschool Postoperative Pain Scale) and FLACC (Face, Legs, Activity, Cry, Consolability) in 167 Thai children aged 1-5.5 yr. The pain scales were translated and tested for content, construct and concurrent validity, including inter-rater and intra-rater reliabilities. Discriminative validity in immediate and persistent pain for the age groups < or =3 and >3 yr were also studied. The children's behaviour was videotaped before and after surgery, before analgesia had been given in the post-anaesthesia care unit (PACU), and on the ward. Four observers then rated pain behaviour from rearranged videotapes. The decision to treat pain was based on routine practice and was made by a researcher unaware of the rating procedure. All tools had acceptable content validity and excellent inter-rater and intra-rater reliabilities (intraclass correlation >0.9 and >0.8 respectively). Construct validity was determined by the ability to differentiate the group with no pain before surgery and a high pain level after surgery, before analgesia (P<0.001). The positive correlations among all scales in the PACU and on the ward (r=0.621-0.827, P<0.0001) supported concurrent validity. Use of the kappa statistic indicated that CHEOPS yielded the best agreement with the routine decision to treat pain. The younger and older age groups both yielded very good agreement in the PACU but only moderate agreement on the ward. On the basis of data from this study, we recommend CHEOPS as a valid, reliable and practical tool. PMID:11517123

  17. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation; however, validation based on prog...

  18. Accuracy of genomic selection methods in a standard data set of loblolly pine (Pinus taeda L.).

    PubMed

    Resende, M F R; Muñoz, P; Resende, M D V; Garrick, D J; Fernando, R L; Davis, J M; Jokela, E J; Martin, T A; Peter, G F; Kirst, M

    2012-04-01

    Genomic selection can increase genetic gain per generation through early selection. Genomic selection is expected to be particularly valuable for traits that are costly to phenotype and expressed late in the life cycle of long-lived species. Alternative approaches to genomic selection prediction models may perform differently for traits with distinct genetic properties. Here the performance of four different original methods of genomic selection that differ with respect to assumptions regarding the distribution of marker effects, including (i) ridge regression-best linear unbiased prediction (RR-BLUP), (ii) Bayes A, (iii) Bayes Cπ, and (iv) Bayesian LASSO, is presented. In addition, a modified RR-BLUP (RR-BLUP B) that utilizes a selected subset of markers was evaluated. The accuracy of these methods was compared across 17 traits with distinct heritabilities and genetic architectures, including growth, development, and disease-resistance properties, measured in a Pinus taeda (loblolly pine) training population of 951 individuals genotyped with 4853 SNPs. The predictive ability of the methods was evaluated using a 10-fold cross-validation approach, and differed only marginally for most method/trait combinations. Interestingly, for fusiform rust disease-resistance traits, Bayes Cπ, Bayes A, and RR-BLUP B had higher predictive ability than RR-BLUP and Bayesian LASSO. Fusiform rust is controlled by few genes of large effect. A limitation of RR-BLUP is the assumption of equal contribution of all markers to the observed variation. However, RR-BLUP B performed equally well as the Bayesian approaches. The genotypic and phenotypic data used in this study are publicly available for comparative analysis of genomic selection prediction models. PMID:22271763
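
    The evaluation design, if not the Bayesian machinery, is easy to sketch: below, a 10-fold cross-validated predictive ability (the correlation between predicted and observed phenotypes) is computed for a plain ridge-regression stand-in for RR-BLUP on simulated genotypes of the same dimensions as the study's training population.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(8)
        n, p = 951, 4853
        X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP genotypes coded 0/1/2
        effects = np.zeros(p)
        qtl = rng.choice(p, 40, replace=False)              # 40 causal markers
        effects[qtl] = rng.normal(size=40)
        y = X @ effects + rng.normal(scale=2.0, size=n)     # phenotype = signal + noise

        preds = np.empty(n)
        for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
            preds[te] = Ridge(alpha=100.0).fit(X[tr], y[tr]).predict(X[te])

        print("predictive ability r =", round(np.corrcoef(preds, y)[0, 1], 3))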

  19. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821

  20. Cross-validation of a mass spectrometric-based method for the therapeutic drug monitoring of irinotecan: implementation of matrix-assisted laser desorption/ionization mass spectrometry in pharmacokinetic measurements.

    PubMed

    Calandra, Eleonora; Posocco, Bianca; Crotti, Sara; Marangon, Elena; Giodini, Luciana; Nitti, Donato; Toffoli, Giuseppe; Traldi, Pietro; Agostini, Marco

    2016-07-01

    Irinotecan is a widely used antineoplastic drug, mostly employed in the treatment of colorectal cancer. This drug is a feasible candidate for therapeutic drug monitoring due to the wide inter-individual variability in its pharmacokinetic and pharmacodynamic parameters. In order to determine the drug concentration during the administration protocol, we developed a quantitative MALDI-MS method using CHCA as the MALDI matrix. Here, we demonstrate that MALDI-TOF can be applied in a routine setting for therapeutic drug monitoring in humans, offering quick and accurate results. To reach this aim, we cross-validated, according to FDA and EMA guidelines, the MALDI-TOF method against a standard LC-MS/MS method, applying it to the quantification of 108 patients' plasma samples from a clinical trial. Standard curves for irinotecan were linear (R² ≥ 0.9842) over the concentration range of 300-10,000 ng/mL and showed good back-calculated accuracy and precision. Intra- and inter-day precision and accuracy, determined at three quality control levels, were always <12.8% and between 90.1% and 106.9%, respectively. The cross-validation procedure showed good reproducibility between the two methods, with percentage differences within 20% for more than 70% of the clinical samples analysed. PMID:27235158
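
    A small sketch of the sample-by-sample agreement check implied above, with simulated concentrations in place of the trial data: the percentage difference between the two assays is computed against their mean, and the fraction of samples within a ±20% criterion is reported.

        import numpy as np

        rng = np.random.default_rng(9)
        lcms = rng.uniform(300, 10000, size=108)        # LC-MS/MS results, ng/mL
        maldi = lcms * rng.normal(1.0, 0.08, size=108)  # MALDI-TOF with ~8% scatter

        pct_diff = 100 * (maldi - lcms) / ((maldi + lcms) / 2)
        within = 100 * np.mean(np.abs(pct_diff) <= 20)
        print(f"{within:.0f}% of samples agree within +/-20%")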

  1. Quantification of rainfall prediction uncertainties using a cross-validation based technique. Methodology description and experimental validation.

    NASA Astrophysics Data System (ADS)

    Fraga, Ignacio; Cea, Luis; Puertas, Jerónimo; Salsón, Santiago; Petazzi, Alberto

    2016-04-01

    In this paper we present a new methodology to compute rainfall fields, including the quantification of prediction uncertainties, using raingauge network data. The proposed methodology comprises two steps. First, ordinary kriging is used to determine the estimated rainfall depth at every point of the study area. Then multiple equiprobable error fields, which comprise both interpolation and measurement uncertainties, are added to the kriged field, resulting in multiple rainfall predictions. To compute these error fields, the standard deviation of the kriging estimation is first determined following the cross-validation based procedure described in Delrieu et al. (2014). Then, the standard deviation field is sampled using non-conditioned Gaussian random fields. The proposed methodology was applied to study 7 rain events in a 60x60 km area on the west coast of Galicia, in the northwest of Spain. Due to its location at the junction between tropical and polar regions, the study area suffers frequent intense rainfalls characterized by great variability in both space and time. Rainfall data from the tipping-bucket raingauge network operated by MeteoGalicia were used to estimate the rainfall fields using the proposed methodology. The obtained predictions were then validated using rainfall data from 3 additional rain gauges installed within the CAPRI project (Probabilistic flood prediction with high resolution hydrologic models from radar rainfall estimates, funded by the Spanish Ministry of Economy and Competitiveness. Reference CGL2013-46245-R.). Results show that both the mean hyetographs and the peak intensities are correctly predicted. The computed hyetographs present a good fit to the experimental data and most of the measured values fall within the 95% confidence intervals. Also, most of the experimental values outside the confidence bounds correspond to time periods of low rainfall depths, where the inaccuracy of the measuring devices
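
    The ensemble step lends itself to a compact sketch, with a made-up kriged field, a constant stand-in for the cross-validation standard deviation, and a unit-variance exponential correlation model assumed for the non-conditioned Gaussian error fields:

        import numpy as np

        rng = np.random.default_rng(10)
        nx = 25
        xs = np.arange(nx)
        xx, yy = np.meshgrid(xs, xs)
        pts = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        L = np.linalg.cholesky(np.exp(-d / 8.0) + 1e-8 * np.eye(nx * nx))

        kriged = 5.0 + 0.1 * xx                  # stand-in kriged rainfall field (mm)
        sigma = 0.8                              # stand-in kriging std. deviation (mm)

        ens = np.stack([kriged + sigma * (L @ rng.normal(size=nx * nx)).reshape(nx, nx)
                        for _ in range(200)])    # 200 equiprobable realizations
        p5, p95 = np.percentile(ens, [5, 95], axis=0)
        print("mean width of the 90% band:", round((p95 - p5).mean(), 2), "mm")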

  2. Embedded Performance Validity Measures with Postdeployment Veterans: Cross-Validation and Efficiency with Multiple Measures.

    PubMed

    Shura, Robert D; Miskey, Holly M; Rowland, Jared A; Yoash-Gantz, Ruth E; Denning, John H

    2016-01-01

    Embedded validity measures support comprehensive assessment of performance validity. The purpose of this study was to evaluate the accuracy of individual embedded measures and to reduce them to the most efficient combination. The sample included 212 postdeployment veterans (average age = 35 years, average education = 14 years). Thirty embedded measures were initially identified as predictors of Green's Word Memory Test (WMT) and were derived from the California Verbal Learning Test-Second Edition (CVLT-II), Conners' Continuous Performance Test-Second Edition (CPT-II), Trail Making Test, Stroop, Wisconsin Card Sorting Test-64, the Wechsler Adult Intelligence Scale-Third Edition Letter-Number Sequencing, Rey Complex Figure Test (RCFT), Brief Visuospatial Memory Test-Revised, and the Finger Tapping Test. Eight nonoverlapping measures with the highest area-under-the-curve (AUC) values were retained for entry into a logistic regression analysis. Embedded measure accuracy was also compared to cutoffs found in the existing literature. Twenty-one percent of the sample failed the WMT. Previously developed cutoffs for individual measures showed poor sensitivity (SN) in the current sample except for the CPT-II (Total Errors, SN = .41). The CVLT-II (Trials 1-5 Total) showed the best overall accuracy (AUC = .80). After redundant measures were statistically eliminated, the model included the RCFT (Recognition True Positives), CPT-II (Total Errors), and CVLT-II (Trials 1-5 Total) and increased overall accuracy compared with the CVLT-II alone (AUC = .87). The combination of just 3 measures from the CPT-II, CVLT-II, and RCFT was the most accurate/efficient in predicting WMT performance. PMID:26375185

  3. Cross-validation of the reduced form of the Food Craving Questionnaire-Trait using confirmatory factor analysis

    PubMed Central

    Iani, Luca; Barbaranelli, Claudio; Lombardo, Caterina

    2015-01-01

    Objective: The Food Craving Questionnaire-Trait (FCQ-T) is commonly used to assess habitual food cravings among individuals. Previous studies have shown that a brief version of this instrument (FCQ-T-r) has good reliability and validity. This article is the first to use confirmatory factor analysis to examine the psychometric properties of the FCQ-T-r in a cross-validation study. Method: Habitual food cravings, as well as emotion regulation strategies, affective states, and disordered eating behaviors, were investigated in two independent samples of non-clinical adult volunteers (Sample 1: N = 368; Sample 2: N = 246). Confirmatory factor analyses were conducted to simultaneously test model fit statistics and dimensionality of the instrument. FCQ-T-r reliability was assessed by computing the composite reliability coefficient. Results: Analysis supported the unidimensional structure of the scale and fit indices were acceptable for both samples. The FCQ-T-r showed excellent reliability and moderate to high correlations with negative affect and disordered eating. Conclusion: Our results indicate that the FCQ-T-r scores can be reliably used to assess habitual cravings in an Italian non-clinical sample of adults. The robustness of these results is tested by a cross-validation of the model using two independent samples. Further research is required to expand on these findings, particularly in children and adolescents. PMID:25918510

  4. Cross-Validation of Magnetic Resonance Elastography and Ultrasound-Based Transient Elastography: A Preliminary Phantom Study

    PubMed Central

    Chen, Jun; Glaser, Kevin J; Miette, Véronique; Sandrin, Laurent; Ehman, Richard L

    2010-01-01

    Purpose To cross-validate two recent noninvasive elastographic techniques, Ultrasound-based Transient Elastography (UTE) and Magnetic Resonance Elastography (MRE). As potential alternatives to liver biopsy, UTE and MRE are undergoing clinical investigations for liver fibrosis diagnosis and liver disease management around the world. These two techniques use tissue stiffness as a marker for disease state and it is important to do a cross-validation study of both elastographic techniques to determine the consistency with which the two techniques can measure the mechanical properties of materials. Materials and Methods In this paper, 19 well-characterized phantoms with a range of stiffness values were measured by two clinical devices (a Fibroscan and a MRE system based respectively on the UTE and MRE techniques) successively with the operators double-blinded. Results Statistical analysis showed that the correlation coefficient was r2=0.93 between MRE and UTE, and there was no evidence of a systematic difference between them within the range of stiffnesses examined. Conclusion These two noninvasive methods, MRE and UTE, provide clinicians with important new options for improving patient care regarding liver diseases in terms of the diagnosis, prognosis, and monitoring of fibrosis progression, as well for evaluating the efficacy of treatment. PMID:19856447

  5. Standardization and cross validation of alloreactive IFNγ ELISPOT assays within the clinical trials in organ transplantation consortium.

    PubMed

    Ashoor, I; Najafian, N; Korin, Y; Reed, E F; Mohanakumar, T; Ikle, D; Heeger, P S; Lin, M

    2013-07-01

    Emerging evidence indicates memory donor-reactive T cells are detrimental to transplant outcome and that quantifying the frequency of IFNγ-producing, donor-reactive PBMCs by ELISPOT has potential utility as an immune monitoring tool. Nonetheless, differences in assay performance among laboratories limit the ability to compare results. In an effort to standardize assays, we prepared a panel of common cellular reagent standards, developed and cross validated a standard operating procedure (SOP) for alloreactive IFNγ ELISPOT assays in several research laboratories supported by the NIH-funded Clinical Trials in Organ Transplantation (CTOT) Consortium. We demonstrate that strict adherence to the SOP and centralized data analysis results in high reproducibility with a coefficient of variance (CV) of ≈ 30%. This standardization of IFNγ ELISPOT assay will facilitate interpretation of data from multicenter transplantation research studies and provide the foundation for developing clinical laboratory testing strategies to guide therapeutic decision-making in transplant patients. PMID:23710568

  6. Apparent behaviour of charged and neutral materials with ellipsoidal fibre distributions and cross-validation of finite element implementations.

    PubMed

    Nagel, Thomas; Kelly, Daniel J

    2012-05-01

    Continuous fibre distribution models can be applied to a variety of biological tissues with both charged and neutral extracellular matrices. In particular, ellipsoidal models have been used to describe the complex material behaviour of tissues such as articular cartilage and their engineered tissue equivalents. The choice of material parameters is more difficult than in classical anisotropic models and the impact that changes to these parameters can have on the predictions of such models are poorly understood. The objective of this study is to demonstrate the apparent behaviour of this class of materials over a range of material parameters. We further introduce a scaling approach to overcome certain counter-intuitive aspects related to the choice of anisotropy parameters and outline the integration method used in our implementations. User material codes for the commercial FE software packages Abaqus and MSC Marc are provided for use by other investigators. Cross-validation of our code against similar implementations in FEBio is also presented. PMID:22498290

  7. Cross-validation of spaceborne radar and ground polarimetric radar observations

    NASA Astrophysics Data System (ADS)

    Bolen, Steven Matthew

    There is great potential for spaceborne weather radar to make significant observations of the precipitating medium on global scales. The Tropical Rainfall Measuring Mission (TRMM) is the first mission dedicated to measuring rainfall in the tropics from space using radar. The Precipitation Radar (PR) is one of several instruments aboard the TRMM satellite, which operates in a nearly circular orbit at 350 km altitude and 35 degree inclination. The PR is a single-frequency Ku-band instrument designed to yield information about the vertical storm structure so as to gain insight into the intensity and distribution of rainfall. Attenuation effects on PR measurements, however, can be significant, as high as 10-15 dB. This can seriously impair the accuracy of rain rate retrieval algorithms derived from PR returns. Direct inter-comparison of meteorological measurements between space and ground radar observations can be used to evaluate spaceborne processing algorithms. Though conceptually straightforward, this can be a challenging task. Differences in viewing aspects between space and earth point observations, propagation frequencies, resolution volume size and time synchronization mismatch between measurements can contribute to direct point-by-point inter-comparison errors. The problem is further complicated by spatial geometric distortions induced into the space-based observations by the movements and attitude perturbations of the spacecraft itself. A method is developed to align space and ground radar observations so that a point-by-point inter-comparison of measurements can be made. Ground-based polarimetric observations are used to estimate the attenuation of PR signal returns along individual PR beams, and a technique is formulated to determine the true PR return from GR measurements via theoretical modeling of specific attenuation (k) at the PR wavelength with ground-based S-band radar observations. The statistical behavior of the parameters

  8. Cross-Validation of the Recumbent Stepper Submaximal Exercise Test to Predict Peak Oxygen Uptake in Older Adults

    PubMed Central

    Herda, Ashley A.; Lentz, Angela A.; Mattlage, Anna E.; Sisante, Jason-Flor

    2014-01-01

    Background Submaximal exercise testing can have a greater application in clinical settings because peak exercise testing is generally not available. In previous work, a prediction equation was developed to estimate peak oxygen consumption (V̇o2) using a total body recumbent stepper (TBRS) and the Young Men's Christian Association (YMCA) protocol in adults who were healthy. Objective The purpose of the present study was to cross-validate the TBRS peak V̇o2 prediction equation in older adults. Design A cross-sectional study was conducted. Methods Thirty participants (22 female, 8 male; mean age=66.8 years, SD=5.52; mean weight=68.51 kg, SD=13.39) who previously completed a peak exercise test and met the inclusion criteria were invited to participate in the cross-validation study. Within 5 days of the peak V̇o2 test, participants completed the TBRS submaximal exercise test. The TBRS submaximal exercise test equation was used to estimate peak V̇o2. The variables in the equation included age, weight, sex, watts (at the end of the submaximal exercise test), and heart rate (at the end of the submaximal exercise test). Results A strong correlation was found between the predicted peak V̇o2 and the measured peak V̇o2. The difference between the values was 0.9 mL·kg−1·min−1, which was not statistically different. The standard error of the estimate was 4.2 mL·kg−1·min−1. Limitations The sample included individuals who volunteered to perform a peak exercise test, which may have biased the results toward those willing to exercise to fatigue. Conclusion The data suggest the TBRS submaximal exercise test and prediction equation can be used to predict peak V̇o2 in older adults. This finding is important for health care professionals wanting to provide information to their patients or clients regarding their fitness level. PMID:24435104
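
    The two summary statistics reported above are quick to reproduce on simulated data; the coefficients of the published TBRS equation are not used here, only a hypothetical predictor with a similar error structure.

        import numpy as np

        rng = np.random.default_rng(11)
        n = 30
        measured = rng.normal(25, 5, n)                  # measured peak VO2, mL/kg/min
        predicted = measured + rng.normal(0.9, 4.2, n)   # hypothetical equation output

        bias = predicted.mean() - measured.mean()        # mean difference
        see = np.std(predicted - measured - bias)        # standard error of the estimate
        print(f"mean difference = {bias:.1f} mL/kg/min, SEE = {see:.1f} mL/kg/min")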

  9. Do different decision-analytic modeling approaches produce different results? A systematic review of cross-validation studies.

    PubMed

    Tsoi, Bernice; Goeree, Ron; Jegathisawaran, Jathishinie; Tarride, Jean-Eric; Blackhouse, Gord; O'Reilly, Daria

    2015-06-01

    When choosing a modeling approach for health economic evaluation, certain criteria are often considered (e.g., population resolution, interactivity, time advancement mechanism, resource constraints). However, whether these criteria and their associated modeling approach impacts results remain poorly understood. A systematic review was conducted to identify cross-validation studies (i.e., modeling a problem using different approaches with the same body of evidence) to offer insight on this topic. With respect to population resolution, reviewed studies suggested that both aggregate- and individual-level models will generate comparable results, although a practical trade-off exists between validity and feasibility. In terms of interactivity, infectious-disease models consistently showed that, depending on the assumptions regarding probability of disease exposure, dynamic and static models may produce dissimilar results with opposing policy recommendations. Empirical evidence on the remaining criteria is limited. Greater discussion will therefore be necessary to promote a deeper understanding of the benefits and limits to each modeling approach. PMID:25728942

  10. An Efficient Leave-One-Out Cross-Validation-Based Extreme Learning Machine (ELOO-ELM) With Minimal User Intervention.

    PubMed

    Shao, Zhifei; Er, Meng Joo; Wang, Ning

    2016-08-01

    It is well known that the architecture of the extreme learning machine (ELM) significantly affects its performance and how to determine a suitable set of hidden neurons is recognized as a key issue to some extent. The leave-one-out cross-validation (LOO-CV) is usually used to select a model with good generalization performance among potential candidates. The primary reason for using the LOO-CV is that it is unbiased and reliable as long as similar distribution exists in the training and testing data. However, the LOO-CV has rarely been implemented in practice because of its notorious slow execution speed. In this paper, an efficient LOO-CV formula and an efficient LOO-CV-based ELM (ELOO-ELM) algorithm are proposed. The proposed ELOO-ELM algorithm can achieve fast learning speed similar to the original ELM without compromising the reliability feature of the LOO-CV. Furthermore, minimal user intervention is required for the ELOO-ELM, thus it can be easily adopted by nonexperts and implemented in automation processes. Experimentation studies on benchmark datasets demonstrate that the proposed ELOO-ELM algorithm can achieve good generalization with limited user intervention while retaining the efficiency feature. PMID:26259254

  11. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, the model selection is usually based on some measure of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has been widely adopted in chemometrics and econometrics) for RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that, when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
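
    The contrast between the two validation schemes is easy to demonstrate with scikit-learn on simulated data (the GLSR machinery is not reproduced; an ordinary least-squares model stands in): MCCV repeatedly holds out a sizable random fraction, while LOO holds out one site at a time.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

        rng = np.random.default_rng(12)
        X = rng.normal(size=(60, 3))                       # catchment descriptors
        y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=60)  # flood statistic

        model = LinearRegression()
        loo = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                               scoring="neg_mean_squared_error").mean()
        mccv = -cross_val_score(model, X, y,
                                cv=ShuffleSplit(n_splits=200, test_size=0.3,
                                                random_state=0),
                                scoring="neg_mean_squared_error").mean()
        print("LOO  MSE:", round(loo, 3))
        print("MCCV MSE:", round(mccv, 3))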

  12. Simulating California reservoir operation using the classification and regression-tree algorithm combined with a shuffled cross-validation scheme

    NASA Astrophysics Data System (ADS)

    Yang, Tiantian; Gao, Xiaogang; Sorooshian, Soroosh; Li, Xin

    2016-03-01

    The controlled outflows from a reservoir or dam are highly dependent on the decisions made by the reservoir operators, rather than on a natural hydrological process. Differences exist between the natural upstream inflows to reservoirs and the controlled outflows from reservoirs that supply downstream users. With decision makers' awareness of a changing climate, reservoir management requires adaptable means of incorporating more information into decision making, such as water delivery requirements, environmental constraints, and dry/wet conditions. In this paper, a robust reservoir outflow simulation model is presented, which incorporates one of the well-developed data-mining models (Classification and Regression Tree, CART) to predict the complicated human-controlled reservoir outflows and extract the reservoir operation patterns. A shuffled cross-validation approach is further implemented to improve CART's predictive performance. An application study of nine major reservoirs in California is carried out. Results produced by the enhanced CART, the original CART, and random forest are compared with observations. The statistical measurements show that the enhanced CART and random forest outperform the CART control run in general, and the enhanced CART algorithm gives better predictive performance than random forest in simulating the peak flows. The results also show that the proposed model is able to consistently and reasonably predict the expert release decisions. Experiments indicate that the release operation at Oroville Lake is significantly dominated by the SWP allocation amount and that reservoirs at low elevation are more sensitive to inflow amount than others.
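
    A condensed sketch of the modelling idea, with simulated operations data in place of the California records: a regression tree predicts releases from inflow, storage, and season, and a shuffled K-fold scheme is compared against a chronological split.

        import numpy as np
        from sklearn.model_selection import KFold, cross_val_score
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(13)
        n = 2000
        inflow = rng.gamma(2.0, 50.0, n)                     # daily inflow
        storage = 100 + np.clip(np.cumsum(rng.normal(size=n)), -50, 50)
        month = np.tile(np.arange(12), n // 12 + 1)[:n]
        X = np.column_stack([inflow, storage, month])
        outflow = (0.7 * inflow + 0.5 * storage
                   + 10 * (month > 5) + rng.normal(0, 5, n)) # operator releases

        tree = DecisionTreeRegressor(max_depth=6, random_state=0)
        for name, cv in [("chronological", KFold(n_splits=5, shuffle=False)),
                         ("shuffled     ", KFold(n_splits=5, shuffle=True,
                                                 random_state=0))]:
            r2 = cross_val_score(tree, X, outflow, cv=cv).mean()
            print(name, "mean R^2 =", round(r2, 3))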

  13. Methane Cross-Validation Between Spaceborne Solar Occultation Observations from ACE-FTS, Spaceborne Nadir Sounding from GOSAT, and Ground-Based Solar Absorption Measurements at a High Arctic Site.

    NASA Astrophysics Data System (ADS)

    Holl, G.; Walker, K. A.; Conway, S. A.; Saitoh, N.; Boone, C. D.; Strong, K.; Drummond, J. R.

    2014-12-01

    We present cross-validation of remote sensing observations of methane profiles in the Canadian High Arctic. Methane is the third most important greenhouse gas on Earth, and second only to carbon dioxide in its contribution to anthropogenic global warming. Accurate and precise observations of methane are essential to understand quantitatively its role in the climate system and in global change. The Arctic is a particular region of concern, as melting permafrost and disappearing sea ice might lead to accelerated release of methane into the atmosphere. Global observations require spaceborne instruments, particularly in the Arctic, where surface measurements are sparse and expensive to perform. Satellite-based remote-sensing retrievals are underconstrained problems, so specific validation under Arctic conditions is required. Here, we show a cross-validation between two spaceborne instruments and ground-based measurements, all Fourier Transform Spectrometers (FTS). We consider the Canadian SCISAT ACE-FTS, a solar occultation spectrometer operating since 2004, and the Japanese GOSAT TANSO-FTS, a nadir-pointing FTS operating at solar and terrestrial infrared wavelengths since 2009. The ground-based instrument is a Bruker Fourier Transform Infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environmental and Atmospheric Research Laboratory (PEARL) at Eureka, Nunavut (80°N, 86°W) since 2006. Measurements are collocated according to temporal, spatial, and geophysical criteria and regridded to a common vertical grid. We smooth the higher-resolution instrument's results to account for the different vertical resolutions. Profiles of differences for each pair of instruments are then examined. Any bias between instruments, or any accuracy that is worse than expected, needs to be understood prior to using the data. The results of the study will serve as a guideline on how to use the vertically resolved methane products from ACE and

  14. Cross-Validation of a Recently Published Equation Predicting Energy Expenditure to Run or Walk a Mile in Normal-Weight and Overweight Adults

    ERIC Educational Resources Information Center

    Morris, Cody E.; Owens, Scott G.; Waddell, Dwight E.; Bass, Martha A.; Bentley, John P.; Loftin, Mark

    2014-01-01

    An equation published by Loftin, Waddell, Robinson, and Owens (2010) was cross-validated using ten normal-weight walkers, ten overweight walkers, and ten distance runners. Energy expenditure was measured at preferred walking pace (normal-weight and overweight walkers) or running pace (distance runners) for 5 min and corrected to a mile. Energy…

  15. A Cross-Validation of easyCBM Mathematics Cut Scores in Washington State: 2009-2010 Test. Technical Report #1105

    ERIC Educational Resources Information Center

    Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    In this technical report, we document the results of a cross-validation study designed to identify optimal cut-scores for the use of the easyCBM[R] mathematics test in the state of Washington. A large sample, randomly split into two groups of roughly equal size, was used for this study. Students' performance classification on the Washington state…

  16. Cross-Validation of the Behavioral and Emotional Rating Scale-2 Youth Version: An Exploration of Strength-Based Latent Traits

    ERIC Educational Resources Information Center

    Furlong, Michael J.; Sharkey, Jill D.; Boman, Peter; Caldwell, Roslyn

    2007-01-01

    High-quality measurement is a necessary requirement to develop and evaluate the effectiveness of programs that use strength-based principles and strategies. Using independent cross-validation samples, we report two studies that explored the construct validity of the BERS-2 Youth Report, a popular measure designed to assess youth strengths, whose…

  17. Cross-validation of generalised body composition equations with diverse young men and women: the Training Intervention and Genetics of Exercise Response (TIGER) Study

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Generalised skinfold equations developed in the 1970s are commonly used to estimate laboratory-measured percentage fat (BF%). The equations were developed on predominantly white individuals using Siri's two-component percentage fat equation (BF%-GEN). We cross-validated the Jackson-Pollock (JP) gene...

  18. A Test and Cross-Validation of the Revised Two-Factor Study Process Questionnaire Factor Structure among Western University Students

    ERIC Educational Resources Information Center

    Immekus, Jason C.; Imbrie, P. K.

    2010-01-01

    The Revised Two-Factor Study Process Questionnaire (R-SPQ-2F) is a measure of university students' approach to learning. Original evaluation of the scale's psychometric properties was based on a sample of Hong Kong university students' scores. The purpose of this study was to test and cross-validate the R-SPQ-2F factor structure, based on separate…

  19. Latent Structure and Reliability Analysis of the Measure of Body Apperception: Cross-Validation for Head and Neck Cancer Patients

    PubMed Central

    Jean-Pierre, Pascal; Fundakowski, Christopher; Perez, Enrique; Jean-Pierre, Shadae E.; Jean-Pierre, Ashley R.; Melillo, Angelica B.; Libby, Rachel; Sargi, Zoukaa

    2014-01-01

    Purpose Cancer and its treatments are associated with psychological distress that can negatively impact self-perception, psychosocial functioning, and quality of life. Patients with head and neck cancer (HNC) are particularly susceptible to psychological distress. This study involved a cross-validation of the Measure of Body Apperception (MBA) for HNC patients. Methods One hundred twenty-two English-fluent HNC patients between 20 and 88 years of age completed the MBA on a Likert scale ranging from "1=Disagree" to "4=Agree". We assessed the latent structure and internal consistency reliability of the MBA using Principal Components Analysis (PCA) and Cronbach's coefficient alpha (α), respectively. We determined the convergent and divergent validities of the MBA using correlations with the Hospital Anxiety and Depression Scale (HADS), observer disfigurement rating, and patients' clinical and demographic variables. Results The PCA revealed a coherent set of items that explained 38% of the variance. The Kaiser-Meyer-Olkin measure of sampling adequacy was .73 and Bartlett's Test of Sphericity was statistically significant (χ2 (28) = 253.64; p < .001), confirming the suitability of the data for dimension reduction analysis. The MBA had good internal consistency reliability (α = .77) and demonstrated adequate convergent and divergent validities based on statistically significant moderate correlations with the HADS (p < .01) and observer rating of disfigurement (p < .026), and non-statistically significant correlations with patients' clinical and demographic variables: tumor location, age at diagnosis, and birth place (all ps > .05). Conclusions The MBA is a valid and reliable screening measure of body apperception for HNC patients. PMID:22886430
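
    Cronbach's alpha, used above to gauge internal consistency, is simple to compute from a respondents-by-items score matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with simulated Likert data standing in for the MBA responses:

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - item_vars / total_var)

      # 122 simulated respondents answering 8 Likert items (1-4)
      rng = np.random.default_rng(3)
      latent = rng.normal(size=(122, 1))                 # shared trait
      scores = np.clip(np.round(2.5 + latent + 0.8 * rng.normal(size=(122, 8))), 1, 4)
      print(f"alpha = {cronbach_alpha(scores):.2f}")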

  20. Predicting Chinese Children and Youth's Energy Expenditure Using ActiGraph Accelerometers: A Calibration and Cross-Validation Study

    ERIC Educational Resources Information Center

    Zhu, Zheng; Chen, Peijie; Zhuang, Jie

    2013-01-01

    Purpose: The purpose of this study was to develop and cross-validate an equation based on ActiGraph accelerometer GT3X output to predict children and youth's energy expenditure (EE) of physical activity (PA). Method: Participants were 367 Chinese children and youth (179 boys and 188 girls, aged 9 to 17 years old) who wore 1 ActiGraph GT3X…

  1. The effects of relatedness and GxE interaction on prediction accuracies in genomic selection: a study in cassava

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Prior to implementation of genomic selection, an evaluation of the potential accuracy of prediction can be obtained by cross validation. In this procedure, a population with both phenotypes and genotypes is split into training and validation sets. The prediction model is fitted using the training se...

  2. High-accuracy peptide mass fingerprinting using peak intensity data with machine learning.

    PubMed

    Yang, Dongmei; Ramkissoon, Kevin; Hamlett, Eric; Giddings, Morgan C

    2008-01-01

    For MALDI-TOF mass spectrometry, we show that the intensity of a peptide-ion peak is directly correlated with its sequence, with the residues M, H, P, R, and L having the most substantial effect on ionization. We developed a machine learning approach that exploits this relationship to significantly improve peptide mass fingerprint (PMF) accuracy based on training data sets from both true-positive and false-positive PMF searches. The model's cross-validated accuracy in distinguishing real versus false-positive database search results is 91%, rivaling the accuracy of MS/MS-based protein identification. PMID:17914788

  3. Cross validation of geotechnical and geophysical site characterization methods: near surface data from selected accelerometric stations in Crete (Greece)

    NASA Astrophysics Data System (ADS)

    Loupasakis, C.; Tsangaratos, P.; Rozos, D.; Rondoyianni, Th.; Vafidis, A.; Kritikakis, G.; Steiakakis, M.; Agioutantis, Z.; Savvaidis, A.; Soupios, P.; Papadopoulos, I.; Papadopoulos, N.; Sarris, A.; Mangriotis, M.-D.; Dikmen, U.

    2015-06-01

    The specification of near-surface ground conditions is highly important for the design of civil constructions. These conditions determine primarily the ability of the foundation formations to bear loads, the stress-strain relations and the corresponding settlements, as well as the soil amplification and corresponding peak ground motion in case of dynamic loading. The static and dynamic geotechnical parameters, as well as the ground type/soil category, can be determined by combining geotechnical and geophysical methods, such as engineering geological surface mapping, geotechnical drilling, in situ and laboratory testing, and geophysical investigations. These methods were combined, through the Thalis "Geo-Characterization" project, for site characterization at selected sites of the Hellenic Accelerometric Network (HAN) on the island of Crete. The combination of geotechnical and geophysical methods at thirteen (13) sites provided sufficient information about their limitations, setting the minimum test requirements in relation to the type of geological formation. The reduced accuracy of surface mapping in urban sites, the uncertainties introduced by geophysical surveys in sites with complex geology, and the 1D data provided by geotechnical drillings are some of the factors affecting the appropriate sequence and number of the necessary investigation methods. This study presents the gradual improvement in the accuracy of site characterization data through characteristic examples from the thirteen sites, illustrating the capabilities, the limitations and the appropriate ordering of the investigation methods.

  4. Derivation and Cross-Validation of Cutoff Scores for Patients With Schizophrenia Spectrum Disorders on WAIS-IV Digit Span-Based Performance Validity Measures.

    PubMed

    Glassmire, David M; Toofanian Ross, Parnian; Kinney, Dominique I; Nitch, Stephen R

    2016-06-01

    Two studies were conducted to identify and cross-validate cutoff scores on the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span-based embedded performance validity (PV) measures for individuals with schizophrenia spectrum disorders. In Study 1, normative scores were identified on Digit Span-embedded PV measures among a sample of patients (n = 84) with schizophrenia spectrum diagnoses who had no known incentive to perform poorly and who put forth valid effort on external PV tests. Previously identified cutoff scores resulted in unacceptable false positive rates and lower cutoff scores were adopted to maintain specificity levels ≥90%. In Study 2, the revised cutoff scores were cross-validated within a sample of schizophrenia spectrum patients (n = 96) committed as incompetent to stand trial. Performance on Digit Span PV measures was significantly related to Full Scale IQ in both studies, indicating the need to consider the intellectual functioning of examinees with psychotic spectrum disorders when interpreting scores on Digit Span PV measures. PMID:25997434
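
    The logic of lowering a cutoff until specificity reaches at least 90% in a valid-effort normative sample can be sketched in a few lines. The scores below are simulated, and the function is a generic illustration rather than the authors' procedure.

      import numpy as np

      def cutoff_for_specificity(valid_scores, min_specificity=0.90):
          """Largest integer cutoff such that flagging scores <= cutoff as invalid
          misclassifies no more than (1 - min_specificity) of a valid-effort sample."""
          valid_scores = np.asarray(valid_scores)
          for cutoff in range(int(valid_scores.max()), int(valid_scores.min()) - 1, -1):
              specificity = np.mean(valid_scores > cutoff)   # valid cases not flagged
              if specificity >= min_specificity:
                  return cutoff, specificity
          return None, None

      rng = np.random.default_rng(4)
      norms = np.clip(rng.normal(7, 2, 84).round(), 0, 16)   # hypothetical normative scores
      cutoff, spec = cutoff_for_specificity(norms)
      print(f"flagging scores <= {cutoff} keeps specificity at {spec:.2%}")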

  5. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes ('building', 'hedge and bush', 'grass', 'road and parking lot', 'tree', 'wall and car port') had to be derived. Two classification methods were applied ('Decision Tree' and 'Support Vector Machine'), using only two attributes (height above ground and normalized difference vegetation index), both of which are derived from the images. The assessment of thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy and the kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experience. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross-validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally well for five classes. Trees are classified with much better accuracy and a smaller confidence interval by the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of the six classes was 14% of the user's accuracy.
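
    User's accuracy, producer's accuracy, and the kappa coefficient all derive from the confusion (error) matrix of mapped versus reference classes. A minimal sketch with an invented three-class matrix, not the study's data:

      import numpy as np

      def thematic_accuracy(confusion):
          """User's/producer's accuracy and Cohen's kappa from a confusion matrix
          whose rows are mapped classes and columns are reference classes."""
          cm = np.asarray(confusion, dtype=float)
          total = cm.sum()
          users = np.diag(cm) / cm.sum(axis=1)       # correct / mapped (commission)
          producers = np.diag(cm) / cm.sum(axis=0)   # correct / reference (omission)
          po = np.trace(cm) / total                  # observed agreement
          pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2   # chance agreement
          return users, producers, (po - pe) / (1 - pe)

      cm = [[97, 2, 1],      # e.g. building / grass / tree
            [4, 88, 8],
            [1, 6, 93]]
      users, producers, kappa = thematic_accuracy(cm)
      print("user's:", users.round(2), "producer's:", producers.round(2), f"kappa = {kappa:.3f}")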

  6. The predictive accuracy of intertemporal-choice models.

    PubMed

    Arfer, Kodi B; Luhmann, Christian C

    2015-05-01

    How do people choose between a smaller reward available sooner and a larger reward available later? Past research has evaluated models of intertemporal choice by measuring goodness of fit or identifying which decision-making anomalies they can accommodate. An alternative criterion for model quality, which is partly antithetical to these standard criteria, is predictive accuracy. We used cross-validation to examine how well 10 models of intertemporal choice could predict behaviour in a 100-trial binary-decision task. Many models achieved the apparent ceiling of 85% accuracy, even with smaller training sets. When noise was added to the training set, however, a simple logistic-regression model we call the difference model performed particularly well. In many situations, between-model differences in predictive accuracy may be small, contrary to long-standing controversy over the modelling question in research on intertemporal choice, but the simplicity and robustness of the difference model recommend it to future use. PMID:25773127
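
    A sketch of this style of evaluation, assuming scikit-learn: choices are simulated from a noisy hyperbolic discounter, then a logistic regression on simple attribute differences (in the spirit of the authors' difference model; the exact predictors here are assumptions) is scored by 10-fold cross-validation.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      n = 100
      ss_amount = rng.uniform(1, 50, n)                   # smaller-sooner reward
      ll_amount = ss_amount * rng.uniform(1.1, 3.0, n)    # larger-later reward
      delay = rng.uniform(1, 180, n)                      # delay in days

      # Simulated agent: hyperbolic discounting plus decision noise
      k, noise = 0.02, 3.0
      utility_gap = ll_amount / (1 + k * delay) - ss_amount
      chose_ll = (utility_gap + rng.normal(0, noise, n) > 0).astype(int)

      # "Difference model": logistic regression on attribute differences
      X = np.column_stack([ll_amount - ss_amount, delay])
      acc = cross_val_score(LogisticRegression(max_iter=1000), X, chose_ll, cv=10)
      print(f"10-fold CV predictive accuracy: {acc.mean():.2%}")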

  7. A fast cross-validation method for alignment of electron tomography images based on Beer-Lambert law.

    PubMed

    Yan, Rui; Edwards, Thomas J; Pankratz, Logan M; Kuhn, Richard J; Lanman, Jason K; Liu, Jun; Jiang, Wen

    2015-11-01

    In electron tomography, accurate alignment of tilt series is an essential step in attaining high-resolution 3D reconstructions. Nevertheless, quantitative assessment of alignment quality has remained a challenging issue, even though many alignment methods have been reported. Here, we report a fast and accurate method, tomoAlignEval, based on the Beer-Lambert law, for the evaluation of alignment quality. Our method is able to globally estimate the alignment accuracy by measuring the goodness of the log-linear relationship between beam intensity attenuation and tilt angle. Extensive tests with experimental data demonstrated its robust performance with stained and cryo samples. Our method is not only significantly faster but also more sensitive than measurements of tomogram resolution using the Fourier shell correlation method (FSCe/o). From these tests, we also conclude that while current alignment methods are sufficiently accurate for stained samples, inaccurate alignments remain a major limitation for high-resolution cryo-electron tomography. PMID:26455556
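
    The Beer-Lambert law implies that, for a slab-like specimen, the transmitted intensity is I = I0 exp(-d / (L cos(theta))), so log intensity should fall linearly with 1/cos(tilt angle); the goodness of that log-linear fit can serve as a global alignment-quality score. The sketch below illustrates the relationship on synthetic intensities and is not the tomoAlignEval implementation.

      import numpy as np

      def log_linear_fit_quality(tilt_deg, mean_intensity):
          """R^2 of the Beer-Lambert relation log(I) ~ 1/cos(tilt angle)."""
          x = 1.0 / np.cos(np.deg2rad(tilt_deg))
          y = np.log(mean_intensity)
          slope, intercept = np.polyfit(x, y, 1)
          resid = y - (slope * x + intercept)
          return 1 - resid.var() / y.var()

      rng = np.random.default_rng(6)
      tilts = np.arange(-60, 61, 2)                       # tilt series, degrees
      I0, d_over_L = 1000.0, 0.8                          # invented slab parameters
      intensity = I0 * np.exp(-d_over_L / np.cos(np.deg2rad(tilts)))
      intensity *= np.exp(rng.normal(0, 0.01, tilts.size))   # measurement noise
      print(f"log-linear fit R^2 = {log_linear_fit_quality(tilts, intensity):.4f}")

    A well-aligned, well-modelled tilt series yields R^2 close to 1; systematic departures from the log-linear trend would flag alignment or thickness-model errors.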

  8. Electrochemical Performance and Stability of the Cathode for Solid Oxide Fuel Cells. I. Cross Validation of Polarization Measurements by Impedance Spectroscopy and Current-Potential Sweep

    SciTech Connect

    Zhou, Xiao Dong; Pederson, Larry R.; Templeton, Jared W.; Stevenson, Jeffry W.

    2009-12-09

    The aim of this paper is to address three issues in solid oxide fuel cells: (1) cross-validation of the polarization of a single cell measured using both dc and ac approaches, (2) the precise determination of the total area-specific resistance (ASR), and (3) understanding cathode polarization with LSCF cathodes. The ASR of a solid oxide fuel cell is a dynamic property, meaning that it changes with current density. The ASR measured using ac impedance spectroscopy (the low-frequency intercept with the real Z′ axis of the ac impedance spectrum) matches that measured from a dc I-V sweep (the tangent of the dc I-V curve). Due to the dynamic nature of the ASR, we found that an ac impedance spectrum measured under open-circuit voltage or on a half cell may not represent cathode performance under real operating conditions, particularly at high current density. In this work, the electrode polarization was governed by the cathode activation polarization; the anode contribution was negligible.
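
    The "dynamic ASR" point can be illustrated numerically: the dc ASR is the local tangent of the i-V curve, so it varies with current density. A toy i-V model with invented loss coefficients, not the paper's cell data:

      import numpy as np

      i = np.linspace(0.01, 1.0, 200)                  # current density, A/cm^2
      ocv = 1.1                                        # open-circuit voltage, V
      v = ocv - 0.05 * np.log(i / 0.01) - 0.15 * i     # activation + ohmic losses (toy)

      asr_dc = -np.gradient(v, i)                      # tangent of the i-V curve, ohm cm^2
      for target in (0.1, 0.8):
          j = np.argmin(np.abs(i - target))
          print(f"ASR at {i[j]:.2f} A/cm^2: {asr_dc[j]:.3f} ohm cm^2")
      # The ac counterpart is the low-frequency intercept of an impedance spectrum
      # measured at the same dc bias; the two should agree for a stable cell.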

  9. Robustness of two single-item self-esteem measures: cross-validation with a measure of stigma in a sample of psychiatric patients.

    PubMed

    Bagley, Christopher

    2005-08-01

    Robins' Single-item Self-esteem Inventory was compared with a single item from the Coopersmith Self-esteem Inventory. Although a new scoring format was used, there was good evidence of cross-validation in 83 current and former psychiatric patients who completed Harvey's adapted measure of stigma felt and experienced by users of mental health services. Scores on the two single-item self-esteem measures correlated .76 with each other (p < .001), .76 and .71 with scores on the longer scales from which they were taken, and .58 and .53, respectively, with Harvey's adapted stigma scale. Complex and perhaps competing models may explain links between felt stigma and poorer self-esteem in users of mental health services. PMID:16350637

  10. Parallel processing of chemical information in a local area network--II. A parallel cross-validation procedure for artificial neural networks.

    PubMed

    Derks, E P; Beckers, M L; Melssen, W J; Buydens, L M

    1996-08-01

    This paper describes a parallel cross-validation (PCV) procedure for testing the predictive ability of multi-layer feed-forward (MLF) neural network models trained by the generalized delta learning rule. The PCV program has been parallelized to operate in a local area computer network. Development and execution of the parallel application were aided by the HYDRA programming environment, which is described extensively in Part I of this paper. A brief theoretical introduction to MLF networks is given, and the problems associated with validating predictive ability are discussed. Furthermore, this paper comprises a general outline of the PCV program. Finally, the parallel PCV application is used to validate the predictive ability of an MLF network modeling a chemical non-linear function approximation problem that is described extensively in the literature. PMID:8799999

  11. SILAC-Pulse Proteolysis: A Mass Spectrometry-Based Method for Discovery and Cross-Validation in Proteome-Wide Studies of Ligand Binding

    NASA Astrophysics Data System (ADS)

    Adhikari, Jagat; Fitzgerald, Michael C.

    2014-12-01

    Reported here is the use of stable isotope labeling with amino acids in cell culture (SILAC) and pulse proteolysis (PP) for detection and quantitation of protein-ligand binding interactions on the proteomic scale. The incorporation of SILAC into PP enables the PP technique to be used for the unbiased detection and quantitation of protein-ligand binding interactions in complex biological mixtures (e.g., cell lysates) without the need for prefractionation. The SILAC-PP technique is demonstrated in two proof-of-principle experiments using proteins in a yeast cell lysate and two test ligands including a well-characterized drug, cyclosporine A (CsA), and a non-hydrolyzable adenosine triphosphate (ATP) analogue, adenylyl imidodiphosphate (AMP-PNP). The well-known tight-binding interaction between CsA and cyclophilin A was successfully detected and quantified in replicate analyses, and a total of 33 proteins from a yeast cell lysate were found to have AMP-PNP-induced stability changes. In control experiments, the method's false positive rate of protein target discovery was found to be in the range of 2.1% to 3.6%. SILAC-PP and the previously reported stability of protein from rates of oxidation (SPROX) technique both report on the same thermodynamic properties of proteins and protein-ligand complexes. However, they employ different probes and mass spectrometry-based readouts. This creates the opportunity to cross-validate SPROX results with SILAC-PP results, and vice-versa. As part of this work, the SILAC-PP results obtained here were cross-validated with previously reported SPROX results on the same model systems to help differentiate true positives from false positives in the two experiments.

  12. Nine scoring models for short-term mortality in alcoholic hepatitis: cross-validation in a biopsy-proven cohort

    PubMed Central

    Papastergiou, V; Tsochatzis, E A; Pieri, G; Thalassinos, E; Dhar, A; Bruno, S; Karatapanis, S; Luong, T V; O'Beirne, J; Patch, D; Thorburn, D; Burroughs, A K

    2014-01-01

    Background Several prognostic models have emerged in alcoholic hepatitis (AH), but lack of external validation precludes their universal use. Aim To validate the Maddrey Discriminant Function (DF); Glasgow Alcoholic Hepatitis Score (GAHS); Model for End-stage Liver Disease (MELD); Age, Bilirubin, INR, Creatinine (ABIC); MELD-Na; UK End-stage Liver Disease (UKELD); and three scores of corticosteroid response at 1 week: an Early Change in Bilirubin Levels (ECBL), a 25% fall in bilirubin, and the Lille score. Methods Seventy-one consecutive patients with biopsy-proven AH, admitted between November 2007 and September 2011, were evaluated. The clinical and biochemical parameters were analysed to assess prognostic models with respect to 30- and 90-day mortality. Results There were no significant differences in the areas under the receiver operating characteristic curve (AUROCs) for 30-day/90-day mortality: MELD 0.79/0.84, DF 0.71/0.74, GAHS 0.75/0.78, ABIC 0.71/0.78, MELD-Na 0.68/0.76, UKELD 0.56/0.68. One-week rescoring yielded a trend towards improved predictive accuracy (30-day/90-day AUROCs: 0.69–0.84/0.77–0.86). In patients with admission DF ≥32 (n = 31), response to corticosteroids according to ECBL, a 25% fall in bilirubin and the Lille model yielded AUROCs of 0.73/0.73, 0.78/0.72 and 0.81/0.82 for 30-day/90-day outcomes, respectively. All models showed excellent negative predictive values (NPVs; range: 86–100%), while the positive ones were low (range: 17–50%). Conclusions MELD, DF, GAHS, ABIC and scores of corticosteroid response proved to be valid in an independent cohort of biopsy-proven alcoholic hepatitis. MELD modifications incorporating sodium did not confer any prognostic advantage over the classical MELD. Based on their excellent NPVs, the models are best used to identify patients at low risk of death. PMID:24612165

  13. Accuracy of Genomic Selection in a Rice Synthetic Population Developed for Recurrent Selection Breeding

    PubMed Central

    Ospina, Yolima; Quintero, Constanza; Châtel, Marc Henri; Tohme, Joe; Courtois, Brigitte

    2015-01-01

    Genomic selection (GS) is a promising strategy for enhancing genetic gain. We investigated the accuracy of genomic estimated breeding values (GEBV) in four inter-related synthetic populations that underwent several cycles of recurrent selection in an upland rice-breeding program. A total of 343 S2:4 lines extracted from those populations were phenotyped for flowering time, plant height, grain yield and panicle weight, and genotyped with an average density of one marker per 44.8 kb. The relative effect of the linkage disequilibrium (LD) and minor allele frequency (MAF) thresholds for selecting markers, the relative size of the training population (TP) and of the validation population (VP), the selected trait and the genomic prediction models (frequentist and Bayesian) on the accuracy of GEBVs was investigated in 540 cross validation experiments with 100 replicates. The effect of kinship between the training and validation populations was tested in an additional set of 840 cross validation experiments with a single genomic prediction model. LD was high (average r2 = 0.59 at 25 kb) and decreased slowly, distribution of allele frequencies at individual loci was markedly skewed toward unbalanced frequencies (MAF average value 15.2% and median 9.6%), and differentiation between the four synthetic populations was low (FST ≤0.06). The accuracy of GEBV across all cross validation experiments ranged from 0.12 to 0.54 with an average of 0.30. Significant differences in accuracy were observed among the different levels of each factor investigated. Phenotypic traits had the biggest effect, and the size of the incidence matrix had the smallest. Significant first degree interaction was observed for GEBV accuracy between traits and all the other factors studied, and between prediction models and LD, MAF and composition of the TP. The potential of GS to accelerate genetic gain and breeding options to increase the accuracy of predictions are discussed. PMID:26313446
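
    A compact illustration of the GEBV cross-validation workflow, assuming scikit-learn: ridge regression on markers (an RR-BLUP-like predictor) is evaluated by replicated 5-fold cross-validation, with accuracy taken as the correlation between predicted GEBVs and observed phenotypes in the validation folds. The marker matrix and trait architecture are simulated, not the rice data.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(7)
      n_lines, n_markers, n_qtl = 343, 2000, 50
      M = rng.binomial(2, 0.2, size=(n_lines, n_markers)).astype(float)  # marker dosages
      effects = np.zeros(n_markers)
      effects[rng.choice(n_markers, n_qtl, replace=False)] = rng.normal(0, 1, n_qtl)
      g = M @ effects
      pheno = g + rng.normal(0, g.std(), n_lines)       # heritability around 0.5

      accs = []
      for rep in range(10):                             # replicated 5-fold CV
          for train, valid in KFold(5, shuffle=True, random_state=rep).split(M):
              gebv = Ridge(alpha=n_markers).fit(M[train], pheno[train]).predict(M[valid])
              accs.append(np.corrcoef(gebv, pheno[valid])[0, 1])
      print(f"mean GEBV accuracy: {np.mean(accs):.2f} +/- {np.std(accs):.2f}")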

  14. Accuracy of Genomic Selection in a Rice Synthetic Population Developed for Recurrent Selection Breeding.

    PubMed

    Grenier, Cécile; Cao, Tuong-Vi; Ospina, Yolima; Quintero, Constanza; Châtel, Marc Henri; Tohme, Joe; Courtois, Brigitte; Ahmadi, Nourollah

    2015-01-01

    Genomic selection (GS) is a promising strategy for enhancing genetic gain. We investigated the accuracy of genomic estimated breeding values (GEBV) in four inter-related synthetic populations that underwent several cycles of recurrent selection in an upland rice-breeding program. A total of 343 S2:4 lines extracted from those populations were phenotyped for flowering time, plant height, grain yield and panicle weight, and genotyped with an average density of one marker per 44.8 kb. The relative effect of the linkage disequilibrium (LD) and minor allele frequency (MAF) thresholds for selecting markers, the relative size of the training population (TP) and of the validation population (VP), the selected trait and the genomic prediction models (frequentist and Bayesian) on the accuracy of GEBVs was investigated in 540 cross validation experiments with 100 replicates. The effect of kinship between the training and validation populations was tested in an additional set of 840 cross validation experiments with a single genomic prediction model. LD was high (average r2 = 0.59 at 25 kb) and decreased slowly, distribution of allele frequencies at individual loci was markedly skewed toward unbalanced frequencies (MAF average value 15.2% and median 9.6%), and differentiation between the four synthetic populations was low (FST ≤0.06). The accuracy of GEBV across all cross validation experiments ranged from 0.12 to 0.54 with an average of 0.30. Significant differences in accuracy were observed among the different levels of each factor investigated. Phenotypic traits had the biggest effect, and the size of the incidence matrix had the smallest. Significant first degree interaction was observed for GEBV accuracy between traits and all the other factors studied, and between prediction models and LD, MAF and composition of the TP. The potential of GS to accelerate genetic gain and breeding options to increase the accuracy of predictions are discussed. PMID:26313446

  15. Cross validation of gas chromatography-flame photometric detection and gas chromatography-mass spectrometry methods for measuring dialkylphosphate metabolites of organophosphate pesticides in human urine.

    PubMed

    Prapamontol, Tippawan; Sutan, Kunrunya; Laoyang, Sompong; Hongsibsong, Surat; Lee, Grace; Yano, Yukiko; Hunter, Ronald Elton; Ryan, P Barry; Barr, Dana Boyd; Panuwet, Parinya

    2014-01-01

    We report two analytical methods for the measurement of dialkylphosphate (DAP) metabolites of organophosphate pesticides in human urine. These methods were independently developed/modified and implemented in two separate laboratories and cross-validated. The aim was to develop simple, cost-effective, and reliable methods that could use available resources and sample matrices in Thailand and the United States. While several methods already exist, we found that direct application of these methods required modification of sample preparation and chromatographic conditions to render accurate, reliable data. The problems encountered with existing methods were attributable to urinary matrix interferences and differences in the pH of urine samples and reagents used during the extraction and derivatization processes. Thus, we provide information on key parameters that require attention during method modification and execution and that affect the ruggedness of the methods. The methods presented here employ gas chromatography (GC) coupled with either flame photometric detection (FPD) or electron impact ionization-mass spectrometry (EI-MS) with isotopic dilution quantification. The limits of detection ranged from 0.10 ng/mL urine to 2.5 ng/mL urine (for GC-FPD), while the limits of quantification ranged from 0.25 ng/mL urine to 2.5 ng/mL urine (for GC-MS), for all six common DAP metabolites (i.e., dimethylphosphate, dimethylthiophosphate, dimethyldithiophosphate, diethylphosphate, diethylthiophosphate, and diethyldithiophosphate). Each method showed a relative recovery range of 94-119% (for GC-FPD) and 92-103% (for GC-MS), and relative standard deviations (RSD) of less than 20%. Cross-validation was performed on the same set of urine samples (n=46) collected from pregnant women residing in the agricultural areas of northern Thailand. The results from split-sample analysis from both laboratories agreed well for each metabolite, suggesting that each method can produce

  16. Neural network accuracy measures and data transforms applied to the seismic parameter estimation problem

    SciTech Connect

    Glover, C.W.; Barhen, J.; Aminzadeh, F.; Toomarian, N.B.

    1997-01-01

    The accuracy of an artificial neural network (ANN) algorithm is a crucial issue in the estimation of an oil field reservoir's properties from remotely sensed seismic data. This paper demonstrates the use of the k-fold cross validation technique to obtain confidence bounds on an ANN's accuracy statistic from a finite sample set. In addition, we also show that an ANN's classification accuracy is dramatically improved by transforming the ANN's input feature space to a dimensionally smaller, new input space. The new input space represents a feature space that maximizes the linear separation between classes. Thus, the ANN's convergence time and accuracy are improved because the ANN must merely find nonlinear perturbations to the starting linear decision boundaries. These techniques for estimating ANN accuracy bounds and feature space transformations are demonstrated on the problem of estimating the sand thickness in an oil field reservoir based only on remotely sensed seismic data.
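
    One common way to attach confidence bounds to a cross-validated accuracy statistic is a t-interval over the fold accuracies (approximate, since folds share training data). The sketch below, assuming scikit-learn and SciPy, applies it to a small neural network with and without a linear-discriminant projection of the inputs, in the spirit of the feature-space transform described; the data and model settings are invented.

      import numpy as np
      from scipy import stats
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import StratifiedKFold
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(8)
      n = 300
      X = rng.normal(size=(n, 20))                       # stand-in seismic attributes
      y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)

      def cv_accuracy_ci(model, X, y, k=10, alpha=0.05):
          """k-fold accuracies with a t-based confidence interval on their mean."""
          folds = StratifiedKFold(k, shuffle=True, random_state=0)
          accs = np.array([model.fit(X[tr], y[tr]).score(X[te], y[te])
                           for tr, te in folds.split(X, y)])
          half = stats.t.ppf(1 - alpha / 2, k - 1) * accs.std(ddof=1) / np.sqrt(k)
          return accs.mean(), half

      def ann():
          return MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)

      # Projecting onto linear-discriminant axes first shrinks the input space,
      # leaving the network to learn only nonlinear corrections.
      for name, model in [("raw features", ann()),
                          ("LDA-transformed", make_pipeline(LinearDiscriminantAnalysis(), ann()))]:
          mean, half = cv_accuracy_ci(model, X, y)
          print(f"{name}: accuracy {mean:.3f} +/- {half:.3f}")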

  17. FDDS: A Cross Validation Study.

    ERIC Educational Resources Information Center

    Sawyer, Judy Parsons

    The Family Drawing Depression Scale (FDDS) was created by Wright and McIntyre to provide a clear and reliable scoring method for the Kinetic Family Drawing as a procedure for detecting depression. A study was conducted to confirm the value of the FDDS as a systematic tool for interpreting family drawings with populations of depressed individuals.…

  18. Cross-validation of IFN-γ Elispot assay for measuring alloreactive memory/effector T cell responses in renal transplant recipients.

    PubMed

    Bestard, O; Crespo, E; Stein, M; Lúcia, M; Roelen, D L; de Vaal, Y J; Hernandez-Fuentes, M P; Chatenoud, L; Wood, K J; Claas, F H; Cruzado, J M; Grinyó, J M; Volk, H D; Reinke, P

    2013-07-01

    Assessment of donor-specific alloreactive memory/effector T cell responses using an IFN-γ Elispot assay has been suggested as a novel immune-monitoring tool for evaluating cellular immune risk in renal transplantation. Here, we report the cross-validation data of the IFN-γ Elispot assay performed within different European laboratories taking part in the EU RISET consortium. For this purpose, the main objectives were the development of a standard operating procedure (SOP) and comparisons of readings of IFN-γ plates to assess the intra- and interlaboratory variability of the assay with allogeneic or peptide stimuli in both healthy and kidney transplant individuals. We show that use of the same SOP and count settings of the Elispot bioreader allows a low coefficient of variation between laboratories. Frozen and shipped samples display slightly lower detectable IFN-γ frequencies than fresh samples. Importantly, a close correlation between different laboratories is obtained when measuring high frequencies of antigen-specific primed/memory T cell alloresponses. Interestingly, significantly high donor-specific alloreactive T cell responses can be similarly detected among different laboratories in kidney transplant patients displaying histological patterns of acute T cell-mediated rejection. In conclusion, assessment of circulating alloreactive memory/effector T cells using an IFN-γ Elispot assay can be accurately achieved using the same SOP, Elispot bioreader and experienced technicians in kidney transplantation. PMID:23763435

  19. Sediment transport patterns in the San Francisco Bay Coastal System from cross-validation of bedform asymmetry and modeled residual flux

    USGS Publications Warehouse

    Barnard, Patrick L.; Erikson, Li H.; Elias, Edwin P.L.; Dartnell, Peter

    2013-01-01

    The morphology of ~ 45,000 bedforms from 13 multibeam bathymetry surveys was used as a proxy for identifying net bedload sediment transport directions and pathways throughout the San Francisco Bay estuary and adjacent outer coast. The spatially-averaged shape asymmetry of the bedforms reveals distinct pathways of ebb and flood transport. Additionally, the region-wide, ebb-oriented asymmetry of 5% suggests net seaward-directed transport within the estuarine-coastal system, with significant seaward asymmetry at the mouth of San Francisco Bay (11%), through the northern reaches of the Bay (7-8%), and among the largest bedforms (21% for λ > 50 m). This general indication for the net transport of sand to the open coast strongly suggests that anthropogenic removal of sediment from the estuary, particularly along clearly defined seaward transport pathways, will limit the supply of sand to chronically eroding, open-coast beaches. The bedform asymmetry measurements significantly agree (up to ~ 76%) with modeled annual residual transport directions derived from a hydrodynamically-calibrated numerical model, and the orientation of adjacent, flow-sculpted seafloor features such as mega-flute structures, providing a comprehensive validation of the technique. The methods described in this paper to determine well-defined, cross-validated sediment transport pathways can be applied to estuarine-coastal systems globally where bedforms are present. The results can inform and improve regional sediment management practices to more efficiently utilize often limited sediment resources and mitigate current and future sediment supply-related impacts.

  20. Sediment transport patterns in the San Francisco Bay Coastal System from cross-validation of bedform asymmetry and modeled residual flux

    USGS Publications Warehouse

    Barnard, Patrick L.; Erikson, Li H.; Elias, Edwin P.L.; Dartnell, Peter

    2013-01-01

    The morphology of ~ 45,000 bedforms from 13 multibeam bathymetry surveys was used as a proxy for identifying net bedload sediment transport directions and pathways throughout the San Francisco Bay estuary and adjacent outer coast. The spatially-averaged shape asymmetry of the bedforms reveals distinct pathways of ebb and flood transport. Additionally, the region-wide, ebb-oriented asymmetry of 5% suggests net seaward-directed transport within the estuarine-coastal system, with significant seaward asymmetry at the mouth of San Francisco Bay (11%), through the northern reaches of the Bay (7–8%), and among the largest bedforms (21% for λ > 50 m). This general indication for the net transport of sand to the open coast strongly suggests that anthropogenic removal of sediment from the estuary, particularly along clearly defined seaward transport pathways, will limit the supply of sand to chronically eroding, open-coast beaches. The bedform asymmetry measurements significantly agree (up to ~ 76%) with modeled annual residual transport directions derived from a hydrodynamically-calibrated numerical model, and the orientation of adjacent, flow-sculpted seafloor features such as mega-flute structures, providing a comprehensive validation of the technique. The methods described in this paper to determine well-defined, cross-validated sediment transport pathways can be applied to estuarine-coastal systems globally where bedforms are present. The results can inform and improve regional sediment management practices to more efficiently utilize often limited sediment resources and mitigate current and future sediment supply-related impacts.

  1. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C., Jr.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, G.G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
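
    The resubstitution-versus-cross-validation gap reported here is easy to demonstrate: an unpruned classification tree scores almost perfectly on its own training data, while cross-validation tracks accuracy on an independent evaluation set far more closely. A sketch with simulated presence/absence data (not the lichen data), assuming scikit-learn:

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(9)

      def sample(n):
          X = rng.normal(size=(n, 10))                  # stand-in environmental predictors
          y = (X[:, 0] + X[:, 1] + rng.normal(0, 1.5, n) > 0).astype(int)  # noisy presence
          return X, y

      X, y = sample(400)
      tree = DecisionTreeClassifier(random_state=0).fit(X, y)
      resub = tree.score(X, y)                           # resubstitution: test on training data
      cv = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10).mean()
      X_eval, y_eval = sample(4000)                      # large independent evaluation set
      print(f"resubstitution {resub:.2f} | 10-fold CV {cv:.2f} | independent {tree.score(X_eval, y_eval):.2f}")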

  2. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with EURO-CORDEX experiment- and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data
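
    The experiment's 5-fold design with consecutive 6-year test blocks differs from ordinary shuffled k-fold in that whole periods are held out. A minimal sketch of such block folds; only the 1979-2008 span is taken from the text.

      import numpy as np

      def consecutive_block_folds(years, n_folds=5):
          """Yield (train_idx, test_idx) pairs where each test fold is a block of
          consecutive calendar years, as in a 5-fold split of 1979-2008."""
          for block in np.array_split(np.unique(years), n_folds):
              test = np.isin(years, block)
              yield np.where(~test)[0], np.where(test)[0]

      dates = np.arange('1979-01-01', '2009-01-01', dtype='datetime64[D]')
      years = dates.astype('datetime64[Y]').astype(int) + 1970
      for k, (train, test) in enumerate(consecutive_block_folds(years), 1):
          print(f"fold {k}: test years {years[test].min()}-{years[test].max()}, "
                f"{test.size} days held out")

    Holding out consecutive years rather than random days keeps the validation sample serially independent of the training sample, which matters for autocorrelated daily weather series.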

  3. Methane cross-validation between three Fourier transform spectrometers: SCISAT ACE-FTS, GOSAT TANSO-FTS, and ground-based FTS measurements in the Canadian high Arctic

    NASA Astrophysics Data System (ADS)

    Holl, Gerrit; Walker, Kaley A.; Conway, Stephanie; Saitoh, Naoko; Boone, Chris D.; Strong, Kimberly; Drummond, James R.

    2016-05-01

    We present cross-validation of remote sensing measurements of methane profiles in the Canadian high Arctic. Accurate and precise measurements of methane are essential to understand quantitatively its role in the climate system and in global change. Here, we show a cross-validation between three data sets: two from spaceborne instruments and one from a ground-based instrument. All are Fourier transform spectrometers (FTSs). We consider the Canadian SCISAT Atmospheric Chemistry Experiment (ACE)-FTS, a solar occultation infrared spectrometer operating since 2004, and the thermal infrared band of the Japanese Greenhouse Gases Observing Satellite (GOSAT) Thermal And Near infrared Sensor for carbon Observation (TANSO)-FTS, a nadir/off-nadir scanning FTS instrument operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker 125HR Fourier transform infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) Ridge Laboratory at Eureka, Nunavut (80° N, 86° W) since 2006. For each pair of instruments, measurements are collocated within 500 km and 24 h. An additional collocation criterion based on potential vorticity values was found not to significantly affect differences between measurements. Profiles are regridded to a common vertical grid for each comparison set. To account for differing vertical resolutions, ACE-FTS measurements are smoothed to the resolution of either PEARL-FTS or TANSO-FTS, and PEARL-FTS measurements are smoothed to the TANSO-FTS resolution. Differences for each pair are examined in terms of profile and partial columns. During the period considered, the number of collocations for each pair is large enough to obtain a good sample size (from several hundred to tens of thousands depending on pair and configuration). Considering full profiles, the degrees of freedom for signal (DOFS) are between 0.2 and 0.7 for TANSO-FTS and
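
    The smoothing step described here is conventionally the Rodgers and Connor comparison formula, x_s = x_a + A (x_h - x_a), which degrades a high-resolution profile x_h to a coarser instrument's resolution using that instrument's a priori x_a and averaging-kernel matrix A. A minimal sketch with an invented 10-level kernel; the real ACE-FTS/TANSO-FTS kernels and grids differ.

      import numpy as np

      def smooth_profile(x_high, x_apriori, avg_kernel):
          """Rodgers & Connor smoothing: x_s = x_a + A (x_high - x_a)."""
          return x_apriori + avg_kernel @ (x_high - x_apriori)

      n = 10
      A = np.zeros((n, n))
      for i in range(n):                        # toy kernel blurring adjacent levels;
          lo, hi = max(0, i - 1), min(n, i + 2) # each row sums to 0.8 (imperfect
          A[i, lo:hi] = 0.8 / (hi - lo)         # sensitivity to the true state)
      x_a = np.full(n, 1800.0)                  # a priori CH4 profile, ppb
      x_high = 1800 + 60 * np.sin(np.linspace(0, np.pi, n))
      print(smooth_profile(x_high, x_a, A).round(1))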

  4. Methane cross-validation between three Fourier Transform Spectrometers: SCISAT ACE-FTS, GOSAT TANSO-FTS, and ground-based FTS measurements in the Canadian high Arctic

    NASA Astrophysics Data System (ADS)

    Holl, G.; Walker, K. A.; Conway, S.; Saitoh, N.; Boone, C. D.; Strong, K.; Drummond, J. R.

    2015-12-01

    We present cross-validation of remote sensing measurements of methane profiles in the Canadian high Arctic. Accurate and precise measurements of methane are essential to understand quantitatively its role in the climate system and in global change. Here, we show a cross-validation between three datasets: two from spaceborne instruments and one from a ground-based instrument. All are Fourier Transform Spectrometers (FTSs). We consider the Canadian SCISAT Atmospheric Chemistry Experiment (ACE)-FTS, a solar occultation infrared spectrometer operating since 2004, and the thermal infrared band of the Japanese Greenhouse Gases Observing Satellite (GOSAT) Thermal And Near infrared Sensor for carbon Observation (TANSO)-FTS, a nadir/off-nadir scanning FTS instrument operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker 125HR Fourier Transform Infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) Ridge Lab at Eureka, Nunavut (80° N, 86° W) since 2006. For each pair of instruments, measurements are collocated within 500 km and 24 h. An additional criterion based on potential vorticity values was found not to significantly affect differences between measurements. Profiles are regridded to a common vertical grid for each comparison set. To account for differing vertical resolutions, ACE-FTS measurements are smoothed to the resolution of either PEARL-FTS or TANSO-FTS, and PEARL-FTS measurements are smoothed to the TANSO-FTS resolution. Differences for each pair are examined in terms of profile and partial columns. During the period considered, the number of collocations for each pair is large enough to obtain a good sample size (from several hundred to tens of thousands depending on pair and configuration). Considering full profiles, the degrees of freedom for signal (DOFS) are between 0.2 and 0.7 for TANSO-FTS and between 1.5 and 3

  5. Relative accuracy evaluation.

    PubMed

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. Moreover, because the accuracy of a whole data set may be low while that of a useful subset is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a suitable measure nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the relative accuracy of their results. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
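
    For a single query, the relative accuracy described here reduces to the precision and recall of the returned rows against what a fully accurate database would return. A minimal sketch with invented row identifiers:

      def query_relative_accuracy(returned, correct):
          """Precision and recall of a query result against ground truth."""
          returned, correct = set(returned), set(correct)
          tp = len(returned & correct)                   # rows both returned and correct
          precision = tp / len(returned) if returned else 0.0
          recall = tp / len(correct) if correct else 0.0
          return precision, recall

      # ids the query returned vs. ids an error-free table would have yielded
      print(query_relative_accuracy({1, 2, 3, 5, 8}, {1, 2, 3, 4, 8, 9}))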

  6. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. Moreover, because the accuracy of a whole data set may be low while that of a useful subset is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a suitable measure nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the relative accuracy of their results. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752

  7. Influence of outliers on accuracy estimation in genomic prediction in plant breeding.

    PubMed

    Estaghvirou, Sidi Boubacar Ould; Ogutu, Joseph O; Piepho, Hans-Peter

    2014-12-01

    Outliers often pose problems in analyses of data in plant breeding, but their influence on the performance of methods for estimating predictive accuracy in genomic prediction studies has not yet been evaluated. Here, we evaluate the influence of outliers on the performance of methods for accuracy estimation in genomic prediction studies using simulation. We simulated 1000 datasets for each of 10 scenarios, defined by the number of genotypes, the marker effect variance, and the magnitude of outliers, to evaluate the influence of outliers on the performance of seven methods for estimating accuracy. To mimic outliers, we added, in turn, 5, 8, and 10 times the error SD used to simulate small and large phenotypic datasets to one observation in each simulated dataset. The effect of outliers on accuracy estimation was evaluated by comparing deviations of the estimated from the true accuracies for datasets with and without outliers. Outliers adversely influenced accuracy estimation, more so at small values of genetic variance or numbers of genotypes. A method for estimating heritability and predictive accuracy in plant breeding and another used to estimate accuracy in animal breeding were the most accurate and resistant to outliers across all scenarios and are therefore preferable for accuracy estimation in genomic prediction studies. The performance of the other five methods, all of which use cross-validation, was less consistent and varied widely across scenarios. The computing time of the methods increased as the size of outliers and the sample size increased and the genetic variance decreased. PMID:25273862
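
    The contamination scheme is easy to reproduce in outline: spike a single observation by a multiple of the error SD and watch the cross-validated accuracy estimate move. The sketch below uses a ridge-regression genomic predictor on simulated data; it illustrates the design, not the seven estimation methods compared in the paper.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import KFold

      def cv_accuracy(M, pheno, k=5):
          accs = []
          for train, valid in KFold(k, shuffle=True, random_state=0).split(M):
              pred = Ridge(alpha=100.0).fit(M[train], pheno[train]).predict(M[valid])
              accs.append(np.corrcoef(pred, pheno[valid])[0, 1])
          return np.mean(accs)

      rng = np.random.default_rng(10)
      n, p = 100, 500
      M = rng.binomial(2, 0.3, (n, p)).astype(float)     # simulated marker matrix
      g = M @ rng.normal(0, 0.1, p)                      # true genetic values
      err_sd = g.std()                                   # error SD giving h2 ~ 0.5
      pheno = g + rng.normal(0, err_sd, n)

      clean = cv_accuracy(M, pheno)
      for mult in (5, 8, 10):                            # outlier sizes from the paper
          spiked = pheno.copy()
          spiked[0] += mult * err_sd                     # contaminate one observation
          print(f"outlier {mult}x error SD: accuracy {cv_accuracy(M, spiked):.3f} (clean {clean:.3f})")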

  8. Determination of snow avalanche return periods using a tree-ring based reconstruction in the French Alps: cross validation with the predictions of a statistical-dynamical model

    NASA Astrophysics Data System (ADS)

    Schläppy, Romain; Eckert, Nicolas; Jomelli, Vincent; Grancher, Delphine; Brunstein, Daniel; Stoffel, Markus; Naaim, Mohamed

    2013-04-01

    … rare events, i.e. to the tail of the local runout distance distribution. Furthermore, good agreement exists with the statistical-numerical model's predictions, i.e. a 10-40 m difference for return periods ranging between 10 and 300 years, which is rather small with regard to the uncertainty levels to be considered in avalanche probabilistic modeling and dendrochronological reconstructions. It is important to note that such a cross-validation on independent extreme predictions has never been undertaken before. It suggests that (i) dendrochronological reconstruction can provide valuable information for anticipating future extreme avalanche events in the context of risk management and, in turn, that (ii) the statistical-numerical model, when properly calibrated, can be used with reasonable confidence to refine these predictions, with, for instance, evaluation of pressure and flow depth distributions at each position of the runout zone. A strong sensitivity to the determination of local avalanche and dendrological record frequencies is highlighted, however, indicating that this is an essential step for an accurate probabilistic characterization of large-extent events.

  9. Evaluating the accuracy of diffusion MRI models in white matter.

    PubMed

    Rokem, Ariel; Yeatman, Jason D; Pestilli, Franco; Kay, Kendrick N; Mezer, Aviv; van der Walt, Stefan; Wandell, Brian A

    2015-01-01

    Models of diffusion MRI within a voxel are useful for making inferences about the properties of the tissue and inferring fiber orientation distribution used by tractography algorithms. A useful model must fit the data accurately. However, evaluations of model-accuracy of commonly used models have not been published before. Here, we evaluate model-accuracy of the two main classes of diffusion MRI models. The diffusion tensor model (DTM) summarizes diffusion as a 3-dimensional Gaussian distribution. Sparse fascicle models (SFM) summarize the signal as a sum of signals originating from a collection of fascicles oriented in different directions. We use cross-validation to assess model-accuracy at different gradient amplitudes (b-values) throughout the white matter. Specifically, we fit each model to all the white matter voxels in one data set and then use the model to predict a second, independent data set. This is the first evaluation of model-accuracy of these models. In most of the white matter the DTM predicts the data more accurately than test-retest reliability; SFM model-accuracy is higher than test-retest reliability and also higher than the DTM model-accuracy, particularly for measurements with (a) a b-value above 1000 in locations containing fiber crossings, and (b) in the regions of the brain surrounding the optic radiations. The SFM also has better parameter-validity: it more accurately estimates the fiber orientation distribution function (fODF) in each voxel, which is useful for fiber tracking. PMID:25879933

  10. Evaluating the Accuracy of Diffusion MRI Models in White Matter

    PubMed Central

    Rokem, Ariel; Yeatman, Jason D.; Pestilli, Franco; Kay, Kendrick N.; Mezer, Aviv; van der Walt, Stefan; Wandell, Brian A.

    2015-01-01

    Models of diffusion MRI within a voxel are useful for making inferences about the properties of the tissue and inferring fiber orientation distribution used by tractography algorithms. A useful model must fit the data accurately. However, evaluations of model-accuracy of commonly used models have not been published before. Here, we evaluate model-accuracy of the two main classes of diffusion MRI models. The diffusion tensor model (DTM) summarizes diffusion as a 3-dimensional Gaussian distribution. Sparse fascicle models (SFM) summarize the signal as a sum of signals originating from a collection of fascicles oriented in different directions. We use cross-validation to assess model-accuracy at different gradient amplitudes (b-values) throughout the white matter. Specifically, we fit each model to all the white matter voxels in one data set and then use the model to predict a second, independent data set. This is the first evaluation of model-accuracy of these models. In most of the white matter the DTM predicts the data more accurately than test-retest reliability; SFM model-accuracy is higher than test-retest reliability and also higher than the DTM model-accuracy, particularly for measurements with (a) a b-value above 1000 in locations containing fiber crossings, and (b) in the regions of the brain surrounding the optic radiations. The SFM also has better parameter-validity: it more accurately estimates the fiber orientation distribution function (fODF) in each voxel, which is useful for fiber tracking. PMID:25879933

  11. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  12. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to also include systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may significantly enhance the effect of overlay mark asymmetry and lead to metrology inaccuracy of ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than that of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by a recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  13. Accuracy Assessment of a Uav-Based Landslide Monitoring System

    NASA Astrophysics Data System (ADS)

    Peppa, M. V.; Mills, J. P.; Moore, P.; Miller, P. E.; Chambers, J. E.

    2016-06-01

    Landslides are hazardous events with often disastrous consequences. Monitoring landslides with observations of high spatio-temporal resolution can help mitigate such hazards. Mini unmanned aerial vehicles (UAVs) complemented by structure-from-motion (SfM) photogrammetry and modern per-pixel image matching algorithms can deliver a time-series of landslide elevation models in an automated and inexpensive way. This research investigates the potential of a mini UAV, equipped with a Panasonic Lumix DMC-LX5 compact camera, to provide surface deformations at acceptable levels of accuracy for landslide assessment. The study adopts a self-calibrating bundle adjustment-SfM pipeline using ground control points (GCPs). It evaluates misalignment biases and unresolved systematic errors that are transferred through the SfM process into the derived elevation models. To cross-validate the research outputs, results are compared to benchmark observations obtained by standard surveying techniques. The data is collected with 6 cm ground sample distance (GSD) and is shown to achieve planimetric and vertical accuracy of a few centimetres at independent check points (ICPs). The co-registration error of the generated elevation models is also examined in areas of stable terrain. Through this error assessment, the study estimates that the vertical sensitivity to real terrain change of the tested landslide is equal to 9 cm.

  14. Atomic-accuracy models from 4.5-Å cryo-electron microscopy data with density-guided iterative local refinement.

    PubMed

    DiMaio, Frank; Song, Yifan; Li, Xueming; Brunner, Matthias J; Xu, Chunfu; Conticello, Vincent; Egelman, Edward; Marlovits, Thomas C; Cheng, Yifan; Baker, David

    2015-04-01

    We describe a general approach for refining protein structure models on the basis of cryo-electron microscopy maps with near-atomic resolution. The method integrates Monte Carlo sampling with local density-guided optimization, Rosetta all-atom refinement and real-space B-factor fitting. In tests on experimental maps of three different systems with 4.5-Å resolution or better, the method consistently produced models with atomic-level accuracy largely independently of starting-model quality, and it outperformed the molecular dynamics-based MDFF method. Cross-validated model quality statistics correlated with model accuracy over the three test systems. PMID:25707030

  15. Interoceptive accuracy and panic.

    PubMed

    Zoellner, L A; Craske, M G

    1999-12-01

    Psychophysiological models of panic hypothesize that panickers focus attention on and become anxious about the physical sensations associated with panic. Attention to internal somatic cues has been labeled interoception. The present study examined the effects of physiological arousal and subjective anxiety on interoceptive accuracy. Infrequent panickers and nonanxious participants participated in an initial baseline to examine overall interoceptive accuracy. Next, participants ingested caffeine, about which they received either safety or no safety information. Using a mental heartbeat tracking paradigm, participants' counts of their heartbeats during specific time intervals were coded based on polygraph measures. Infrequent panickers were more accurate in the perception of their heartbeats than nonanxious participants. Changes in physiological arousal were not associated with increased accuracy on the heartbeat perception task. However, higher levels of self-reported anxiety were associated with superior performance. PMID:10596462
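
    For readers unfamiliar with how heartbeat tracking is scored, the sketch below shows one common scheme (after Schandry's mental tracking task): accuracy is 1 minus the normalized counting error, averaged over intervals. This is a hedged illustration with made-up numbers, not necessarily the exact coding used in this study.

```python
# One common heartbeat-perception score: 1 - |actual - counted| / actual,
# averaged across counting intervals (1.0 would be a perfect count).
# The interval data below are hypothetical.
def heartbeat_accuracy(counted, actual):
    scores = [1 - abs(a - c) / a for c, a in zip(counted, actual)]
    return sum(scores) / len(scores)

# Hypothetical participant: reported counts vs. polygraph-derived beats.
print(f"{heartbeat_accuracy(counted=[28, 41, 55], actual=[32, 45, 62]):.2f}")
```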

  16. Accuracy of deception judgments.

    PubMed

    Bond, Charles F; DePaulo, Bella M

    2006-01-01

    We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature. PMID:16859438

  17. Predicting the likelihood of future sexual recidivism: pilot study findings from a California sex offender risk project and cross-validation of the Static-99.

    PubMed

    Sreenivasan, Shoba; Garrick, Thomas; Norris, Randall; Cusworth-Walker, Sarah; Weinberger, Linda E; Essres, Garrett; Turner, Susan; Fain, Terry

    2007-01-01

    Pilot findings on 137 California sex offenders followed up over 10 years after release from custody (excluding cases in which legal jurisdiction expired) are presented. The sexual recidivism rate, very likely inflated by sample selection, was 31 percent at five years and 40 percent at 10 years. Cumulatively, markers of sexual deviance (multiple victim types) and criminality (prior parole violations and prison terms) yielded better prediction of sexual recidivism (receiver operating characteristic [ROC] = .71, r = .46) than either marker alone (multiple victim types: ROC = .60, r = .31; prior parole violations and prison terms: ROC = .66, r = .37). Long-term Static-99 statistical predictive accuracy for sexual recidivism was lower in our sample (ROC = .62, r = .24) than the values presented in the developmental norms. Sexual recidivism rates were higher in our study for Static-99 scores of 2 and 3 than in the developmental sample, and lower for scores of 4 and 6. Given failures to replicate developmental norms, the Static-99 method of ranking sexual recidivism risk warrants caution when applied to individual offenders. PMID:18086738
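
    The two indices reported above, the ROC area and the point-biserial r, can be computed as in the hedged sketch below; the scores and outcomes are randomly generated placeholders, not the study's data.

```python
# Illustrative computation of ROC area and point-biserial r for a
# risk score against a binary recidivism outcome (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
recidivated = rng.integers(0, 2, 137)              # hypothetical outcomes
score = recidivated + rng.normal(0, 1.5, 137)      # hypothetical risk scores

auc = roc_auc_score(recidivated, score)    # P(score_case > score_noncase)
r = np.corrcoef(recidivated, score)[0, 1]  # point-biserial correlation
print(f"ROC = {auc:.2f}, r = {r:.2f}")
```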

  18. Multiple dimensions of health locus of control in a representative population sample: ordinal factor analysis and cross-validation of an existing three and a new four factor model

    PubMed Central

    2011-01-01

    Background Based on the general approach of locus of control, health locus of control (HLOC) concerns control beliefs related to illness, sickness and health. HLOC research results provide an improved understanding of health-related behaviour and patients' compliance in medical care. HLOC research distinguishes between beliefs due to Internality, Externality powerful Others (POs) and Externality Chance. However, evidence for differentiating the POs dimension was found. Previous factor analyses used selected and predominantly clinical samples, while non-clinical studies are rare. The present study is the first analysis of the HLOC structure based on a large representative general population sample, providing important information for non-clinical research and public health care. Methods The standardised German questionnaire which assesses HLOC was used in a representative adult general population sample for a region in Northern Germany (N = 4,075). Data analyses used ordinal factor analyses in LISREL and Mplus. Alternative theory-driven models with one to four latent variables were compared using confirmatory factor analysis. Fit indices, chi-square difference tests, residuals and factor loadings were considered for model comparison. Exploratory factor analysis was used for further model development. Results were cross-validated by splitting the total sample randomly and using the cross-validation index. Results A model with four latent variables (Internality, Formal Help, Informal Help and Chance) best represented the HLOC construct (three-dimensional model: normed chi-square = 9.55; RMSEA = 0.066; CFI = 0.931; SRMR = 0.075; four-dimensional model: normed chi-square = 8.65; RMSEA = 0.062; CFI = 0.940; SRMR = 0.071; chi-square difference test: p < 0.001). After excluding one item, the superiority of the four- over the three-dimensional HLOC construct became very obvious (three-dimensional model: normed chi-square = 7.74; RMSEA = 0.059; CFI = 0.950; SRMR = 0.079; four

  19. Near surface geotechnical and geophysical data cross validated for site characterization applications. The cases of selected accelerometric stations in Crete island (Greece)

    NASA Astrophysics Data System (ADS)

    Loupasakis, Constantinos; Tsangaratos, Paraskevas; Rozos, Dimitrios; Rondoyianni, Theodora; Vafidis, Antonis; Steiakakis, Emanouil; Agioutantis, Zacharias; Savvaidis, Alexandros; Soupios, Pantelis; Papadopoulos, Ioannis; Papadopoulos, Nikos; Sarris, Apostolos; Mangriotis, Maria-Dafni; Dikmen, Unal

    2015-04-01

    The near surface ground conditions are highly important for the design of civil constructions. These conditions determine primarily the ability of the foundation formations to bear loads, the stress-strain relations and the corresponding deformations, as well as the soil amplification and corresponding peak ground motion in case of dynamic loading. The static and dynamic geotechnical parameters as well as the ground-type/soil-category can be determined by combining geotechnical and geophysical methods, such as engineering geological surface mapping, geotechnical drilling, in situ and laboratory testing and geophysical investigations. The above mentioned methods were combined for the site characterization in selected sites of the Hellenic Accelerometric Network (HAN) in the area of Crete Island. The combination of the geotechnical and geophysical methods in thirteen (13) sites provided sufficient information about their limitations, setting up the minimum test requirements in relation to the type of the geological formations. The reduced accuracy of the surface mapping in urban sites, the uncertainties introduced by the geophysical survey in sites with complex geology and the 1-D data provided by the geotechnical drills are some of the causes affecting the right order and the quantity of the necessary investigation methods. Through this study the gradual improvement in the accuracy of the site characterization data with regard to the applied investigation techniques is presented by providing characteristic examples from the total number of thirteen sites. As an example of the gradual improvement of the knowledge about the ground conditions, the case of the AGN1 strong motion station, located at Agios Nikolaos city (Eastern Crete), is briefly presented. According to the medium scale geological map of IGME the station was supposed to be founded over limestone. The detailed geological mapping revealed that a few meters of loose alluvial deposits occupy the area, expected

  20. Direct spectral analysis of tea samples using 266 nm UV pulsed laser-induced breakdown spectroscopy and cross validation of LIBS results with ICP-MS.

    PubMed

    Gondal, M A; Habibullah, Y B; Baig, Umair; Oloore, L E

    2016-05-15

    Tea is one of the most common and popular beverages spanning a vast array of cultures all over the world. The main nutritional benefits of drinking tea are its anti-oxidant properties, presumed protection against certain cancers, inhibition of inflammation and possible protective effects against diabetes. A laser-induced breakdown spectroscopy (LIBS) system was assembled as a powerful tool for qualitative and quantitative analysis of various brands of tea samples using a 266 nm pulsed UV laser. LIBS spectra for six brands of tea samples in the wavelength range of 200-900 nm were recorded and all elements present in our tea samples were identified. The major toxic elements detected in several brands of tea samples were bromine, chromium and minerals like iron, calcium, potassium and silicon. The spectral assignment was conducted prior to the determination of the concentration of each element. For quantitative analysis, calibration curves were drawn for each element using standard samples prepared at known concentrations in the tea matrix. The plasma parameters (electron temperature and electron density) were also determined prior to the spectroscopic analysis of the tea samples. The concentrations of iron, chromium, potassium, bromine, copper, silicon and calcium detected in all tea samples were between 378-656, 96-124, 1421-6785, 99-1476, 17-36, 2-11 and 92-130 mg L(-1), respectively. The limits of detection estimated for Fe, Cr, K, Br, Cu, Si and Ca in tea samples were 22, 12, 14, 11, 6, 1 and 12 mg L(-1), respectively. To further confirm the accuracy of our LIBS results, we determined the concentration of each element present in the tea samples using a standard analytical technique, ICP-MS. The concentrations detected with our LIBS system are in excellent agreement with the ICP-MS results. The system assembled for spectral analysis in this work could be highly applicable for testing the quality and purity of food and also pharmaceutical products. PMID:26992530

  1. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  2. Using Genetic Distance to Infer the Accuracy of Genomic Prediction.

    PubMed

    Scutari, Marco; Mackay, Ian; Balding, David

    2016-09-01

    The prediction of phenotypic traits using high-density genomic data has many applications, such as the selection of plants and animals of commercial interest, and it is expected to play an increasing role in medical diagnostics. Statistical models used for this task are usually tested using cross-validation, which implicitly assumes that new individuals (whose phenotypes we would like to predict) originate from the same population the genomic prediction model is trained on. In this paper we propose an approach based on clustering and resampling to investigate the effect of increasing genetic distance between training and target populations when predicting quantitative traits. This is important for plant and animal genetics, where genomic selection programs rely on the precision of predictions in future rounds of breeding. Therefore, estimating how quickly predictive accuracy decays is important in deciding which training population to use and how often the model has to be recalibrated. We find that the correlation between true and predicted values decays approximately linearly with respect to either FST or mean kinship between the training and the target populations. We illustrate this relationship using simulations and a collection of data sets from mice, wheat and human genetics. PMID:27589268
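
    The reported near-linear decay can be summarized with a simple fit of cross-validated accuracy against genetic distance, as in the sketch below; the distances and accuracies are invented for illustration.

```python
# Hedged sketch: fit accuracy (correlation between true and predicted
# phenotypes) as a linear function of F_ST between training and target.
import numpy as np

fst = np.array([0.00, 0.02, 0.05, 0.08, 0.12, 0.20])       # hypothetical
accuracy = np.array([0.71, 0.66, 0.60, 0.52, 0.45, 0.30])  # hypothetical

slope, intercept = np.polyfit(fst, accuracy, 1)
print(f"accuracy ~ {intercept:.2f} + ({slope:.2f}) * FST")
# The slope tells a breeding program how fast accuracy decays, and hence
# how often the genomic prediction model must be recalibrated.
```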

  3. Feasibility and Diagnostic Accuracy of Ischemic Stroke Territory Recognition Based on Two-Dimensional Projections of Three-Dimensional Diffusion MRI Data

    PubMed Central

    Wrosch, Jana Katharina; Volbers, Bastian; Gölitz, Philipp; Gilbert, Daniel Frederic; Schwab, Stefan; Dörfler, Arnd; Kornhuber, Johannes; Groemer, Teja Wolfgang

    2015-01-01

    This study was conducted to assess the feasibility and diagnostic accuracy of brain artery territory recognition based on geoprojected two-dimensional maps of diffusion MRI data in stroke patients. In this retrospective study, multiplanar diffusion MRI data of ischemic stroke patients was used to create a two-dimensional map of the entire brain. To guarantee correct representation of the stroke, a computer-aided brain artery territory diagnosis was developed and tested for its diagnostic accuracy. The test recognized the stroke-affected brain artery territory based on the position of the stroke in the map. The performance of the test was evaluated by comparing it to the reference standard of each patient’s diagnosed stroke territory on record. This study was designed and conducted according to Standards for Reporting of Diagnostic Accuracy (STARD). The statistical analysis included diagnostic accuracy parameters, cross-validation, and Youden Index optimization. After cross-validation on a cohort of 91 patients, the sensitivity of this territory diagnosis was 81% with a specificity of 87%. With this, the projection of strokes onto a two-dimensional map is accurate for representing the affected stroke territory and can be used to provide a static and printable overview of the diffusion MRI data. The projected map is compatible with other two-dimensional data such as EEG and will serve as a useful visualization tool. PMID:26635717
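
    The accuracy parameters named above follow directly from the confusion table, as in this minimal sketch; the example counts are chosen only to land near the reported 81% sensitivity and 87% specificity, and the Youden index shown is the quantity optimized in such analyses.

```python
# Sensitivity, specificity, and the Youden index J = sens + spec - 1
# from binary test results (hypothetical counts, not the study's data).
def diagnostic_accuracy(predicted, actual):
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum(not p and not a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(not p and a for p, a in zip(predicted, actual))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, sens + spec - 1

sens, spec, youden = diagnostic_accuracy(
    predicted=[True] * 40 + [False] * 9 + [True] * 5 + [False] * 37,
    actual=[True] * 49 + [False] * 42)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} J={youden:.2f}")
```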

  4. Accuracy in Judgments of Aggressiveness

    PubMed Central

    Kenny, David A.; West, Tessa V.; Cillessen, Antonius H. N.; Coie, John D.; Dodge, Kenneth A.; Hubbard, Julie A.; Schwartz, David

    2009-01-01

    Perceivers are both accurate and biased in their understanding of others. Past research has distinguished between three types of accuracy: generalized accuracy, a perceiver’s accuracy about how a target interacts with others in general; perceiver accuracy, a perceiver’s view of others corresponding with how the perceiver is treated by others in general; and dyadic accuracy, a perceiver’s accuracy about a target when interacting with that target. Researchers have proposed that there should be more dyadic than other forms of accuracy among well-acquainted individuals because of the pragmatic utility of forecasting the behavior of interaction partners. We examined behavioral aggression among well-acquainted peers. A total of 116 9-year-old boys rated how aggressive their classmates were toward other classmates. Subsequently, 11 groups of 6 boys each interacted in play groups, during which observations of aggression were made. Analyses indicated strong generalized accuracy yet little dyadic and perceiver accuracy. PMID:17575243

  5. Accuracy of tablet splitting.

    PubMed

    McDevitt, J T; Gurst, A H; Chen, Y

    1998-01-01

    We attempted to determine the accuracy of manually splitting hydrochlorothiazide tablets. Ninety-four healthy volunteers each split ten 25-mg hydrochlorothiazide tablets, which were then weighed using an analytical balance. Demographics, grip and pinch strength, digit circumference, and tablet-splitting experience were documented. Subjects were also surveyed regarding their willingness to pay a premium for commercially available, lower-dose tablets. Of 1752 manually split tablet portions, 41.3% deviated from ideal weight by more than 10% and 12.4% deviated by more than 20%. Gender, age, education, and tablet-splitting experience were not predictive of variability. Most subjects (96.8%) stated a preference for commercially produced, lower-dose tablets, and 77.2% were willing to pay more for them. For drugs with steep dose-response curves or narrow therapeutic windows, the differences we recorded could be clinically relevant. PMID:9469693

  6. Hydroxylation of the eukaryotic ribosomal decoding center affects translational accuracy

    PubMed Central

    Loenarz, Christoph; Sekirnik, Rok; Thalhammer, Armin; Ge, Wei; Spivakovsky, Ekaterina; Mackeen, Mukram M.; McDonough, Michael A.; Cockman, Matthew E.; Kessler, Benedikt M.; Ratcliffe, Peter J.; Wolf, Alexander; Schofield, Christopher J.

    2014-01-01

    The mechanisms by which gene expression is regulated by oxygen are of considerable interest from basic science and therapeutic perspectives. Using mass spectrometric analyses of Saccharomyces cerevisiae ribosomes, we found that the amino acid residue in closest proximity to the decoding center, Pro-64 of the 40S subunit ribosomal protein Rps23p (RPS23 Pro-62 in humans) undergoes posttranslational hydroxylation. We identify RPS23 hydroxylases as a highly conserved eukaryotic subfamily of Fe(II) and 2-oxoglutarate dependent oxygenases; their catalytic domain is closely related to transcription factor prolyl trans-4-hydroxylases that act as oxygen sensors in the hypoxic response in animals. The RPS23 hydroxylases in S. cerevisiae (Tpa1p), Schizosaccharomyces pombe and green algae catalyze an unprecedented dihydroxylation modification. This observation contrasts with higher eukaryotes, where RPS23 is monohydroxylated; the human Tpa1p homolog OGFOD1 catalyzes prolyl trans-3-hydroxylation. TPA1 deletion modulates termination efficiency up to ∼10-fold, including of pathophysiologically relevant sequences; we reveal Rps23p hydroxylation as its molecular basis. In contrast to most previously characterized accuracy modulators, including antibiotics and the prion state of the S. cerevisiae translation termination factor eRF3, Rps23p hydroxylation can either increase or decrease translational accuracy in a stop codon context-dependent manner. We identify conditions where Rps23p hydroxylation status determines viability as a consequence of nonsense codon suppression. The results reveal a direct link between oxygenase catalysis and the regulation of gene expression at the translational level. They will also aid in the development of small molecules altering translational accuracy for the treatment of genetic diseases linked to nonsense mutations. PMID:24550462

  7. Summarising and validating test accuracy results across multiple studies for use in clinical practice.

    PubMed

    Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J

    2015-06-15

    Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV. PMID:25800943
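
    The core of the second suggestion, tailoring post-test probabilities to the prevalence of the new population, is ordinary Bayes' theorem. The sketch below illustrates that step alone (the sensitivity, specificity, and prevalences are invented); it does not reproduce the authors' prediction-interval or calibration machinery.

```python
# Post-test probabilities (PPV, NPV) from summary sensitivity and
# specificity, tailored to the prevalence of a new population.
def post_test_probabilities(sens, spec, prevalence):
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

for prev in (0.05, 0.20, 0.50):  # same test, different populations
    ppv, npv = post_test_probabilities(sens=0.85, spec=0.90, prevalence=prev)
    print(f"prevalence={prev:.2f}: PPV={ppv:.2f}, NPV={npv:.2f}")
```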

  8. Atomic accuracy models from 4.5 Å cryo-electron microscopy data with density-guided iterative local refinement

    PubMed Central

    Li, Xueming; Brunner, Matthias J.; Xu, Chunfu; Conticello, Vincent; Egelman, Edward; Marlovits, Thomas; Cheng, Yifan; Baker, David

    2015-01-01

    Direct electron detectors have made it possible to generate electron density maps at near atomic resolution using cryo-electron microscopy single particle reconstructions. Critical current questions include how best to build models into these maps, how high quality a map is required to generate an accurate model, and how to cross-validate models in a system independent way. We describe a modeling approach that integrates Monte Carlo optimization with local density guided moves, Rosetta all-atom refinement, and real space B-factor fitting, yielding accurate models from experimental maps for three different systems with resolutions 4.5 Å or higher. We characterize model accuracy as a function of data quality, and present a model validation statistic that correlates with model accuracy over the three test systems. PMID:25707030

  9. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  10. Balancing Accuracy and Cost of Confinement Simulations by Interpolation and Extrapolation of Confinement Energies.

    PubMed

    Villemot, François; Capelli, Riccardo; Colombo, Giorgio; van der Vaart, Arjan

    2016-06-14

    Improvements to the confinement method for the calculation of conformational free energy differences are presented. By taking advantage of phase space overlap between simulations at different frequencies, significant gains in accuracy and speed are reached. The optimal frequency spacing for the simulations is obtained from extrapolations of the confinement energy, and relaxation time analysis is used to determine time steps, simulation lengths, and friction coefficients. At postprocessing, interpolation of confinement energies is used to significantly reduce discretization errors in the calculation of conformational free energies. The efficiency of this protocol is illustrated by applications to alanine n-peptides and lactoferricin. For the alanine-n-peptide, errors were reduced between 2- and 10-fold and sampling times between 8- and 67-fold, while for lactoferricin the long sampling times at low frequencies were reduced 10-100-fold. PMID:27120438

  11. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  12. Cross-validation and evaluation of the performance of methods for the elemental analysis of forensic glass by μ-XRF, ICP-MS, and LA-ICP-MS.

    PubMed

    Trejos, Tatiana; Koons, Robert; Becker, Stefan; Berman, Ted; Buscaglia, JoAnn; Duecking, Marc; Eckert-Lumsdon, Tiffany; Ernst, Troy; Hanlon, Christopher; Heydon, Alex; Mooney, Kim; Nelson, Randall; Olsson, Kristine; Palenik, Christopher; Pollock, Edward Chip; Rudell, David; Ryland, Scott; Tarifa, Anamary; Valadez, Melissa; Weis, Peter; Almirall, Jose

    2013-06-01

    Elemental analysis of glass was conducted by 16 forensic science laboratories, providing a direct comparison between three analytical methods [micro-X-ray fluorescence spectroscopy (μ-XRF), solution analysis using inductively coupled plasma mass spectrometry (ICP-MS), and laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS)]. Interlaboratory studies using glass standard reference materials and other glass samples were designed to (a) evaluate the analytical performance between different laboratories using the same method, (b) evaluate the analytical performance of the different methods, (c) evaluate the capabilities of the methods to correctly associate glass that originated from the same source and to correctly discriminate glass samples that do not share the same source, and (d) standardize the methods of analysis and interpretation of results. Reference materials NIST 612, NIST 1831, FGS 1, and FGS 2 were employed to cross-validate these sensitive techniques and to optimize and standardize the analytical protocols. The resulting figures of merit for the ICP-MS methods include repeatability better than 5% RSD, reproducibility between laboratories better than 10% RSD, bias better than 10%, and limits of detection between 0.03 and 9 μg g(-1) for the majority of the elements monitored. The figures of merit for the μ-XRF methods include repeatability better than 11% RSD, reproducibility between laboratories after normalization of the data better than 16% RSD, and limits of detection between 5.8 and 7,400 μg g(-1). The results from this study also compare the analytical performance of different forensic science laboratories conducting elemental analysis of glass evidence fragments using the three analytical methods. PMID:23673570
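
    The figures of merit quoted above have standard definitions, sketched below on hypothetical replicate measurements; the blank standard deviation and calibration slope are invented for illustration.

```python
# Repeatability (%RSD), bias against a certified value, and limit of
# detection (LOD = 3 * s_blank / slope) for one element (synthetic data).
import numpy as np

replicates = np.array([51.2, 49.8, 50.5, 50.9, 49.6])  # µg/g, one laboratory
certified = 50.0                                       # reference value

rsd = 100 * replicates.std(ddof=1) / replicates.mean()
bias = 100 * (replicates.mean() - certified) / certified

s_blank, slope = 0.8, 1.9  # hypothetical blank SD and calibration slope
lod = 3 * s_blank / slope

print(f"RSD={rsd:.1f}%  bias={bias:.1f}%  LOD={lod:.2f} µg/g")
```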

  13. Measurement of Phospholipids May Improve Diagnostic Accuracy in Ovarian Cancer

    PubMed Central

    Davis, Lorelei; Han, Gang; Zhu, Weiwei; Molina, Ashley D.; Arango, Hector; LaPolla, James P.; Hoffman, Mitchell S.; Sellers, Thomas; Kirby, Tyler; Nicosia, Santo V.; Sutphen, Rebecca

    2012-01-01

    Background More than two-thirds of women who undergo surgery for suspected ovarian neoplasm do not have cancer. Our previous results suggest phospholipids as potential biomarkers of ovarian cancer. In this study, we measured the serum levels of multiple phospholipids among women undergoing surgery for suspected ovarian cancer to identify biomarkers that better predict whether an ovarian mass is malignant. Methodology/Principal Findings We obtained serum samples preoperatively from women with suspected ovarian cancer enrolled through a prospective, population-based rapid ascertainment system. Samples were analyzed from all women in whom a diagnosis of epithelial ovarian cancer (EOC) was confirmed and from benign disease cases randomly selected from the remaining (non-EOC) samples. We measured biologically relevant phospholipids using liquid chromatography/electrospray ionization mass spectrometry. We applied a powerful statistical and machine learning approach, the hybrid huberized support vector machine (HH-SVM), to prioritize phospholipids to enter the biomarker models, and used cross-validation to obtain conservative estimates of classification error rates. Results The HH-SVM model using the measurements of specific combinations of phospholipids supplements clinical CA125 measurement and improves diagnostic accuracy. Specifically, the measurement of phospholipids improved sensitivity (identification of cases with preoperative CA125 levels below 35) among two types of cases in which CA125 performance is historically poor: early stage cases and those of mucinous histology. Measurement of phospholipids improved the identification of early stage cases from 65% (based on CA125) to 82%, and mucinous cases from 44% to 88%. Conclusions/Significance Levels of specific serum phospholipids differ between women with ovarian cancer and those with benign conditions. If validated by independent studies in the future, these biomarkers may serve as an adjunct at the time of clinical

  14. Correlates of Near-Infrared Spectroscopy Brain–Computer Interface Accuracy in a Multi-Class Personalization Framework

    PubMed Central

    Weyand, Sabine; Chau, Tom

    2015-01-01

    Brain–computer interfaces (BCIs) provide individuals with a means of interacting with a computer using only neural activity. To date, the majority of near-infrared spectroscopy (NIRS) BCIs have used prescribed tasks to achieve binary control. The goals of this study were to evaluate the possibility of using a personalized approach to establish control of a two-, three-, four-, and five-class NIRS–BCI, and to explore how various user characteristics correlate to accuracy. Ten able-bodied participants were recruited for five data collection sessions. Participants performed six mental tasks and a personalized approach was used to select each individual’s best discriminating subset of tasks. The average offline cross-validation accuracies achieved were 78, 61, 47, and 37% for the two-, three-, four-, and five-class problems, respectively. Most notably, all participants exceeded an accuracy of 70% for the two-class problem, and two participants exceeded an accuracy of 70% for the three-class problem. Additionally, accuracy was found to be strongly positively correlated (Pearson’s) with perceived ease of session (ρ = 0.653), ease of concentration (ρ = 0.634), and enjoyment (ρ = 0.550), but strongly negatively correlated with verbal IQ (ρ = −0.749). PMID:26483657

  15. Correlates of Near-Infrared Spectroscopy Brain-Computer Interface Accuracy in a Multi-Class Personalization Framework.

    PubMed

    Weyand, Sabine; Chau, Tom

    2015-01-01

    Brain-computer interfaces (BCIs) provide individuals with a means of interacting with a computer using only neural activity. To date, the majority of near-infrared spectroscopy (NIRS) BCIs have used prescribed tasks to achieve binary control. The goals of this study were to evaluate the possibility of using a personalized approach to establish control of a two-, three-, four-, and five-class NIRS-BCI, and to explore how various user characteristics correlate to accuracy. Ten able-bodied participants were recruited for five data collection sessions. Participants performed six mental tasks and a personalized approach was used to select each individual's best discriminating subset of tasks. The average offline cross-validation accuracies achieved were 78, 61, 47, and 37% for the two-, three-, four-, and five-class problems, respectively. Most notably, all participants exceeded an accuracy of 70% for the two-class problem, and two participants exceeded an accuracy of 70% for the three-class problem. Additionally, accuracy was found to be strongly positively correlated (Pearson's) with perceived ease of session (ρ = 0.653), ease of concentration (ρ = 0.634), and enjoyment (ρ = 0.550), but strongly negatively correlated with verbal IQ (ρ = -0.749). PMID:26483657
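
    The two analyses described in this record, offline k-fold cross-validated multi-class accuracy and Pearson correlations between per-participant accuracy and user characteristics, are sketched below on synthetic stand-in data; the classifier, features, and participant values are all hypothetical.

```python
# Multi-class cross-validated accuracy plus a Pearson correlate of accuracy.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 10))             # stand-in NIRS features
y = rng.integers(0, 3, 120)                # three mental-task classes
acc = cross_val_score(SVC(), X, y, cv=5).mean()
print(f"3-class CV accuracy: {acc:.2f}")   # ~0.33 expected for random data

accuracy_per_user = [0.81, 0.70, 0.64, 0.77, 0.59, 0.73, 0.68, 0.80, 0.62, 0.75]
ease_of_session = [4.5, 3.8, 3.1, 4.2, 2.9, 4.0, 3.5, 4.6, 3.0, 4.1]
rho, p = pearsonr(accuracy_per_user, ease_of_session)
print(f"Pearson rho = {rho:.3f} (p = {p:.3f})")
```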

  16. Meditation Experience Predicts Introspective Accuracy

    PubMed Central

    Fox, Kieran C. R.; Zakarauskas, Pierre; Dixon, Matt; Ellamil, Melissa; Thompson, Evan; Christoff, Kalina

    2012-01-01

    The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1–15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a ‘body-scanning’ meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices. PMID:23049790

  17. Revealing latent value of clinically acquired CTs of traumatic brain injury through multi-atlas segmentation in a retrospective study of 1,003 with external cross-validation

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Kelly, Patrick D.; Asman, Andrew J.; Kang, Hakmook; Patel, Mayur B.; Landman, Bennett A.

    2015-03-01

    Medical imaging plays a key role in guiding treatment of traumatic brain injury (TBI) and for diagnosing intracranial hemorrhage; most commonly, rapid computed tomography (CT) imaging is performed. Outcomes for patients with TBI are variable and difficult to predict upon hospital admission. Quantitative outcome scales (e.g., the Marshall classification) have been proposed to grade TBI severity on CT, but such measures have had relatively low value in staging patients by prognosis. Herein, we examine a cohort of 1,003 subjects admitted for TBI and imaged clinically to identify potential prognostic metrics using a "big data" paradigm. For all patients, a brain scan was segmented with multi-atlas labeling, and intensity/volume/texture features were computed in a localized manner. In a 10-fold cross-validation approach, the explanatory value of the image-derived features is assessed for length of hospital stay (days), discharge disposition (five-point scale from death to return home), and the Rancho Los Amigos functional outcome score (Rancho Score). Image-derived features increased the predictive R2 to 0.38 (from 0.18) for length of stay, to 0.51 (from 0.40) for discharge disposition, and to 0.31 (from 0.16) for Rancho Score (over models consisting only of non-imaging admission metrics, but including positive/negative radiological CT findings). This study demonstrates that high-volume retrospective analysis of clinical imaging data can reveal imaging signatures with prognostic value. These targets are suited for follow-up validation and represent targets for future feature selection efforts. Moreover, the increase in prognostic value would improve staging for intervention assessment and provide more reliable guidance for patients.
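
    The evaluation design, comparing 10-fold cross-validated R2 for models with and without image-derived features, can be sketched as below; the features, outcome, and linear model are synthetic placeholders rather than the study's pipeline.

```python
# 10-fold cross-validated R^2 with admission metrics only vs. admission
# plus image-derived features (synthetic data, hypothetical features).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(3)
n = 1003
admission = rng.normal(size=(n, 4))   # e.g., admission metrics, CT findings
imaging = rng.normal(size=(n, 6))     # e.g., intensity/volume/texture
outcome = (admission @ rng.normal(size=4) + imaging @ rng.normal(size=6)
           + rng.normal(0, 2.0, n))   # e.g., length of stay

cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, X in [("admission only", admission),
                ("admission + imaging", np.hstack([admission, imaging]))]:
    pred = cross_val_predict(LinearRegression(), X, outcome, cv=cv)
    print(f"{name}: cross-validated R^2 = {r2_score(outcome, pred):.2f}")
```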

  18. Accuracy assessment system and operation

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Houston, A. G.; Badhwar, G.; Bender, M. J.; Rader, M. L.; Eppler, W. G.; Ahlers, C. W.; White, W. P.; Vela, R. R.; Hsu, E. M. (Principal Investigator)

    1979-01-01

    The accuracy and reliability of LACIE estimates of wheat production, area, and yield is determined at regular intervals throughout the year by the accuracy assessment subsystem, which also investigates the various LACIE error sources, quantifies the errors, and relates them to their causes. Timely feedback of these error evaluations to the LACIE project was the only mechanism by which improvements in the crop estimation system could be made during the short 3-year experiment.

  19. Evaluating LANDSAT wildland classification accuracies

    NASA Technical Reports Server (NTRS)

    Toll, D. L.

    1980-01-01

    Procedures to evaluate the accuracy of LANDSAT derived wildland cover classifications are described. The evaluation procedures include: (1) implementing a stratified random sample for obtaining unbiased verification data; (2) performing area by area comparisons between verification and LANDSAT data for both heterogeneous and homogeneous fields; (3) providing overall and individual classification accuracies with confidence limits; (4) displaying results within contingency tables for analysis of confusion between classes; and (5) quantifying the amount of information (bits/square kilometer) conveyed in the LANDSAT classification.

  20. The accuracy of automatic tracking

    NASA Technical Reports Server (NTRS)

    Kastrov, V. V.

    1974-01-01

    It has been generally assumed that tracking accuracy changes similarly to the rate of change of the curve of the measurement conversion. The problem that internal noise increases along with the signals processed by the tracking device, so that tracking accuracy drops, was considered. The main prerequisite for a solution is consideration of the dependence of the output signal of the tracking device sensor not only on the measured parameter but on the signal itself.

  1. Improving the prediction accuracy of protein structural class: approached with alternating word frequency and normalized Lempel-Ziv complexity.

    PubMed

    Zhang, Shengli; Liang, Yunyun; Yuan, Xiguo

    2014-01-21

    Prediction of protein structural class for low-similarity sequences remains a challenging problem. In this study, a new computational method was developed to predict protein structural class by incorporating alternating word frequency and normalized Lempel-Ziv complexity. To evaluate the performance of the proposed method, jackknife cross-validation tests were performed on three widely used benchmark datasets: 25PDB, 1189 and 640. We report 83.6%, 81.8% and 83.6% prediction accuracies for the 25PDB, 1189 and 640 benchmarks, respectively. Comparison of our results with other methods shows that the proposed method is very promising; it may provide a cost-effective alternative for predicting protein structural class, in particular for low-similarity datasets, and may at least play an important complementary role to existing methods. PMID:24140787
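
    A jackknife (leave-one-out) cross-validation test of the kind reported here can be sketched as follows; the features, classifier, and class labels are random stand-ins, not the paper's word-frequency and Lempel-Ziv representation.

```python
# Jackknife test: classify each sequence with a model trained on the rest.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 20))  # stand-in sequence features
y = rng.integers(0, 4, 60)     # four structural classes

correct = 0
for train, test in LeaveOneOut().split(X):
    clf = KNeighborsClassifier(n_neighbors=3).fit(X[train], y[train])
    correct += int(clf.predict(X[test])[0] == y[test][0])
print(f"jackknife accuracy: {correct / len(y):.1%}")
```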

  2. Accuracy of prediction of genomic breeding values for residual feed intake and carcass and meat quality traits in Bos taurus, Bos indicus, and composite beef cattle.

    PubMed

    Bolormaa, S; Pryce, J E; Kemper, K; Savin, K; Hayes, B J; Barendse, W; Zhang, Y; Reich, C M; Mason, B A; Bunch, R J; Harrison, B E; Reverter, A; Herd, R M; Tier, B; Graser, H-U; Goddard, M E

    2013-07-01

    The aim of this study was to assess the accuracy of genomic predictions for 19 traits including feed efficiency, growth, and carcass and meat quality traits in beef cattle. The 10,181 cattle in our study had real or imputed genotypes for 729,068 SNP, although not all cattle were measured for all traits. Animals included Bos taurus, Brahman, composite, and crossbred animals. Genomic EBV (GEBV) were calculated using 2 methods of genomic prediction [BayesR and genomic BLUP (GBLUP)], either using a common training dataset for all breeds or using a training dataset comprising only animals of the same breed. Accuracies of GEBV were assessed using 5-fold cross-validation. The accuracy of genomic prediction varied by trait and by method. Traits with a large number of recorded and genotyped animals and with high heritability gave the greatest accuracy of GEBV. Using GBLUP, the average accuracy was 0.27 across traits and breeds, but the accuracies between breeds and between traits varied widely. When the training population was restricted to animals from the same breed as the validation population, GBLUP accuracies declined by an average of 0.04. The greatest decline in accuracy was found for the 4 composite breeds. The BayesR accuracies were greater by an average of 0.03 than GBLUP accuracies, particularly for traits in which mutations of moderate to large effect are known to be segregating. The accuracies of 0.43 to 0.48 for IGF-I traits were among the greatest in the study. Although accuracies are low compared with those observed in dairy cattle, genomic selection would still be beneficial for traits that are hard to improve by conventional selection, such as tenderness and residual feed intake. BayesR identified many of the same quantitative trait loci as a genomewide association study but appeared to map them more precisely. All traits appear to be highly polygenic, with thousands of SNP independently associated with each trait. PMID:23658330
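
    The accuracy measure used here, the correlation between GEBV and phenotype under 5-fold cross-validation, is sketched below. Ridge regression on SNP genotypes serves as a simple GBLUP-like stand-in, and the genotypes, effects, and penalty are all invented.

```python
# 5-fold cross-validated accuracy of genomic prediction (synthetic data).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(5)
n_animals, n_snps = 500, 2000
geno = rng.integers(0, 3, (n_animals, n_snps)).astype(float)  # 0/1/2 coding
pheno = geno @ rng.normal(0, 0.05, n_snps) + rng.normal(0, 1.0, n_animals)

accs = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(geno):
    gebv = Ridge(alpha=100.0).fit(geno[train], pheno[train]).predict(geno[test])
    accs.append(np.corrcoef(gebv, pheno[test])[0, 1])  # accuracy per fold
print(f"mean 5-fold accuracy: {np.mean(accs):.2f}")
```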

  3. Combined information from Raman spectroscopy and optical coherence tomography for enhanced diagnostic accuracy in tissue discrimination

    NASA Astrophysics Data System (ADS)

    Ashok, Praveen C.; Praveen, Bavishna B.; Bellini, Nicola; Riches, Andrew; Dholakia, Kishan; Herrington, C. Simon

    2014-03-01

    Optical spectroscopy and imaging methods have proven to have the potential to discriminate between normal and abnormal tissue types through minimally invasive procedures. Raman spectroscopy and Optical Coherence Tomography (OCT) provide chemical and morphological information about tissues, respectively, which are complementary to each other. Used individually, they may not achieve clinically relevant sensitivity and specificity. In this study we combined Raman spectroscopy information with information obtained from OCT to enhance the sensitivity and specificity in discriminating colonic adenocarcinoma from normal colon. Because OCT is an imaging technique, its output is conventionally analyzed qualitatively. To combine it with Raman spectroscopy information, it was essential to quantify the morphological information obtained from OCT. Texture analysis was used to extract information from OCT images, which in turn was combined with the information obtained from Raman spectroscopy. The sensitivity and specificity of the classifier were estimated using the leave-one-out cross-validation (LOOCV) method, where a support vector machine (SVM) was used for binary classification of the tissues. The sensitivity obtained using Raman spectroscopy and OCT individually was 89% and 78%, respectively, and the specificity was 77% and 74%, respectively. Combining the information derived using the two techniques increased both sensitivity and specificity to 94%, demonstrating that combining complementary optical information enhances diagnostic accuracy. These results demonstrate that a multimodal approach using Raman-OCT would be able to enhance the diagnostic accuracy for identifying normal and cancerous tissue types.
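
    The fusion scheme described above, concatenating Raman spectral features with OCT texture features and classifying with an SVM under leave-one-out cross-validation, is sketched below on synthetic placeholders; the feature dimensions and kernel are assumptions.

```python
# LOOCV accuracy for Raman-only, OCT-only, and combined feature sets.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n = 40
raman = rng.normal(size=(n, 30))       # stand-in spectral features
oct_texture = rng.normal(size=(n, 8))  # stand-in OCT texture features
y = rng.integers(0, 2, n)              # 0 = normal colon, 1 = adenocarcinoma

for name, X in [("Raman only", raman), ("OCT only", oct_texture),
                ("combined", np.hstack([raman, oct_texture]))]:
    acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut()).mean()
    print(f"{name}: LOOCV accuracy = {acc:.2f}")
```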

  4. Accuracy comparison of spatial interpolation methods for estimation of air temperatures in South Korea

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Shim, K.; Jung, M.; Kim, S.

    2013-12-01

    Because of complex terrain, micro- as well as meso-climate variability in Korea is extreme and strongly location-dependent. In particular, air temperatures in agricultural fields are influenced by the topographic features of the surroundings, making accurate interpolation of regional meteorological data from point measurements difficult. This study was conducted to compare the accuracy of spatial interpolation methods for estimating air temperature over the rugged terrain of South Korea. Four spatial interpolation methods, Inverse Distance Weighting (IDW), Spline, Kriging and Cokriging, were tested for estimating monthly air temperature at unobserved stations. Monthly measured data sets (minimum and maximum air temperature) from 456 automatic weather station (AWS) locations in South Korea were used to generate the gridded air temperature surface. Cross-validation results showed that the exponential theoretical model produced a lower root mean square error (RMSE) than the Gaussian theoretical model in the cases of Kriging and Cokriging, and that Spline produced the lowest RMSE among the spatial interpolation methods for both maximum and minimum air temperature estimation. In conclusion, Spline showed the best accuracy among the methods, but further experiments reflecting topographic effects, such as the temperature lapse rate, are necessary to improve the prediction.
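
    As an illustration of how such methods are scored, the sketch below implements the simplest of the four, inverse distance weighting, and evaluates it by leave-one-out cross-validated RMSE over stations; the coordinates, temperatures, and power parameter are invented.

```python
# IDW interpolation with leave-one-out cross-validated RMSE (synthetic data).
import numpy as np

rng = np.random.default_rng(7)
coords = rng.uniform(0, 100, (50, 2))  # station locations (km)
temps = 15 + 0.1 * coords[:, 0] + rng.normal(0, 0.5, 50)

def idw(target, known_xy, known_z, power=2.0):
    d = np.linalg.norm(known_xy - target, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * known_z).sum() / w.sum()

errors = []
for i in range(len(temps)):  # leave one station out, predict it from the rest
    mask = np.arange(len(temps)) != i
    errors.append(idw(coords[i], coords[mask], temps[mask]) - temps[i])
rmse = np.sqrt(np.mean(np.square(errors)))
print(f"IDW leave-one-out RMSE: {rmse:.2f} °C")
```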

  5. Relatedness severely impacts accuracy of marker-assisted selection for disease resistance in hybrid wheat

    PubMed Central

    Gowda, M; Zhao, Y; Würschum, T; Longin, C FH; Miedaner, T; Ebmeyer, E; Schachschneider, R; Kazman, E; Schacht, J; Martinant, J-P; Mette, M F; Reif, J C

    2014-01-01

    The accuracy of genomic selection depends on the relatedness between the members of the set in which marker effects are estimated based on evaluation data and the types for which performance is predicted. Here, we investigate the impact of relatedness on the performance of marker-assisted selection for fungal disease resistance in hybrid wheat. A large and diverse mapping population of 1739 elite European winter wheat inbred lines and hybrids was evaluated for powdery mildew, leaf rust and stripe rust resistance in multi-location field trials and fingerprinted with 9 k and 90 k SNP arrays. Comparison of the accuracies of prediction achieved with data sets from the two marker arrays revealed a crucial role for a sufficiently high marker density in genome-wide association mapping. Cross-validation studies using test sets with varying degrees of relationship to the corresponding estimation sets revealed that close relatedness leads to a substantial increase in the proportion of total genotypic variance explained by the identified QTL and consequently to an overoptimistic judgment of the precision of marker-assisted selection. PMID:24346498

  6. Accuracy in optical overlay metrology

    NASA Astrophysics Data System (ADS)

    Bringoltz, Barak; Marciano, Tal; Yaziv, Tal; DeLeeuw, Yaron; Klein, Dana; Feler, Yoel; Adam, Ido; Gurevich, Evgeni; Sella, Noga; Lindenfeld, Ze'ev; Leviant, Tom; Saltoun, Lilach; Ashwal, Eltsafon; Alumot, Dror; Lamhot, Yuval; Gao, Xindong; Manka, James; Chen, Bryan; Wagner, Mark

    2016-03-01

    In this paper we discuss the mechanism by which process variations determine the overlay accuracy of optical metrology. We start by focusing on scatterometry, and showing that the underlying physics of this mechanism involves interference effects between cavity modes that travel between the upper and lower gratings in the scatterometry target. A direct result is the behavior of accuracy as a function of wavelength, and the existence of relatively well defined spectral regimes in which the overlay accuracy and process robustness degrades ('resonant regimes'). These resonances are separated by wavelength regions in which the overlay accuracy is better and independent of wavelength (we term these 'flat regions'). The combination of flat and resonant regions forms a spectral signature which is unique to each overlay alignment and carries certain universal features with respect to different types of process variations. We term this signature the 'landscape', and discuss its universality. Next, we show how to characterize overlay performance with a finite set of metrics that are available on the fly, and that are derived from the angular behavior of the signal and the way it flags resonances. These metrics are used to guarantee the selection of accurate recipes and targets for the metrology tool, and for process control with the overlay tool. We end with comments on the similarity of imaging overlay to scatterometry overlay, and on the way that pupil overlay scatterometry and field overlay scatterometry differ from an accuracy perspective.

  7. Current Concept of Geometrical Accuracy

    NASA Astrophysics Data System (ADS)

    Görög, Augustín; Görögová, Ingrid

    2014-06-01

    Within the VEGA 1/0615/12 research project "Influence of 5-axis grinding parameters on the shank cutter's geometric accuracy", the research team will measure and evaluate the geometrical accuracy of the produced parts. They will use contemporary measurement technology (for example, optical 3D scanners). During the past few years, significant changes have occurred in the field of geometrical accuracy. The objective of this contribution is to analyse the current standards in the field of geometric tolerancing. It is necessary to provide an overview of the basic concepts and definitions in the field, which will prevent the use of outdated and invalidated terms and definitions. The knowledge presented in this contribution will provide a new perspective on measurements evaluated according to the current standards.

  8. ACCURACY AND TRACE ORGANIC ANALYSES

    EPA Science Inventory

    Accuracy in trace organic analysis presents a formidable problem to the residue chemist. He is confronted with the analysis of a large number and variety of compounds present in a multiplicity of substrates at levels as low as parts-per-trillion. At these levels, collection, isol...

  9. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  10. The hidden KPI registration accuracy.

    PubMed

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually. PMID:21923052

  11. Psychology Textbooks: Examining Their Accuracy

    ERIC Educational Resources Information Center

    Steuer, Faye B.; Ham, K. Whitfield, II

    2008-01-01

    Sales figures and recollections of psychologists indicate textbooks play a central role in psychology students' education, yet instructors typically must select texts under time pressure and with incomplete information. Although selection aids are available, none adequately address the accuracy of texts. We describe a technique for sampling…

  12. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.

  13. MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS

    EPA Science Inventory

    Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...

  14. A high accuracy sun sensor

    NASA Astrophysics Data System (ADS)

    Bokhove, H.

    The High Accuracy Sun Sensor (HASS) is described, concentrating on the measurement principle, the CCD detector used, the construction of the sensor head and the operation of the sensor electronics. Tests on a development model show that the main design aim, 0.01-arcsec rms stability over a 10-minute period, is closely approached. Remaining problem areas are associated with the sensor's sensitivity to illumination level variations, the shielding of the detector, and the test and calibration equipment.

  15. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include the root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies of other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
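
    As a rough illustration of the descriptive metrics named above, the sketch below computes the root-mean-square error and a Theil inequality coefficient (the U1 variant) for a forecast series against a naive random-walk benchmark. The data and variable names are invented for illustration, not taken from the study.

    ```python
    import numpy as np

    def rmse(actual, forecast):
        """Root-mean-square error of a forecast series."""
        actual, forecast = np.asarray(actual), np.asarray(forecast)
        return np.sqrt(np.mean((actual - forecast) ** 2))

    def theil_u1(actual, forecast):
        """Theil's U1 inequality coefficient (0 = perfect fit, 1 = worst)."""
        actual, forecast = np.asarray(actual), np.asarray(forecast)
        num = np.sqrt(np.mean((actual - forecast) ** 2))
        den = np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(forecast ** 2))
        return num / den

    # Hypothetical monthly consumption data (thousand cubic meters)
    actual = np.array([102.0, 98.5, 110.2, 120.7, 131.4, 128.9])
    model_fc = np.array([100.3, 101.0, 108.8, 118.2, 133.0, 127.5])
    rw_fc = np.concatenate(([actual[0]], actual[:-1]))  # random-walk benchmark

    print(f"model:       RMSE={rmse(actual, model_fc):.2f}  U1={theil_u1(actual, model_fc):.3f}")
    print(f"random walk: RMSE={rmse(actual, rw_fc):.2f}  U1={theil_u1(actual, rw_fc):.3f}")
    ```

    A model forecast is only worth using when its RMSE and U1 beat the random-walk line, which is exactly the comparison the study formalizes with error differential regression tests.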

  16. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  17. Spatial distribution of soil heavy metal pollution estimated by different interpolation methods: accuracy and uncertainty analysis.

    PubMed

    Xie, Yunfeng; Chen, Tong-bin; Lei, Mei; Yang, Jun; Guo, Qing-jun; Song, Bo; Zhou, Xiao-yong

    2011-01-01

    Mapping the spatial distribution of contaminants in soils is the basis of pollution evaluation and risk control. Interpolation methods are extensively applied in the mapping processes to estimate the heavy metal concentrations at unsampled sites. The performances of interpolation methods (inverse distance weighting, local polynomial, ordinary kriging and radial basis functions) were assessed and compared using the root mean square error for cross validation. The results indicated that all interpolation methods provided a high prediction accuracy of the mean concentration of soil heavy metals. However, the classic method based on percentages of polluted samples, gave a pollution area 23.54-41.92% larger than that estimated by interpolation methods. The difference in contaminated area estimation among the four methods reached 6.14%. According to the interpolation results, the spatial uncertainty of polluted areas was mainly located in three types of region: (a) the local maxima concentration region surrounded by low concentration (clean) sites, (b) the local minima concentration region surrounded with highly polluted samples; and (c) the boundaries of the contaminated areas. PMID:20970158
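
    The comparison described above can be reproduced in miniature: the sketch below implements inverse distance weighting, one of the four interpolators evaluated, and scores it with leave-one-out cross-validation RMSE on synthetic soil data. The coordinates, concentrations, and power values are all hypothetical.

    ```python
    import numpy as np

    def idw_predict(xy_known, z_known, xy_query, power=2.0):
        """Inverse-distance-weighted estimate at query locations."""
        d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)              # avoid division by zero at sample points
        w = 1.0 / d ** power
        return (w @ z_known) / w.sum(axis=1)

    def loo_rmse(xy, z, power=2.0):
        """Leave-one-out cross-validation RMSE for IDW."""
        errs = []
        for i in range(len(z)):
            mask = np.arange(len(z)) != i
            pred = idw_predict(xy[mask], z[mask], xy[i:i + 1], power)[0]
            errs.append(pred - z[i])
        return np.sqrt(np.mean(np.square(errs)))

    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 100, size=(60, 2))            # hypothetical sample locations (m)
    z = 30 + 0.2 * xy[:, 0] + rng.normal(0, 2, 60)    # synthetic metal concentration (mg/kg)
    for p in (1.0, 2.0, 3.0):
        print(f"IDW power {p}: LOO RMSE = {loo_rmse(xy, z, p):.2f}")
    ```

    The same leave-one-out loop works for any of the other interpolators (kriging, local polynomial, radial basis functions), which is how their cross-validation RMSEs can be compared on equal footing.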

  18. High accuracy time transfer synchronization

    NASA Technical Reports Server (NTRS)

    Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.

    1995-01-01

    In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high-performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-way satellite time transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL)-designed spread-spectrum modem built by Allen Osborne Associates (AOA). A Motorola six-channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.

  19. Genomic Prediction in Pea: Effect of Marker Density and Training Population Size and Composition on Prediction Accuracy

    PubMed Central

    Tayeh, Nadim; Klein, Anthony; Le Paslier, Marie-Christine; Jacquin, Françoise; Houtin, Hervé; Rond, Céline; Chabert-Martinello, Marianne; Magnin-Robert, Jean-Bernard; Marget, Pascal; Aubert, Grégoire; Burstin, Judith

    2015-01-01

    Pea is an important food and feed crop and a valuable component of low-input farming systems. Improving resistance to biotic and abiotic stresses is a major breeding target to enhance yield potential and regularity. Genomic selection (GS) has lately emerged as a promising technique to increase the accuracy and gain of marker-based selection. It uses genome-wide molecular marker data to predict the breeding values of candidate lines for selection. A collection of 339 genetic resource accessions (CRB339) was subjected to high-density genotyping using the GenoPea 13.2K SNP Array. Genomic prediction accuracy was evaluated for thousand seed weight (TSW), the number of seeds per plant (NSeed), and the date of flowering (BegFlo). Mean cross-environment prediction accuracies reached 0.83 for TSW, 0.68 for NSeed, and 0.65 for BegFlo. For each trait, the statistical method, the marker density, and/or the training population size and composition used for prediction were varied to investigate their effects on prediction accuracy: the effect was large for the size and composition of the training population but limited for the statistical method and marker density. Maximizing the relatedness between individuals in the training and test sets, through the CDmean-based method, significantly improved prediction accuracies. A cross-population cross-validation experiment was further conducted using the CRB339 collection as a training population set and nine recombinant inbred line populations as test sets. Prediction quality was high, with mean Q2 of 0.44 for TSW and 0.59 for BegFlo. Results are discussed in the light of current efforts to develop GS strategies in pea. PMID:26635819

  20. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders containing identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without reliance on combusting naturally occurring materials, thereby improving analytical accuracy.

  1. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and luminescent emission from PSP is obtained based on studies of radiative energy transport in PSP and photodetector response to luminescence. This relation provides insights into the physical origins of various elemental error sources and allows estimation of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and for the upper bounds the elemental errors must satisfy to meet a required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flows is given to illustrate uncertainty estimates in PSP measurements.

  2. Accuracy of numerically produced compensators.

    PubMed

    Thompson, H; Evans, M D; Fallone, B G

    1999-01-01

    A feasibility study is performed to assess the utility of a computer numerically controlled (CNC) mill to produce compensating filters for conventional clinical use and for the delivery of intensity-modulated beams. Computer-aided machining (CAM) software is used to assist in the design and construction of such filters. Geometric measurements of stepped and wedged surfaces are made to examine the accuracy of surface milling. Molds are milled and filled with molten alloy to produce filters, and both the molds and filters are examined for consistency and accuracy. Results show that the deviation of the filter surfaces from design does not exceed 1.5%. The effective attenuation coefficient is measured for CadFree, a cadmium-free alloy, in a 6 MV photon beam. The effective attenuation coefficients at the depth of maximum dose (1.5 cm) and at 10 cm in a solid water phantom are found to be 0.546 cm^-1 and 0.522 cm^-1, respectively. Further attenuation measurements are made with Cerrobend to assess the variation of the effective attenuation coefficient with field size and source-surface distance. The ability of the CNC mill to accurately produce surfaces is verified with dose profile measurements in a 6 MV photon beam. The test phantom is composed of a 10 degree polystyrene wedge and a 30 degree polystyrene wedge, presenting both a sharp discontinuity and sloped surfaces. Dose profiles, measured at the depth of compensation (10 cm) beneath the test phantom and beneath a flat phantom, are compared to those produced by a commercial treatment planning system. Agreement between measured and predicted profiles is within 2%, indicating the viability of the system for filter production. PMID:10100166
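
    The effective attenuation coefficient quoted above follows from the usual exponential transmission relation, mu = ln(I0/I)/t. A minimal sketch, with invented detector readings chosen to land near the reported 6 MV value:

    ```python
    import math

    def effective_attenuation(i0, i, thickness_cm):
        """Effective linear attenuation coefficient from a transmission
        measurement: mu = ln(I0 / I) / t."""
        return math.log(i0 / i) / thickness_cm

    # Hypothetical ionization-chamber readings through 2 cm of alloy
    mu = effective_attenuation(i0=100.0, i=33.6, thickness_cm=2.0)
    print(f"mu_eff = {mu:.3f} cm^-1")   # ~0.545, close to the 0.546 cm^-1 reported above
    ```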

  3. Comparison of Genomic Selection Models to Predict Flowering Time and Spike Grain Number in Two Hexaploid Wheat Doubled Haploid Populations.

    PubMed

    Thavamanikumar, Saravanan; Dolferus, Rudy; Thumma, Bala R

    2015-10-01

    Genomic selection (GS) is becoming an important selection tool in crop breeding. In this study, we compared the ability of different GS models to predict time to young microspore (TYM), a flowering time-related trait, spike grain number under control conditions (SGNC) and spike grain number under osmotic stress conditions (SGNO) in two wheat biparental doubled haploid populations with unrelated parents. Prediction accuracies were compared using BayesB, Bayesian least absolute shrinkage and selection operator (Bayesian LASSO / BL), ridge regression best linear unbiased prediction (RR-BLUP), partial least square regression (PLS), and sparse partial least square regression (SPLS) models. Prediction accuracy was tested with 10-fold cross-validation within a population and with independent validation in which marker effects from one population were used to predict traits in the other population. High prediction accuracies were obtained for TYM (0.51-0.84), whereas moderate to low accuracies were observed for SGNC (0.10-0.42) and SGNO (0.27-0.46) using cross-validation. Prediction accuracies based on independent validation are generally lower than those based on cross-validation. BayesB and SPLS outperformed all other models in predicting TYM with both cross-validation and independent validation. Although the accuracies of all models are similar in predicting SGNC and SGNO with cross-validation, BayesB and SPLS had the highest accuracy in predicting SGNC with independent validation. In independent validation, accuracies of all the models increased by using only the QTL-linked markers. Results from this study indicate that BayesB and SPLS capture the linkage disequilibrium between markers and traits effectively leading to higher accuracies. Excluding markers from QTL studies reduces prediction accuracies. PMID:26206349
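
    A compact sketch of the 10-fold cross-validation protocol used here, with scikit-learn's Ridge (an RR-BLUP-like shrinkage model) and Lasso standing in for the paper's five models, and accuracy taken as the Pearson correlation between predicted and observed values. The simulated marker matrix, QTL effects, and hyperparameters are purely illustrative, not the study's data or settings.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso, Ridge
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(42)
    n_lines, n_markers = 180, 1000                  # hypothetical DH population
    X = rng.choice([0.0, 1.0], size=(n_lines, n_markers))   # homozygous marker matrix
    beta = np.zeros(n_markers)
    beta[rng.choice(n_markers, 20, replace=False)] = rng.normal(0, 1, 20)
    y = X @ beta + rng.normal(0, 1.0, n_lines)      # trait with 20 simulated QTL

    def cv_accuracy(model, X, y, k=10, seed=1):
        """Mean Pearson correlation between predicted and observed values
        over k cross-validation folds."""
        accs = []
        for tr, te in KFold(n_splits=k, shuffle=True, random_state=seed).split(X):
            model.fit(X[tr], y[tr])
            accs.append(np.corrcoef(model.predict(X[te]), y[te])[0, 1])
        return np.mean(accs)

    for name, model in [("RR-BLUP-like ridge", Ridge(alpha=100.0)),
                        ("LASSO", Lasso(alpha=0.05, max_iter=10000))]:
        print(f"{name}: 10-fold CV accuracy = {cv_accuracy(model, X, y):.2f}")
    ```

    Independent validation, by contrast, would fit the model once on one population and correlate its predictions with phenotypes from the other population, which is why its accuracies tend to be lower than the within-population cross-validation figures.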

  4. Accuracy of genomic breeding values for meat tenderness in Polled Nellore cattle.

    PubMed

    Magnabosco, C U; Lopes, F B; Fragoso, R C; Eifert, E C; Valente, B D; Rosa, G J M; Sainz, R D

    2016-07-01

    Zebu (Bos indicus) cattle, mostly of the Nellore breed, comprise more than 80% of the beef cattle in Brazil, given their tolerance of the tropical climate and high resistance to ectoparasites. Despite their advantages for production in tropical environments, zebu cattle tend to produce tougher meat than Bos taurus breeds. Traditional genetic selection to improve meat tenderness is constrained by the difficulty and cost of phenotypic evaluation for meat quality. Therefore, genomic selection may be the best strategy to improve meat quality traits. This study was performed to compare the accuracies of different Bayesian regression models in predicting molecular breeding values for meat tenderness in Polled Nellore cattle. The data set was composed of Warner-Bratzler shear force (WBSF) of longissimus muscle from 205, 141, and 81 animals slaughtered in 2005, 2010, and 2012, respectively, which were selected and mated so as to create extreme segregation for WBSF. The animals were genotyped with either the Illumina BovineHD (HD; 777,000 SNP from 90 samples) chip or the GeneSeek Genomic Profiler (GGP Indicus HD; 77,000 SNP from 337 samples). The SNP quality-control criteria were a Hardy-Weinberg proportion P-value ≥ 0.1%, minor allele frequency > 1%, and call rate > 90%. The FImpute program was used for imputation from the GGP Indicus HD chip to the HD chip. The effect of each SNP was estimated using ridge regression, least absolute shrinkage and selection operator (LASSO), Bayes A, Bayes B, and Bayes Cπ methods. Different numbers of SNP were used, with 1, 2, 3, 4, 5, 7, 10, 20, 40, 60, 80, or 100% of the markers preselected based on their significance test (P-value from genome-wide association studies [GWAS]) or randomly sampled. The prediction accuracy was assessed by the correlation between genomic breeding value and the observed WBSF phenotype, using a leave-one-out cross-validation methodology. The prediction accuracies using all markers were all very similar for all models, ranging from 0
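
    The leave-one-out accuracy computation described above can be sketched as follows; ridge regression stands in for the Bayesian SNP-effect models, and the genotypes and phenotypes are simulated rather than the actual WBSF data.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(7)
    n, p = 120, 500                      # hypothetical animals and SNPs after QC
    X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # SNP genotypes coded 0/1/2
    y = X[:, :15] @ rng.normal(0, 0.4, 15) + rng.normal(0, 1, n)  # simulated phenotype

    preds = np.empty(n)
    model = Ridge(alpha=50.0)            # stand-in for the Bayesian regression methods
    for tr, te in LeaveOneOut().split(X):
        model.fit(X[tr], y[tr])          # refit with one animal held out
        preds[te] = model.predict(X[te]) # predict that animal's breeding value

    acc = np.corrcoef(preds, y)[0, 1]
    print(f"leave-one-out prediction accuracy r = {acc:.2f}")
    ```

    Preselecting markers by GWAS P-value before this loop simply means slicing X down to the top-ranked columns; with few animals, leave-one-out makes the most of the data but requires refitting the model n times.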

  5. Issues of model accuracy and uncertainty evaluation in the context of multi-model analysis

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Foglia, L.; Mehl, S.; Burlando, P.

    2009-12-01

    Thorough consideration of alternative conceptual models is an important and often neglected step in the study of many natural systems, including groundwater systems. This means that many modelling efforts are less useful for system management than they could be because they exclude alternatives considered important by some stakeholders, which makes them more vulnerable to criticism. Important steps include identifying reasonable alternative models and possibly using model discrimination criteria and associated model averaging to improve predictions and measures of prediction uncertainty. Here we use the computer code MMA (Multi-Model Analysis) to: (1) manage the model discrimination statistics produced by many alternative models, (2) manage predictions, and (3) calculate measures of prediction uncertainty. Steps (1) to (3) also assist in understanding the physical processes most important to model fit and to the predictions of interest. We focus on the ability of a groundwater model constructed using MODFLOW to predict heads and flows in the Maggia Valley, Southern Switzerland, where connections between groundwater, surface water and ecology are of interest. Sixty-four alternative models were designed deterministically and differ in how the river, recharge, bedrock topography, and hydraulic conductivity are characterized. None of the models correctly represent heads and flows in the Northern and Southern parts of the valley simultaneously. A cross-validation experiment was conducted to compare model discrimination results with the ability of the models to predict eight heads and three flows to the stream along three reaches midway along the valley, where ecological consequences and, therefore, model accuracy are of great concern. Results suggest: (1) Model averaging appears to have improved prediction accuracy in the problem considered. (2) The most significant model improvements occurred with introduction of spatially distributed recharge and improved bedrock topography. (3) The

  6. Geostatistical radar-raingauge merging: A novel method for the quantification of rain estimation accuracy

    NASA Astrophysics Data System (ADS)

    Delrieu, Guy; Wijbrans, Annette; Boudevillain, Brice; Faure, Dominique; Bonnifait, Laurent; Kirstetter, Pierre-Emmanuel

    2014-09-01

    Compared to other estimation techniques, one advantage of geostatistical techniques is that they provide an index of the estimation accuracy of the variable of interest with the kriging estimation standard deviation (ESD). In the context of radar-raingauge quantitative precipitation estimation (QPE), we address in this article the question of how the kriging ESD can be transformed into a local spread of error by using the dependency of radar errors on the rain amount analyzed in previous work. The proposed approach is implemented for the most significant rain events observed in 2008 in the Cévennes-Vivarais region, France, by considering both the kriging with external drift (KED) and the ordinary kriging (OK) methods. A two-step procedure is implemented for estimating the rain estimation accuracy: (i) first, normalized kriging ESDs are computed by using normalized variograms (sill equal to 1) to account for the observation system configuration and the spatial structure of the variable of interest (rainfall amount, residuals to the drift); (ii) based on the assumption of a linear relationship between the standard deviation and the mean of the variable of interest, a denormalization of the kriging ESDs is performed globally for a given rain event by using a cross-validation procedure. Despite the fact that the KED normalized ESDs are usually greater than the OK ones (due to an additional constraint in the kriging system and a weaker spatial structure of the residuals to the drift), the KED denormalized ESDs are generally smaller than the OK ones, a result consistent with the better performance observed for the KED technique. The evolution of the mean and the standard deviation of the rainfall-scaled ESDs over a range of spatial (5-300 km2) and temporal (1-6 h) scales demonstrates that there is clear added value of the radar with respect to the raingauge network for the shortest scales, which are those of interest for flash-flood prediction in the considered region.
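
    A minimal sketch of the two-step denormalization, assuming the normalized ESDs are already available from a kriging run with a unit-sill variogram and that the error standard deviation scales linearly with the rain amount; all arrays below are synthetic placeholders, not the Cévennes-Vivarais data.

    ```python
    import numpy as np

    # Hypothetical outputs of a kriging run with a normalized (sill = 1) variogram:
    #   cv_pred, cv_obs : cross-validation predictions and observations (mm)
    #   esd_norm        : normalized kriging estimation standard deviations
    rng = np.random.default_rng(3)
    cv_obs = rng.gamma(2.0, 10.0, 80)                 # synthetic rain amounts
    esd_norm = rng.uniform(0.4, 0.9, 80)              # from the kriging system
    cv_pred = cv_obs + rng.normal(0, 0.25, 80) * cv_obs * esd_norm

    # Step (ii): assume the error sd is linear in the rain amount, sd = s * mean,
    # and estimate the global factor s for the event from the CV residuals.
    resid_norm = (cv_pred - cv_obs) / (esd_norm * np.maximum(cv_obs, 1e-6))
    s = np.std(resid_norm)

    esd_mm = s * cv_obs * esd_norm                    # denormalized local error spread (mm)
    print(f"global scale factor s = {s:.3f}")
    print(f"example local error spreads (mm): {np.round(esd_mm[:5], 2)}")
    ```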

  7. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  8. Preschoolers Monitor the Relative Accuracy of Informants

    ERIC Educational Resources Information Center

    Pasquini, Elisabeth S.; Corriveau, Kathleen H.; Koenig, Melissa; Harris, Paul L.

    2007-01-01

    In 2 studies, the sensitivity of 3- and 4-year-olds to the previous accuracy of informants was assessed. Children viewed films in which 2 informants labeled familiar objects with differential accuracy (across the 2 experiments, children were exposed to the following rates of accuracy by the more and less accurate informants, respectively: 100% vs.…

  9. ACCURACY OF CO2 SENSORS

    SciTech Connect

    Fisk, William J.; Faulkner, David; Sullivan, Douglas P.

    2008-10-01

    Are the carbon dioxide (CO2) sensors in your demand controlled ventilation systems sufficiently accurate? The data from these sensors are used to automatically modulate minimum rates of outdoor air ventilation. The goal is to keep ventilation rates at or above design requirements while adjusting the ventilation rate with changes in occupancy in order to save energy. Studies of energy savings from demand controlled ventilation and of the relationship of indoor CO2 concentrations with health and work performance provide a strong rationale for use of indoor CO2 data to control minimum ventilation rates [1-7]. However, this strategy will only be effective if, in practice, the CO2 sensors have a reasonable accuracy. The objective of this study was, therefore, to determine whether CO2 sensor performance, in practice, is generally acceptable or problematic. This article provides a summary of study methods and findings; additional details are available in a paper in the proceedings of the ASHRAE IAQ 2007 Conference [8].

  10. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  11. High accuracy broadband infrared spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Venkataramanan

    Mueller matrix spectroscopy or spectropolarimetry combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies on infrared polarization properties of materials covering a broad spectral range have been scarce due to the lack of available instrumentation. This dissertation aims to fill the gap by the design, development, calibration and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3-12 μm waveband and offers better overall accuracy compared to the previous generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter and numerical simulations are conducted to study the effectiveness of the proposed technique. Insights into the geometrical structure of the polarimetric measurement matrix are provided to aid further research towards global optimization of Mueller matrix polarimeters. A high performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on Penicillin and pine pollen are also presented.

  12. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that procedure and the others used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  13. Phase segmentation of X-ray computer tomography rock images using machine learning techniques: an accuracy and performance study

    NASA Astrophysics Data System (ADS)

    Chauhan, Swarup; Rühaak, Wolfram; Anbergen, Hauke; Kabdenov, Alen; Freise, Marcus; Wille, Thorsten; Sass, Ingo

    2016-07-01

    Performance and accuracy of machine learning techniques to segment rock grains, matrix and pore voxels from a 3-D volume of X-ray tomographic (XCT) grayscale rock images were evaluated. The segmentation and classification capability of unsupervised (k-means, fuzzy c-means, self-organized maps), supervised (artificial neural networks, least-squares support vector machines) and ensemble classifiers (bagging and boosting) were tested using XCT images of andesite volcanic rock, Berea sandstone, Rotliegend sandstone and a synthetic sample. The averaged porosity obtained for andesite (15.8 ± 2.5 %), Berea sandstone (16.3 ± 2.6 %), Rotliegend sandstone (13.4 ± 7.4 %) and the synthetic sample (48.3 ± 13.3 %) is in very good agreement with the respective laboratory measurement data and varies by a factor of 0.2. The k-means algorithm is the fastest of all machine learning algorithms, whereas a least-squares support vector machine is the most computationally expensive. The metrics entropy, purity, root-mean-square error, the receiver operating characteristic curve and 10-fold cross-validation were used to determine the accuracy of the unsupervised, supervised and ensemble classifier techniques. In general, the accuracy was found to be largely affected by the feature vector selection scheme. As there is always a trade-off between performance and accuracy, it is difficult to isolate one particular machine learning algorithm which is best suited for the complex phase segmentation problem. Therefore, our investigation provides parameters that can help in selecting the appropriate machine learning techniques for phase segmentation.
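
    A toy version of the unsupervised branch of this comparison: k-means clustering of voxel gray values into pore, matrix and grain phases, with porosity recovered from the darkest cluster. The synthetic volume, intensity levels, and noise are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    # Synthetic 64^3 grayscale XCT volume: pores dark, matrix mid, grains bright
    true_phase = rng.choice(3, size=64**3, p=[0.16, 0.34, 0.50])
    gray = np.choose(true_phase, [60, 130, 200]) + rng.normal(0, 15, 64**3)

    # k-means on the voxel gray values (feature vector = intensity only)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(gray.reshape(-1, 1))

    # Map clusters to phases by mean intensity: the darkest cluster is the pore space
    order = np.argsort([gray[labels == c].mean() for c in range(3)])
    porosity = np.mean(labels == order[0])
    print(f"estimated porosity = {porosity:.1%} (true: {np.mean(true_phase == 0):.1%})")
    ```

    Richer feature vectors (neighborhood statistics, gradients) are what separate the competitive methods in the paper; with intensity alone, even k-means recovers porosity well on clean data.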

  14. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit, using the LP150Q lunar gravity model.
    - Accuracy of the LP150Q model is poorer in the extended mission elliptical orbit.
    - Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission.
    - Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy.
    - Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling:
      - Implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations.
      - The FDF is evaluating the use of analytic and predicted attitude modeling to improve full-Sun prediction accuracy.
    - Comparison of the FDF ephemeris file to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  15. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-01-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  16. Tracking accuracy assessment for concentrator photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Norton, Matthew S. H.; Anstey, Ben; Bentley, Roger W.; Georghiou, George E.

    2010-10-01

    The accuracy to which a concentrator photovoltaic (CPV) system can track the sun is an important parameter that influences a number of measurements that indicate the performance efficiency of the system. This paper presents work carried out into determining the tracking accuracy of a CPV system, and illustrates the steps involved in gaining an understanding of the tracking accuracy. A Trac-Stat SL1 accuracy monitor has been used in the determination of pointing accuracy and has been integrated into the outdoor CPV module test facility at the Photovoltaic Technology Laboratories in Nicosia, Cyprus. Results from this work are provided to demonstrate how important performance indicators may be presented, and how the reliability of results is improved through the deployment of such accuracy monitors. Finally, recommendations on the use of such sensors are provided as a means to improve the interpretation of real outdoor performance.

  17. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  18. Canopy Temperature and Vegetation Indices from High-Throughput Phenotyping Improve Accuracy of Pedigree and Genomic Selection for Grain Yield in Wheat

    PubMed Central

    Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi

    2016-01-01

    Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested whether using aerial measurements of canopy temperature, and green and red normalized difference vegetation index, as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set, were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross-validation accuracies were estimated within and across environments, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362
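
    The paper fits multivariate pedigree/genomic BLUP models; as a simplified stand-in, the sketch below adds a genetically correlated secondary trait (available on both training and test lines, as in the study design) as an extra predictor alongside the markers, and compares cross-validated accuracies. All sizes, effects and correlations are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(11)
    n, p = 557, 800                                  # lines and markers (sizes illustrative)
    X = rng.binomial(2, 0.4, (n, p)).astype(float)   # marker matrix
    g = X @ rng.normal(0, 0.1, p)                    # true genetic signal
    grain_yield = g + rng.normal(0, 1.0, n)          # primary trait
    ctemp = -0.8 * g + rng.normal(0, 0.6, n)         # canopy temp., genetically correlated

    acc_uni, acc_multi = [], []
    for tr, te in KFold(5, shuffle=True, random_state=0).split(X):
        # Univariate analogue: markers only
        m1 = Ridge(alpha=200.0).fit(X[tr], grain_yield[tr])
        acc_uni.append(np.corrcoef(m1.predict(X[te]), grain_yield[te])[0, 1])
        # Secondary trait is measured on test lines too, so it can enter prediction
        m2 = Ridge(alpha=200.0).fit(np.column_stack([X[tr], ctemp[tr]]), grain_yield[tr])
        pred = m2.predict(np.column_stack([X[te], ctemp[te]]))
        acc_multi.append(np.corrcoef(pred, grain_yield[te])[0, 1])

    print(f"markers only:           r = {np.mean(acc_uni):.2f}")
    print(f"markers + canopy temp.: r = {np.mean(acc_multi):.2f}")
    ```

    The key design point carries over: the secondary trait is cheap to measure on unphenotyped selection candidates, so it can legitimately appear on the test side of the split.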

  19. A New Regional 3-D Velocity Model of the India-Pakistan Region for Improved Event Location Accuracy

    NASA Astrophysics Data System (ADS)

    Reiter, D.; Vincent, C.; Johnson, M.

    2001-05-01

    A 3-D velocity model for the crust and upper mantle (WINPAK3D) has been developed to improve regional event location in the India-Pakistan region. Results of extensive testing demonstrate that the model improves location accuracy for this region, specifically for the case of small regionally recorded events, for which teleseismic data may not be available. The model was developed by integrating the results of more than sixty previous studies related to crustal velocity structure in the region. We evaluated the validity of the 3-D model using the following methods: (1) cross validation analysis for a variety of events, (2) comparison of model determined hypocenters with known event location, and (3) comparison of model-derived and empirically-derived source-specific station corrections (SSSC) generated for the International Monitoring System (IMS) auxiliary seismic station located at Nilore. The 3-D model provides significant improvement in regional location compared to both global and regional 1-D models in this area of complex structural variability. For example, the epicenter mislocation for an event with a well known location was only 6.4 km using the 3-D model, compared with a mislocation of 13.0 km using an average regional 1-D model and 15.1 km for the IASPEI91 model. We will present these and other results to demonstrate that 3-D velocity models are essential to improving event location accuracy in regions with complicated crustal geology and structures. Such 3-D models will be a prerequisite for achieving improved location accuracies for regions of high monitoring interest.

  20. Accuracy analysis of distributed simulation systems

    NASA Astrophysics Data System (ADS)

    Lin, Qi; Guo, Jing

    2010-08-01

    Existing simulation work emphasizes procedural verification, which puts the focus on the simulation models rather than on the simulation itself. As a result, research on improving simulation accuracy has been limited to individual aspects. Since accuracy is the key issue in simulation credibility assessment and fidelity studies, it is important to give an all-round discussion of the accuracy of distributed simulation systems themselves. First, the major elements of distributed simulation systems are summarized, forming the basis for the definition, classification and description of the accuracy of such systems. In Part 2, the framework of accuracy of distributed simulation systems is presented in a comprehensive way, which makes it easier to analyze and assess their uncertainty. The concept of accuracy of distributed simulation systems is further divided into four factors, each analyzed in Part 3. In Part 4, based on the formalized description of the accuracy-analysis framework, a practical approach is put forward that can be applied to study unexpected or inaccurate simulation results. Following this, a real distributed simulation system based on HLA is taken as an example to verify the usefulness of the proposed approach. The results show that the method works well and is applicable to accuracy analysis of distributed simulation systems.

  1. Accuracy of Parent Identification of Stuttering Occurrence

    ERIC Educational Resources Information Center

    Einarsdottir, Johanna; Ingham, Roger

    2009-01-01

    Background: Clinicians rely on parents to provide information regarding the onset and development of stuttering in their own children. The accuracy and reliability of their judgments of stuttering is therefore important and is not well researched. Aim: To investigate the accuracy of parent judgements of stuttering in their own children's speech…

  2. Stereotype Accuracy: Toward Appreciating Group Differences.

    ERIC Educational Resources Information Center

    Lee, Yueh-Ting, Ed.; And Others

    The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…

  3. Accuracy assessment of GPS satellite orbits

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    GPS orbit accuracy is examined using several evaluation procedures. The existence of unmodeled effects that correlate with the eclipsing of the sun is shown. The ability to obtain geodetic results with an accuracy of 1-2 parts in 10^8 or better has not diminished.

  4. The Accuracy of Gender Stereotypes Regarding Occupations.

    ERIC Educational Resources Information Center

    Beyer, Sylvia; Finnegan, Andrea

    Given the salience of biological sex, it is not surprising that gender stereotypes are pervasive. To explore the prevalence of such stereotypes, the accuracy of gender stereotyping regarding occupations is presented in this paper. The paper opens with an overview of gender stereotype measures that use self-perceptions as benchmarks of accuracy,…

  5. Individual Differences in Eyewitness Recall Accuracy.

    ERIC Educational Resources Information Center

    Berger, James D.; Herringer, Lawrence G.

    1991-01-01

    Presents study results comparing college students' self-evaluation of recall accuracy to actual recall of detail after viewing a crime scenario. Reports that self-reported ability to remember detail correlates with accuracy in memory of specifics. Concludes that people may have a good indication early in the eyewitness situation of whether they…

  6. Scientific Sources' Perception of Network News Accuracy.

    ERIC Educational Resources Information Center

    Moore, Barbara; Singletary, Michael

    Recent polls seem to indicate that many Americans rely on television as a credible and primary source of news. To test the accuracy of this news, a study examined three networks' newscasts of science news, the attitudes of the science sources toward reporting in their field, and the factors related to accuracy. The Vanderbilt News Archives Index…

  7. Accuracy of Carbohydrate Counting in Adults.

    PubMed

    Meade, Lisa T; Rushton, Wanda E

    2016-07-01

    In Brief: This study investigates carbohydrate counting accuracy in patients using insulin through a multiple daily injection regimen or continuous subcutaneous insulin infusion. The average accuracy test score for all patients was 59%. The carbohydrate test in this study can be used to emphasize the importance of carbohydrate counting to patients and to provide ongoing education. PMID:27621531

  8. Accuracy assessment of high resolution satellite imagery orientation by leave-one-out method

    NASA Astrophysics Data System (ADS)

    Brovelli, Maria Antonia; Crespi, Mattia; Fratarcangeli, Francesca; Giannone, Francesca; Realini, Eugenio

    Interest in high-resolution satellite imagery (HRSI) is spreading in several application fields, at both scientific and commercial levels. Fundamental and critical goals for the geometric use of this kind of imagery are its orientation and orthorectification, processes able to georeference the imagery and correct the geometric deformations it undergoes during acquisition. In order to exploit the actual potential of orthorectified imagery in Geomatics applications, the definition of a methodology to assess the spatial accuracy achievable from oriented imagery is a crucial topic. In this paper we propose a new method for accuracy assessment based on Leave-One-Out Cross-Validation (LOOCV), a model validation method already applied in different fields such as machine learning, bioinformatics and, generally, any other field requiring an evaluation of the performance of a learning algorithm (e.g. in geostatistics), but never applied to HRSI orientation accuracy assessment. The proposed method exhibits interesting features which are able to overcome the most remarkable drawbacks of the commonly used method (Hold-Out Validation, HOV), based on partitioning the known ground points into two sets: the first is used in the orientation-orthorectification model (GCPs, Ground Control Points) and the second is used to validate the model itself (CPs, Check Points). In fact, the HOV is generally not reliable and is not applicable when a low number of ground points is available. To test the proposed method we implemented a new routine that performs the LOOCV in the software SISAR, developed by the Geodesy and Geomatics Team at the Sapienza University of Rome to perform the rigorous orientation of HRSI; this routine was tested on some EROS-A and QuickBird images. Moreover, these images were also oriented using the widely recognized commercial software OrthoEngine v. 10 (included in the Geomatica suite by PCI), manually performing the LOOCV
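
    The contrast between HOV and LOOCV can be illustrated with a toy orientation model: a least-squares affine transform (a crude stand-in for a rigorous sensor model) fitted to a handful of ground points. Every number below is hypothetical.

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine transform mapping image to ground coordinates."""
        A = np.column_stack([src, np.ones(len(src))])
        coef, *_ = np.linalg.lstsq(A, dst, rcond=None)
        return coef                                   # shape (3, 2)

    def apply_affine(coef, src):
        return np.column_stack([src, np.ones(len(src))]) @ coef

    rng = np.random.default_rng(2)
    img = rng.uniform(0, 1000, (15, 2))               # only 15 ground points available
    true = np.array([[0.5, 0.1], [-0.1, 0.5], [100.0, 200.0]])
    gnd = apply_affine(true, img) + rng.normal(0, 0.3, (15, 2))

    # Hold-Out Validation: 10 GCPs orient, 5 CPs check -- wasteful with few points
    coef = fit_affine(img[:10], gnd[:10])
    hov = np.sqrt(np.mean((apply_affine(coef, img[10:]) - gnd[10:]) ** 2))

    # LOOCV: every point serves as a check point once; all others orient the model
    errs = []
    for i in range(len(img)):
        m = np.arange(len(img)) != i
        c = fit_affine(img[m], gnd[m])
        errs.append(apply_affine(c, img[i:i + 1])[0] - gnd[i])
    loo = np.sqrt(np.mean(np.square(errs)))
    print(f"HOV RMSE = {hov:.3f} m   LOOCV RMSE = {loo:.3f} m")
    ```

    With only a handful of ground points, the LOOCV figure rests on 15 independent checks instead of 5, which is exactly the robustness argument made above.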

  9. Optimizing the geometrical accuracy of curvilinear meshes

    NASA Astrophysics Data System (ADS)

    Toulorge, Thomas; Lambrechts, Jonathan; Remacle, Jean-François

    2016-04-01

    This paper presents a method to generate valid high order meshes with optimized geometrical accuracy. The high order meshing procedure starts with a linear mesh, which is subsequently curved without regard to the validity of the high order elements. An optimization procedure is then used both to untangle invalid elements and to optimize the geometrical accuracy of the mesh. Standard measures of the distance between curves are considered to evaluate the geometrical accuracy in planar two-dimensional meshes, but they prove computationally too costly for optimization purposes. A fast estimate of the geometrical accuracy, based on Taylor expansions of the curves, is introduced. An unconstrained optimization procedure based on this estimate is shown to yield significant improvements in the geometrical accuracy of high order meshes, as measured by the standard Hausdorff distance between the geometrical model and the mesh. Several examples illustrate the beneficial impact of this method on CFD solutions, with a particular role played by the enhanced mesh boundary smoothness.
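
    A discrete version of the Hausdorff distance used as the accuracy measure above, comparing a point-sampled geometry (a quarter circle) to a quadratic mesh edge. The node positions and sampling densities are arbitrary choices for illustration.

    ```python
    import numpy as np

    def hausdorff(P, Q):
        """Symmetric discrete Hausdorff distance between two point-sampled curves."""
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        return max(d.min(axis=1).max(), d.min(axis=0).max())

    # Sampled geometry: a quarter circle
    t = np.linspace(0.0, np.pi / 2, 200)
    curve = np.column_stack([np.cos(t), np.sin(t)])

    # Quadratic (3-node) mesh edge via Lagrange shape functions on s in [0, 1]
    s = np.linspace(0.0, 1.0, 200)[:, None]
    n0, n1, n2 = np.array([1.0, 0.0]), np.array([0.75, 0.75]), np.array([0.0, 1.0])
    N0 = 2 * (s - 0.5) * (s - 1)      # node at s = 0
    N1 = -4 * s * (s - 1)             # mid-edge node at s = 0.5
    N2 = 2 * s * (s - 0.5)            # node at s = 1
    edge = N0 * n0 + N1 * n1 + N2 * n2

    print(f"Hausdorff distance = {hausdorff(curve, edge):.4f}")
    ```

    In an optimizer this exact distance is too expensive to evaluate at every iteration, which motivates the cheaper Taylor-expansion estimate the paper introduces.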

  10. Towards Arbitrary Accuracy Inviscid Surface Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Hixon, Ray

    2002-01-01

    Inviscid nonlinear surface boundary conditions are currently limited to third order accuracy in time for non-moving surfaces and actually reduce to first order in time when the surfaces move. For steady-state calculations it may be possible to achieve higher accuracy in space, but high accuracy in time is required for efficient simulation of multiscale unsteady phenomena. A surprisingly simple technique is shown here that can be used to correct the normal pressure derivatives of the flow at a surface on a Cartesian grid so that arbitrarily high order time accuracy is achieved in idealized cases. This work demonstrates that nonlinear high order time accuracy at a solid surface is possible and desirable, but it also shows that the current practice of only correcting the pressure is inadequate.

  11. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones if existent or relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both Dice coefficient and Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only the measurement of individual users can change but also the ranking of users' segmentation skills may require reordering.
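
    A sketch of the proposed idea in one dimension: standard Dice and Jaccard scores next to a zone-weighted Dice variant in which voxels of a critical anatomical zone carry extra weight. The weighting scheme shown is an illustrative simplification, not the paper's exact formulation.

    ```python
    import numpy as np

    def dice(seg, gold):
        inter = np.logical_and(seg, gold).sum()
        return 2.0 * inter / (seg.sum() + gold.sum())

    def jaccard(seg, gold):
        inter = np.logical_and(seg, gold).sum()
        return inter / np.logical_or(seg, gold).sum()

    def weighted_dice(seg, gold, zone_weights):
        """Anatomy-aware variant: each voxel carries the weight of the
        anatomical zone it belongs to (zone_weights same shape as the masks)."""
        inter = (zone_weights * np.logical_and(seg, gold)).sum()
        return 2.0 * inter / ((zone_weights * seg).sum() + (zone_weights * gold).sum())

    # Toy 1-D "image": gold segment covers 20..79, with a critical zone at 70..79
    gold = np.zeros(100, dtype=bool); gold[20:80] = True
    seg = np.zeros(100, dtype=bool); seg[25:70] = True    # misses the critical zone
    w = np.ones(100); w[70:80] = 5.0                       # critical zone weighted 5x

    print(f"Dice = {dice(seg, gold):.3f}  Jaccard = {jaccard(seg, gold):.3f}")
    print(f"anatomy-weighted Dice = {weighted_dice(seg, gold, w):.3f}")
    ```

    The weighted score penalizes the miss of the critical zone far more than the plain Dice coefficient does, which is the behavior the anatomy-aware extensions are designed to capture.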

  12. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  13. Systematic review of discharge coding accuracy

    PubMed Central

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects, for example primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  14. Geometric accuracy in airborne SAR images

    NASA Technical Reports Server (NTRS)

    Blacknell, D.; Quegan, S.; Ward, I. A.; Freeman, A.; Finley, I. P.

    1989-01-01

    Uncorrected across-track motions of a synthetic aperture radar (SAR) platform can cause both a severe loss of azimuthal positioning accuracy in, and defocusing of, the resultant SAR image. It is shown how the results of an autofocus procedure can be incorporated in the azimuth processing to produce a fully focused image that is geometrically accurate in azimuth. Range positioning accuracy is also discussed, leading to a comprehensive treatment of all aspects of geometric accuracy. The system considered is an X-band SAR.

  15. High accuracy calibration of the fiber spectroradiometer

    NASA Astrophysics Data System (ADS)

    Wu, Zhifeng; Dai, Caihong; Wang, Yanfei; Chen, Binhua

    2014-11-01

    Compared with large scanning spectroradiometers, the compact and convenient fiber spectroradiometer is widely used in many fields, such as remote sensing, aerospace monitoring, and solar irradiance measurement. High-accuracy calibration should be performed before use, covering wavelength accuracy, background environment noise, the nonlinear effect, the bandwidth, stray light, and other factors. Wavelength lamps and tungsten lamps are frequently used to calibrate the fiber spectroradiometer. Wavelength differences can easily be reduced through software or calculation; however, the nonlinear effect and the bandwidth can still affect the measurement accuracy significantly.

  16. Accuracy and consistency of modern elastomeric pumps.

    PubMed

    Weisman, Robyn S; Missair, Andres; Pham, Phung; Gutierrez, Juan F; Gebhard, Ralf E

    2014-01-01

    Continuous peripheral nerve blockade has become a popular method of achieving postoperative analgesia for many surgical procedures. The safety and reliability of infusion pumps are dependent on their flow rate accuracy and consistency. Knowledge of pump rate profiles can help physicians determine which infusion pump is best suited for their clinical applications and specific patient population. Several studies have investigated the accuracy of portable infusion pumps. Using methodology similar to that used by Ilfeld et al, we investigated the accuracy and consistency of several current elastomeric pumps. PMID:25140510

  17. Discrimination in measures of knowledge monitoring accuracy

    PubMed Central

    Was, Christopher A.

    2014-01-01

    Knowledge monitoring predicts academic outcomes in many contexts. However, measures of knowledge monitoring accuracy are often incomplete. In the current study, a measure of students’ ability to discriminate known from unknown information as a component of knowledge monitoring was considered. Undergraduate students’ knowledge monitoring accuracy was assessed and used to predict final exam scores in a specific course. It was found that gamma, a measure commonly used as the measure of knowledge monitoring accuracy, accounted for a small, but significant amount of variance in academic performance whereas the discrimination and bias indexes combined to account for a greater amount of variance in academic performance. PMID:25339979
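
    As background for the indexes compared above, both gamma and a simple discrimination index can be computed from a 2 × 2 table crossing metacognitive judgments (known/unknown) with test performance (correct/incorrect). The sketch below is minimal, and the discrimination formula shown is one common choice rather than necessarily the exact index used in the study.

      def gamma_2x2(a, b, c, d):
          """Goodman-Kruskal gamma from a 2x2 judgment-by-performance table:
          a = judged known & correct,   b = judged known & incorrect,
          c = judged unknown & correct, d = judged unknown & incorrect."""
          concordant, discordant = a * d, b * c
          return (concordant - discordant) / (concordant + discordant)

      def discrimination(a, b, c, d):
          """One simple discrimination index (an assumption, not the paper's
          exact formula): accuracy on items judged known minus accuracy on
          items judged unknown."""
          return a / (a + b) - c / (c + d)

      # Example: 40 items judged known (30 correct), 20 judged unknown (8 correct).
      print(gamma_2x2(30, 10, 8, 12), discrimination(30, 10, 8, 12))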

  18. Optimal design of robot accuracy compensators

    SciTech Connect

    Zhuang, H.; Roth, Z.S. . Robotics Center and Electrical Engineering Dept.); Hamano, Fumio . Dept. of Electrical Engineering)

    1993-12-01

    The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that actual kinematic parameters of a robot be previously identified. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
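
    The damped least-squares correction described above can be sketched in a few lines: the joint correction is dq = J^T (J J^T + λ² I)^{-1} dx, which stays bounded at singular configurations where a plain inverse blows up. The sketch below uses an identity weight matrix and an illustrative Jacobian and damping value; the paper's weight-matrix selection is not reproduced.

      import numpy as np

      def dls_joint_correction(J, dx, lam=0.05):
          """Damped least-squares joint correction dq = J^T (J J^T + lam^2 I)^{-1} dx."""
          JJt = J @ J.T
          damped = JJt + (lam ** 2) * np.eye(JJt.shape[0])
          return J.T @ np.linalg.solve(damped, dx)

      # A nearly singular 2-joint Jacobian (e.g., an almost outstretched arm).
      J = np.array([[1.0, 1.0],
                    [1.0, 1.001]])
      dx = np.array([0.01, 0.0])          # desired end-effector correction
      print(dls_joint_correction(J, dx))  # small, bounded joint correction
      print(np.linalg.solve(J, dx))       # exact inverse: huge, unusable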

  19. Accuracy analysis of automatic distortion correction

    NASA Astrophysics Data System (ADS)

    Kolecki, Jakub; Rzonca, Antoni

    2015-06-01

    The paper addresses the problem of automatic distortion removal from images acquired with a non-metric SLR camera equipped with prime lenses. From the photogrammetric point of view the following question arises: is the accuracy of the distortion control data provided by the manufacturer for a certain lens model (not an individual item) sufficient to achieve the demanded accuracy? In order to obtain a reliable answer to this problem, two kinds of tests were carried out for three lens models. First, a multi-variant camera calibration was conducted using software providing a full accuracy analysis. Second, an accuracy analysis using check points took place. The check points were measured in images resampled based on the estimated distortion model, or in distortion-free images simply acquired in the automatic distortion removal mode. Extensive conclusions regarding the practical application of each calibration approach are given. Finally, rules for applying automatic distortion removal in photogrammetric measurements are suggested.
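
    For context on what the manufacturer's distortion control data parameterize, the sketch below removes two-term symmetric radial distortion (a Brown-model simplification) from normalized image points by fixed-point iteration. The coefficients are hypothetical; real lens models typically add higher-order radial and tangential terms.

      import numpy as np

      def undistort_points(xy_distorted, k1, k2, iters=10):
          """Invert x_d = x_u * (1 + k1*r^2 + k2*r^4) by fixed-point iteration,
          for points in normalized (principal-point centered) coordinates."""
          xy_d = np.asarray(xy_distorted, float)
          xy_u = xy_d.copy()
          for _ in range(iters):
              r2 = (xy_u ** 2).sum(axis=1, keepdims=True)
              xy_u = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
          return xy_u

      pts = np.array([[0.8, 0.6], [0.1, -0.2]])
      print(undistort_points(pts, k1=-0.12, k2=0.02))  # assumed coefficients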

  20. Empathic Embarrassment Accuracy in Autism Spectrum Disorder.

    PubMed

    Adler, Noga; Dvash, Jonathan; Shamay-Tsoory, Simone G

    2015-06-01

    Empathic accuracy refers to the ability of perceivers to accurately share the emotions of protagonists. Using a novel task assessing embarrassment, the current study sought to compare levels of empathic embarrassment accuracy among individuals with autism spectrum disorders (ASD) with those of matched controls. To assess empathic embarrassment accuracy, we compared the level of embarrassment experienced by protagonists to the embarrassment felt by participants while watching the protagonists. The results show that while the embarrassment ratings of participants and protagonists were highly matched among controls, individuals with ASD failed to exhibit this matching effect. Furthermore, individuals with ASD rated their embarrassment higher than controls when viewing themselves and protagonists on film, but not while performing the task itself. These findings suggest that individuals with ASD tend to have higher ratings of empathic embarrassment, perhaps due to difficulties in emotion regulation that may account for their impaired empathic accuracy and aberrant social behavior. PMID:25732043

  1. Coding accuracy on the psychophysical scale

    PubMed Central

    Kostal, Lubomir; Lansky, Petr

    2016-01-01

    Sensory neurons are often reported to adjust their coding accuracy to the stimulus statistics. The observed match is not always perfect and the maximal accuracy does not align with the most frequent stimuli. As an alternative to a physiological explanation we show that the match critically depends on the chosen stimulus measurement scale. More generally, we argue that if we measure the stimulus intensity on a scale which is proportional to the perception intensity, an improved adjustment in the coding accuracy is revealed. The unique feature of stimulus units based on the psychophysical scale is that the coding accuracy can be meaningfully compared for different stimulus intensities, unlike in the standard case of a metric scale. PMID:27021783

  2. Measuring the Accuracy of Diagnostic Systems.

    ERIC Educational Resources Information Center

    Swets, John A.

    1988-01-01

    Discusses the relative operating characteristic analysis of signal detection theory as a measure of diagnostic accuracy. Reports representative values of this measure in several fields. Compares how problems in these fields are handled. (CW)
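
    A minimal illustration of the measure discussed here: the area under the ROC curve equals the probability that a randomly chosen positive case receives a higher diagnostic score than a randomly chosen negative case (the Mann-Whitney statistic), which the sketch below computes directly from made-up scores.

      import numpy as np

      def roc_auc(scores_pos, scores_neg):
          """AUC as P(random positive outscores random negative); ties count half."""
          pos = np.asarray(scores_pos, float)[:, None]
          neg = np.asarray(scores_neg, float)[None, :]
          wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
          return wins / (pos.size * neg.size)

      # Diagnostic scores for diseased vs. healthy cases (invented numbers).
      print(roc_auc([0.9, 0.8, 0.7, 0.55], [0.6, 0.4, 0.3, 0.2]))  # -> 0.9375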

  3. Sun-pointing programs and their accuracy

    SciTech Connect

    Zimmerman, J.C.

    1981-05-01

    Several sun-pointing programs and their accuracy are described. FORTRAN program listings are given. Program descriptions are given for both Hewlett-Packard (HP-67) and Texas Instruments (TI-59) hand-held calculators.
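
    Although the report's listings target FORTRAN and programmable calculators, the kind of computation involved can be illustrated with a simple approximate sun-position routine. The declination formula below is a textbook approximation good to roughly a degree; it is not a transcription of the report's programs.

      import math

      def solar_elevation(lat_deg, day_of_year, solar_hour):
          """Approximate solar elevation angle (degrees) from latitude,
          day of year and local solar time (hours)."""
          decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
          hour_angle = 15.0 * (solar_hour - 12.0)
          lat, dec, ha = map(math.radians, (lat_deg, decl, hour_angle))
          sin_el = (math.sin(lat) * math.sin(dec)
                    + math.cos(lat) * math.cos(dec) * math.cos(ha))
          return math.degrees(math.asin(sin_el))

      print(solar_elevation(35.0, 172, 12.0))  # ~78.4 deg near the June solstice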

  4. Nonverbal self-accuracy in interpersonal interaction.

    PubMed

    Hall, Judith A; Murphy, Nora A; Mast, Marianne Schmid

    2007-12-01

    Four studies measure participants' accuracy in remembering, without forewarning, their own nonverbal behavior after an interpersonal interaction. Self-accuracy for smiling, nodding, gazing, hand gesturing, and self-touching is scored by comparing the participants' recollections with coding based on videotape. Self-accuracy is above chance and of modest magnitude on average. Self-accuracy is greatest for smiling; intermediate for nodding, gazing, and gesturing; and lowest for self-touching. It is higher when participants focus attention away from the self (learning as much as possible about the partner, rearranging the furniture in the room, evaluating the partner, smiling and gazing at the partner) than when participants are more self-focused (getting acquainted, trying to make a good impression on the partner, being evaluated by the partner, engaging in more self-touching). The contributions of cognitive demand and affective state are discussed. PMID:18000102

  5. Accuracy potentials for large space antenna structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1980-01-01

    The relationships among materials selection, truss design, and manufacturing techniques in the interest of surface accuracies for large space antennas are discussed. Among the antenna configurations considered are: tetrahedral truss, pretensioned truss, and geodesic dome and radial rib structures. Comparisons are made of the accuracy achievable by truss and dome structure types for a wide variety of diameters, focal lengths, and wavelength of radiated signal, taking into account such deforming influences as solar heating-caused thermal transients and thermal gradients.

  6. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number

  7. Effect of species rarity on the accuracy of species distribution models for reptiles and amphibians in southern California

    USGS Publications Warehouse

    Franklin, J.; Wejnert, K.E.; Hathaway, S.A.; Rochester, C.J.; Fisher, R.N.

    2009-01-01

    Aim: Several studies have found that more accurate predictive models of species' occurrences can be developed for rarer species; however, one recent study found the relationship between range size and model performance to be an artefact of sample prevalence, that is, the proportion of presence versus absence observations in the data used to train the model. We examined the effect of model type, species rarity class, species' survey frequency, detectability and manipulated sample prevalence on the accuracy of distribution models developed for 30 reptile and amphibian species. Location: Coastal southern California, USA. Methods: Classification trees, generalized additive models and generalized linear models were developed using species presence and absence data from 420 locations. Model performance was measured using sensitivity, specificity and the area under the curve (AUC) of the receiver-operating characteristic (ROC) plot based on twofold cross-validation, or on bootstrapping. Predictors included climate, terrain, soil and vegetation variables. Species were assigned to rarity classes by experts. The data were sampled to generate subsets with varying ratios of presences and absences to test for the effect of sample prevalence. Join count statistics were used to characterize spatial dependence in the prediction errors. Results: Species in classes with higher rarity were more accurately predicted than common species, and this effect was independent of sample prevalence. Although positive spatial autocorrelation remained in the prediction errors, it was weaker than was observed in the species occurrence data. The differences in accuracy among model types were slight. Main conclusions: Using a variety of modelling methods, more accurate species distribution models were developed for rarer than for more common species. This was presumably because it is difficult to discriminate suitable from unsuitable habitat for habitat generalists, and not as an artefact of the
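
    The prevalence manipulation described above can be mimicked on synthetic data: draw training subsets with fixed presence:absence ratios and compare twofold cross-validated AUC. The sketch below uses logistic regression as a stand-in for the paper's GLMs; the predictors, labels and ratios are invented for illustration.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import StratifiedKFold, cross_val_score

      rng = np.random.default_rng(0)

      # Synthetic 'environmental predictors' and species presence/absence.
      X = rng.normal(size=(2000, 4))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1.2).astype(int)

      def subsample_to_prevalence(prevalence, n=200):
          """Draw a training subset with a chosen presence:absence ratio."""
          n_pos = int(n * prevalence)
          pos = rng.choice(np.where(y == 1)[0], n_pos, replace=False)
          neg = rng.choice(np.where(y == 0)[0], n - n_pos, replace=False)
          idx = np.concatenate([pos, neg])
          return X[idx], y[idx]

      for prev in (0.1, 0.3, 0.5):
          Xs, ys = subsample_to_prevalence(prev)
          auc = cross_val_score(LogisticRegression(max_iter=1000), Xs, ys,
                                cv=StratifiedKFold(2), scoring="roc_auc").mean()
          print(f"prevalence {prev:.1f}: twofold CV AUC = {auc:.3f}")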

  8. A Multiscale Decomposition Approach to Detect Abnormal Vasculature in the Optic Disc

    PubMed Central

    Agurto, Carla; Yu, Honggang; Murray, Victor; Pattichis, Marios S.; Nemeth, Sheila; Barriga, Simon; Soliz, Peter

    2015-01-01

    This paper presents a multiscale method to detect neovascularization in the optic disc (NVD) using fundus images. Our method is applied to a manually selected region of interest (ROI) containing the optic disc. All the vessels in the ROI are segmented by adaptively combining contrast enhancement methods with a vessel segmentation technique. Textural features extracted using multiscale amplitude-modulation frequency-modulation, morphological granulometry, and fractal dimension are used. A linear SVM is used to perform the classification, which is tested by means of 10-fold cross-validation. The performance is evaluated using 300 images achieving an AUC of 0.93 with maximum accuracy of 88%. PMID:25698545
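
    A minimal sketch of the evaluation pipeline described above, assuming a feature matrix is already in hand: a linear SVM scored by 10-fold cross-validation. The synthetic features below merely stand in for the paper's AM-FM, granulometry and fractal-dimension descriptors.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(1)

      # Stand-in features: rows = ROIs, columns = texture descriptors;
      # labels = neovascularization present (1) or absent (0).
      X = rng.normal(size=(300, 20))
      y = (X[:, :3].sum(axis=1) + rng.normal(scale=1.5, size=300) > 0).astype(int)

      clf = make_pipeline(StandardScaler(), LinearSVC())
      auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
      acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
      print(f"10-fold CV AUC = {auc:.2f}, accuracy = {acc:.2f}")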

  9. Evaluation and cross-validation of Environmental Models

    NASA Astrophysics Data System (ADS)

    Lemaire, Joseph

    Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a Commission of professional experts appointed by an established International Union or Association (e.g. IAGA for Geomagnetism and Aeronomy, . . . ) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given, indicating that different values for the Earth radius have been employed in different data processing laboratories, institutes or agencies, to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to the other, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model which had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more of them to be uncovered by careful, independent examination and benchmarking. The metre prototype, the standard unit of length, was adopted on 20 May 1875 during the Diplomatic Conference of the Metre and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent wild, uncontrolled dissemination of pseudo environmental models and standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the marketplace have been built consistently with the same system of units, and that they are based on identical definitions of the coordinate systems, etc. Therefore, preliminary analyses should be carried out under the control and authority of an established international professional Organization or Association, before any final political decision is made by ISO to select a specific Environmental Model, like for example IGRF and DGRF. Of course, Commissions responsible for checking the consistency of definitions, methods and algorithms for data processing might consider delegating specific tasks (e.g. bench-marking the technical tools, the calibration procedures, the methods of data analysis, and the software algorithms employed in building the different types of models, as well as their usage) to private, intergovernmental or international organizations/agencies (e.g. NASA, ESA, AGU, EGU, COSPAR, . . . ); eventually, the latter should report their conclusions to the Commission members appointed by IAGA or any established authority like IUGG.

  10. Backward Variable Elimination Canonical Correlation and Canonical Cross-Validation.

    ERIC Educational Resources Information Center

    Eason, Sandra

    This paper suggests that multivariate analysis techniques are very important in educational research, and that one multivariate technique--canonical correlation analysis--may be particularly useful. The logic of canonical analysis is explained. It is suggested that a backward variable elimination strategy can make the method even more powerful, by…

  11. An L1 smoothing spline algorithm with cross validation

    NASA Astrophysics Data System (ADS)

    Bosworth, Ken W.; Lall, Upmanu

    1993-08-01

    We propose an algorithm for the computation of L1 (LAD) smoothing splines in the spaces W_M(D). We assume one is given data of the form y_i = f(t_i) + ε_i, i = 1, ..., N, with {t_i}_{i=1}^N ⊂ D, where the ε_i are errors with E(ε_i) = 0 and f is assumed to be in W_M. The LAD smoothing spline, for fixed smoothing parameter λ ≥ 0, is defined as the solution, s_λ, of the optimization problem min_g (1/N) Σ_{i=1}^N |y_i − g(t_i)| + λ J_M(g), where J_M(g) is the seminorm consisting of the sum of the squared L2 norms of the Mth partial derivatives of g. Such an LAD smoothing spline, s_λ, would be expected to give robust smoothed estimates of f in situations where the ε_i are from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing s_λ is given which is based on considering a sequence of quadratic programming problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily, if a good initial point is available. The "data driven" selection of the smoothing parameter is achieved by minimizing a CV(λ) score. The combined LAD-CV smoothing spline algorithm is a continuation scheme in λ ↘ 0 taken on the above SQPs parametrized in λ, with the optimal smoothing parameter taken to be that value of λ at which the CV(λ) score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.
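
    The data-driven choice of λ can be illustrated with a drastically simplified stand-in: an ordinary least-squares univariate smoothing spline (not the paper's thin-plate LAD spline), scored by a leave-one-out absolute-error CV criterion over a decreasing sequence of smoothing factors, in the spirit of the continuation scheme.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(2)
      t = np.linspace(0.0, 1.0, 80)
      y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_t(df=2, size=80)  # heavy tails

      def cv_score(s):
          """Leave-one-out CV using absolute error, echoing the LAD spirit
          (the spline fit itself is least-squares here)."""
          errs = []
          for i in range(t.size):
              mask = np.arange(t.size) != i
              spline = UnivariateSpline(t[mask], y[mask], s=s)
              errs.append(abs(y[i] - float(spline(t[i]))))
          return float(np.mean(errs))

      # Continuation from heavy smoothing toward s -> 0; stop when CV rises.
      for s in (5.0, 2.0, 1.0, 0.5, 0.2, 0.1):
          print(f"s = {s:4.1f}   CV = {cv_score(s):.4f}")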

  12. Cross-Validation of the Computerized Adaptive Screening Test (CAST).

    ERIC Educational Resources Information Center

    Pliske, Rebecca M.; And Others

    The Computerized Adaptive Screening Test (CAST) was developed to provide an estimate at recruiting stations of prospects' Armed Forces Qualification Test (AFQT) scores. The CAST was designed to replace the paper-and-pencil Enlistment Screening Test (EST). The initial validation study of CAST indicated that CAST predicts AFQT at least as accurately…

  13. Cross-Validation of the JSORRAT-II in Iowa.

    PubMed

    Ralston, Christopher A; Epperson, Douglas L; Edwards, Sarah R

    2016-09-01

    The predictive validity of the Juvenile Sexual Offense Recidivism Risk Assessment Tool-II (JSORRAT-II) was evaluated using an exhaustive sample of 11- to 17-year-old male juveniles who offended sexually (JSOs) between 2000 and 2006 in Iowa (n = 529). The validity of the tool in predicting juvenile sexual recidivism was significant (area under the receiver operating characteristic curve [AUC] = .70, 99% confidence interval [CI] = [.60, .81], d = 0.70). Non-significant predictive validity coefficients were observed for the prediction of non-sexual forms of recidivism. Additional analyses were undertaken to test hypotheses about the tool's performance with various subsamples. The age of the JSO at the time of the index sexual offense and time at risk outside secure facility placements interacted significantly with JSORRAT-II scores to predict juvenile sexual recidivism. The implications of these findings for practice and research on the validation of risk assessment tools are discussed. PMID:25179400

  14. [Cross validity of the UCLA Loneliness Scale factorization].

    PubMed

    Borges, Africa; Prieto, Pedro; Ricchetti, Giacinto; Hernández-Jorge, Carmen; Rodríguez-Naveiras, Elena

    2008-11-01

    Loneliness is an unpleasant experience that takes place when a person's network of social relationships is significantly deficient in quality and quantity, and it is associated with negative feelings. Loneliness is a fundamental construct that provides information about several psychological processes, especially in the clinical setting. It is well known that this construct is related to isolation and emotional loneliness. One of the most well-known psychometric instruments to measure loneliness is the revised UCLA Loneliness Scale, which has been factorized in several populations. A controversial issue related to the UCLA Loneliness Scale is its factor structure, because the test was first created based on a unidimensional structure; however, subsequent research has proved that its structure may be bipolar or even multidimensional. In the present work, the UCLA Loneliness Scale was completed by two populations: Spanish and Italian undergraduate university students. Results show a multifactorial structure in both samples. This research presents a theoretically and analytically coherent bifactorial structure. PMID:18940104

  15. Short communication: Selecting the most informative mid-infrared spectra wavenumbers to improve the accuracy of prediction models for detailed milk protein content.

    PubMed

    Niero, G; Penasa, M; Gottardo, P; Cassandro, M; De Marchi, M

    2016-03-01

    The objective of this study was to investigate the ability of mid-infrared spectroscopy (MIRS) to predict protein fraction contents of bovine milk samples by applying uninformative variable elimination (UVE) procedure to select the most informative wavenumber variables before partial least squares (PLS) analysis. Reference values (n=114) of protein fractions were measured using reversed-phase HPLC and spectra were acquired through MilkoScan FT6000 (Foss Electric A/S, Hillerød, Denmark). Prediction models were built using the full data set and tested with a leave-one-out cross-validation. Compared with MIRS models developed using standard PLS, the UVE procedure reduced the number of wavenumber variables to be analyzed through PLS regression and improved the accuracy of prediction by 6.0 to 66.7%. Good predictions were obtained for total protein, total casein (CN), and α-CN, which included αS1- and αS2-CN; moderately accurate predictions were observed for κ-CN and total whey protein; and unsatisfactory results were obtained for β-CN, α-lactalbumin, and β-lactoglobulin. Results indicated that UVE combined with PLS is a valid approach to enhance the accuracy of MIRS prediction models for milk protein fractions. PMID:26774721
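
    The premise behind UVE, that discarding uninformative wavenumbers improves PLS predictions, can be demonstrated with leave-one-out cross-validation on synthetic data. The sketch below cheats by selecting the truly informative variables directly instead of implementing UVE's reliability test, so it illustrates the principle only; all names and sizes are assumptions.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      rng = np.random.default_rng(3)

      # Synthetic stand-in: 114 'spectra' of 60 wavenumbers, of which only
      # the first 10 carry information about the protein fraction.
      X = rng.normal(size=(114, 60))
      y = X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=114)

      def loo_rmse(features):
          pred = cross_val_predict(PLSRegression(n_components=5), features, y,
                                   cv=LeaveOneOut()).ravel()
          return float(np.sqrt(np.mean((y - pred) ** 2)))

      print("all 60 variables   :", round(loo_rmse(X), 3))
      print("10 informative only:", round(loo_rmse(X[:, :10]), 3))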

  16. A strategy for multivariate calibration based on modified single-index signal regression: Capturing explicit non-linearity and improving prediction accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyu; Li, Qingbo; Zhang, Guangjun

    2013-11-01

    In this paper, a modified single-index signal regression (mSISR) method is proposed to construct a nonlinear and practical model with high accuracy. The mSISR method defines the optimal penalty tuning parameter in P-spline signal regression (PSR) as the initial tuning parameter and chooses the number of cycles based on minimizing the root mean squared error of cross-validation (RMSECV). mSISR is superior to single-index signal regression (SISR) in terms of accuracy, computation time and convergence, and it characterizes the non-linearity between spectra and responses more precisely than SISR. Two spectral data sets from basic research experiments, including plant chlorophyll nondestructive measurement and human blood glucose noninvasive measurement, are employed to illustrate the advantages of mSISR. The results indicate that the mSISR method (i) obtains a smooth and helpful regression coefficient vector, (ii) explicitly exhibits the type and amount of the non-linearity, (iii) can take advantage of nonlinear features of the signals to improve prediction performance and (iv) has distinct adaptability for complex spectral models compared with other calibration methods. The results validate mSISR as a promising nonlinear modeling strategy for multivariate calibration.

  17. Accuracy of polyp localization at colonoscopy

    PubMed Central

    O’Connor, Sam A.; Hewett, David G.; Watson, Marcus O.; Kendall, Bradley J.; Hourigan, Luke F.; Holtmann, Gerald

    2016-01-01

    Background and study aims: Accurate documentation of lesion localization at the time of colonoscopic polypectomy is important for future surveillance, management of complications such as delayed bleeding, and for guiding surgical resection. We aimed to assess the accuracy of endoscopic localization of polyps during colonoscopy and examine variables that may influence this accuracy. Patients and methods: We conducted a prospective observational study in consecutive patients presenting for elective, outpatient colonoscopy. All procedures were performed by Australian certified colonoscopists. The endoscopic location of each polyp was reported by the colonoscopist at the time of resection and prospectively recorded. Magnetic endoscope imaging was used to determine polyp location, and colonoscopists were blinded to this image. Three experienced colonoscopists, blinded to the endoscopist’s assessment of polyp location, independently scored the magnetic endoscope images to obtain a reference standard for polyp location (Cronbach alpha 0.98). The accuracy of colonoscopist polyp localization using this reference standard was assessed, and colonoscopist, procedural and patient variables affecting accuracy were evaluated. Results: A total of 155 patients were enrolled and 282 polyps were resected in 95 patients by 14 colonoscopists. The overall accuracy of polyp localization was 85 % (95 % confidence interval [CI]: 60–96 %). Accuracy varied significantly (P < 0.001) by colonic segment: caecum 100 %, ascending 77 % (CI: 65–90 %), transverse 84 % (CI: 75–92 %), descending 56 % (CI: 32–81 %), sigmoid 88 % (CI: 79–97 %), rectum 96 % (CI: 90–101 %). There were significant differences in accuracy between colonoscopists (P < 0.001), and colonoscopist experience was a significant independent predictor of accuracy (OR 3.5, P = 0.028) after adjustment for patient and procedural variables. Conclusions: Accuracy of

  18. Towards Experimental Accuracy from the First Principles

    NASA Astrophysics Data System (ADS)

    Polyansky, O. L.; Lodi, L.; Tennyson, J.; Zobov, N. F.

    2013-06-01

    Producing ab initio ro-vibrational energy levels of small, gas-phase molecules with an accuracy of 0.10 cm^{-1} would constitute a significant step forward in theoretical spectroscopy and would place calculated line positions considerably closer to typical experimental accuracy. Such an accuracy has been recently achieved for the H_3^+ molecular ion for line positions up to 17 000 cm^{-1}. However, since H_3^+ is a two-electron system, the electronic structure methods used in this study are not applicable to larger molecules. A major breakthrough was reported in ref., where an accuracy of 0.10 cm^{-1} was achieved ab initio for seven water isotopologues. Calculated vibrational and rotational energy levels up to 15 000 cm^{-1} and J=25 resulted in a standard deviation of 0.08 cm^{-1} with respect to accurate reference data. As far as line intensities are concerned, we have already achieved for water a typical accuracy of 1% which surpasses average experimental accuracy. Our results are being actively extended along two major directions. First, there are clear indications that our results for water can be improved to an accuracy of the order of 0.01 cm^{-1} by further, detailed ab initio studies. Such a level of accuracy would already be competitive with experimental results in some situations. A second, major, direction of study is the extension of such a 0.1 cm^{-1} accuracy to molecules containing more electrons or more than one non-hydrogen atom, or both. As examples of such developments we will present new results for CO, HCN and H_2S, as well as preliminary results for NH_3 and CH_4. O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky and A.G. Csaszar, Phil. Trans. Royal Soc. London A, {370}, 5014-5027 (2012). O.L. Polyansky, R.I. Ovsyannikov, A.A. Kyuberis, L. Lodi, J. Tennyson and N.F. Zobov, J. Phys. Chem. A, (in press). L. Lodi, J. Tennyson and O.L. Polyansky, J. Chem. Phys. {135}, 034113 (2011).

  19. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  20. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^{-15} for periods of 30-100 days.

  1. Activity monitor accuracy in persons using canes.

    PubMed

    Wendland, Deborah Michael; Sprigle, Stephen H

    2012-01-01

    The StepWatch activity monitor has not been validated on multiple indoor and outdoor surfaces in a population using ambulation aids. The aims of this technical report are to report on strategies to configure the StepWatch activity monitor on subjects using a cane and to report the accuracy of both leg-mounted and cane-mounted StepWatch devices on people ambulating over different surfaces while using a cane. Sixteen subjects aged 67 to 85 yr (mean 75.6) who regularly use a cane for ambulation participated. StepWatch calibration was performed by adjusting sensitivity and cadence. Following calibration optimization, accuracy was tested on both the leg-mounted and cane-mounted devices on different surfaces, including linoleum, sidewalk, grass, ramp, and stairs. The leg-mounted device had an accuracy of 93.4% across all surfaces, while the cane-mounted device had an aggregate accuracy of 84.7% across all surfaces. Accuracy of the StepWatch on the stairs was significantly less accurate (p < 0.001) when comparing surfaces using repeated measures analysis of variance. When monitoring community mobility, placement of a StepWatch on a person and his/her ambulation aid can accurately document both activity and device use. PMID:23341318

  2. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor-quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  3. Decreased interoceptive accuracy following social exclusion.

    PubMed

    Durlik, Caroline; Tsakiris, Manos

    2015-04-01

    The need for social affiliation is one of the most important and fundamental human needs. Unsurprisingly, humans display strong negative reactions to social exclusion. In the present study, we investigated the effect of social exclusion on interoceptive accuracy - accuracy in detecting signals arising inside the body - measured with a heartbeat perception task. We manipulated exclusion using Cyberball, a widely used paradigm of a virtual ball-tossing game, with half of the participants being included during the game and the other half of participants being ostracized during the game. Our results indicated that heartbeat perception accuracy decreased in the excluded, but not in the included, participants. We discuss these results in the context of social and physical pain overlap, as well as in relation to internally versus externally oriented attention. PMID:25701592

  4. Affecting speed and accuracy in perception.

    PubMed

    Bocanegra, Bruno R

    2014-12-01

    An account of affective modulations in perceptual speed and accuracy (ASAP: Affecting Speed and Accuracy in Perception) is proposed and tested. This account assumes an emotion-induced inhibitory interaction between parallel channels in the visual system that modulates the onset latencies and response durations of visual signals. By trading off speed and accuracy between channels, this mechanism achieves (a) fast visuo-motor responding to course-grained information, and (b) accurate visuo-attentional selection of fine-grained information. ASAP gives a functional account of previously counterintuitive findings, and may be useful for explaining affective influences in both featural-level single-stimulus tasks and object-level multistimulus tasks. PMID:24853268

  5. Training in timing improves accuracy in golf.

    PubMed

    Libkuman, Terry M; Otani, Hajime; Steger, Neil

    2002-01-01

    In this experiment, the authors investigated the influence of training in timing on performance accuracy in golf. During pre- and posttesting, 40 participants hit golf balls with 4 different clubs in a golf course simulator. The dependent measure was the distance in feet that the ball ended from the target. Between the pre- and posttest, participants in the experimental condition received 10 hr of timing training with an instrument that was designed to train participants to tap their hands and feet in synchrony with target sounds. The participants in the control condition read literature about how to improve their golf swing. The results indicated that the participants in the experimental condition significantly improved their accuracy relative to the participants in the control condition, who did not show any improvement. We concluded that training in timing leads to improvement in accuracy, and that our results have implications for training in golf as well as other complex motor activities. PMID:12038497

  6. Final Technical Report: Increasing Prediction Accuracy.

    SciTech Connect

    King, Bruce Hardison; Hansen, Clifford; Stein, Joshua

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  7. The accuracy of Halley's cometary orbits

    NASA Astrophysics Data System (ADS)

    Hughes, D. W.

    The accuracy of a scientific computation depends in the main on the data fed in and the analysis method used. This statement is certainly true of Edmond Halley's cometary orbit work. Considering the 420 comets that had been seen before Halley's era of orbital calculation (1695 - 1702), only 24, according to him, had been observed well enough for their orbits to be calculated. Two questions are considered in this paper: do all the orbits listed by Halley have the same accuracy, and how accurate was Halley's method of calculation?

  8. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  9. Field Accuracy Test of Rpas Photogrammetry

    NASA Astrophysics Data System (ADS)

    Barry, P.; Coakley, R.

    2013-08-01

    Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt that it was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions, within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 Ha site. We flew a UAV over the test area in a "lawnmower track" pattern with an 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and inputted these into our photo modelling software, Agisoft PhotoScan. The remaining GPS coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy we achieved throughout the 45 check points was 95% reliably within 41 mm horizontally and 68 mm vertically and with an 11.7 mm ground sample distance taken from a flight altitude above ground level of 90 m. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 Ha. This finding has shown

  10. Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy

    ERIC Educational Resources Information Center

    Maris, Gunter; van der Maas, Han

    2012-01-01

    Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…

  11. High Accuracy Transistor Compact Model Calibrations

    SciTech Connect

    Hembree, Charles E.; Mar, Alan; Robertson, Perry J.

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, these models can be used to describe part-to-part variations as well as an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  12. High accuracy in short ISS missions

    NASA Astrophysics Data System (ADS)

    Rüeger, J. M.

    1986-06-01

    Traditionally Inertial Surveying Systems (ISS) are used for missions of 30 km to 100 km length. Today, a new type of ISS application is emanating from an increased need for survey control densification in urban areas, often in connection with land information systems or cadastral surveys. The accuracy requirements of urban surveys are usually high. The loss in accuracy caused by the coordinate transfer between IMU and ground marks is investigated and an offsetting system based on electronic tacheometers is proposed. An offsetting system based on a Hewlett-Packard HP 3820A electronic tacheometer has been tested in Sydney (Australia) in connection with a vehicle-mounted LITTON Auto-Surveyor System II. On missions over 750 m (8 stations, 25 minutes duration, 3.5 minute ZUPT intervals, mean offset distances 9 metres) accuracies of 37 mm (one sigma) in position and 8 mm in elevation were achieved. Some improvements to the LITTON Auto-Surveyor System II are suggested which would improve the accuracies even further.

  13. Direct Behavior Rating: Considerations for Rater Accuracy

    ERIC Educational Resources Information Center

    Harrison, Sayward E.; Riley-Tillman, T. Chris; Chafouleas, Sandra M.

    2014-01-01

    Direct behavior rating (DBR) offers users a flexible, feasible method for the collection of behavioral data. Previous research has supported the validity of using DBR to rate three target behaviors: academic engagement, disruptive behavior, and compliance. However, the effect of the base rate of behavior on rater accuracy has not been established.…

  14. Vowel Space Characteristics and Vowel Identification Accuracy

    ERIC Educational Resources Information Center

    Neel, Amy T.

    2008-01-01

    Purpose: To examine the relation between vowel production characteristics and intelligibility. Method: Acoustic characteristics of 10 vowels produced by 45 men and 48 women from the J. M. Hillenbrand, L. A. Getty, M. J. Clark, and K. Wheeler (1995) study were examined and compared with identification accuracy. Global (mean f0, F1, and F2;…

  15. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h data are required for high-precision results; however, real-life situations do not always let us collect 24 h of data. Thus repeated GPS surveys of 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently some research groups turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now, mostly regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of the NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of a year, covering two years in succession, are used in the analysis. Our major conclusion is that a reformulation of GPS positioning accuracy is necessary when taking seasonal effects into account, and the typical one-term accuracy formulation is expanded to a two-term one.

  16. Accuracy Assessment for AG500, Electromagnetic Articulograph

    ERIC Educational Resources Information Center

    Yunusova, Yana; Green, Jordan R.; Mefferd, Antje

    2009-01-01

    Purpose: The goal of this article was to evaluate the accuracy and reliability of the AG500 (Carstens Medizinelectronik, Lenglern, Germany), an electromagnetic device developed recently to register articulatory movements in three dimensions. This technology seems to have unprecedented capabilities to provide rich information about time-varying…

  17. 47 CFR 65.306 - Calculation accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Calculation accuracy. 65.306 Section 65.306 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Exchange Carriers § 65.306 Calculation...

  18. Navigation Accuracy Guidelines for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2004-01-01

    Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
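
    The guideline can be made concrete with the standard first-order relation for near-circular orbits: a semi-major-axis difference Δa produces along-track drift at a rate of 1.5·n·Δa, about 3πΔa per revolution. The orbit and error values below are assumed for illustration; the paper's guidelines extend to elliptical orbits.

      import math

      MU = 3.986004418e14          # Earth's GM, m^3/s^2

      def along_track_drift(a, delta_a):
          """First-order along-track drift from a semi-major-axis difference."""
          n = math.sqrt(MU / a ** 3)           # mean motion, rad/s
          rate = 1.5 * n * delta_a             # m/s
          per_orbit = 3.0 * math.pi * delta_a  # m per revolution
          return rate, per_orbit

      # A 10 m semi-major-axis error in a ~700 km LEO formation (assumed):
      rate, per_orbit = along_track_drift(7078e3, 10.0)
      print(f"{rate * 86400:.0f} m/day drift, {per_orbit:.0f} m per orbit")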

  19. Accuracy of Information Processing under Focused Attention.

    ERIC Educational Resources Information Center

    Bastick, Tony

    This paper reports the results of an experiment on the accuracy of information processing during attention focused arousal under two conditions: single estimation and double estimation. The attention of 187 college students was focused by a task requiring high level competition for a monetary prize ($10) under severely limited time conditions. The…

  20. Observed Consultation: Confidence and Accuracy of Assessors

    ERIC Educational Resources Information Center

    Tweed, Mike; Ingham, Christopher

    2010-01-01

    Judgments made by the assessors observing consultations are widely used in the assessment of medical students. The aim of this research was to study judgment accuracy and confidence and the relationship between these. Assessors watched recordings of consultations, scoring the students on: a checklist of items; attributes of consultation; a…

  1. Accuracy of References in Five Entomology Journals.

    ERIC Educational Resources Information Center

    Kristof, Cynthia

    In this paper, the bibliographical references in five core entomology journals are examined for citation accuracy in order to determine if the error rates are similar. Every reference printed in each journal's first issue of 1992 was examined, and these were compared to the original (cited) publications, if possible, in order to determine the…

  2. 47 CFR 65.306 - Calculation accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Calculation accuracy. 65.306 Section 65.306 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Exchange Carriers § 65.306 Calculation...

  3. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as, the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied on bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence the performance was better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction and to correct for systematic errors (accuracy) and to provide a value of the precision, by means of a confidence interval of the specific measurement. PMID:27044032
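
    For context, the ellipse method mentioned above estimates the angle of incidence from the shape of the primary defect: a shallower strike leaves a more elongated ellipse, and the angle measured from the target surface is commonly taken as arcsin(width/length). The sketch below applies that textbook relation to invented defect dimensions; the paper evaluates several variants of this and the other methods.

      import math

      def ellipse_incidence_angle(width, length):
          """Ellipse method: angle of incidence (degrees from the surface)
          estimated as arcsin(minor axis / major axis) of the defect."""
          return math.degrees(math.asin(width / length))

      print(f"{ellipse_incidence_angle(9.0, 14.0):.1f} degrees")  # invented defect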

  4. Measuring Tracking Accuracy of CCD Imagers

    NASA Technical Reports Server (NTRS)

    Stanton, R. H.; Dennison, E. W.

    1985-01-01

    Tracking accuracy and resolution of charge-coupled device (CCD) imaging arrays measured by an instrument originally developed for measuring performance of a star-tracking telescope. Operates by projecting one or more artificial star images on the surface of the CCD array, moving the stars in controlled patterns, and comparing star locations computed from CCD outputs with those calculated from step coordinates of the micropositioner.

  5. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423
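
    The best-fit alignment step can be illustrated with an ordinary Procrustes superimposition, a one-shot stand-in for the repeated best-fit algorithm used in the study; the landmark coordinates below are invented.

      import numpy as np
      from scipy.spatial import procrustes

      # Reference landmarks (e.g., cusp tips, implant positions) and the same
      # landmarks as measured on a scanned model, with small invented noise.
      reference = np.array([[0., 0.], [10., 0.], [10., 8.], [0., 8.], [5., 4.]])
      scanned = reference + np.random.default_rng(4).normal(scale=0.05,
                                                            size=reference.shape)

      # Translate, scale and rotate 'scanned' onto the reference, then report
      # the residual disparity (sum of squared deviations after best fit).
      ref_t, scan_t, disparity = procrustes(reference, scanned)
      print(f"residual disparity after best fit: {disparity:.2e}")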

  6. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems that arise when testing the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both of these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
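
    The commission/omission bookkeeping described above is mechanical once the error matrix is tabulated. A minimal Python sketch with a hypothetical 3-class matrix (rows = interpretation, columns = verification; the counts are invented):

      import numpy as np

      # Hypothetical error matrix: rows = interpretation, columns = verification.
      m = np.array([[50,  3,  2],
                    [ 5, 40,  5],
                    [ 2,  7, 36]])

      correct = np.diag(m)
      commission = m.sum(axis=1) - correct  # remaining row elements
      omission = m.sum(axis=0) - correct    # remaining column elements
      overall = correct.sum() / m.sum()
      print("commission:", commission, "omission:", omission)
      print(f"overall accuracy = {overall:.3f}")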

  7. The impact of accuracy motivation on interpretation, comparison, and correction processes: accuracy x knowledge accessibility effects.

    PubMed

    Stapel, D A; Koomen, W; Zeelenberg, M

    1998-04-01

    Four studies provide evidence for the notion that there may be boundaries to the extent to which accuracy motivation may help perceivers to escape the influence of fortuitously activated information. Specifically, although accuracy motivations may eliminate assimilative accessibility effects, they are less likely to eliminate contrastive accessibility effects. It was found that the occurrence of different types of contrast effects (comparison and correction) was not significantly affected by participants' accuracy motivations. Furthermore, it was found that the mechanisms instigated by accuracy motivations differ from those ignited by correction instructions: Accuracy motivations attenuate assimilation effects because perceivers add target interpretations to the one suggested by primed information. Conversely, it was found that correction instructions yield contrast and prompt respondents to remove the priming event's influence from their reaction to the target. PMID:9569650

  8. Audiovisual biofeedback improves motion prediction accuracy

    PubMed Central

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-01-01

    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR images was implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion
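
    The RMSE used above to score predictions is a direct computation on the real and predicted traces. A minimal Python sketch with synthetic signals standing in for the respiratory data:

      import numpy as np

      def rmse(real, predicted):
          # Root mean square error between real and predicted signals.
          real, predicted = np.asarray(real), np.asarray(predicted)
          return np.sqrt(np.mean((real - predicted) ** 2))

      # Synthetic stand-ins for a respiratory trace and its prediction:
      t = np.linspace(0, 10, 300)
      real = np.sin(2 * np.pi * 0.25 * t)
      predicted = np.sin(2 * np.pi * 0.25 * (t - 0.4))  # 400 ms lag
      print(f"RMSE = {rmse(real, predicted):.3f}")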

  9. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  10. ICan: An Optimized Ion-Current-Based Quantification Procedure with Enhanced Quantitative Accuracy and Sensitivity in Biomarker Discovery

    PubMed Central

    2015-01-01

    The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced the ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated, new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and a lower false-positive rate for discovering altered proteins, over current popular pipelines. A spiked-in experiment was used to evaluate the performance of ICan to detect small changes. In this study, E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP proteins (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined as significantly altered proteins, with cutoff thresholds of ≥1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan can be broadly applicable to reliable and sensitive proteomic survey of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here such as normalization, protein ratio determination, and statistical analyses are also valuable for data analysis by isotope-labeling methods. PMID:25285707
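
    The significance call in the spike-in experiment reduces to a joint fold-change and p-value threshold. A minimal Python sketch, assuming per-protein ratios and p-values have already been computed; the names and numbers are invented:

      # Invented (ratio, p-value) pairs, not data from the study:
      proteins = {"P1": (1.71, 0.004), "P2": (1.05, 0.62), "P3": (0.55, 0.01)}

      def significantly_altered(ratio, p, fold=1.3, alpha=0.05):
          # Altered if the change is at least `fold` in either direction
          # and the p-value clears the significance threshold.
          return (ratio >= fold or ratio <= 1.0 / fold) and p <= alpha

      hits = [n for n, (r, p) in proteins.items() if significantly_altered(r, p)]
      print(hits)  # ['P1', 'P3']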

  11. Sub part-per-million mass accuracy by using stepwise-external calibration in fourier transform ion cyclotron resonance mass spectrometry.

    PubMed

    Wong, Richard L; Amster, I Jonathan

    2006-12-01

    A new external calibration procedure for FT-ICR mass spectrometry is presented, stepwise-external calibration. This method is demonstrated for MALDI analysis of peptide mixtures, but is applicable to any ionization method. For this procedure, the masses of analyte peaks are first accurately measured at a low trapping potential (0.63 V) using external calibration. These accurately determined (< 1 ppm accuracy) analyte peaks are used as internal calibrant points for a second mass spectrum that is acquired for the same sample at a higher trapping potential (1.0 V). The second mass spectrum has an approximately 10-fold improvement in detection dynamic range compared with the first spectrum acquired at a low trapping potential. A calibration equation that accounts for local and global space charge is shown to provide mass accuracy with external calibration that is nearly identical to that of internal calibration, without the drawbacks of experimental complexity or reduction of abundance dynamic range. For the 609 mass peaks measured using the stepwise-external calibration method, the root-mean-square error is 0.9 ppm. The errors appear to have a Gaussian distribution; 99.3% of the mass errors are shown to lie within three times the sample standard deviation (2.6 ppm) of their true value. PMID:16934995
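
    Mass accuracy in ppm, and the RMS and 3-standard-deviation coverage statistics quoted above, are computed as follows. A minimal Python sketch with invented measured/theoretical mass pairs:

      import numpy as np

      measured = np.array([1046.5423, 1296.6853, 1672.9178])     # invented m/z
      theoretical = np.array([1046.5418, 1296.6848, 1672.9170])

      ppm = (measured - theoretical) / theoretical * 1e6
      rms = np.sqrt(np.mean(ppm ** 2))
      within_3sd = np.mean(np.abs(ppm - ppm.mean()) <= 3 * ppm.std(ddof=1))
      print(ppm.round(2), f"RMS = {rms:.2f} ppm",
            f"fraction within 3 sd = {within_3sd:.2f}")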

  12. Positional Accuracy Assessment of Googleearth in Riyadh

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

    Google Earth is a virtual globe, map, and geographical information program operated by Google. It maps the Earth by the superimposition of images obtained from satellite imagery, aerial photography, and its GIS 3D globe. With millions of users all around the globe, GoogleEarth® has become the ultimate source of spatial data and information for private and public decision-support systems, besides many types and forms of social interactions. Many users, mostly in developing countries, are also using it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of GoogleEarth® imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates, respectively.

  13. High Accuracy Fuel Flowmeter, Phase 1

    NASA Technical Reports Server (NTRS)

    Mayer, C.; Rose, L.; Chan, A.; Chin, B.; Gregory, W.

    1983-01-01

    Technology related to aircraft fuel mass-flowmeters was reviewed to determine what flowmeter types could provide 0.25%-of-point accuracy over a 50-to-one range in flowrates. Three types were selected and further analyzed to determine what problem areas prevented them from meeting the high accuracy requirement and what the further development needs were for each. A dual-turbine volumetric flowmeter with densi-viscometer and microprocessor compensation was selected for its relative simplicity and fast response time. An angular-momentum type with a motor-driven, spring-restrained turbine and viscosity shroud was selected for its direct mass-flow output. This concept also employed a turbine for fast response and a microcomputer for accurate viscosity compensation. The third concept employed a vortex precession volumetric flowmeter and was selected for its unobtrusive design. Like the turbine flowmeter, it uses a densi-viscometer and microprocessor for density correction and accurate viscosity compensation.

  14. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft is presented. These entities involve transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes). In fact, the former is just a particular case of the latter. The accuracy model is later applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.
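
    The underlying statistics are those of any Monte Carlo estimator: the relative error of a hit-count estimate of a configuration factor shrinks as 1/sqrt(N) with the number of rays. A minimal Python sketch, with a Bernoulli "ray hits target" model standing in for actual ray tracing:

      import numpy as np

      rng = np.random.default_rng(0)
      p_true = 0.2  # stand-in for the true configuration factor

      for n_rays in (10**3, 10**4, 10**5, 10**6):
          hits = rng.random(n_rays) < p_true  # ray "hits" the target
          estimate = hits.mean()
          rel_err = abs(estimate - p_true) / p_true
          expected = np.sqrt((1 - p_true) / (p_true * n_rays))  # expected rel. SE
          print(f"N={n_rays:>7}  estimate={estimate:.4f}  "
                f"rel.err={rel_err:.4f}  expected~{expected:.4f}")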

  15. Do saccharide doped PAGAT dosimeters increase accuracy?

    NASA Astrophysics Data System (ADS)

    Berndt, B.; Skyt, P. S.; Holloway, L.; Hill, R.; Sankar, A.; De Deene, Y.

    2015-01-01

    To improve the dosimetric accuracy of normoxic polyacrylamide gelatin (PAGAT) gel dosimeters, the addition of saccharides (glucose and sucrose) has been suggested. An increase in R2-response sensitivity upon irradiation will result in smaller uncertainties in the derived dose if all other uncertainties are conserved. However, temperature variations during the magnetic resonance scanning of polymer gels result in one of the highest contributions to dosimetric uncertainties. The purpose of this project was to study the dose sensitivity against the temperature sensitivity. The overall dose uncertainty of PAGAT gel dosimeters with different concentrations of saccharides (0, 10 and 20%) was investigated. For high concentrations of glucose or sucrose, a clear improvement of the dose sensitivity was observed. For doses up to 6 Gy, the overall dose uncertainty was reduced up to 0.3 Gy for all saccharide loaded gels compared to PAGAT gel. Higher concentrations of glucose and sucrose deteriorate the accuracy of PAGAT dosimeters for doses above 9 Gy.

  16. Accuracy of forecasts in strategic intelligence

    PubMed Central

    Mandel, David R.; Barnes, Alan

    2014-01-01

    The accuracy of 1,514 strategic intelligence forecasts abstracted from intelligence reports was assessed. The results show that both discrimination and calibration of forecasts were very good. Discrimination was better for senior (versus junior) analysts and for easier (versus harder) forecasts. Miscalibration was mainly due to underconfidence such that analysts assigned more uncertainty than needed given their high level of discrimination. Underconfidence was more pronounced for harder (versus easier) forecasts and for forecasts deemed more (versus less) important for policy decision making. Despite the observed underconfidence, there was a paucity of forecasts in the least informative 0.4–0.6 probability range. Recalibrating the forecasts substantially reduced underconfidence. The findings offer cause for tempered optimism about the accuracy of strategic intelligence forecasts and indicate that intelligence producers aim to promote informativeness while avoiding overstatement. PMID:25024176
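
    Calibration, as assessed above, compares the mean assigned probability within a bin with the observed frequency of the forecast event; underconfidence shows up as observed frequencies more extreme than the forecasts. A minimal Python sketch on synthetic forecasts:

      import numpy as np

      rng = np.random.default_rng(1)
      probs = rng.uniform(0, 1, 2000)        # synthetic forecast probabilities
      outcomes = rng.random(2000) < probs    # synthetic, well-calibrated events

      bins = np.linspace(0, 1, 6)            # five probability bins
      for lo, hi in zip(bins[:-1], bins[1:]):
          sel = (probs >= lo) & (probs < hi)
          if sel.any():
              print(f"[{lo:.1f},{hi:.1f})  mean forecast={probs[sel].mean():.2f}"
                    f"  observed freq={outcomes[sel].mean():.2f}  n={sel.sum()}")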

  17. Accuracy of NHANES periodontal examination protocols.

    PubMed

    Eke, P I; Thornton-Evans, G O; Wei, L; Borgnakke, W S; Dye, B A

    2010-11-01

    This study evaluates the accuracy of periodontitis prevalence determined by the National Health and Nutrition Examination Survey (NHANES) partial-mouth periodontal examination protocols. True periodontitis prevalence was determined in a new convenience sample of 454 adults ≥ 35 years old, by a full-mouth "gold standard" periodontal examination. This actual prevalence was compared with prevalence resulting from analysis of the data according to the protocols of NHANES III and NHANES 2001-2004, respectively. Both NHANES protocols substantially underestimated the prevalence of periodontitis by 50% or more, depending on the periodontitis case definition used, and thus performed below threshold levels for moderate-to-high levels of validity for surveillance. Adding measurements from lingual or interproximal sites to the NHANES 2001-2004 protocol did not improve the accuracy sufficiently to reach acceptable sensitivity thresholds. These findings suggest that NHANES protocols produce high levels of misclassification of periodontitis cases and thus have low validity for surveillance and research. PMID:20858782

  18. Improvement in Rayleigh Scattering Measurement Accuracy

    NASA Technical Reports Server (NTRS)

    Fagan, Amy F.; Clem, Michelle M.; Elam, Kristie A.

    2012-01-01

    Spectroscopic Rayleigh scattering is an established flow diagnostic that has the ability to provide simultaneous velocity, density, and temperature measurements. The Fabry-Perot interferometer or etalon is a commonly employed instrument for resolving the spectrum of molecular Rayleigh scattered light for the purpose of evaluating these flow properties. This paper investigates the use of an acousto-optic frequency shifting device to improve measurement accuracy in Rayleigh scattering experiments at the NASA Glenn Research Center. The frequency shifting device is used as a means of shifting the incident or reference laser frequency by 1100 MHz to avoid overlap of the Rayleigh and reference signal peaks in the interference pattern used to obtain the velocity, density, and temperature measurements, and also to calibrate the free spectral range of the Fabry-Perot etalon. The measurement accuracy improvement is evaluated by comparison of Rayleigh scattering measurements acquired with and without shifting of the reference signal frequency in a 10 mm diameter subsonic nozzle flow.

  19. Marginal accuracy of temporary composite crowns.

    PubMed

    Tjan, A H; Tjan, A H; Grant, B E

    1987-10-01

    An in vitro study was conducted to quantitatively compare the marginal adaptation of temporary crowns made from Protemp material with those made from Scutan, Provisional, and Trim materials. A direct technique was used to make temporary restorations on prepared teeth with an impression as a matrix. Protemp, Trim, and Provisional materials produced temporary crowns of comparable accuracy. Crowns made from Scutan material had open margins. PMID:2959770

  20. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.

  1. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limitation of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios of these factors are evaluated according to the normal operating status of power grid measurement. Based on the evaluation and simulation, the errors of phase angle and frequency caused by each factor are calculated and discussed.

  2. Gravitational model effects on ICBM accuracy

    NASA Astrophysics Data System (ADS)

    Ford, C. T.

    This paper describes methods used to assess the contribution of ICBM gravitational model errors to targeting accuracy. The evolution of gravitational model complexity, in both format and data base development, is summarized. Error analysis methods associated with six identified error sources are presented: geodetic coordinate errors; spherical harmonic potential function errors of commission and omission; and surface gravity anomaly errors of reduction, representation, and omission.

  3. Accuracy of pointing a binaural listening array.

    PubMed

    Letowski, T R; Ricard, G L; Kalb, J T; Mermagen, T J; Amrein, K M

    1997-12-01

    We measured the accuracy with which sounds heard over a binaural, end-fire array could be located when the angular separation of the array's two arms was varied. Each individual arm contained nine cardioid electret microphones, the responses of which were combined to produce a unidirectional, band-limited pattern of sensitivity. We assessed the desirable angular separation of these arms by measuring the accuracy with which listeners could point to the source of a target sound presented against high-level background noise. We employed array separations of 30 degrees, 45 degrees, and 60 degrees, and signal-to-noise ratios of +5, -5, and -15 dB. Pointing accuracy was best for a separation of 60 degrees; this performance was indistinguishable from pointing during unaided listening conditions. In addition, the processing of the array was modeled to depict the information that was available for localization. The model indicates that highly directional binaural arrays can be expected to support accurate localization of sources of sound only near the axis of the array. Wider enhanced listening angles may be possible if the forward coverage of the sensor system is made less directional and more similar to that of human listeners. PMID:9473975

  4. Accuracy test procedure for image evaluation techniques.

    PubMed

    Jones, R A

    1968-01-01

    A procedure has been developed to determine the accuracy of image evaluation techniques. In the procedure, a target having orthogonal test arrays is photographed with a high quality optical system. During the exposure, the target is subjected to horizontal linear image motion. The modulation transfer functions of the images in the horizontal and vertical directions are obtained using the evaluation technique. Since all other degradations are symmetrical, the quotient of the two modulation transfer functions represents the modulation transfer function of the experimentally induced linear image motion. In an accurate experiment, any discrepancy between the experimental determination and the true value is due to inaccuracy in the image evaluation technique. The procedure was used to test the Perkin-Elmer automated edge gradient analysis technique over the spatial frequency range of 0-200 c/m. This experiment demonstrated that the edge gradient technique is accurate over this region and that the testing procedure can be controlled with the desired accuracy. Similarly, the test procedure can be used to determine the accuracy of other image evaluation techniques. PMID:20062421

  5. Determination of GPS orbits to submeter accuracy

    NASA Technical Reports Server (NTRS)

    Bertiger, W. I.; Lichten, S. M.; Katsigris, E. C.

    1988-01-01

    Orbits for satellites of the Global Positioning System (GPS) were determined with submeter accuracy. Tests used to assess orbital accuracy include orbit comparisons from independent data sets, orbit prediction, ground baseline determination, and formal errors. One satellite tracked 8 hours each day shows rms error below 1 m even when predicted more than 3 days outside of a 1-week data arc. Differential tracking of the GPS satellites in high Earth orbit provides a powerful relative positioning capability, even when a relatively small continental U.S. fiducial tracking network is used with less than one-third of the full GPS constellation. To demonstrate this capability, baselines of up to 2000 km in North America were also determined with the GPS orbits. The 2000 km baselines show rms daily repeatability of 0.3 to 2 parts in 10^8 and agree with very long baseline interferometry (VLBI) solutions at the level of 1.5 parts in 10^8. This GPS demonstration provides an opportunity to test different techniques for high-accuracy orbit determination for high Earth orbiters. The best GPS orbit strategies included data arcs of at least 1 week, process noise models for tropospheric fluctuations, estimation of GPS solar pressure coefficients, and combined processing of GPS carrier phase and pseudorange data. For data arcs of 2 weeks, constrained process noise models for GPS dynamic parameters significantly improved the solutions.

  6. Precision standoff guidance antenna accuracy evaluation

    NASA Astrophysics Data System (ADS)

    Irons, F. H.; Landesberg, M. M.

    1981-02-01

    This report presents a summary of work done to determine the inherent angular accuracy achievable with the guidance and control precision standoff guidance antenna. The antenna is a critical element in the anti-jam single-station guidance program, since its characteristics can limit the intrinsic location guidance accuracy. It was important to determine the extent to which high-ratio beamsplitting results could be achieved repeatedly and what issues were involved with calibrating the antenna. The antenna accuracy has been found to be on the order of 0.006 deg through the use of a straightforward lookup table concept. This corresponds to a cross-range error of 21 m at a range of 200 km. This figure includes both pointing errors and off-axis estimation errors. It was found that the antenna off-boresight calibration is adequately represented by a straight line for each position plus a lookup table for pointing errors relative to broadside. In the event recalibration is required, it was found that only 1% of the model would need to be corrected.
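
    The quoted cross-range error is just the small-angle product of range and angular error; a quick Python check of the 0.006 deg at 200 km figure:

      import math

      range_m = 200e3                  # 200 km
      angle_rad = math.radians(0.006)  # 0.006 deg pointing accuracy
      print(f"cross-range error = {range_m * angle_rad:.1f} m")  # ~20.9 m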

  7. A Family of Rater Accuracy Models.

    PubMed

    Wolfe, Edward W; Jiao, Hong; Song, Tian

    2015-01-01

    Engelhard (1996) proposed a rater accuracy model (RAM) as a means of evaluating rater accuracy in rating data, but very little research exists to determine the efficacy of that model. The RAM requires a transformation of the raw score data to accuracy measures by comparing rater-assigned scores to true scores. Indices computed from raw scores also exist for measuring rater effects, but these indices ignore deviations of rater-assigned scores from true scores. This paper compares the efficacy of two versions of the RAM (based on dichotomized and polytomized deviations of rater-assigned scores from true scores) with two versions of raw-score rater effect models (i.e., a Rasch partial credit model, PCM, and a Rasch rating scale model, RSM). Simulated data are used to demonstrate the efficacy with which these four models detect and differentiate three rater effects: severity, centrality, and inaccuracy. Results indicate that the RAMs are able to detect, but not differentiate, rater severity and inaccuracy, and are unable to detect rater centrality. The PCM and RSM, on the other hand, are able to both detect and differentiate all three of these rater effects. However, the RSM and PCM do not take into account true scores and may, therefore, be misleading when pervasive trends exist in the rater-assigned data. PMID:26075664

  8. Speed/accuracy tradeoff in force perception.

    PubMed

    Rank, Markus; Di Luca, Massimiliano

    2015-06-01

    There is a well-known tradeoff between speed and accuracy in judgments made under uncertainty. Diffusion models have been proposed to capture the increase in response time for more uncertain decisions and the change in performance due to a prioritization of speed or accuracy in the responses. Experimental paradigms have been confined to the visual modality, and model analyses have mostly used quantile-probability (QP) plots--response probability as a function of quantized RTs. Here, we extend diffusion modeling to haptics and test a novel type of analysis for judging model fit. Participants classified force stimuli applied to the hand as "high" or "low." Data in QP plots indicate that the diffusion model captures well the overall pattern of responses in conditions where either speed or accuracy has been prioritized. To further the analysis, we compute just noticeable difference (JND) values separately for responses delivered with different RTs--we define these as JND-quantile plots. The pattern of results evidences that slower responses lead to better force discrimination up to a plateau that is unaffected by prioritization instructions. Instead, the diffusion model predicts two well-separated plateaus depending on the condition. We propose that analyzing the relation between JNDs and response time should be considered in the evaluation of the diffusion model beyond the haptic modality, thus including vision. PMID:25867512

  9. Solving Nonlinear Euler Equations with Arbitrary Accuracy

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2005-01-01

    A computer program that efficiently solves the time-dependent, nonlinear Euler equations in two dimensions to an arbitrarily high order of accuracy has been developed. The program implements a modified form of a prior arbitrary-accuracy simulation algorithm that is a member of the class of algorithms known in the art as modified expansion solution approximation (MESA) schemes. Whereas millions of lines of code were needed to implement the prior MESA algorithm, it is possible to implement the present MESA algorithm with one or a few pages of Fortran code, the exact amount depending on the specific application. The ability to solve the Euler equations to arbitrarily high accuracy is especially beneficial in simulations of aeroacoustic effects in settings in which fully nonlinear behavior is expected - for example, at stagnation points of fan blades, where linearizing assumptions break down. At these locations, it is necessary to solve the full nonlinear Euler equations, and inasmuch as the acoustical energy is 4 to 5 orders of magnitude below that of the mean flow, it is necessary to achieve an overall fractional error of less than 10^-6 in order to faithfully simulate entropy, vortical, and acoustical waves.

  10. Ground Truth Accuracy Tests of GPS Seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Oberlander, D. J.; Davis, J. L.; Baena, R.; Ekstrom, G.

    2005-12-01

    As the precision of GPS determinations of site position continues to improve, the detection of smaller and faster geophysical signals becomes possible. However, a lack of independent measurements of these signals often precludes an assessment of the accuracy of such GPS position determinations. This may be particularly true for high-rate GPS applications. We have built an apparatus to assess the accuracy of GPS position determinations for high-rate applications, in particular the application known as "GPS seismology." The apparatus consists of a bidirectional, single-axis positioning table coupled to a digitally controlled stepping motor. The motor, in turn, is connected to a Field Programmable Gate Array (FPGA) chip that synchronously sequences through real historical earthquake profiles stored in Erasable Programmable Read-Only Memories (EPROMs). A GPS antenna attached to this positioning table undergoes the simulated seismic motions of the Earth's surface while collecting high-rate GPS data. Analysis of the time-dependent position estimates can then be compared to the "ground truth," and the resultant GPS error spectrum can be measured. We have made extensive measurements with this system while inducing simulated seismic motions in either the horizontal plane or the vertical axis. A second, stationary GPS antenna at a distance of several meters simultaneously collected high-rate (5 Hz) GPS data. We will present the calibration of this system, describe the GPS observations and data analysis, and assess the accuracy of GPS for high-rate geophysical applications and natural hazards mitigation.

  11. Piezoresistive position microsensors with ppm-accuracy

    NASA Astrophysics Data System (ADS)

    Stavrov, Vladimir; Shulev, Assen; Stavreva, Galina; Todorov, Vencislav

    2015-05-01

    In this article, the relation between position accuracy and the number of simultaneously measured values, such as coordinates, has been analyzed. Based on this, a conceptual layout of MEMS devices (microsensors) for multidimensional position monitoring, comprising a single anchored and a single actuated part, has been developed. Both parts are connected by a plurality of micromechanical flexures, and each flexure includes position-detecting cantilevers. Microsensors having detecting cantilevers oriented in the X and Y directions have been designed and prototyped. Experimentally measured results from the characterization of 1D, 2D and 3D position microsensors are reported as well. Exploiting different flexure layouts, a travel range between 50 μm and 1.8 mm and sensor sensitivities in the range between 30 μV/μm and 5 mV/μm at 1 V DC supply voltage have been demonstrated. A method for accurate calculation of all three Cartesian coordinates, based on measurement of at least three microsensor signals, has also been described. The analyses of experimental results prove the capability of position monitoring with ppm (part-per-million) accuracy. The technology for fabricating MEMS devices with sidewall-embedded piezoresistors removes restrictions that have limited their usability for high-accuracy position sensing. The present study is also part of a common strategy for developing a novel MEMS-based platform for simultaneous accurate measurement of various physical values when they are transduced to a change of position.

  12. Speed versus accuracy in collective decision making.

    PubMed Central

    Franks, Nigel R; Dornhaus, Anna; Fitzsimmons, Jon P; Stevens, Martin

    2003-01-01

    We demonstrate a speed versus accuracy trade-off in collective decision making. House-hunting ant colonies choose a new nest more quickly in harsh conditions than in benign ones and are less discriminating. The errors that occur in a harsh environment are errors of judgement not errors of omission because the colonies have discovered all of the alternative nests before they initiate an emigration. Leptothorax albipennis ants use quorum sensing in their house hunting. They only accept a nest, and begin rapidly recruiting members of their colony, when they find within it a sufficient number of their nest-mates. Here we show that these ants can lower their quorum thresholds between benign and harsh conditions to adjust their speed-accuracy trade-off. Indeed, in harsh conditions these ants rely much more on individual decision making than collective decision making. Our findings show that these ants actively choose to take their time over judgements and employ collective decision making in benign conditions when accuracy is more important than speed. PMID:14667335

  13. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

    Genomic selection is focused on prediction of breeding values of selection candidates by means of high-density markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configuration of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared on simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
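
    The prediction scheme described above (fit ridge regression to training individuals, predict breeding values of test individuals from their markers, then take the correlation with true breeding values as the accuracy) can be sketched on simulated data. All sizes and the penalty below are illustrative choices, not values from the paper:

      import numpy as np

      rng = np.random.default_rng(2)
      n_train, n_test, n_markers = 200, 100, 1000
      X = rng.integers(0, 3, size=(n_train + n_test, n_markers)).astype(float)
      beta = np.zeros(n_markers)
      beta[rng.choice(n_markers, 20, replace=False)] = rng.normal(0, 1, 20)  # QTLs
      g = X @ beta                            # true breeding values
      y = g + rng.normal(0, g.std(), g.size)  # phenotypes, heritability ~ 0.5

      lam = 100.0                             # illustrative ridge penalty
      Xtr, ytr = X[:n_train], y[:n_train]
      beta_hat = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_markers), Xtr.T @ ytr)

      g_hat = X[n_train:] @ beta_hat          # predicted breeding values
      accuracy = np.corrcoef(g_hat, g[n_train:])[0, 1]
      print(f"genomic selection accuracy = {accuracy:.2f}")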

  14. 100% Classification Accuracy Considered Harmful: The Normalized Information Transfer Factor Explains the Accuracy Paradox

    PubMed Central

    Valverde-Albacete, Francisco J.; Peláez-Moreno, Carmen

    2014-01-01

    The most widespread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis in which every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to “cheat” using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind-reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers. PMID:24427282

  15. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox.

    PubMed

    Valverde-Albacete, Francisco J; Peláez-Moreno, Carmen

    2014-01-01

    The most widespread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis in which every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind-reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers. PMID:24427282
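
    The accuracy paradox is easy to reproduce: a majority-class classifier can score high accuracy while transferring no information at all. A minimal Python sketch comparing accuracy with mutual information (a simpler information-theoretic quantity than the EMA/NIT machinery, used here only to make the point) for two invented contingency matrices:

      import numpy as np

      def accuracy_and_mi(m):
          # Accuracy and mutual information (bits) of a contingency matrix
          # (rows = true class, columns = predicted class).
          p = m / m.sum()
          px = p.sum(axis=1, keepdims=True)
          py = p.sum(axis=0, keepdims=True)
          nz = p > 0
          mi = (p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum()
          return np.trace(p), mi

      majority = np.array([[90, 0], [10, 0]])  # always predicts class 0
      informed = np.array([[80, 10], [2, 8]])  # slightly less accurate
      for name, m in (("majority", majority), ("informed", informed)):
          acc, mi = accuracy_and_mi(m)
          print(f"{name}: accuracy={acc:.2f}, mutual information={mi:.3f} bits")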

  16. Intelligence: The Speed and Accuracy Tradeoff in High Aptitude Individuals.

    ERIC Educational Resources Information Center

    Lajoie, Suzanne P.; Shore, Bruce M.

    1986-01-01

    The relative contributions of mental speed and accuracy to Primary Mental Ability (PMA) IQ prediction were studied in 52 high ability grade 10 students. Both speed and accuracy independently predicted IQ, but not speed over and above accuracy. Accuracy was demonstrated to be universally advantageous in IQ performance, but speed varied according to…

  17. PHAT: PHoto-z Accuracy Testing

    NASA Astrophysics Data System (ADS)

    Hildebrandt, H.; Arnouts, S.; Capak, P.; Moustakas, L. A.; Wolf, C.; Abdalla, F. B.; Assef, R. J.; Banerji, M.; Benítez, N.; Brammer, G. B.; Budavári, T.; Carliles, S.; Coe, D.; Dahlen, T.; Feldmann, R.; Gerdes, D.; Gillis, B.; Ilbert, O.; Kotulla, R.; Lahav, O.; Li, I. H.; Miralles, J.-M.; Purger, N.; Schmidt, S.; Singal, J.

    2010-11-01

    Context. Photometric redshifts (photo-z's) have become an essential tool in extragalactic astronomy. Many current and upcoming observing programmes require great accuracy of photo-z's to reach their scientific goals. Aims: Here we introduce PHAT, the PHoto-z Accuracy Testing programme, an international initiative to test and compare different methods of photo-z estimation. Methods: Two different test environments are set up, one (PHAT0) based on simulations to test the basic functionality of the different photo-z codes, and another one (PHAT1) based on data from the GOODS survey including 18-band photometry and ~2000 spectroscopic redshifts. Results: The accuracy of the different methods is expressed and ranked by the global photo-z bias, scatter, and outlier rates. While most methods agree very well on PHAT0 there are differences in the handling of the Lyman-α forest for higher redshifts. Furthermore, different methods produce photo-z scatters that can differ by up to a factor of two even in this idealised case. A larger spread in accuracy is found for PHAT1. Few methods benefit from the addition of mid-IR photometry. The accuracy of the other methods is unaffected or suffers when IRAC data are included. Remaining biases and systematic effects can be explained by shortcomings in the different template sets (especially in the mid-IR) and the use of priors on the one hand and an insufficient training set on the other hand. Some strategies to overcome these problems are identified by comparing the methods in detail. Scatters of 4-8% in Δz/(1+z) were obtained, consistent with other studies. However, somewhat larger outlier rates (>7.5% with Δz/(1+z)>0.15; >4.5% after cleaning) are found for all codes that can only partly be explained by AGN or issues in the photometry or the spec-z catalogue. Some outliers were probably missed in comparisons of photo-z's to other, less complete spectroscopic surveys in the past. There is a general trend that empirical codes produce

  18. [True color accuracy in digital forensic photography].

    PubMed

    Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A

    2016-01-01

    Forensic photographs not only need to be unaltered and authentic and capture context-relevant images, along with certain minimum requirements for image sharpness and information density, but color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but as a discrete property of an image, color in digital photos is also to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool should be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation). PMID:27386623

  19. Accuracy Assessment of Altimeter Derived Geostrophic Velocities

    NASA Astrophysics Data System (ADS)

    Leben, R. R.; Powell, B. S.; Born, G. H.; Guinasso, N. L.

    2002-12-01

    Along-track sea surface height anomaly gradients are proportional to cross-track geostrophic velocity anomalies, allowing satellite altimetry to provide much-needed satellite observations of changes in the geostrophic component of surface ocean currents. Often, surface height gradients are computed from altimeter data archives that have been corrected to give the most accurate absolute sea level, a practice that may unnecessarily increase the error in the cross-track velocity anomalies and thereby require excessive smoothing to mitigate noise. Because differentiation along track acts as a high-pass filter, many of the path length corrections applied to altimeter data for absolute height accuracy are unnecessary for the corresponding gradient calculations. We report on a study to investigate appropriate altimetric corrections and processing techniques for improving geostrophic velocity accuracy. Accuracy is assessed by comparing cross-track current measurements from two moorings placed along the descending TOPEX/POSEIDON ground track number 52 in the Gulf of Mexico to the corresponding altimeter velocity estimates. The buoys are deployed and maintained by the Texas Automated Buoy System (TABS) under Interagency Contracts with Texas A&M University. The buoys telemeter observations in near real-time via satellite to the TABS station located at the Geochemical and Environmental Research Group (GERG) at Texas A&M. Buoy M is located in shelf waters of 57 m depth, with a second buoy, Buoy N, 38 km away on the shelf break at 105 m depth. Buoy N has been operational since the beginning of 2002 and has a current meter at 2 m depth providing in situ measurements of surface velocities coincident with Jason and TOPEX/POSEIDON altimeter overflights. This allows one of the first detailed comparisons of shallow-water, near-surface current meter time series to coincident altimetry.

  20. Accuracy of velocities from repeated GPS measurements

    NASA Astrophysics Data System (ADS)

    Akarsu, V.; Sanli, D. U.; Arslan, E.

    2015-04-01

    Today, repeated GPS measurements are still in use because we cannot always employ permanent GPS stations, due to a variety of limitations. One area of study that uses velocities/deformation rates from repeated GPS measurements is the monitoring of crustal motion. This paper discusses the quality of velocities derived from repeated GPS measurements for the purpose of monitoring crustal motion. From a global network of International GNSS Service (IGS) stations, we processed GPS measurements repeated monthly and annually spanning nearly 15 years and estimated GPS velocities for the baseline components latitude, longitude, and ellipsoidal height. We used web-based GIPSY for the processing. Assuming true deformation rates can only be determined from the solutions of 24 h observation sessions, we evaluated the accuracy of the deformation rates from 8 and 12 h sessions. We used statistical hypothesis testing to assess the velocities derived from short observation sessions. In addition, as an alternative control method, we checked the accuracy of GPS solutions from short observation sessions against those of 24 h sessions, referring to statistical criteria that measure the accuracy of regression models. Results indicate that the velocities of the vertical component are completely affected when repeated GPS measurements are used. The results also reveal that only about 30% of the 8 h solutions and about 40% of 12 h solutions for the horizontal coordinates are acceptable for velocity estimation. The situation is much worse for the vertical component, for which none of the solutions from campaign measurements are acceptable for obtaining reliable deformation rates.
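
    The velocity of a baseline component from repeated measurements is the slope of a linear regression of position on time, and the statistical criteria mentioned above apply to that fitted slope. A minimal Python sketch on synthetic campaign-style data (the rate and noise level are invented):

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.arange(0.0, 15.0)       # 15 annual campaigns (years)
      v_true = 0.020                 # invented rate: 20 mm/yr
      pos = v_true * t + rng.normal(0, 0.005, t.size)  # 5 mm position noise

      A = np.vstack([t, np.ones_like(t)]).T
      (v_hat, _), res, *_ = np.linalg.lstsq(A, pos, rcond=None)
      sigma2 = res[0] / (t.size - 2)                        # residual variance
      v_sd = np.sqrt(sigma2 / np.sum((t - t.mean()) ** 2))  # slope std. error
      print(f"velocity = {v_hat*1000:.1f} +/- {v_sd*1000:.1f} mm/yr")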

  1. Accuracy of abdominal auscultation for bowel obstruction

    PubMed Central

    Breum, Birger Michael; Rud, Bo; Kirkegaard, Thomas; Nordentoft, Tyge

    2015-01-01

    AIM: To investigate the accuracy and inter-observer variation of bowel sound assessment in patients with clinically suspected bowel obstruction. METHODS: Bowel sounds were recorded in patients with suspected bowel obstruction using a Littmann® Electronic Stethoscope. The recordings were processed to yield 25-s sound sequences in random order on PCs. Observers, recruited from doctors within the department, classified the sound sequences as either normal or pathological. The reference tests for bowel obstruction were intraoperative and endoscopic findings and clinical follow-up. Sensitivity and specificity were calculated for each observer and compared between junior and senior doctors. Inter-observer variation was measured using the Kappa statistic. RESULTS: Bowel sound sequences from 98 patients were assessed by 53 (33 junior and 20 senior) doctors. Laparotomy was performed in 47 patients, 35 of whom had bowel obstruction. Two patients underwent colorectal stenting due to large bowel obstruction. The median sensitivity and specificity was 0.42 (range: 0.19-0.64) and 0.78 (range: 0.35-0.98), respectively. There was no significant difference in accuracy between junior and senior doctors. The median frequency with which doctors classified bowel sounds as abnormal did not differ significantly between patients with and without bowel obstruction (26% vs 23%, P = 0.08). The 53 doctors made up 1378 unique pairs and the median Kappa value was 0.29 (range: -0.15 to 0.66). CONCLUSION: Accuracy and inter-observer agreement was generally low. Clinical decisions in patients with possible bowel obstruction should not be based on auscultatory assessment of bowel sounds. PMID:26379407
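
    The kappa statistic quoted above corrects the observed pairwise agreement for the agreement expected by chance. A minimal Python sketch for one pair of observers rating sequences as normal (0) or pathological (1); the ratings are invented:

      import numpy as np

      def cohens_kappa(r1, r2):
          # Cohen's kappa for two raters' classifications.
          r1, r2 = np.asarray(r1), np.asarray(r2)
          po = np.mean(r1 == r2)                        # observed agreement
          pe = sum(np.mean(r1 == c) * np.mean(r2 == c)  # chance agreement
                   for c in np.union1d(r1, r2))
          return (po - pe) / (1 - pe)

      a = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]  # invented ratings, observer A
      b = [1, 0, 1, 1, 0, 0, 0, 1, 0, 1]  # invented ratings, observer B
      print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.40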

  2. Accuracy requirements. [for monitoring of climate changes

    NASA Technical Reports Server (NTRS)

    Delgenio, Anthony

    1993-01-01

    Satellite and surface measurements, if they are to serve as a climate monitoring system, must be accurate enough to permit detection of changes of climate parameters on decadal time scales. The accuracy requirements are difficult to define a priori since they depend on unknown future changes of climate forcings and feedbacks. As a framework for evaluation of candidate Climsat instruments and orbits, we estimate the accuracies that would be needed to measure changes expected over two decades based on theoretical considerations including GCM simulations and on observational evidence in cases where data are available for rates of change. One major climate forcing known with reasonable accuracy is that caused by the anthropogenic homogeneously mixed greenhouse gases (CO2, CFC's, CH4 and N2O). Their net forcing since the industrial revolution began is about 2 W/sq m and it is presently increasing at a rate of about 1 W/sq m per 20 years. Thus for a competing forcing or feedback to be important, it needs to be of the order of 0.25 W/sq m or larger on this time scale. The significance of most climate feedbacks depends on their sensitivity to temperature change. Therefore we begin with an estimate of decadal temperature change. Presented are the transient temperature trends simulated by the GISS GCM when subjected to various scenarios of trace gas concentration increases. Scenario B, which represents the most plausible near-term emission rates and includes intermittent forcing by volcanic aerosols, yields a global mean surface air temperature increase Delta Ts = 0.7 degrees C over the time period 1995-2015. This is consistent with the IPCC projection of about 0.3 degrees C/decade global warming (IPCC, 1990). Several of our estimates below are based on this assumed rate of warming.

  3. Improvement of focus accuracy on processed wafer

    NASA Astrophysics Data System (ADS)

    Higashibata, Satomi; Komine, Nobuhiro; Fukuhara, Kazuya; Koike, Takashi; Kato, Yoshimitsu; Hashimoto, Kohji

    2013-04-01

    As feature sizes in semiconductor devices shrink, process fluctuations, especially in focus, strongly affect device performance. Because focus control is an ongoing challenge in optical lithography, various studies have sought to improve focus monitoring and control. Focus errors arise from wafers, exposure tools, reticles, QCs, and so on. Few studies have addressed minimizing the measurement errors of the auto focus (AF) sensors of exposure tools, especially when processed wafers are exposed. Among current focus measurement techniques, the phase shift grating (PSG) focus monitor has already been proposed; its basic principle is that the intensity of the light diffracted by the mask pattern is made asymmetric by arranging a π/2 phase shift area on the reticle. A resist pattern exposed at a defocus position is shifted on the wafer, and the shifted pattern can easily be measured using an overlay inspection tool. However, it is difficult to measure the shift for patterns on a processed wafer because of interference from other patterns in the underlayer. In this paper, we therefore propose the "SEM-PSG" technique, in which the shift of the PSG resist mark is measured with a critical-dimension scanning electron microscope (CD-SEM) to determine the focus error on the processed wafer. First, we evaluate the accuracy of the SEM-PSG technique. Second, by applying the SEM-PSG technique and feeding the results back to the exposure tool, we evaluate the focus accuracy on processed wafers. With SEM-PSG feedback, the focus accuracy on the processed wafer improved from 40 to 29 nm in 3σ.

  4. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J. Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  5. High accuracy radiation efficiency measurement techniques

    NASA Technical Reports Server (NTRS)

    Kozakoff, D. J.; Schuchardt, J. M.

    1981-01-01

    The relatively large antenna subarrays (tens of meters) to be used in the Solar Power Satellite, and the desire to accurately quantify antenna performance, dictate the requirement for specialized measurement techniques. The error contributors associated with both far-field and near-field antenna measurement concepts were quantified. As a result, instrumentation configurations with measurement accuracy potential were identified. In every case, advances in the state of the art of associated electronics were found to be required. Relative cost trade-offs between a candidate far-field elevated antenna range and near-field facility were also performed.

  6. Accuracy and precision of an IGRT solution.

    PubMed

    Webster, Gareth J; Rowbottom, Carl G; Mackay, Ranald I

    2009-01-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within +/- 3% in dose over the range of sample points. For some points in high-dose gradients

  7. Accuracy of the river discharge measurement

    NASA Astrophysics Data System (ADS)

    Chung Yang, Han

    2013-04-01

    Recording discharge values for water conservancy and hydrological analysis is very important work. Flood control projects, watershed remediation and river environmental planning projects all require discharge measurement data. Taiwan has 129 rivers which, according to watershed conditions, economic development and other factors, are divided into 24 major rivers, 29 minor rivers and 79 ordinary rivers. Measuring and recording discharge for each of these rivers is an enormous task. In addition, Taiwan's rivers are characterized by steep slopes, rapid flow and high sediment concentrations, so high-flow measurement encounters real difficulties. When flood hazards occur, finding a way to reduce the time, manpower and material resources required for river discharge measurement is very important. In this study, the river discharge measurement accuracy is used to determine a tolerance percentage for reducing the number of vertical velocity measurements, thereby reducing the time, manpower and material resources spent on river discharge measurement. The velocity data used in this study come from Yang (1998), who used Fiber-optic Laser Doppler Velocimetry (FLDV) to obtain velocity data under different experimental conditions. We use these data to calculate the mean velocity of each vertical line with three different velocity profile formulas (the law of the wall, Chiu's theory and Hu's theory), multiply by each sub-area to obtain the discharge measurement values, and compare these with the true values (obtained by direct integration) to obtain the discharge accuracy. The results show that the discharge measurement values obtained with Chiu's theory are closest to the true value, while the law of the wall gives the largest error, mainly because the law of the wall cannot describe a maximum velocity that occurs below the water surface. In addition, the results also show
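
    For illustration, a sketch of the computation described above under hypothetical values: the discharge is the sum of sub-area flows, with the mean velocity of each vertical taken here from a law-of-the-wall profile (Chiu's and Hu's formulas, which the study also tests, are not reproduced):

    ```python
    # Sketch: discharge as the sum of sub-area flows, Q = sum(v_i * A_i), with
    # the mean vertical velocity from a law-of-the-wall profile. All values are
    # hypothetical placeholders.
    import math

    KAPPA = 0.41  # von Karman constant

    def mean_velocity_log_law(u_star, depth, z0, n=1000):
        """Depth-averaged velocity of u(z) = (u*/kappa) ln(z/z0), integrated
        numerically (midpoint rule) from z0 up to the surface."""
        dz = (depth - z0) / n
        total = sum((u_star / KAPPA) * math.log((z0 + (i + 0.5) * dz) / z0)
                    for i in range(n))
        return total * dz / (depth - z0)

    # One vertical per sub-section: (shear velocity m/s, depth m, width m).
    verticals = [(0.05, 1.2, 2.0), (0.07, 1.8, 2.0), (0.06, 1.5, 2.0)]
    Q = sum(mean_velocity_log_law(u, d, z0=0.001) * d * w
            for u, d, w in verticals)
    print(f"estimated discharge: {Q:.2f} m^3/s")
    ```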

  8. New analytical algorithm for overlay accuracy

    NASA Astrophysics Data System (ADS)

    Ham, Boo-Hyun; Yun, Sangho; Kwak, Min-Cheol; Ha, Soon Mok; Kim, Cheol-Hong; Nam, Suk-Woo

    2012-03-01

    The extension of optical lithography to 2X nm and beyond is often challenged by overlay control. With the overlay measurement error budget reduced to the sub-nm range, conventional Total Measurement Uncertainty (TMU) data are no longer sufficient, and there is no adequate criterion for overlay accuracy. In recent years, numerous authors have reported new methods for assessing the accuracy of overlay metrology: through focus and through color. Still, quantifying the uncertainty in overlay measurement is the most difficult task in overlay metrology. According to the ITRS roadmap, the total overlay budget becomes tighter at each device node as design rules shrink. Conventionally, the total overlay budget is defined as the square root of the sum of squares of the following contributions: scanner overlay performance, wafer process, metrology, and mask registration. All components have been supported by sufficiently performing tools at each device node, with new scanners, new metrology tools, and new mask e-beam writers being delivered. In particular, scanner overlay performance decreased drastically from 9 nm at the 8x node to 2.5 nm at the 3x node, and seems to have reached its limit after the 3x node. The wafer process overlay contribution to the total wafer overlay has therefore become more important; in fact, it decreased by 3 nm between the DRAM 8x node and the DRAM 3x node. In this paper, the authors develop an analytical algorithm for overlay accuracy and propose a concept for a non-destructive method. For an on-product layer we discovered overlay inaccuracy, and we use the new technique to find the source of the overlay error. Furthermore
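
    A minimal sketch of the conventional budget definition quoted above, the root of the sum of squares of the independent contributions; the nanometer values are hypothetical placeholders:

    ```python
    # Sketch of the conventional total overlay budget: the root-sum-square
    # (RSS) of the independent contributions named in the abstract.
    import math

    contributions_nm = {
        "scanner": 2.5,            # scanner overlay performance (3x-node figure)
        "wafer_process": 3.0,      # hypothetical placeholder
        "metrology": 0.5,          # hypothetical placeholder
        "mask_registration": 1.0,  # hypothetical placeholder
    }
    total = math.sqrt(sum(v ** 2 for v in contributions_nm.values()))
    print(f"total overlay budget: {total:.2f} nm")
    ```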

  9. Evaluating the accuracy of transcribed clinical data.

    PubMed Central

    Wilton, R.; Pennisi, A. J.

    1993-01-01

    This study evaluated the accuracy of data transcribed into a computer-stored record from a handwritten listing of pediatric immunizations. The immunization records of 459 children seen in the UCLA Children's Health Center in March, 1993 were transcribed into a clinical computer system on an ongoing basis. Of these records, 27 (5.9%) were subsequently found to be inaccurate. Reasons for inaccuracy in the transcribed records included incomplete written records, incomplete transcription of written records, and unavailability of immunization records from multiple health-care providers. The utility of a computer-stored clinical record may be adversely affected by unavoidable inaccuracies in transcribed clinical data. PMID:8130478

  10. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    PubMed

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

    Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three typical normal distribution transformation methods, termed the normal score, Johnson, and Box-Cox transformations, were applied to compare the effects of spatial interpolation with normally transformed data of benzo(b)fluoranthene in a large-scale coking plant-contaminated site in north China. All three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross-validation showed that Johnson ordinary kriging had a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The areas with fewer sampling points and with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy of the determination of remediation boundaries. PMID:26300353
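
    As a sketch of two of the transformations named above (the Johnson fit involves more machinery and is omitted), applied to a synthetic skewed sample with SciPy; the data and names are illustrative only:

    ```python
    # Sketch: Box-Cox and a rank-based normal-score transform applied to a
    # synthetic skewed sample, checking skewness and normality afterwards.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.lognormal(mean=0.0, sigma=1.0, size=500)  # skewed "concentration" data

    bc, lam = stats.boxcox(x)          # Box-Cox, lambda fitted by max likelihood

    ranks = stats.rankdata(x)          # normal score: map ranks to N(0,1) quantiles
    ns = stats.norm.ppf((ranks - 0.5) / len(x))

    for name, data in [("raw", x), ("box-cox", bc), ("normal-score", ns)]:
        _, p = stats.kstest((data - data.mean()) / data.std(), "norm")
        print(f"{name:13s} skew={stats.skew(data):+.2f}  KS p={p:.3f}")
    ```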

  11. Improving the prediction accuracy of residue solvent accessibility and real-value backbone torsion angles of proteins by guided-learning through a two-layer neural network

    PubMed Central

    Faraggi, Eshel; Xue, Bin; Zhou, Yaoqi

    2008-01-01

    This paper attempts to increase the prediction accuracy of residue solvent accessibility and real-value backbone torsion angles of proteins through improved learning. Most methods developed for improving the backpropagation algorithm of artificial neural networks are limited to small neural networks. Here, we introduce a guided-learning method suitable for networks of any size. The method employs a part of the weights for guiding and the other part for training and optimization. We demonstrate this technique by predicting residue solvent accessibility and real-value backbone torsion angles of proteins. In this application, the guiding factor is designed to satisfy the intuitive condition that for most residues, the contribution of a residue to the structural properties of another residue is smaller for greater separation in the protein-sequence distance between the two residues. We show that the guided-learning method achieves a 2-4% reduction in ten-fold cross-validated mean absolute errors (MAE) for predicting residue solvent accessibility and backbone torsion angles, regardless of the size of the database, the number of hidden layers and the size of input windows. This, together with the introduction of a two-layer neural network with a bipolar activation function, leads to a new method that has a MAE of 0.11 for residue solvent accessibility, 36° for ψ, and 22° for ϕ. The method is available as the Real-SPINE 3.0 server at http://sparks.informatics.iupui.edu. PMID:18704931
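
    A minimal sketch of the ten-fold cross-validated MAE protocol reported above, with a generic scikit-learn network standing in for the paper's guided-learning two-layer network (the guiding scheme itself is not reproduced) and synthetic data:

    ```python
    # Sketch of a ten-fold cross-validated mean absolute error evaluation.
    # The model and data are placeholders, not the paper's method.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import KFold
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_absolute_error

    X, y = make_regression(n_samples=500, n_features=20, noise=5.0,
                           random_state=0)
    maes = []
    for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        model = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000,
                             random_state=0).fit(X[train], y[train])
        maes.append(mean_absolute_error(y[test], model.predict(X[test])))
    print(f"10-fold CV MAE: {np.mean(maes):.2f} +/- {np.std(maes):.2f}")
    ```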

  12. A comparison of the diagnostic accuracy of the AD8 and BCAT-SF in identifying dementia and mild cognitive impairment in long-term care residents.

    PubMed

    Mansbach, William E; Mace, Ryan A

    2016-09-01

    We compared the accuracy of the Brief Cognitive Assessment Tool-Short Form (BCAT-SF) and AD8 in identifying mild cognitive impairment (MCI) and dementia among long-term care residents. Psychometric analyses of 357 long-term care residents (n = 228, nursing home; n = 129, assisted living) in Maryland referred for neuropsychological evaluation evidenced robust internal consistency reliability and construct validity for the BCAT-SF. Furthermore, hierarchical logistic regression and receiver operating characteristic curve analyses demonstrated superior predictive validity for the BCAT-SF in identifying MCI and dementia relative to the AD8. In contrast, previously reported psychometric properties or cut scores for the AD8 could not be cross-validated in this long-term care sample. Based on these findings, the BCAT-SF appears to be a more reliable and valid screening instrument than the AD8 for rapidly identifying MCI and dementia in long-term care residents. PMID:26873431

  13. Millimeter accuracy satellites for two color ranging

    NASA Technical Reports Server (NTRS)

    Degnan, John J.

    1993-01-01

    The principal technical challenge in designing a millimeter accuracy satellite to support two color observations at high altitudes is to provide high optical cross-section simultaneously with minimal pulse spreading. In order to address this issue, we provide a brief review of some fundamental properties of optical retroreflectors when used in spacecraft target arrays, develop a simple model for a spherical geodetic satellite, and use the model to determine some basic design criteria for a new generation of geodetic satellites capable of supporting millimeter accuracy two color laser ranging. We find that increasing the satellite diameter provides a larger surface area for mounting additional cubes, leading to higher cross-sections, and makes the satellite surface a better match for the incoming planar phasefront of the laser beam. Restricting the retroreflector field of view (e.g., by recessing it in its holder) limits the target response to the fraction of the satellite surface which best matches the optical phasefront, thereby controlling the amount of pulse spreading. In surveying the arrays carried by existing satellites, we find that the European STARLETTE and ERS-1 satellites appear to be the best candidates for supporting near-term two color experiments in space.

  14. Curation accuracy of model organism databases.

    PubMed

    Keseler, Ingrid M; Skrzypek, Marek; Weerasinghe, Deepika; Chen, Albert Y; Fulcher, Carol; Li, Gene-Wei; Lemmer, Kimberly C; Mladinich, Katherine M; Chow, Edmond D; Sherlock, Gavin; Karp, Peter D

    2014-01-01

    Manual extraction of information from the biomedical literature, or biocuration, is the central methodology used to construct many biological databases. For example, the UniProt protein database, the EcoCyc Escherichia coli database and the Candida Genome Database (CGD) are all based on biocuration. Biological databases are used extensively by life science researchers, as online encyclopedias, as aids in the interpretation of new experimental data and as gold standards for the development of new bioinformatics algorithms. Although manual curation has been assumed to be highly accurate, we are aware of only one previous study of biocuration accuracy. We assessed the accuracy of EcoCyc and CGD by manually selecting curated assertions within randomly chosen EcoCyc and CGD gene pages and by then validating that the data found in the referenced publications supported those assertions. A database assertion is considered to be in error if that assertion could not be found in the publication cited for that assertion. We identified 10 errors in the 633 facts that we validated across the two databases, for an overall error rate of 1.58%, and individual error rates of 1.82% for CGD and 1.40% for EcoCyc. These data suggest that manual curation of the experimental literature by Ph.D.-level scientists is highly accurate. Database URL: http://ecocyc.org/, http://www.candidagenome.org// PMID:24923819

  15. High accuracy electronic material level sensor

    DOEpatents

    McEwan, Thomas E.

    1997-01-01

    The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference between the reflections at the launcher plate and at the material interface is used to determine the material level. Improved performance is obtained by the incorporation of: 1) a high accuracy time base that is referenced to a quartz crystal, 2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, 3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%.
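
    A sketch of the time-of-flight arithmetic behind the sensor, under assumed values: the round-trip delay between the launcher-plate and material-interface reflections gives the distance down the guide wire, and the level follows from the guide length:

    ```python
    # Sketch of the TDR arithmetic described above. Values are hypothetical.
    C = 299_792_458.0  # speed of light, m/s

    def material_level(t_launcher_s, t_interface_s, guide_length_m,
                       v_factor=1.0):
        """v_factor: propagation velocity relative to c along the air section."""
        dt = t_interface_s - t_launcher_s       # round-trip time difference
        distance = dt * C * v_factor / 2.0      # one-way distance to interface
        return guide_length_m - distance        # material level above wire end

    # Example: reflections 8.0 ns apart on a 2.0 m guide -> ~0.80 m of material.
    print(f"level: {material_level(0.0, 8.0e-9, 2.0):.3f} m")
    ```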

  16. High accuracy electronic material level sensor

    DOEpatents

    McEwan, T.E.

    1997-03-11

    The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference between the reflections at the launcher plate and at the material interface is used to determine the material level. Improved performance is obtained by the incorporation of: (1) a high accuracy time base that is referenced to a quartz crystal, (2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, (3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%. 4 figs.

  17. Does reader visual fatigue impact interpretation accuracy?

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.; Berbaum, Kevin S.

    2010-02-01

    To measure the impact of reader visual fatigue by assessing symptoms, the ability to keep the eyes focused on the display, and diagnostic accuracy. Twenty radiology residents and 20 radiologists were given a diagnostic performance test containing 60 skeletal radiographic studies, half with fractures, before and after a day of clinical reading. Diagnostic accuracy was measured using area under the proper binormal curve (AUC). Error in visual accommodation was measured before and after each test session, and subjects completed the Swedish Occupational Fatigue Inventory (SOFI) and the oculomotor strain subscale of the Simulator Sickness Questionnaire (SSQ) before each session. Average AUC was 0.89 for the before-work test and 0.85 for the after-work test (F(1,36) = 4.15, p = 0.049). There was significantly greater error in accommodation after the clinical workday (F(1,14829) = 7.81, p = 0.005), and after the reading test (F(1,14829) = 839.33, p < 0.0001). SOFI measures of lack of energy, physical discomfort and sleepiness were higher after a day of clinical reading (p < 0.05). The SSQ measure of oculomotor symptoms (i.e., difficulty focusing, blurred vision) was significantly higher after a day of clinical reading (F(1,75) = 20.38, p < 0.0001). Radiologists are visually fatigued by their clinical reading workday. This reduces their ability to focus on diagnostic images and to accurately interpret them.

  18. Accuracy assessment of landslide prediction models

    NASA Astrophysics Data System (ADS)

    Othman, A. N.; Mohd, W. M. N. W.; Noraini, S.

    2014-02-01

    The increasing population and expansion of settlements over hilly areas has greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict such zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The models are based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques were used to determine the weights for each of the parameters. Four different models, considering different parameter combinations, were developed by the authors. Results were compared to the landslide history; the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From these results, rank sum, rating and pairwise comparison can be useful techniques for predicting landslide hazard zones.
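
    As an illustration of the pairwise comparison (AHP) weighting step named above, a sketch that derives parameter weights as the normalized principal eigenvector of a reciprocal comparison matrix; the 3×3 matrix and its entries are hypothetical:

    ```python
    # Sketch of AHP weighting: the weights are the normalized principal
    # eigenvector of a reciprocal pairwise comparison matrix (entries are
    # hypothetical judgments for slope vs. land use vs. lithology).
    import numpy as np

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    vals, vecs = np.linalg.eig(A)
    principal = np.argmax(vals.real)
    w = np.abs(vecs[:, principal].real)
    w /= w.sum()
    print("weights:", np.round(w, 3))

    # Consistency ratio CI / RI, with Saaty's RI = 0.58 for a 3x3 matrix.
    ci = (vals.real[principal] - len(A)) / (len(A) - 1)
    print("consistency ratio:", round(ci / 0.58, 3))
    ```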

  19. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer grade 3D printer using an additive print process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  20. Dust trajectory sensor: accuracy and data analysis.

    PubMed

    Xie, J; Sternovsky, Z; Grün, E; Auer, S; Duncan, N; Drake, K; Le, H; Horanyi, M; Srama, R

    2011-10-01

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Grün, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008)] and a method of triggering was developed [S. Auer, G. Lawrence, E. Grün, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010)]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1° in direction. PMID:22047326

  1. Accuracy of the blood pressure measurement.

    PubMed

    Rabbia, F; Del Colle, S; Testa, E; Naso, D; Veglio, F

    2006-08-01

    Blood pressure measurement is the cornerstone of the diagnosis, treatment and research of arterial hypertension, and decisions about any one of these aspects may be dramatically influenced by the accuracy of the measurement. Over the past 20 years or so, the accuracy of the conventional Riva-Rocci/Korotkoff technique of blood pressure measurement has been questioned and efforts have been made to improve the technique with automated devices. In the same period, recognition of the phenomenon of white coat hypertension, whereby some individuals with an apparent increase in blood pressure have normal, or reduced, blood pressures when measurement is repeated away from the medical environment, has focused attention on methods of measurement that provide profiles of blood pressure behavior rather than relying on isolated measurements under circumstances that may in themselves influence the level of blood pressure recorded. These methodologies have included repeated measurements of blood pressure using the traditional technique, self-measurement of blood pressure in the home or workplace, and ambulatory blood pressure measurement using innovative automated devices. The purpose of this review is to serve as a source of practical information about the commonly used methods for blood pressure measurement: the traditional Riva-Rocci method and the automated methods. PMID:17016412

  2. Dust trajectory sensor: Accuracy and data analysis

    NASA Astrophysics Data System (ADS)

    Xie, J.; Sternovsky, Z.; Grün, E.; Auer, S.; Duncan, N.; Drake, K.; Le, H.; Horanyi, M.; Srama, R.

    2011-10-01

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Grün, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008), 10.1063/1.2960566] and a method of triggering was developed [S. Auer, G. Lawrence, E. Grün, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010), 10.1016/j.nima.2010.06.091]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1° in direction.

  3. Measuring the Accuracy of Diagnostic Systems

    NASA Astrophysics Data System (ADS)

    Swets, John A.

    1988-06-01

    Diagnostic systems of several kinds are used to distinguish between two classes of events, essentially "signals" and "noise." For them, analysis in terms of the "relative operating characteristic" of signal detection theory provides a precise and valid measure of diagnostic accuracy. It is the only measure available that is uninfluenced by decision biases and prior probabilities, and it places the performances of diverse systems on a common, easily interpreted scale. Representative values of this measure are reported here for systems in medical imaging, materials testing, weather forecasting, information retrieval, polygraph lie detection, and aptitude testing. Though the measure itself is sound, the values obtained from tests of diagnostic systems often require qualification because the test data on which they are based are of unsure quality. A common set of problems in testing is faced in all fields. How well these problems are handled, or can be handled in a given field, determines the degree of confidence that can be placed in a measured value of accuracy. Some fields fare much better than others.
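
    A sketch of how the area under the relative operating characteristic can be estimated directly from "signal" and "noise" scores via its rank interpretation, AUC = P(signal score > noise score); the score distributions are synthetic:

    ```python
    # Sketch: ROC area from signal/noise scores via the Mann-Whitney identity
    # AUC = P(signal score > noise score), ties counting one half.
    import numpy as np

    rng = np.random.default_rng(1)
    noise = rng.normal(0.0, 1.0, size=1000)    # scores under "noise" events
    signal = rng.normal(1.0, 1.0, size=1000)   # scores under "signal" events

    greater = (signal[:, None] > noise[None, :]).sum()
    ties = (signal[:, None] == noise[None, :]).sum()
    auc = (greater + 0.5 * ties) / (len(signal) * len(noise))
    print(f"empirical ROC area: {auc:.3f}")    # ~0.76 for d' = 1
    ```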

  4. Approaching chemical accuracy with quantum Monte Carlo.

    PubMed

    Petruzielo, F R; Toulouse, Julien; Umrigar, C J

    2012-03-28

    A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space. PMID:22462844

  5. Using the concept of Chou's pseudo amino acid composition to predict protein solubility: an approach with entropies in information theory.

    PubMed

    Xiaohui, Niu; Nana, Li; Jingbo, Xia; Dingyan, Chen; Yuehua, Peng; Yang, Xiao; Weiquan, Wei; Dongming, Wang; Zengzhen, Wang

    2013-09-01

    Protein solubility plays a major role and has strong implications in proteomics. Predicting the propensity of a protein to be soluble or to form inclusion bodies is a fundamental and not fully resolved problem. In order to predict protein solubility, almost 10,000 protein sequences were downloaded from NCBI. The sequences were then filtered for high homologous similarity with CD-HIT, leaving 5692 sequences. Based on protein sequences, amino acid and dipeptide compositions are generally extracted to predict protein solubility. In this study, the entropy in information theory was introduced as another predictive factor in the model. Experiments involving nine different feature vector combinations, covering the above-mentioned three kinds of factors, were conducted with support vector machines (SVMs) as the prediction engine. Each combination was evaluated by a re-substitution test and a 10-fold cross-validation test. According to the evaluation results, the accuracies and Matthews correlation coefficient (MCC) values were boosted by the introduction of the entropy. The best combination was the one with amino acid and dipeptide compositions and their entropies. Its accuracy reached 90.34% and its MCC value was 0.7494 in the re-substitution test, and 88.12% and 0.7945, respectively, in 10-fold cross-validation. In conclusion, the introduction of the entropy significantly improved the performance of the predictive method. PMID:23524162
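
    A sketch of the feature-plus-entropy pipeline described above, with randomly generated placeholder sequences and labels (so the printed scores are meaningless); it shows the shape of the evaluation, not the paper's results:

    ```python
    # Sketch: amino acid composition plus its Shannon entropy as SVM features,
    # scored with 10-fold cross-validated accuracy and the Matthews
    # correlation coefficient. Sequences and labels are random placeholders.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_validate

    AA = "ACDEFGHIKLMNPQRSTVWY"
    rng = np.random.default_rng(0)

    def features(seq):
        comp = np.array([seq.count(a) / len(seq) for a in AA])
        nz = comp[comp > 0]
        entropy = -(nz * np.log2(nz)).sum()   # Shannon entropy of composition
        return np.append(comp, entropy)

    seqs = ["".join(rng.choice(list(AA), size=200)) for _ in range(300)]
    X = np.array([features(s) for s in seqs])
    y = rng.integers(0, 2, size=300)          # soluble (1) vs inclusion body (0)

    scores = cross_validate(SVC(kernel="rbf"), X, y, cv=10,
                            scoring=("accuracy", "matthews_corrcoef"))
    print("accuracy:", scores["test_accuracy"].mean().round(3))
    print("MCC:     ", scores["test_matthews_corrcoef"].mean().round(3))
    ```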

  6. Statistical fitting accuracy in photon correlation spectroscopy

    NASA Technical Reports Server (NTRS)

    Shaumeyer, J. N.; Briggs, Matthew E.; Gammon, Robert W.

    1993-01-01

    Continuing our experimental investigation of the fitting accuracy associated with photon correlation spectroscopy, we collect 150 correlograms of light scattered at 90 deg from a thermostated sample of 91-nm-diameter, polystyrene latex spheres in water. The correlograms are taken with two correlators: one with linearly spaced channels and one with geometrically spaced channels. Decay rates are extracted from the single-exponential correlograms with both nonlinear least-squares fits and second-order cumulant fits. We make several statistical comparisons between the two fitting techniques and verify an earlier result that there is no sample-time dependence in the decay rate errors. We find, however, that the two fitting techniques give decay rates that differ by 1 percent.

  7. Laser focus positioning method with submicrometer accuracy.

    PubMed

    Alexeev, Ilya; Strauss, Johannes; Gröschl, Andreas; Cvecek, Kristian; Schmidt, Michael

    2013-01-20

    Accurate positioning of a sample is one of the primary challenges in laser micromanufacturing. There are a number of methods that allow detection of the surface position; however, only a few of them use the beam of the processing laser as the basis for the measurement. Those methods have the advantage that any changes in the processing laser beam can be inherently accommodated. This work describes a direct, contact-free method to accurately determine the workpiece position with respect to the structuring laser beam's focal plane, based on nonlinear harmonic generation. The method makes workpiece alignment precise and time efficient due to its ease of automation, and provides surface-detection repeatability and accuracy of better than 1 μm. PMID:23338188

  8. Quantitative code accuracy evaluation of ISP33

    SciTech Connect

    Kalli, H.; Miwrrin, A.; Purhonen, H.

    1995-09-01

    Aiming at quantifying code accuracy, a methodology based on the Fast Fourier Transform has been developed at the University of Pisa, Italy. The paper gives a short presentation of the methodology and its application to pre-test and post-test calculations submitted to the International Standard Problem ISP33. This was a double-blind natural circulation exercise with a stepwise reduced primary coolant inventory, performed in the PACTEL facility in Finland. PACTEL is a 1/305 volumetrically scaled, full-height simulator of the Russian type VVER-440 pressurized water reactor, with horizontal steam generators and loop seals in both cold and hot legs. Fifteen foreign organizations participated in ISP33, with 21 blind calculations and 20 post-test calculations; altogether, 10 different thermal-hydraulic codes and code versions were used. The results of applying the methodology to nine selected measured quantities are summarized.
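
    The record names the FFT-based methodology without spelling out its figure of merit; a common choice in this family of methods (an assumption here, not stated in the record) is the average amplitude, the ratio of the spectral magnitude of the code-experiment discrepancy to that of the experimental signal:

    ```python
    # Sketch of one figure of merit used in FFT-based code-accuracy methods
    # (an assumption; the abstract does not spell it out):
    #   AA = sum|FFT(calc - exp)| / sum|FFT(exp)|,
    # where smaller AA means better code-experiment agreement. Signals below
    # are synthetic stand-ins for a measured and a calculated transient.
    import numpy as np

    t = np.linspace(0.0, 100.0, 1024)
    exp = 10.0 * np.exp(-t / 40.0)           # "measured" transient
    calc = 10.0 * np.exp(-t / 36.0) + 0.1    # code prediction

    err_spectrum = np.abs(np.fft.rfft(calc - exp))
    exp_spectrum = np.abs(np.fft.rfft(exp))
    aa = err_spectrum.sum() / exp_spectrum.sum()
    print(f"average amplitude AA = {aa:.3f}")
    ```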

  9. Accuracy of lineaments mapping from space

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M.

    1989-01-01

    The use of Landsat and other space imaging systems for lineament detection is analyzed in terms of their effectiveness in recognizing and mapping fractures and faults, and the results of several studies providing a quantitative assessment of lineament mapping accuracies are discussed. The cases under investigation include a Landsat image of the surface overlying a part of the Anadarko Basin of Oklahoma, Landsat images and selected radar imagery of major lineament systems distributed over much of the Canadian Shield, and space imagery covering a part of the East African Rift in Kenya. It is demonstrated that space imagery can detect a significant portion of a region's fracture pattern; however, significant fractions of the faults and fractures recorded on a field-produced geological map are missing from the imagery, as is evident in the Kenya case.

  10. Accuracy evaluation of residual stress measurements

    SciTech Connect

    Yerman, J.A.; Kroenke, W.C.; Long, W.H.

    1996-05-01

    The accuracy of residual stress measurement techniques is difficult to assess due to the lack of available reference standards. To satisfy the need for reference standards, two specimens were designed and developed to provide known stress magnitudes and distributions: one with a uniform stress distribution and one with a nonuniform linear stress distribution. A reusable, portable load fixture was developed for use with each of the two specimens. Extensive bench testing was performed to determine if the specimens provide desired known stress magnitudes and distributions and stability of the known stress with time. The testing indicated that the nonuniform linear specimen and load fixture provided the desired known stress magnitude and distribution but that modifications were required for the uniform stress specimen. A trial use of the specimens and load fixtures using hole drilling was successful.

  11. High current high accuracy IGBT pulse generator

    SciTech Connect

    Nesterov, V.V.; Donaldson, A.R.

    1995-05-01

    A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBT). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse to pulse power losses. The rack mounted pulse generator contains a 525 µF capacitor bank. It can deliver 500 A at 900 V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles.

  12. Quantum mechanical calculations to chemical accuracy

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.

    1991-01-01

    The accuracy of current molecular-structure calculations is illustrated with examples of quantum mechanical solutions for chemical problems. Two approaches are considered: (1) the coupled-cluster singles and doubles (CCSD) method with a perturbational estimate of the contribution of connected triple excitations, CCSD(T); and (2) the multireference configuration-interaction (MRCI) approach to the correlation problem. The MRCI approach gains greater applicability by means of size-extensive modifications such as the averaged coupled-pair functional approach. The examples of solutions to chemical problems include those for C-H bond energies, the vibrational frequencies of O3, identifying the ground states of Al2 and Si2, and the Lewis-Rayleigh afterglow and the Hermann IR system of N2. Accurate molecular wave functions can be derived from a combination of basis-set saturation studies and full configuration-interaction calculations.

  13. Accuracy of the Cloud Integrating Nephelometer

    NASA Technical Reports Server (NTRS)

    Gerber, Hermann E.

    2004-01-01

    Potential error sources for measurements with the Cloud Integrating Nephelometer (CIN) are discussed and analyzed, including systematic errors of the measurement approach, flow and particle-trajectory deviations at flight velocity, ice-crystal breakup on probe surfaces, and errors in calibration and developing scaling constants. It is concluded that errors are minimal, and that the accuracy of the CIN should be close to the systematic behavior of the CIN derived in Gerber et al (2000). Absolute calibration of the CIN with a transmissometer operating co-located in a mountain-top cloud shows that the earlier scaling constant for the optical extinction coefficient obtained by other means is within 5% of the absolute calibration value, and that the CIN measurements on the Citation aircraft flights during the CRYSTAL-FACE study are accurate.

  14. Positioning accuracy of the neurotron 1000

    SciTech Connect

    Cox, R.S.; Murphy, M.J.

    1995-12-31

    The Neurotron 1000 is a novel treatment machine under development for frameless stereotaxic radiosurgery; it consists of a compact X-band accelerator mounted on a robotic arm. The therapy beam is guided to the lesion by an imaging system, which includes two diagnostic x-ray cameras that view the patient during treatment. Patient position and motion are measured by the imaging system and appropriate corrections are communicated in real time to the robotic arm for beam targeting and motion tracking. The three tests reported here measured the pointing accuracy of the therapy beam and the present capability of the imaging guidance system. The positioning and pointing test measured the ability of the robotic arm to direct the beam through a test isocenter from arbitrary arm positions. The test isocenter was marked by a small light-sensitive crystal and the beam axis was simulated by a laser.

  15. Copper disk pyrheliometer of high accuracy

    SciTech Connect

    Hsieh, C.K.; Wang, X.A.

    1983-01-01

    A copper disk pyrheliometer has been designed and constructed that utilizes a new methodology to measure solar radiation. By operating the shutter of the instrument and measuring the heating and cooling rates of the sensor at moments when the sensor is at the same temperature, the solar radiation can be accurately determined from these rates. The method is highly accurate and is shown to be totally independent of the loss coefficient in the measurement. The pyrheliometer has been tested using a standard irradiance lamp in the laboratory. The uncertainty of the instrument is identified to be ±0.61%. Field testing was also conducted by comparing data with that of a calibrated (Eppley) Normal Incidence Pyrheliometer. This paper spells out details of the construction and testing of the instrument; the analysis underlying the methodology is also covered in detail. Because of its high accuracy, the instrument is considered to be well suited as a bench standard for the measurement of solar radiation.
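
    A sketch of why the loss coefficient drops out, with assumed symbols and values (none are from the paper): sampling the heating and cooling rates at the same sensor temperature makes the loss terms identical, so they cancel on subtraction:

    ```python
    # Sketch of the loss-cancellation idea (symbols and values assumed):
    # shutter open:   C*dT/dt = a*A*G - h*(T - Ta)
    # shutter closed: C*dT/dt =       - h*(T - Ta)
    # At the same sensor temperature T*, subtracting removes h, so
    #     G = C * (heating_rate - cooling_rate) / (a * A).
    C = 45.0       # heat capacity of the copper disk, J/K (hypothetical)
    a = 0.97       # absorptance of the blackened sensor surface (hypothetical)
    A = 5.0e-4     # aperture area, m^2 (hypothetical)

    heating_rate = 0.008    # K/s, shutter open, measured at temperature T*
    cooling_rate = -0.003   # K/s, shutter closed, measured at the same T*

    G = C * (heating_rate - cooling_rate) / (a * A)
    print(f"irradiance: {G:.0f} W/m^2")
    ```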

  16. ACCURACY LIMITATIONS IN LONG TRACE PROFILOMETRY.

    SciTech Connect

    TAKACS,P.Z.; QIAN,S.

    2003-08-25

    As requirements for surface slope error quality of grazing incidence optics approach the 100 nanoradian level, it is necessary to improve the performance of the measuring instruments to achieve accurate and repeatable results at this level. We have identified a number of internal error sources in the Long Trace Profiler (LTP) that affect measurement quality at this level. The LTP is sensitive to phase shifts produced within the millimeter diameter of the pencil beam probe by optical path irregularities with scale lengths of a fraction of a millimeter. We examine the effects of mirror surface "macroroughness" and internal glass homogeneity on the accuracy of the LTP through experiment and theoretical modeling. We will place limits on the allowable surface "macroroughness" and glass homogeneity required to achieve accurate measurements in the nanoradian range.

  17. Accuracy Limitations in Long-Trace Profilometry

    SciTech Connect

    Takacs, Peter Z.; Qian Shinan

    2004-05-12

    As requirements for surface slope error quality of grazing incidence optics approach the 100 nanoradian level, it is necessary to improve the performance of the measuring instruments to achieve accurate and repeatable results at this level. We have identified a number of internal error sources in the Long Trace Profiler (LTP) that affect measurement quality at this level. The LTP is sensitive to phase shifts produced within the millimeter diameter of the pencil beam probe by optical path irregularities with scale lengths of a fraction of a millimeter. We examine the effects of mirror surface 'macroroughness' and internal glass homogeneity on the accuracy of the LTP through experiment and theoretical modeling. We will place limits on the allowable surface 'macroroughness' and glass homogeneity required to achieve accurate measurements in the nanoradian range.

  18. Guiding Center Equations of High Accuracy

    SciTech Connect

    R.B. White, G. Spizzo and M. Gobbin

    2013-03-29

    Guiding center simulations are an important means of predicting the effect of resistive and ideal magnetohydrodynamic instabilities on particle distributions in toroidal magnetically confined thermonuclear fusion research devices. Because saturated instabilities typically have amplitudes of δB/B of a few times 10^-4, numerical accuracy is of concern in discovering the effect of mode-particle resonances. We develop a means of following guiding center orbits which is greatly superior to the methods currently in use. In the presence of ripple or time dependent magnetic perturbations, both energy and canonical momentum are conserved to better than one part in 10^14, and the relation between changes in canonical momentum and energy is also conserved to very high order.

  19. The empirical accuracy of uncertain inference models

    NASA Technical Reports Server (NTRS)

    Vaughan, David S.; Yadrick, Robert M.; Perrin, Bruce M.; Wise, Ben P.

    1987-01-01

    Uncertainty is a pervasive feature of the domains in which expert systems are designed to function. Research designed to test uncertain inference methods for accuracy and robustness, in accordance with standard engineering practice, is reviewed. Several studies were conducted to assess how well various methods perform on problems constructed so that correct answers are known, and to find out what underlying features of a problem cause strong or weak performance. For each method studied, situations were identified in which performance deteriorates dramatically. Over a broad range of problems, some well known methods do only about as well as a simple linear regression model, and often much worse than a simple independence probability model. The results indicate that some commercially available expert system shells should be used with caution, because the uncertain inference models that they implement can yield rather inaccurate results.

  20. On the Accuracy of the MINC approximation

    SciTech Connect

    Lai, C.H.; Pruess, K.; Bodvarsson, G.S.

    1986-02-01

    The method of "multiple interacting continua" (MINC) is based on the assumption that changes in thermodynamic conditions of rock matrix blocks are primarily controlled by the distance from the nearest fracture. The accuracy of this assumption was evaluated for regularly shaped (cubic and rectangular) rock blocks with uniform initial conditions, which are subjected to a step change in boundary conditions on the surface. Our results show that pressures (or temperatures) predicted from the MINC approximation may deviate from the exact solutions by as much as 10 to 15% at certain points within the blocks. However, when fluid (or heat) flow rates are integrated over the entire block surface, the MINC approximation and the exact solution agree to better than 1%. This indicates that the MINC approximation can accurately represent transient inter-porosity flow in fractured porous media, provided that matrix blocks are indeed subjected to nearly uniform boundary conditions at all times.

  1. Meteor orbit determination with improved accuracy

    NASA Astrophysics Data System (ADS)

    Dmitriev, Vasily; Lupovla, Valery; Gritsevich, Maria

    2015-08-01

    Modern observational techniques make it possible to retrieve a meteor's trajectory and velocity with high accuracy, and high-quality observational data are accumulating rapidly. This creates new challenges for the problem of meteor orbit determination. Currently, the traditional technique, based on corrections to the zenith distance and apparent velocity using the well-known Schiaparelli formula, is widely used. An alternative approach relies on meteoroid trajectory correction using numerical integration of the equations of motion (Clark & Wiegert, 2011; Zuluaga et al., 2013). In our work we suggest a technique for meteor orbit determination based on strict coordinate transformation and integration of the differential equations of motion. We demonstrate the advantage of this method in comparison with the traditional technique, providing results of calculations by different methods for real, recently observed fireballs, as well as for simulated cases with a priori known parameters. The simulated data were used to demonstrate the conditions under which application of the more complex technique is necessary. It was found that for several low-velocity meteoroids application of the traditional technique may lead to a dramatic loss of orbit precision (above all through errors in Ω, because this parameter has the highest potential accuracy). Our results are complemented by an analysis of the sources of perturbation, quantitatively indicating which factors have to be considered in orbit determination. In addition, the developed method includes an analysis of observational error propagation based on strict covariance transformation, which is also presented. Acknowledgements: This work was carried out at MIIGAiK and supported by the Russian Science Foundation, project No. 14-22-00197. References: Clark, D. L., & Wiegert, P. A. (2011). A numerical comparison with the Ceplecha analytical meteoroid orbit determination method. Meteoritics & Planetary Science, 46(8), pp. 1217

  2. Classification Accuracy Increase Using Multisensor Data Fusion

    NASA Astrophysics Data System (ADS)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to the confusion of materials such as different roofs, pavements, roads, etc., and may therefore result in wrong interpretation and use of classification products. Employing hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their use for many applications. A further improvement can be achieved by fusion of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for fusing very high resolution SAR and multispectral data for automatic classification in urban areas. Single-polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a principled way of combining multisource data following consensus theory. The classification is not influenced by the limitations of dimensionality, and the computational complexity depends primarily on the dimensionality reduction step. Fusion of single-polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison to the classification results of WorldView-2 multispectral data (8 spectral bands) is provided, along with a numerical evaluation of the method in comparison to

  3. Food Label Accuracy of Common Snack Foods

    PubMed Central

    Jumpertz, Reiner; Venti, Colleen A; Le, Duc Son; Michaels, Jennifer; Parrington, Shannon; Krakoff, Jonathan; Votruba, Susanne

    2012-01-01

    Nutrition labels have raised awareness of the energetic value of foods, and represent for many a pivotal guideline to regulate food intake. However, recent data have created doubts on label accuracy. Therefore we tested label accuracy for energy and macronutrient content of prepackaged energy-dense snack food products. We measured “true” caloric content of 24 popular snack food products in the U.S. and determined macronutrient content in 10 selected items. Bomb calorimetry and food factors were used to estimate energy content. Macronutrient content was determined according to Official Methods of Analysis. Calorimetric measurements were performed in our metabolic laboratory between April 20th and May 18th and macronutrient content was measured between September 28th and October 7th of 2010. Serving size, by weight, exceeded label statements by 1.2% [median] (25th percentile −1.4, 75th percentile 4.3, p=0.10). When differences in serving size were accounted for, metabolizable calories were 6.8 kcal (0.5, 23.5, p=0.0003) or 4.3% (0.2, 13.7, p=0.001) higher than the label statement. In a small convenience sample of the tested snack foods, carbohydrate content exceeded label statements by 7.7% (0.8, 16.7, p=0.01); however fat and protein content were not significantly different from label statements (−12.8% [−38.6, 9.6], p=0.23; 6.1% [−6.1, 17.5], p=0.32). Carbohydrate content explained 40% and serving size an additional 55% of the excess calories. Among a convenience sample of energy-dense snack foods, caloric content is higher than stated on the nutrition labels, but overall well within FDA limits. This discrepancy may be explained by inaccurate carbohydrate content and serving size. PMID:23505182

  4. Combining Multiple Gyroscope Outputs for Increased Accuracy

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    2003-01-01

    A proposed method of processing the outputs of multiple gyroscopes to increase the accuracy of rate (that is, angular-velocity) readings has been developed theoretically and demonstrated by computer simulation. Although the method is applicable, in principle, to any gyroscopes, it is intended especially for application to gyroscopes that are parts of microelectromechanical systems (MEMS). The method is based on the concept that the collective performance of multiple, relatively inexpensive, nominally identical devices can be better than that of one of the devices considered by itself. The method would make it possible to synthesize the readings of a single, more accurate gyroscope (a virtual gyroscope) from the outputs of a large number of microscopic gyroscopes fabricated together on a single MEMS chip. The big advantage would be that the combination of the MEMS gyroscope array and the processing circuitry needed to implement the method would be smaller, lighter in weight, and less power-hungry than a conventional gyroscope of equal accuracy. The method combines and filters the digitized outputs of multiple gyroscopes to obtain minimum-variance estimates of rate. In the combining-and-filtering operations, measurement data from the gyroscopes would be weighted and smoothed with respect to each other according to the gain matrix of a minimum-variance filter. According to Kalman-filter theory, the gain matrix of the minimum-variance filter is uniquely specified by the filter covariance, which propagates according to a matrix Riccati equation. The present method incorporates an exact analytical solution of this equation.
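
    As a concrete illustration of the combining idea, the following sketch fuses simultaneous readings from many nominally identical gyroscopes by inverse-variance weighting, which is the steady-state special case of the minimum-variance (Kalman) gain for unbiased, uncorrelated sensors. The sensor count, noise variances, and true rate are invented for illustration; the full method described above additionally propagates the filter covariance through the matrix Riccati equation.

    ```python
    import numpy as np

    # Sketch: minimum-variance fusion of N gyro outputs. Assumes each gyro
    # reports the true rate plus zero-mean noise of known variance; for
    # unbiased, uncorrelated sensors the optimal static gain reduces to
    # inverse-variance weighting.
    def fuse_rates(readings, variances):
        """Combine simultaneous rate readings into a minimum-variance estimate."""
        w = 1.0 / variances
        w = w / w.sum()                       # normalized inverse-variance weights
        rate = float(w @ readings)            # fused rate estimate
        var = 1.0 / np.sum(1.0 / variances)   # variance of the fused estimate
        return rate, var

    rng = np.random.default_rng(0)
    true_rate = 0.5                           # rad/s, hypothetical
    variances = np.full(100, 1e-2)            # 100 nominally identical MEMS gyros
    readings = true_rate + rng.normal(0.0, np.sqrt(variances))
    est, est_var = fuse_rates(readings, variances)
    print(f"fused rate: {est:.4f} rad/s, variance: {est_var:.1e}")  # ~100x reduction
    ```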

  5. Improving Accuracy of Image Classification Using GIS

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Prasad, T. S.; Bala Manikavelu, P. M.; Vijayan, D.

    The remote sensing signal that reaches a sensor on board the satellite is a complex aggregation of signals (in an agricultural field, for example) from the soil (with all its variations, such as colour, texture, particle size, clay content, organic and nutrient content, inorganic content, water content, etc.), the plant (height, architecture, leaf area index, mean canopy inclination, etc.), canopy closure status and atmospheric effects, and from this we want to recover, say, the characteristics of vegetation. If the sensor on board the satellite makes measurements in n bands (a measurement vector n of dimension n×1) and the number of classes in an image is c (a class-fraction vector f of dimension c×1), then under linear mixture modelling the pixel classification problem can be written as n = M f + ε, where M is the transformation matrix of dimension n×c and ε represents the error vector (noise). The problem is to estimate f by inverting the above equation, and the possible solutions to such a problem are many. Thus, recovering individual classes from satellite data is an ill-posed inverse problem for which a unique solution is not feasible, and this limits the obtainable classification accuracy. Maximum Likelihood (ML) is the constraint most commonly used in such a situation, but it suffers from the handicaps of an assumed Gaussian distribution and assumed randomness of pixels (in fact there is high autocorrelation among the pixels of a specific class, and further high autocorrelation among the pixels in sub-classes, where homogeneity is high). Because of this, achieving very high accuracy in the classification of remote sensing images is not a straightforward proposition. With the availability of GIS for the area under study, (i) a priori probabilities for the different classes can be assigned to the ML classifier in more realistic terms, and (ii) the purity of training sets for the different thematic classes can be better ascertained. To what extent this could improve the accuracy of classification in ML classifier
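
    To make the ill-posedness concrete, here is a minimal sketch of the linear mixture model n = M f + ε described above, inverted by ordinary least squares. The mixing matrix, class fractions, and noise level are invented; in practice constraints such as f ≥ 0 and Σf = 1, or a priori class probabilities from GIS, are needed to stabilize the inversion.

    ```python
    import numpy as np

    # Sketch: recover per-pixel class fractions f from n = M f + e by
    # least squares, where M holds hypothetical pure-class (endmember)
    # spectra. A large condition number of M signals the ill-posedness
    # discussed above.
    rng = np.random.default_rng(1)
    n_bands, n_classes = 6, 3
    M = rng.uniform(0.1, 0.9, size=(n_bands, n_classes))  # assumed endmembers
    f_true = np.array([0.6, 0.3, 0.1])                    # true fractions of one pixel
    n = M @ f_true + rng.normal(0.0, 0.01, size=n_bands)  # observed pixel + noise

    f_hat, residuals, rank, sv = np.linalg.lstsq(M, n, rcond=None)
    print("estimated fractions:", np.round(f_hat, 3))
    print("condition number of M:", round(float(np.linalg.cond(M)), 1))
    ```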

  6. Insensitivity of the octahedral spherical hohlraum to power imbalance, pointing accuracy, and assemblage accuracy

    SciTech Connect

    Huo, Wen Yi; Zhao, Yiqing; Zheng, Wudi; Liu, Jie; Lan, Ke

    2014-11-15

    The random radiation asymmetry in the octahedral spherical hohlraum [K. Lan et al., Phys. Plasmas 21, 010704 (2014)] arising from the power imbalance and pointing accuracy of the laser quads, and from the assemblage accuracy of the capsule, is investigated by using a 3-dimensional view factor model. From our study, for the spherical hohlraum, the random radiation asymmetry arising from the power imbalance of the laser quads is about half of that in the cylindrical hohlraum; the random asymmetry arising from the pointing error is about one order of magnitude lower than that in the cylindrical hohlraum; and the random asymmetry arising from the assemblage error of the capsule is about one third of that in the cylindrical hohlraum. Moreover, the random radiation asymmetry in the spherical hohlraum is also less than that in the elliptical hohlraum. The results indicate that the spherical hohlraum is less sensitive to these random variations than the cylindrical and elliptical hohlraums. Hence, the spherical hohlraum can relax the requirements on the power imbalance and pointing accuracy of the laser facility and on the assemblage accuracy of the capsule.

  7. Matters of Accuracy and Conventionality: Prior Accuracy Guides Children's Evaluations of Others' Actions

    ERIC Educational Resources Information Center

    Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed

    2013-01-01

    Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clement, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and…

  8. Exceeding chance level by chance: The caveat of theoretical chance levels in brain signal classification and statistical assessment of decoding accuracy.

    PubMed

    Combrisson, Etienne; Jerbi, Karim

    2015-07-30

    Machine learning techniques are increasingly used in neuroscience to classify brain signals. Decoding performance is reflected by how much the classification results depart from the rate achieved by purely random classification. In a 2-class or 4-class classification problem, the chance levels are thus 50% or 25% respectively. However, such thresholds hold for an infinite number of data samples but not for small data sets. While this limitation is widely recognized in the machine learning field, it is unfortunately sometimes still overlooked or ignored in the emerging field of brain signal classification. Incidentally, this field is often faced with the difficulty of low sample size. In this study we demonstrate how applying signal classification to Gaussian random signals can yield decoding accuracies of up to 70% or higher in two-class decoding with small sample sets. Most importantly, we provide a thorough quantification of the severity and the parameters affecting this limitation using simulations in which we manipulate sample size, class number, cross-validation parameters (k-fold, leave-one-out and repetition number) and classifier type (Linear-Discriminant Analysis, Naïve Bayesian and Support Vector Machine). In addition to raising a red flag of caution, we illustrate the use of analytical and empirical solutions (binomial formula and permutation tests) that tackle the problem by providing statistical significance levels (p-values) for the decoding accuracy, taking sample size into account. Finally, we illustrate the relevance of our simulations and statistical tests on real brain data by assessing noise-level classifications in Magnetoencephalography (MEG) and intracranial EEG (iEEG) baseline recordings. PMID:25596422
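
    The analytical (binomial) solution the authors describe can be sketched as follows: given the sample size and number of classes, compute the smallest decoding accuracy whose probability under chance-level classification falls below a significance level alpha. The sample sizes below are arbitrary examples.

    ```python
    from scipy.stats import binom

    # Sketch: sample-size-aware chance threshold for decoding accuracy.
    def chance_threshold(n_samples: int, n_classes: int, alpha: float = 0.05) -> float:
        """Smallest accuracy significantly above chance for n_samples trials."""
        p_chance = 1.0 / n_classes
        # ppf gives the smallest k with CDF >= 1 - alpha; +1 so that
        # P(X >= k) < alpha under chance-level classification.
        k = binom.ppf(1.0 - alpha, n_samples, p_chance) + 1
        return k / n_samples

    for n in (20, 50, 100, 1000):
        print(n, f"{chance_threshold(n, 2):.2%}")
    # With 20 samples a 2-class decoder must exceed ~75% accuracy to be
    # significant, whereas with 1000 samples ~53% already suffices --
    # exactly the small-sample effect quantified above.
    ```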

  9. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Accuracy of advertising. 740.2 Section 740.2 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS ACCURACY OF ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use...

  10. The accuracy of portable peak flow meters.

    PubMed Central

    Miller, M R; Dickinson, S A; Hitchings, D J

    1992-01-01

    BACKGROUND: The variability of peak expiratory flow (PEF) is now commonly used in the diagnosis and management of asthma. It is essential for PEF meters to have a linear response in order to obtain an unbiased measurement of PEF variability. As the accuracy and linearity of portable PEF meters have not been rigorously tested in recent years this aspect of their performance has been investigated. METHODS: The response of several portable PEF meters was tested with absolute standards of flow generated by a computer driven, servo controlled pump and their response was compared with that of a pneumotachograph. RESULTS: For each device tested the readings were highly repeatable to within the limits of accuracy with which the pointer position can be assessed by eye. The between instrument variation in reading for six identical devices expressed as a 95% confidence limit was, on average across the range of flows, +/- 8.5 l/min for the Mini-Wright, +/- 7.9 l/min for the Vitalograph, and +/- 6.4 l/min for the Ferraris. PEF meters based on the Wright meter all had similar error profiles with overreading of up to 80 l/min in the mid flow range from 300 to 500 l/min. This overreading was greatest for the Mini-Wright and Ferraris devices, and less so for the original Wright and Vitalograph meters. A Micro-Medical Turbine meter was accurate up to 400 l/min and then began to underread by up to 60 l/min at 720 l/min. For the low range devices the Vitalograph device was accurate to within 10 l/min up to 200 l/min, with the Mini-Wright overreading by up to 30 l/min above 150 l/min. CONCLUSION: Although the Mini-Wright, Ferraris, and Vitalograph meters gave remarkably repeatable results their error profiles for the full range meters will lead to important errors in recording PEF variability. This may lead to incorrect diagnosis and bias in implementing strategies of asthma treatment based on PEF measurement. PMID:1465746

  11. Accuracy of commercial geocoding: assessment and implications

    PubMed Central

    Whitsel, Eric A; Quibrera, P Miguel; Smith, Richard L; Catellier, Diane J; Liao, Duanping; Henley, Amanda C; Heiss, Gerardo

    2006-01-01

    Background Published studies of geocoding accuracy often focus on a single geographic area, address source or vendor, do not adjust accuracy measures for address characteristics, and do not examine effects of inaccuracy on exposure measures. We addressed these issues in a Women's Health Initiative ancillary study, the Environmental Epidemiology of Arrhythmogenesis in WHI. Results Addresses in 49 U.S. states (n = 3,615) with established coordinates were geocoded by four vendors (A-D). There were important differences among vendors in address match rate (98%; 82%; 81%; 30%), concordance between established and vendor-assigned census tracts (85%; 88%; 87%; 98%) and distance between established and vendor-assigned coordinates (mean ρ [meters]: 1809; 748; 704; 228). Mean ρ was lowest among street-matched, complete, zip-coded, unedited and urban addresses, and addresses with North American Datum of 1983 or World Geodetic System of 1984 coordinates. In mixed models restricted to vendors with minimally acceptable match rates (A-C) and adjusted for address characteristics, within-address correlation, and among-vendor heteroscedasticity of ρ, differences in mean ρ were small for street-type matches (280; 268; 275), i.e. likely to bias results relying on them about equally for most applications. In contrast, differences between centroid-type matches were substantial in some vendor contrasts, but not others (5497; 4303; 4210), p(interaction) < 10⁻⁴, i.e. more likely to bias results differently in many applications. The adjusted odds of an address match was higher for vendor A versus C (odds ratio = 66, 95% confidence interval: 47, 93), but not B versus C (OR = 1.1, 95% CI: 0.9, 1.3). That of census tract concordance was no higher for vendor A versus C (OR = 1.0, 95% CI: 0.9, 1.2) or B versus C (OR = 1.1, 95% CI: 0.9, 1.3). Misclassification of a related exposure measure – distance to the nearest highway – increased with mean ρ and in the absence of confounding, non

  12. Accuracy of the vivofit activity tracker.

    PubMed

    Alsubheen, Sana'a A; George, Amanda M; Baker, Alicia; Rohr, Linda E; Basset, Fabien A

    2016-08-01

    The purpose of this study was to examine the accuracy of the vivofit activity tracker in assessing energy expenditure (EE) and step count. Thirteen participants wore the vivofit activity tracker for five days. Participants were required to independently perform 1 h of self-selected activity each day of the study. On day four, participants came to the lab to undergo basal metabolic rate (BMR) measurement and a treadmill-walking task (TWT). On day five, participants completed 1 h of office-type activities. BMR values estimated by the vivofit were not significantly different from the values measured through indirect calorimetry (IC). The vivofit significantly underestimated EE for treadmill walking, but responded to differences in inclination. The vivofit underestimated step count for level walking but provided an accurate estimate for incline walking. There was a strong correlation between EE and exercise intensity. The vivofit activity tracker is on par with similar devices and can be used to track physical activity. PMID:27266422

  13. Accuracy of clinical diagnosis in knee arthroscopy.

    PubMed Central

    Brooks, Stuart; Morgan, Mamdouh

    2002-01-01

    A prospective study of 238 patients was performed in a district general hospital to assess current diagnostic accuracy rates and to ascertain the use and effectiveness of magnetic resonance imaging (MRI) scanning in reducing the number of negative arthroscopies. The most common pre-operative diagnoses in patients listed for knee arthroscopy were medial meniscus tear (94 patients; 40%) and osteoarthritis (59 patients; 25%). MRI scans were requested in 57 patients (24%), with medial meniscus tear representing 65% (37 patients). Correlations were examined between pre-operative diagnosis, MRI, and arthroscopic diagnosis. Clinical diagnosis was as accurate as MRI, with 79% agreement between the pre-operative diagnosis and arthroscopy compared with 77% agreement between MRI scan and arthroscopy. There was no evidence, in this study, that MRI scanning can reduce the number of negative arthroscopies. Four normal MRI scans had positive arthroscopic diagnoses (two torn medial menisci, one torn lateral meniscus and one chondromalacia patella). Out of 240 arthroscopies, there were only 10 normal knees (negative arthroscopies), representing 4% of the total number of knee arthroscopies; one of those 10 cases had an MRI scan with an ACL rupture diagnosis. PMID:12215031

  14. Improving DNA sequencing accuracy and throughput

    SciTech Connect

    Nelson, D.O.

    1996-12-31

    LLNL is beginning to explore statistical approaches to the problem of determining the DNA sequence underlying data obtained from fluorescence-based gel electrophoresis. The features of this problem that make it interesting to statisticians include: (1) the underlying mechanics of electrophoresis is quite complex and still not completely understood; (2) the yield of fragments of any given size can be quite small and variable; (3) the mobility of fragments of a given size can depend on the terminating base; (4) the data consist of samples from one or more continuous, non-stationary signals; (5) boundaries between segments generated by distinct elements of the underlying sequence are ill-defined or nonexistent in the signal; and (6) the sampling rate of the signal greatly exceeds the rate of evolution of the underlying discrete sequence. Current approaches to base calling address only some of these issues, and usually in a heuristic, ad hoc way. In this article we describe some of our initial efforts towards increasing base calling accuracy and throughput by providing a rational, statistical foundation to the process of deducing sequence from signal. 31 refs., 12 figs.

  15. High accuracy wall thickness loss monitoring

    NASA Astrophysics Data System (ADS)

    Gajdacsi, Attila; Cegla, Frederic

    2014-02-01

    Ultrasonic inspection of wall thickness in pipes is a standard technique applied widely in the petrochemical industry. The potential precision of repeat measurements with permanently installed ultrasonic sensors, however, significantly surpasses that of handheld sensors, as uncertainties associated with coupling fluids and positional offsets are eliminated. With permanently installed sensors, the precise evaluation of very small wall loss rates becomes feasible in a matter of hours. The improved accuracy and speed of wall loss rate measurements can be used to evaluate and develop more effective mitigation strategies. This paper presents an overview of factors causing variability in the ultrasonic measurements, which are then systematically addressed, and an experimental setup with the best achievable stability based on these considerations is presented. In the experimental setup, galvanic corrosion is used to induce predictable and very small wall thickness loss. Furthermore, it is shown that the experimental measurements can be used to assess the effect of reduced wall loss produced by the injection of corrosion inhibitor. The measurements show an estimated standard deviation of about 20 nm, which in turn allows the effect and behaviour of corrosion inhibitors to be evaluated within less than an hour.

  16. The accuracy of a voice vote

    PubMed Central

    Titze, Ingo R.; Palaparthi, Anil

    2014-01-01

    The accuracy of a voice vote was addressed by systematically varying group size, individual voter loudness, and words that are typically used to express agreement or disagreement. Five judges rated the loudness of two competing groups in A-B comparison tasks. Acoustic analysis was performed to determine the sound energy level of each word uttered by each group. Results showed that individual voter differences in energy level can grossly alter group loudness and bias the vote. Unless some control is imposed on the sound level of individual voters, it is difficult to establish even a two-thirds majority, much less a simple majority. There is no symmetry in the bias created by unequal sound production of individuals. Soft voices do not bias the group loudness much, but loud voices do. The phonetic balance of the two words chosen (e.g., “yea” and “nay” as opposed to “aye” and “no”) seems to be less of an issue. PMID:24437776

  17. Lung Ultrasound Diagnostic Accuracy in Neonatal Pneumothorax

    PubMed Central

    Copetti, Roberto

    2016-01-01

    Background. Pneumothorax (PTX) still remains a common cause of morbidity in critically ill and ventilated neonates. At present, lung ultrasound (LUS) is not included in the diagnostic work-up of PTX in newborns, despite excellent evidence of reliability in adults. The aim of this study was to compare LUS, chest X-ray (CXR), and chest transillumination (CTR) for PTX diagnosis in a group of neonates in which the presence of air in the pleural space was confirmed. Methods. Over a 36-month period, 49 neonates with respiratory distress were enrolled in the study. Twenty-three had PTX requiring aspiration or chest drainage (birth weight 2120 ± 1640 grams; gestational age 36 ± 5 weeks), and 26 were suffering from respiratory distress without PTX (birth weight 2120 ± 1640 grams; gestational age 34 ± 5 weeks). All patients underwent LUS, CTR, and CXR. Results. LUS was consistent with PTX in all 23 patients requiring chest aspiration. In this group, CXR did not detect PTX in one patient, while CTR did not detect it in 3 patients. Sensitivity and specificity in diagnosing PTX were therefore 1.0 for LUS, 0.96 and 1.0 for CXR, and 0.87 and 0.96 for CTR. Conclusions. Our results confirm that, in newborns as well, LUS is at least as accurate as CXR in the diagnosis of PTX, while CTR has lower accuracy.

  18. Time and position accuracy using codeless GPS

    NASA Technical Reports Server (NTRS)

    Dunn, C. E.; Jefferson, D. C.; Lichten, S. M.; Thomas, J. B.; Vigue, Y.; Young, L. E.

    1994-01-01

    The Global Positioning System has allowed scientists and engineers to make measurements having accuracy far beyond the original 15 meter goal of the system. Using global networks of P-Code capable receivers and extensive post-processing, geodesists have achieved baseline precision of a few parts per billion, and clock offsets have been measured at the nanosecond level over intercontinental distances. A cloud hangs over this picture, however. The Department of Defense plans to encrypt the P-Code (called Anti-Spoofing, or AS) in the fall of 1993. After this event, geodetic and time measurements will have to be made using codeless GPS receivers. However, there appears to be a silver lining to the cloud. In response to the anticipated encryption of the P-Code, the geodetic and GPS receiver community has developed some remarkably effective means of coping with AS without classified information. We will discuss various codeless techniques currently available and the data noise resulting from each. We will review some geodetic results obtained using only codeless data, and discuss the implications for time measurements. Finally, we will present the status of GPS research at JPL in relation to codeless clock measurements.

  19. The Good Judge of Personality: Characteristics, Behaviors, and Observer Accuracy

    PubMed Central

    Letzring, Tera D.

    2008-01-01

    Personality characteristics and behaviors related to judgmental accuracy following unstructured interactions among previously unacquainted triads were examined. Judgmental accuracy was related to social skill, agreeableness, and adjustment. Accuracy of observers of the interactions was positively related to the number of good judges in the interaction, which implies that the personality and behaviors of the judge are important for creating a situation in which targets will reveal relevant personality cues. Furthermore, the finding that observer accuracy was positively related to the number of good judge partners suggests that judgmental accuracy is based on more than detection and utilization skills of the judge. PMID:19649134

  20. Accuracy of schemes with nonuniform meshes for compressible fluid flows

    NASA Technical Reports Server (NTRS)

    Turkel, E.

    1985-01-01

    The accuracy of the space discretization for time-dependent problems when a nonuniform mesh is used is considered. Many schemes reduce to first-order accuracy while a popular finite volume scheme is even inconsistent for general grids. This accuracy is based on physical variables. However, when accuracy is measured in computational variables then second-order accuracy can be obtained. This is meaningful only if the mesh accurately reflects the properties of the solution. In addition, the stability properties of some improved accurate schemes are analyzed and it can be shown that they also allow for larger time steps when Runge-Kutta type methods are used to advance in time.
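
    The distinction between accuracy in physical and computational variables can be illustrated with a minimal sketch (an assumed example, not from the paper): a plain central difference on a smoothly stretched mesh retains second-order convergence because the local spacing variation itself shrinks with the grid, whereas on an irregularly perturbed mesh the same formula degrades to first order.

    ```python
    import numpy as np

    # Sketch: accuracy of the plain central difference
    #   (u[i+1] - u[i-1]) / (x[i+1] - x[i-1])
    # on a nonuniform mesh generated by a smooth mapping x(xi) of a uniform
    # computational grid xi. The leading error term ~ (h2 - h1) * u'' is
    # O(h^2) when the stretching is smooth, so second order is recovered.
    def max_error(n):
        xi = np.linspace(0.0, 1.0, n)               # uniform computational grid
        x = xi + 0.3 * np.sin(np.pi * xi)           # smooth, monotone stretching
        u = np.sin(2 * np.pi * x)
        du_exact = 2 * np.pi * np.cos(2 * np.pi * x)
        du = (u[2:] - u[:-2]) / (x[2:] - x[:-2])    # naive central difference
        return np.max(np.abs(du - du_exact[1:-1]))

    for n in (41, 81, 161, 321):
        print(n, f"{max_error(n):.2e}")  # error drops ~4x per refinement: 2nd order
    ```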

  1. Online Medical Device Use Prediction: Assessment of Accuracy.

    PubMed

    Maktabi, Marianne; Neumuth, Thomas

    2016-01-01

    Cost-intensive units in the hospital, such as the operating room, require effective resource management to improve surgical workflow and patient care. To maximize efficiency, online management systems should accurately forecast the use of technical resources (medical instruments and devices). We forecast future technical resource usage for several surgical activities, such as use of the coagulator, based on spectral analysis and application of a linear time-variant system. In our study we examine the influence of the duration of usage and the total usage rate of the technical equipment on prediction performance over several time intervals. A cross-validation was conducted with sixty-two neck dissections to evaluate prediction performance. The performance of a use-state forecast does not change whether or not duration is considered, but decreases with lower total usage rates of the observed instruments. A minimum number of surgical workflow recordings (here: 62) and time intervals of >5 minutes for the use-state forecast are required for applying the described method in surgical practice. The work presented here might support the reduction of resource conflicts when resources are shared among different operating rooms. PMID:27577445

  2. On the Standardization of Vertical Accuracy Figures in Dems

    NASA Astrophysics Data System (ADS)

    Casella, V.; Padova, B.

    2013-01-01

    Digital Elevation Models (DEMs) play a key role in hydrological risk prevention and mitigation: hydraulic numeric simulations, slope and aspect maps all heavily rely on DEMs. Hydraulic numeric simulations require the used DEM to have a defined accuracy, in order to obtain reliable results. Are the DEM accuracy figures clearly and uniquely defined? The paper focuses on some issues concerning DEM accuracy definition and assessment. Two DEM accuracy definitions can be found in literature: accuracy at the interpolated point and accuracy at the nodes. The former can be estimated by means of randomly distributed check points, while the latter by means of check points coincident with the nodes. The two considered accuracy figures are often treated as equivalent, but they aren't. Given the same DEM, assessing it through one or the other approach gives different results. Our paper performs an in-depth characterization of the two figures and proposes standardization coefficients.
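
    The difference between the two figures can be demonstrated with a minimal synthetic sketch (terrain shape, grid spacing, and noise level are assumptions): accuracy at the nodes reflects only the DEM's node errors, while accuracy at interpolated points additionally absorbs the interpolation error between nodes.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Sketch: contrast "accuracy at the nodes" (check points on the grid)
    # with "accuracy at the interpolated point" (randomly placed check points).
    def terrain(x, y):
        return 10 * np.sin(0.03 * x) * np.cos(0.02 * y)   # assumed true surface

    xs = ys = np.arange(0.0, 500.0, 10.0)                 # 10 m grid
    nodes = terrain(*np.meshgrid(xs, ys, indexing="ij"))
    rng = np.random.default_rng(6)
    dem = nodes + rng.normal(0.0, 0.2, nodes.shape)       # DEM with 0.2 m node error
    interp = RegularGridInterpolator((xs, ys), dem)       # bilinear interpolation

    rmse_nodes = np.sqrt(np.mean((dem - nodes) ** 2))     # accuracy at the nodes
    pts = rng.uniform(0.0, 490.0, size=(2000, 2))         # random check points
    rmse_interp = np.sqrt(np.mean((interp(pts) - terrain(pts[:, 0], pts[:, 1])) ** 2))
    print(f"RMSE at nodes: {rmse_nodes:.2f} m, at interpolated points: {rmse_interp:.2f} m")
    ```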

  3. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.

  4. Kinematics of a striking task: accuracy and speed-accuracy considerations.

    PubMed

    Parrington, Lucy; Ball, Kevin; MacMahon, Clare

    2015-01-01

    Handballing in Australian football (AF) is the most efficient passing method, yet little research exists examining technical factors associated with accuracy. This study had three aims: (a) to explore the kinematic differences between accurate and inaccurate handballers, (b) to compare within-individual successful (hit target) and unsuccessful (missed target) handballs and (c) to assess handballing when both accuracy and speed of ball-travel were combined, using a novel approach utilising canonical correlation analysis. Three-dimensional data were collected on 18 elite AF players who performed handballs towards a target. More accurate handballers exhibited a significantly straighter hand-path, slower elbow angular velocity and smaller elbow range of motion (ROM) compared to the inaccurate group. Successful handballs displayed significantly larger trunk ROM, maximum trunk rotation velocity and step-angle, and smaller elbow ROM in comparison to the unsuccessful handballs. The canonical model explained 73% of the variance shared between the variable sets, with a significant relationship found between hand-path, elbow ROM and maximum elbow angular velocity (predictors) and hand-speed and accuracy (dependent variables). Interestingly, not all parameters were the same across the analyses, with technical differences between inaccurate and accurate handballers differing from those between successful and unsuccessful handballs in the within-individual analysis. PMID:25079111
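
    For readers unfamiliar with the technique, a minimal sketch of canonical correlation analysis on synthetic stand-in data (not the study's measurements) is shown below, relating a block of kinematic predictors to a block of outcome variables such as hand speed and accuracy.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    # Sketch: canonical correlation between a predictor block X (kinematics)
    # and an outcome block Y (hand speed, accuracy). Data are synthetic.
    rng = np.random.default_rng(2)
    n = 18  # players
    X = rng.normal(size=(n, 3))              # hand-path, elbow ROM, elbow velocity
    latent = X @ np.array([0.7, -0.4, 0.5])  # assumed shared structure
    Y = np.column_stack([latent + rng.normal(scale=0.5, size=n),    # hand speed
                         -latent + rng.normal(scale=0.5, size=n)])  # accuracy

    cca = CCA(n_components=1).fit(X, Y)
    U, V = cca.transform(X, Y)
    r = np.corrcoef(U[:, 0], V[:, 0])[0, 1]
    print(f"first canonical correlation: {r:.2f}; shared variance: {r**2:.0%}")
    ```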

  5. Effect of atmospherics on beamforming accuracy

    NASA Technical Reports Server (NTRS)

    Alexander, Richard M.

    1990-01-01

    Two mathematical representations of noise due to atmospheric turbulence are presented. These representations are derived and used in computer simulations of the Bartlett Estimate implementation of beamforming. Beamforming is an array processing technique employing an array of acoustic sensors used to determine the bearing of an acoustic source. Atmospheric wind conditions introduce noise into the beamformer output. Consequently, the accuracy of the process is degraded and the bearing of the acoustic source is falsely indicated or impossible to determine. The two representations of noise presented here are intended to quantify the effects of mean wind passing over the array of sensors and to correct for these effects. The first noise model is an idealized case. The effect of the mean wind is incorporated as a change in the propagation velocity of the acoustic wave. This yields an effective phase shift applied to each term of the spatial correlation matrix in the Bartlett Estimate. The resultant error caused by this model can be corrected in closed form in the beamforming algorithm. The second noise model acts to change the true direction of propagation at the beginning of the beamforming process. A closed form correction for this model is not available. Efforts to derive effective means to reduce the contributions of the noise have not been successful. In either case, the maximum error introduced by the wind is a beam shift of approximately three degrees. That is, the bearing of the acoustic source is indicated at a point a few degrees from the true bearing location. These effects are not quite as pronounced as those seen in experimental results. Sidelobes are false indications of acoustic sources in the beamformer output away from the true bearing angle. The sidelobes that are observed in experimental results are not caused by these noise models. The effects of mean wind passing over the sensor array as modeled here do not alter the beamformer output as
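
    A minimal sketch of the Bartlett estimate for bearing determination is given below; the array geometry, source frequency, and noise levels are invented. Wind-induced noise of the first kind described above would enter as phase perturbations of the spatial correlation matrix R.

    ```python
    import numpy as np

    # Sketch: Bartlett (conventional) beamformer on a uniform linear array.
    c, f = 343.0, 500.0                 # sound speed (m/s), source frequency (Hz)
    d, m = 0.25, 8                      # sensor spacing (m), number of sensors
    k = 2 * np.pi * f / c
    true_bearing = np.deg2rad(30.0)

    def steering(theta):
        """Array response for a plane wave arriving from bearing theta."""
        return np.exp(1j * k * d * np.arange(m) * np.sin(theta))

    rng = np.random.default_rng(3)
    snapshots = (np.outer(steering(true_bearing), rng.normal(size=200))
                 + 0.1 * (rng.normal(size=(m, 200)) + 1j * rng.normal(size=(m, 200))))
    R = snapshots @ snapshots.conj().T / 200         # spatial correlation matrix

    thetas = np.deg2rad(np.linspace(-90, 90, 361))   # scan candidate bearings
    power = [np.real(steering(t).conj() @ R @ steering(t)) for t in thetas]
    print("estimated bearing:", np.rad2deg(thetas[int(np.argmax(power))]), "deg")
    ```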

  6. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press's Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press's Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and an area under the ROC curve (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed
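
    A minimal sketch of this comparison protocol follows, with synthetic stand-in data and a reduced classifier set: each classifier is scored by 5-fold cross-validation and the resulting accuracy distributions are compared with Friedman's nonparametric test.

    ```python
    import numpy as np
    from scipy.stats import friedmanchisquare
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Sketch: compare classifiers via 5-fold CV, then a Friedman test.
    # Synthetic features stand in for the 10 neuropsychological tests.
    X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                               random_state=0)
    models = {"LDA": LinearDiscriminantAnalysis(),
              "SVM": SVC(),
              "RF": RandomForestClassifier(random_state=0)}
    scores = {name: cross_val_score(m, X, y, cv=5) for name, m in models.items()}
    for name, s in scores.items():
        print(f"{name}: median accuracy {np.median(s):.2f}")
    stat, p = friedmanchisquare(*scores.values())
    print(f"Friedman test: chi2={stat:.2f}, p={p:.3f}")
    ```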

  7. Accuracy of quantitative visual soil assessment

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Maricke; Heuvelink, Gerard; Stoorvogel, Jetse; Wallinga, Jakob; de Boer, Imke; van Dam, Jos; van Essen, Everhard; Moolenaar, Simon; Verhoeven, Frank; Stoof, Cathelijne

    2016-04-01

    Visual soil assessment (VSA) is a method to assess soil quality visually, when standing in the field. VSA is increasingly used by farmers, farm organisations and companies, because it is rapid and cost-effective, and because looking at soil provides understanding of soil functioning. VSA is often regarded as subjective, so there is a need to verify it. Also, many VSAs have not been fine-tuned for contrasting soil types, which could lead to misinterpretation of soil quality and soil functioning when contrasting sites are compared to each other. We wanted to assess the accuracy of VSA while taking soil type into account. The first objective was to test whether quantitative visual field observations, which form the basis of many VSAs, could be validated with standardized field or laboratory measurements. The second objective was to assess whether quantitative visual field observations are reproducible when used by observers with contrasting backgrounds. For the validation study, we made quantitative visual observations at 26 cattle farms. Farms were located on sand, clay and peat soils in the North Friesian Woodlands, the Netherlands. The quantitative visual observations evaluated were grass cover, number of biopores, number of roots, soil colour, soil structure, number of earthworms, number of gley mottles and soil compaction. Linear regression analysis showed that four out of eight quantitative visual observations could be well validated with standardized field or laboratory measurements: grass cover correlated well with classified images of surface cover; number of roots with root dry weight; amount of large structure elements with mean weight diameter; and soil colour with soil organic matter content. Correlation coefficients were greater than 0.3, and half of the correlations were significant. For the reproducibility study, a group of 9 soil scientists and 7

  8. Testing the accuracy of synthetic stellar libraries

    NASA Astrophysics Data System (ADS)

    Martins, Lucimara P.; Coelho, Paula

    2007-11-01

    One of the main ingredients of stellar population synthesis models is a library of stellar spectra. Both empirical and theoretical libraries are used for this purpose, and the question about which one is preferable is still debated in the literature. Empirical and theoretical libraries are being improved significantly over the years, and many libraries have become available lately. However, it is not clear in the literature what are the advantages of using each of these new libraries, and how far behind models are compared to observations. Here we compare in detail some of the major theoretical libraries available in the literature with observations, aiming at detecting weaknesses and strengths from the stellar population modelling point of view. Our test is twofold: we compared model predictions and observations for broad-band colours and for high-resolution spectral features. Concerning the broad-band colours, we measured the stellar colour given by three recent sets of model atmospheres and flux distributions, and compared them with a recent UBVRIJHK calibration which is mostly based on empirical data. We found that the models can reproduce with reasonable accuracy the stellar colours for a fair interval in effective temperatures and gravities. The exceptions are (1) the U - B colour, where the models are typically redder than the observations, and (2) the very cool stars in general (V - K >~ 3). Castelli & Kurucz is the set of models that best reproduce the bluest colours (U - B, B - V) while Gustafsson et al. and Brott & Hauschildt more accurately predict the visual colours. The three sets of models perform in a similar way for the infrared colours. Concerning the high-resolution spectral features, we measured 35 spectral indices defined in the literature on three high-resolution synthetic libraries, and compared them with the observed measurements given by three empirical libraries. The measured indices cover the wavelength range from ~3500 to ~8700Å. We

  9. Prediction of sumoylation sites in proteins using linear discriminant analysis.

    PubMed

    Xu, Yan; Ding, Ya-Xin; Deng, Nai-Yang; Liu, Li-Ming

    2016-01-15

    Sumoylation is a multifunctional post-translational modification (PTM) of proteins by the small ubiquitin-related modifiers (SUMOs), which are structurally related to ubiquitin. Sumoylation has been found to be involved in some cellular processes. Identifying the exact sumoylation sites in proteins is significant not only for basic research but also for drug development. Compared with time-consuming experimental methods, computational prediction of sumoylation sites is highly desirable as a complement to experiments in the post-genomic age. In this work, three feature constructions (AAIndex, position-specific amino acid propensity and a modification of the composition of k-spaced amino acid pairs) and five different combinations of them were used to construct features. Ultimately, 178 features were selected as optimal according to the Matthews correlation coefficient values in 10-fold cross-validation based on linear discriminant analysis. In 10-fold cross-validation on the benchmark dataset, the accuracy and Matthews correlation coefficient were 86.92% and 0.6845. Compared with existing predictors, SUMO_LDA showed better performance. PMID:26432000
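
    The evaluation protocol maps directly onto standard tooling; a minimal sketch with synthetic stand-in features (not the paper's 178 sequence-derived features) follows, scoring linear discriminant analysis by accuracy and Matthews correlation coefficient under 10-fold cross-validation.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import accuracy_score, matthews_corrcoef
    from sklearn.model_selection import cross_val_predict

    # Sketch: LDA scored by accuracy and MCC under 10-fold cross-validation.
    X, y = make_classification(n_samples=1000, n_features=178, n_informative=30,
                               random_state=0)
    y_pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=10)
    print(f"accuracy: {accuracy_score(y, y_pred):.2%}")
    print(f"MCC:      {matthews_corrcoef(y, y_pred):.4f}")
    ```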

  10. Maximizing the quantitative accuracy and reproducibility of Förster resonance energy transfer measurement for screening by high throughput widefield microscopy.

    PubMed

    Schaufele, Fred

    2014-03-15

    Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839
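
    One generic way to correct for a non-flat background of the kind described is sketched below, under the assumption that background pixels can be masked off: fit a low-order polynomial surface to the masked background and subtract it before quantifying intensities. This illustrates the problem, not the paper's specific procedure.

    ```python
    import numpy as np

    # Sketch: subtract a fitted quadratic background surface so a dim
    # signal riding on a ~10x brighter, non-flat background can be
    # quantified without bias.
    def fit_background(img, mask):
        """Least-squares quadratic surface through pixels where mask is True."""
        yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
        A = np.column_stack([np.ones(mask.sum()), xx[mask], yy[mask],
                             xx[mask]**2, yy[mask]**2, (xx * yy)[mask]])
        coef, *_ = np.linalg.lstsq(A, img[mask], rcond=None)
        full = np.column_stack([np.ones(img.size), xx.ravel(), yy.ravel(),
                                xx.ravel()**2, yy.ravel()**2, (xx * yy).ravel()])
        return (full @ coef).reshape(img.shape)

    rng = np.random.default_rng(4)
    yy, xx = np.mgrid[:64, :64]
    background = 100 + 0.5 * xx + 0.002 * yy**2      # non-flat, ~10x the signal
    img = background + rng.poisson(5.0, (64, 64)).astype(float)
    img[28:36, 28:36] += 10.0                        # dim "cell" signal
    mask = np.ones_like(img, dtype=bool)
    mask[24:40, 24:40] = False                       # exclude the signal region
    corrected = img - fit_background(img, mask)
    print("residual background (should be ~0):", round(corrected[:10, :10].mean(), 2))
    ```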

  11. Maximizing the quantitative accuracy and reproducibility of Förster resonance energy transfer measurement for screening by high throughput widefield microscopy

    PubMed Central

    Schaufele, Fred

    2013-01-01

    Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839

  12. A method which can enhance the optical-centering accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-min; Zhang, Xue-jun; Dai, Yi-dan; Yu, Tao; Duan, Jia-you; Li, Hua

    2014-09-01

    Optical alignment machining is an effective method to ensure the co-axiality of an optical system. The co-axiality accuracy is determined by the optical-centering accuracy of each single optical unit, which in turn is determined by the rotating accuracy of the lathe and the optical-centering judgment accuracy. When a rotating accuracy of 0.2 µm is achieved, the leading error can be ignored. An axis-determination tool based on the principle of auto-collimation is designed to determine the unique position of the centerscope, i.e., the position where the optical axis of the centerscope coincides with the rotating axis of the lathe. A new optical-centering judgment method is also presented. A system combining the axis-determination tool and the new optical-centering judgment method can enhance the optical-centering accuracy to 0.003 mm.

  13. Nationwide forestry applications program. Analysis of forest classification accuracy

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Mead, R. A.; Oderwald, R. G.; Heinen, J. (Principal Investigator)

    1981-01-01

    The development of LANDSAT classification accuracy assessment techniques, and of a computerized system for assessing wildlife habitat from land cover maps are considered. A literature review on accuracy assessment techniques and an explanation for the techniques development under both projects are included along with listings of the computer programs. The presentations and discussions at the National Working Conference on LANDSAT Classification Accuracy are summarized. Two symposium papers which were published on the results of this project are appended.

  14. Effects of facial transformations on accuracy of recognition.

    PubMed

    Terry, R L

    1994-08-01

    Alterations of facial features between the initial phase of a memory task and a later recognition test lower identification accuracy. The effects of leaving on, leaving off, adding, or removing targets' eyeglasses or beards on identification accuracy were examined in two experiments with American undergraduates. The removal of eyeglasses and either type of beard transformation, especially the addition of a beard, lowered identification accuracy. PMID:7967551

  15. Wavelength Calibration Accuracy for the STIS CCD and MAMA Modes

    NASA Astrophysics Data System (ADS)

    Pascucci, Ilaria; Hodge, Phil; Proffitt, Charles R.; Ayres, T.

    2011-03-01

    Two calibration programs were carried out to determine the accuracy of the wavelength solutions for the most used STIS CCD and MAMA modes after Servicing Mission 4. We report here on the analysis of this dataset and show that the STIS wavelength solution has not changed after SM4. We also show that a typical accuracy for the absolute wavelength zero-points is 0.1 pixels while the relative wavelength accuracy is 0.2 pixels.

  16. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The accuracy of ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface was found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  17. Accuracy of dental radiographs for caries detection.

    PubMed

    Keenan, James R; Keenan, Analia Veitz

    2016-06-01

    Data sources: Medline, Embase, Cochrane Central and grey literature, complemented by cross-referencing from bibliographies. Diagnostic reviews were searched using the Medion database. Study selection: Studies reporting on the accuracy (sensitivity/specificity) of radiographic detection of primary carious lesions under clinical (in vivo) or in vitro conditions were included. The outcome of interest was caries detection using radiographs. The study also assessed the effect of the histologic lesion stage and included articles to assess the differences between primary and permanent teeth, whether there had been recent improvements due to technical advances or radiographic methods, and whether there are variations within studies (between examiners or applied radiographic techniques). Data extraction and synthesis: Data extraction was done by one reviewer first, using a piloted electronic spreadsheet, and repeated independently by a second reviewer. Consensus was achieved by discussion. Data extraction followed guidelines from the Cochrane Collaboration. Risk of bias was assessed using QUADAS-2. Pooled sensitivity, specificity and diagnostic odds ratios (DORs) were calculated using random effects meta-analysis. Analyses were performed separately for occlusal and proximal lesions. Dentine lesions and cavitated lesions were analysed separately. Results: 947 articles were identified with the searches and 442 were analysed full text. 117 studies (13,375 teeth, 19,108 surfaces) were included. All studies were published in English. 24 studies were in vivo and 93 studies were in vitro. Risk of bias was found to be low in 23 and high in 94 studies. The pooled sensitivity for detecting any kind of occlusal carious lesions was 0.35 (95% CI: 0.31/0.40) and 0.41 (0.39/0.44) in clinical and in vitro studies respectively, while the pooled specificity was 0.78 (0.73/0.83) and 0.70 (0.76/0.84). For the detection of any kind of proximal lesion the sensitivity in the clinical studies was 0.24 (CI 0.21/0.26) and

  18. Differential effects of self-monitoring attention, accuracy, and productivity.

    PubMed Central

    Maag, J W; Reid, R; DiGangi, S A

    1993-01-01

    Effects of self-monitoring on-task behavior, academic productivity, and academic accuracy were assessed with 6 elementary-school students with learning disabilities in their general education classroom using a mathematics task. Following baseline, the three self-monitoring conditions were introduced using a multiple schedule design during independent practice sessions. Although all three interventions yielded improvements in either arithmetic productivity, accuracy, or on-task behavior, self-monitoring academic productivity or accuracy was generally superior. Differential results were obtained across age groups: fourth graders' mathematics performance improved most when self-monitoring productivity, whereas sixth graders' performance improved most when self-monitoring accuracy. PMID:8407682

  19. Thermocouple Calibration and Accuracy in a Materials Testing Laboratory

    NASA Technical Reports Server (NTRS)

    Lerch, B. A.; Nathal, M. V.; Keller, D. J.

    2002-01-01

    A consolidation of information has been provided that can be used to define procedures for enhancing and maintaining accuracy in temperature measurements in materials testing laboratories. These studies were restricted to type R and K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturers' tolerances, were all quantified in terms of statistical confidence intervals. By calibrating specific TCs, the benefit in accuracy can be as great as 6 °C, roughly a five-fold improvement over relying on manufacturers' tolerances. The results emphasize strict reliance on the defined testing protocol and the need to establish recalibration frequencies in order to maintain these levels of accuracy.

  20. Sound source localization identification accuracy: Level and duration dependencies.

    PubMed

    Yost, William A

    2016-07-01

    Sound source localization accuracy for noises was measured for sources in the front azimuthal open field mainly as a function of overall noise level and duration. An identification procedure was used in which listeners identify which loudspeakers presented a sound. Noises were filtered and differed in bandwidth and center frequency. Sound source localization accuracy depended on the bandwidth of the stimuli, and for the narrow bandwidths, accuracy depended on the filter's center frequency. Sound source localization accuracy did not depend on overall level or duration. PMID:27475204

  1. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.
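
    The reported measures follow directly from a confusion matrix; a minimal sketch with invented counts (not NLCD data) is shown below, computing overall accuracy as the trace over the total and user's accuracy per class as the correctly mapped proportion of each map class.

    ```python
    import numpy as np

    # Sketch: overall and user's accuracy from a (map x reference) matrix.
    classes = ["water", "urban", "forest", "agriculture"]
    # rows: map class, columns: reference class (counts are illustrative)
    cm = np.array([[95,  1,  2,  2],
                   [ 3, 82,  5, 10],
                   [ 1,  4, 88,  7],
                   [ 2,  9,  6, 83]])
    overall = np.trace(cm) / cm.sum()
    users = np.diag(cm) / cm.sum(axis=1)   # per map class (row totals)
    print(f"overall accuracy: {overall:.0%}")
    for c, u in zip(classes, users):
        print(f"user's accuracy, {c}: {u:.0%}")
    ```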

  2. Optimized diagnostic model combination for improving diagnostic accuracy

    NASA Astrophysics Data System (ADS)

    Kunche, S.; Chen, C.; Pecht, M. G.

    Identifying the most suitable classifier for diagnostics is a challenging task. In addition to using domain expertise, a trial-and-error method has been widely used to identify the most suitable classifier. Classifier fusion can be used to overcome this challenge and is widely known to perform better than a single classifier. Classifier fusion helps in overcoming the error due to the inductive bias of the various classifiers. The combination rule also plays a vital role in classifier fusion, and it has not been well studied which combination rules provide the best performance during classifier fusion. Good combination rules will achieve good generalizability while taking advantage of the diversity of the classifiers. In this work, we develop an approach for ensemble learning consisting of an optimized combination rule. Generalizability has been acknowledged to be a challenge for training a diverse set of classifiers, but it can be achieved by an optimal balance between bias and variance errors using the combination rule in this paper. Generalizability implies the ability of a classifier to learn the underlying model from the training data and to predict unseen observations. In this paper, cross-validation has been employed during performance evaluation of each classifier to get an unbiased performance estimate. An objective function is constructed and optimized based on the performance evaluation to achieve the optimal bias-variance balance. This function can be solved as a constrained nonlinear optimization problem. Sequential Quadratic Programming based optimization, with better convergence properties, has been employed for the optimization. We have demonstrated the applicability of the algorithm by using support vector machines and neural networks as classifiers, but the methodology can be broadly applicable for combining other classifier algorithms as well. The method has been applied to the fault diagnosis of analog circuits. The performance of the proposed
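
    A minimal sketch of an optimized combination rule in this spirit follows, with synthetic data: non-negative weights summing to one are learned over cross-validated classifier probability outputs by minimizing log loss with SciPy's SLSQP (Sequential Quadratic Programming) routine. The classifiers and data are stand-ins, not the paper's analog-circuit setup.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_predict
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=20, random_state=0)
    clfs = [SVC(probability=True, random_state=0),
            MLPClassifier(max_iter=1000, random_state=0),
            LinearDiscriminantAnalysis()]
    # Unbiased per-classifier probabilities from 5-fold cross-validation.
    probs = [cross_val_predict(c, X, y, cv=5, method="predict_proba") for c in clfs]

    def neg_log_likelihood(w):
        """Log loss of the weighted mixture of classifier probabilities."""
        p = sum(wi * pi for wi, pi in zip(w, probs))
        return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

    w0 = np.full(len(clfs), 1.0 / len(clfs))
    res = minimize(neg_log_likelihood, w0, method="SLSQP",
                   bounds=[(0, 1)] * len(clfs),
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    print("optimized combination weights:", np.round(res.x, 3))
    ```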

  3. Improving the Accuracy of Computer-aided Diagnosis for Breast MR Imaging by Differentiating between Mass and Nonmass Lesions.

    PubMed

    Gallego-Ortiz, Cristina; Martel, Anne L

    2016-03-01

    Purpose To determine suitable features and optimal classifier design for a computer-aided diagnosis (CAD) system to differentiate among mass and nonmass enhancements during dynamic contrast material-enhanced magnetic resonance (MR) imaging of the breast. Materials and Methods Two hundred eighty histologically proved mass lesions and 129 histologically proved nonmass lesions from MR imaging studies were retrospectively collected. The institutional research ethics board approved this study and waived informed consent. Breast Imaging Reporting and Data System classification of mass and nonmass enhancement was obtained from radiologic reports. Image data from dynamic contrast-enhanced MR imaging were extracted and analyzed by using feature selection techniques and binary, multiclass, and cascade classifiers. Performance was assessed by measuring the area under the receiver operating characteristics curve (AUC), sensitivity, and specificity. Bootstrap cross validation was used to predict the best classifier for the classification task of mass and nonmass benign and malignant breast lesions. Results A total of 176 features were extracted. Feature relevance ranking indicated unequal importance of kinetic, texture, and morphologic features for mass and nonmass lesions. The best classifier performance was a two-stage cascade classifier (mass vs nonmass followed by malignant vs benign classification) (AUC, 0.91; 95% confidence interval (CI): 0.88, 0.94) compared with one-shot classifier (ie, all benign vs malignant classification) (AUC, 0.89; 95% CI: 0.85, 0.92). The AUC was 2% higher for cascade (median percent difference obtained by using paired bootstrapped samples) and was significant (P = .0027). Our proposed two-stage cascade classifier decreases the overall misclassification rate by 12%, with 72 of 409 missed diagnoses with cascade versus 82 of 409 missed diagnoses with one-shot classifier. Conclusion Separately optimizing feature selection and training classifiers
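
    A minimal sketch of the two-stage cascade architecture follows; the features and the second-stage labels are synthetic stand-ins, and logistic regression replaces the paper's optimized classifiers, so this illustrates only the structure (mass vs nonmass, then malignant vs benign).

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    # Stage-1 labels (mass vs nonmass) come with the synthetic data; the
    # stage-2 labels (malignant vs benign) are invented for illustration.
    X, is_mass = make_classification(n_samples=409, n_features=20, random_state=0)
    malignant = rng.integers(0, 2, size=409)

    stage1 = LogisticRegression(max_iter=1000).fit(X, is_mass)
    stage2 = {m: LogisticRegression(max_iter=1000).fit(X[is_mass == m],
                                                       malignant[is_mass == m])
              for m in (0, 1)}

    def cascade_predict(x):
        """Route through stage 1, then the lesion-type-specific stage 2."""
        m = int(stage1.predict(x.reshape(1, -1))[0])           # mass vs nonmass
        return m, int(stage2[m].predict(x.reshape(1, -1))[0])  # malignant vs benign

    print(cascade_predict(X[0]))
    ```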

  4. Reliability and Accuracy of Surgical Resident Peer Ratings.

    ERIC Educational Resources Information Center

    Lutsky, Larry A.; And Others

    1993-01-01

    Reliability and accuracy of peer ratings by 32, 28, and 33 general surgery residents over 3 years were examined. Peer ratings were found to be highly reliable, with a high level of test-retest reliability replicated across the three years. Halo effects appear to pose the greatest threat to rater accuracy, though chief residents tended to exhibit less halo effect than…

  5. Developing a Weighted Measure of Speech Sound Accuracy

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2011-01-01

    Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…

  6. Assessment Of Accuracies Of Remote-Sensing Maps

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1992-01-01

    Report describes study of accuracies of classifications of picture elements in map derived by digital processing of Landsat-multispectral-scanner imagery of coastal plain of Arctic National Wildlife Refuge. Accuracies of portions of map analyzed with help of statistical sampling procedure called "stratified plurality sampling", in which all picture elements in given cluster classified in stratum to which plurality of them belong.

  7. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  8. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  9. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  10. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  11. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  12. EFFECTS OF LANDSCAPE CHARACTERISTICS ON LAND-COVER CLASS ACCURACY

    EPA Science Inventory



    Utilizing land-cover data gathered as part of the National Land-Cover Data (NLCD) set accuracy assessment, several logistic regression models were formulated to analyze the effects of patch size and land-cover heterogeneity on classification accuracy. Specific land-cover ...

  13. Prediction of Rate Constants for Catalytic Reactions with Chemical Accuracy.

    PubMed

    Catlow, C Richard A

    2016-08-01

    Ex machina: A computational method for predicting rate constants for reactions within microporous zeolite catalysts with chemical accuracy has recently been reported. A key feature of this method is a stepwise QM/MM approach that allows accuracy to be achieved while using realistic models with accessible computer resources. PMID:27329206

  14. 27 CFR 19.185 - Testing scale tanks for accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Testing scale tanks for... Requirements Tank Requirements § 19.185 Testing scale tanks for accuracy. (a) A proprietor who uses a scale tank for tax determination must ensure the accuracy of the scale through periodic testing. Testing...

  15. 27 CFR 19.185 - Testing scale tanks for accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Testing scale tanks for... Requirements Tank Requirements § 19.185 Testing scale tanks for accuracy. (a) A proprietor who uses a scale tank for tax determination must ensure the accuracy of the scale through periodic testing. Testing...

  16. 27 CFR 19.185 - Testing scale tanks for accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Testing scale tanks for... Requirements Tank Requirements § 19.185 Testing scale tanks for accuracy. (a) A proprietor who uses a scale tank for tax determination must ensure the accuracy of the scale through periodic testing. Testing...

  17. 27 CFR 19.185 - Testing scale tanks for accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Testing scale tanks for... Requirements Tank Requirements § 19.185 Testing scale tanks for accuracy. (a) A proprietor who uses a scale tank for tax determination must ensure the accuracy of the scale through periodic testing. Testing...

  18. Task-Based Variability in Children's Singing Accuracy

    ERIC Educational Resources Information Center

    Nichols, Bryan E.

    2013-01-01

    The purpose of this study was to explore task-based variability in children's singing accuracy performance. The research questions were: Does children's singing accuracy vary based on the nature of the singing assessment employed? Is there a hierarchy of difficulty and discrimination ability among singing assessment tasks? What is the…

  19. 26 CFR 1.6662-2 - Accuracy-related penalty.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 13 2011-04-01 2011-04-01 false Accuracy-related penalty. 1.6662-2 Section 1.6662-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Additions to the Tax, Additional Amounts, and Assessable Penalties § 1.6662-2 Accuracy-related penalty. (a)...

  20. 29 CFR 501.8 - Accuracy of information, statements, data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Accuracy of information, statements, data. 501.8 Section... REGULATIONS ENFORCEMENT OF CONTRACTUAL OBLIGATIONS FOR TEMPORARY ALIEN AGRICULTURAL WORKERS ADMITTED UNDER SECTION 218 OF THE IMMIGRATION AND NATIONALITY ACT General Provisions § 501.8 Accuracy of...

  1. 29 CFR 502.7 - Accuracy of information, statements, data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Accuracy of information, statements, data. 502.7 Section... REGULATIONS ENFORCEMENT OF CONTRACTUAL OBLIGATIONS FOR TEMPORARY ALIEN AGRICULTURAL WORKERS ADMITTED UNDER... Accuracy of information, statements, data. Information, statements and data submitted in compliance...

  2. 29 CFR 502.7 - Accuracy of information, statements, data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Accuracy of information, statements, data. 502.7 Section... REGULATIONS ENFORCEMENT OF CONTRACTUAL OBLIGATIONS FOR TEMPORARY ALIEN AGRICULTURAL WORKERS ADMITTED UNDER... Accuracy of information, statements, data. Information, statements and data submitted in compliance...

  3. 29 CFR 501.8 - Accuracy of information, statements, data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Accuracy of information, statements, data. 501.8 Section... REGULATIONS ENFORCEMENT OF CONTRACTUAL OBLIGATIONS FOR TEMPORARY ALIEN AGRICULTURAL WORKERS ADMITTED UNDER SECTION 218 OF THE IMMIGRATION AND NATIONALITY ACT General Provisions § 501.8 Accuracy of...

  4. Models of Accuracy in Repeated-Measures Designs

    ERIC Educational Resources Information Center

    Dixon, Peter

    2008-01-01

    Accuracy is often analyzed using analysis of variance techniques in which the data are assumed to be normally distributed. However, accuracy data are discrete rather than continuous, and proportions correct are constrained to the range 0-1. Monte Carlo simulations are presented illustrating how this can lead to distortions in the pattern of means.…

  5. 40 CFR 86.338-79 - Exhaust measurement accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Exhaust measurement accuracy. 86.338... Regulations for New Gasoline-Fueled and Diesel-Fueled Heavy-Duty Engines; Gaseous Exhaust Test Procedures § 86.338-79 Exhaust measurement accuracy. (a) The analyzers must be operated between 15 percent and...

  6. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations of Disability Performance Standards § 404.1643 Performance accuracy standard. (a) General. Performance...

  7. The Effect of Knowledge and Strategy Training on Monitoring Accuracy.

    ERIC Educational Resources Information Center

    Nietfeld, John L.; Schraw, Gregory

    2002-01-01

    Investigated the effect of prior knowledge and strategy training on monitoring accuracy among college students, comparing debilitative, no-impact, and facilitative hypotheses. Overall, knowledge acquired through brief strategy training improved performance, confidence, and monitoring accuracy independent of general ability and general mathematics…

  8. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Accuracy of advertising. 740.2 Section 740.2... ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and...

  9. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive (independent, well-defined test points must be collected), but quantitative analysis of relative positional error is feasible.

  10. 10 CFR 54.13 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 54.13 Section 54.13 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) REQUIREMENTS FOR RENEWAL OF OPERATING LICENSES FOR NUCLEAR POWER PLANTS General Provisions § 54.13 Completeness and accuracy of information....

  11. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  12. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  13. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  14. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  15. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA General Provisions § 63.10 Completeness and accuracy...

  16. 40 CFR 86.338-79 - Exhaust measurement accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Exhaust measurement accuracy. 86.338....338-79 Exhaust measurement accuracy. (a) The analyzers must be operated between 15 percent and 100 percent of full-scale chart deflection during the measurement of the emissions for each mode....

  17. 40 CFR 86.1338-84 - Emission measurement accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 20 2013-07-01 2013-07-01 false Emission measurement accuracy. 86.1338... Procedures § 86.1338-84 Emission measurement accuracy. (a) Measurement accuracy—Bag sampling. (1) Good... using the calibration data obtained with both calibration gases. (b) Measurement...

  18. 40 CFR 86.1338-84 - Emission measurement accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Emission measurement accuracy. 86.1338... Procedures § 86.1338-84 Emission measurement accuracy. (a) Measurement accuracy—Bag sampling. (1) Good... using the calibration data obtained with both calibration gases. (b) Measurement...

  19. 40 CFR 86.338-79 - Exhaust measurement accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Exhaust measurement accuracy. 86.338....338-79 Exhaust measurement accuracy. (a) The analyzers must be operated between 15 percent and 100 percent of full-scale chart deflection during the measurement of the emissions for each mode....

  20. 30 CFR 74.8 - Measurement, accuracy, and reliability requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR part 51... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Measurement, accuracy, and reliability... Monitors § 74.8 Measurement, accuracy, and reliability requirements. (a) Breathing zone...

  1. 30 CFR 74.8 - Measurement, accuracy, and reliability requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR part 51... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Measurement, accuracy, and reliability... Monitors § 74.8 Measurement, accuracy, and reliability requirements. (a) Breathing zone...

  2. Students' Accuracy of Measurement Estimation: Context, Units, and Logical Thinking

    ERIC Educational Resources Information Center

    Jones, M. Gail; Gardner, Grant E.; Taylor, Amy R.; Forrester, Jennifer H.; Andre, Thomas

    2012-01-01

    This study examined students' accuracy of measurement estimation for linear distances, different units of measure, task context, and the relationship between accuracy estimation and logical thinking. Middle school students completed a series of tasks that included estimating the length of various objects in different contexts and completed a test…

  3. Exploring a Three-Level Model of Calibration Accuracy

    ERIC Educational Resources Information Center

    Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.

    2014-01-01

    We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…

  4. Developing a Weighted Measure of Speech Sound Accuracy

    PubMed Central

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2010-01-01

    Purpose The purpose is to develop a system for numerically quantifying a speaker’s phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners’ judgments of severity of a child’s speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddlers’ speech over time. Conclusion Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children’s speech. PMID:20699344
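
    Because the abstract describes the scoring idea but not its arithmetic, here is a toy sketch of a transcription-based weighted accuracy score in the WSSA spirit; the error categories and weights are hypothetical, not the published WSSA scheme.

        # Toy weighted scoring (hypothetical categories and weights, not the
        # published WSSA values): near-misses cost less than omissions.
        WEIGHTS = {"correct": 0.0, "distortion": 0.25,
                   "substitution": 0.75, "omission": 1.0}

        def weighted_accuracy(coded_sounds):
            """coded_sounds: one category label per target speech sound."""
            penalty = sum(WEIGHTS[c] for c in coded_sounds)
            return 1.0 - penalty / len(coded_sounds)

        print(weighted_accuracy(["correct", "distortion", "substitution", "correct"]))
        # -> 0.75: partial credit for near-misses instead of binary right/wrong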

  5. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    ERIC Educational Resources Information Center

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  6. 29 CFR 500.77 - Accuracy of information furnished.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Accuracy of information furnished. 500.77 Section 500.77 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR REGULATIONS MIGRANT AND SEASONAL AGRICULTURAL WORKER PROTECTION Worker Protections Employment Information Furnished § 500.77 Accuracy of information...

  7. 40 CFR 86.1338-84 - Emission measurement accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Emission measurement accuracy. 86.1338... Procedures § 86.1338-84 Emission measurement accuracy. (a) Measurement accuracy—Bag sampling. (1) Good... using the calibration data obtained with both calibration gases. (b) Measurement...

  8. Dissociating Appraisals of Accuracy and Recollection in Autobiographical Remembering

    ERIC Educational Resources Information Center

    Scoboria, Alan; Pascal, Lisa

    2016-01-01

    Recent studies of metamemory appraisals implicated in autobiographical remembering have established distinct roles for judgments of occurrence, recollection, and accuracy for past events. In studies involving everyday remembering, measures of recollection and accuracy correlate highly (>.85). Thus although their measures are structurally…

  9. A Probability Model of Accuracy in Deception Detection Experiments.

    ERIC Educational Resources Information Center

    Park, Hee Sun; Levine, Timothy R.

    2001-01-01

    Extends the recent work on the veracity effect in deception detection. Explains the probabilistic nature of a receiver's accuracy in detecting deception and analyzes a receiver's detection of deception in terms of set theory and conditional probability. Finds that accuracy is shown to be a function of the relevant conditional probability and the…

  10. 31 CFR 10.22 - Diligence as to accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 1 2014-07-01 2014-07-01 false Diligence as to accuracy. 10.22... § 10.22 Diligence as to accuracy. (a) In general. A practitioner must exercise due diligence— (1) In... to any matter administered by the Internal Revenue Service. (b) Reliance on others. Except...

  11. 31 CFR 10.22 - Diligence as to accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Diligence as to accuracy. 10.22... § 10.22 Diligence as to accuracy. (a) In general. A practitioner must exercise due diligence— (1) In... to any matter administered by the Internal Revenue Service. (b) Reliance on others. Except...

  12. 31 CFR 10.22 - Diligence as to accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance: Treasury 1 2011-07-01 2011-07-01 false Diligence as to accuracy. 10.22... § 10.22 Diligence as to accuracy. (a) In general. A practitioner must exercise due diligence— (1) In... to any matter administered by the Internal Revenue Service. (b) Reliance on others. Except...

  13. 31 CFR 10.22 - Diligence as to accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance: Treasury 1 2012-07-01 2012-07-01 false Diligence as to accuracy. 10.22... § 10.22 Diligence as to accuracy. (a) In general. A practitioner must exercise due diligence— (1) In... to any matter administered by the Internal Revenue Service. (b) Reliance on others. Except...

  14. 31 CFR 10.22 - Diligence as to accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance: Treasury 1 2013-07-01 2013-07-01 false Diligence as to accuracy. 10.22... § 10.22 Diligence as to accuracy. (a) In general. A practitioner must exercise due diligence— (1) In... to any matter administered by the Internal Revenue Service. (b) Reliance on others. Except...

  15. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  16. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  17. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  18. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  19. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  20. 10 CFR 60.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES General Provisions § 60.10 Completeness and accuracy of information. (a)...

  1. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  2. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  3. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  4. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  5. 10 CFR 76.9 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 76.9 Section 76.9 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.9 Completeness and accuracy of information. (a) Information provided to the Commission...

  6. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  7. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  8. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  9. 10 CFR 52.6 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 52.6 Section 52.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS General Provisions § 52.6 Completeness and accuracy of information. (a)...

  10. DESIGN AND ANALYSIS FOR THEMATIC MAP ACCURACY ASSESSMENT: FUNDAMENTAL PRINCIPLES

    EPA Science Inventory

    Before being used in scientific investigations and policy decisions, thematic maps constructed from remotely sensed data should be subjected to a statistically rigorous accuracy assessment. The three basic components of an accuracy assessment are: 1) the sampling design used to s...

  11. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as the absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as the absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, and from inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
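
    To make the "absolute mean plus 3 sigma" criterion concrete, the toy Monte Carlo below combines a few independent error sources and checks the result against the 1.7 nT requirement; the error-source magnitudes are invented for illustration and are not GOES-R values.

        # Toy Monte Carlo sketch of the accuracy check described above (mean +
        # 3*sigma against the 1.7 nT requirement). All magnitudes are invented.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        spacecraft_field = rng.normal(0.0, 0.3, n)   # nT, external contamination
        misalignment     = rng.normal(0.0, 0.2, n)   # nT-equivalent
        zero_offset      = rng.normal(0.1, 0.25, n)  # nT, slowly drifting bias
        scale_factor     = rng.normal(0.0, 0.15, n)  # nT at a 100 nT quiet field

        total = spacecraft_field + misalignment + zero_offset + scale_factor
        accuracy = abs(total.mean()) + 3 * total.std()
        print(f"quiet-time accuracy: {accuracy:.2f} nT (requirement: 1.7 nT)")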

  12. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.

  13. High accuracy autonomous navigation using the global positioning system (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be improved to 2 m if corrections are provided by the GPS wide area augmentation system.

  14. Accuracy testing of electric groundwater-level measurement tapes

    USGS Publications Warehouse

    Jelinski, Jim; Clayton, Christopher S.; Fulford, Janice M.

    2015-01-01

    The accuracy tests demonstrated that none of the electric-tape models tested consistently met the suggested USGS accuracy of ±0.01 ft. The test data show that the tape models in the study should give a water-level measurement that is accurate to roughly ±0.05 ft per 100 ft without additional calibration. To meet USGS accuracy guidelines, the electric-tape models tested will need to be individually calibrated. Specific conductance also plays a part in tape accuracy. The probes will not work in water with specific conductance values near zero, and the accuracy of one probe was unreliable in very high conductivity water (10,000 microsiemens per centimeter).

  15. Accuracy of endoscopic ultrasonography for diagnosing ulcerative early gastric cancers.

    PubMed

    Park, Jin-Seok; Kim, Hyungkil; Bang, Byongwook; Kwon, Kyesook; Shin, Youngwoon

    2016-07-01

    Although endoscopic ultrasonography (EUS) is the first-choice imaging modality for predicting the invasion depth of early gastric cancer (EGC), the prediction accuracy of EUS is significantly decreased when EGC is combined with ulceration. The aim of the present study was to compare the accuracy of EUS and conventional endoscopy (CE) for determining the depth of EGC. In addition, the various clinicopathologic factors affecting the diagnostic accuracy of EUS, with a particular focus on endoscopic ulcer shapes, were evaluated. We retrospectively reviewed data from 236 consecutive patients with ulcerative EGC. All patients underwent EUS for estimating tumor invasion depth, followed by either curative surgery or endoscopic treatment. The diagnostic accuracy of EUS and CE was evaluated by comparison with the final histologic results of the resected specimens. The correlation between the accuracy of EUS and the characteristics of EGC (tumor size, histology, location in stomach, tumor invasion depth, and endoscopic ulcer shape) was analyzed. Endoscopic ulcer shapes were classified into 3 groups: definite ulcer, superficial ulcer, and ill-defined ulcer. The overall accuracy of EUS and CE for predicting the invasion depth in ulcerative EGC was 68.6% and 55.5%, respectively. Of the 236 patients, 36 were classified as definite ulcers, 98 as superficial ulcers, and 102 as ill-defined ulcers. In univariate analysis, EUS accuracy was associated with invasion depth (P = 0.023), tumor size (P = 0.034), and endoscopic ulcer shape (P = 0.001). In multivariate analysis, there was a significant association between superficial ulcer in CE and EUS accuracy (odds ratio: 2.977; 95% confidence interval: 1.255-7.064; P = 0.013). The accuracy of EUS for determining tumor invasion depth in ulcerative EGC was superior to that of CE. In addition, ulcer shape was an important factor that affected EUS accuracy. PMID:27472672

  16. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    PubMed Central

    Yilmaz, Nihat; Inan, Onur

    2013-01-01

    This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines (SVMs) for classification. The purpose of this paper is to test the effect of eliminating unimportant and obsolete features of the datasets on the success of the classification, using the SVM classifier. The approach is applied to the diagnosis of liver diseases and diabetes, which are commonly observed conditions that reduce quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders, and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained with the help of the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other results attained and seems very promising for pattern recognition applications. PMID:23983632
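
    The evaluation protocol described (score each candidate feature subset with an SVM under 10-fold cross-validation) can be sketched as below; a full ABC search would repeatedly call the scoring function on candidate masks, and the dataset and example mask here are stand-ins rather than the study's setup.

        # Sketch of the evaluation loop only: score a candidate feature subset
        # with an SVM and 10-fold cross-validation. An ABC optimizer would call
        # score_subset() on each candidate mask it generates.
        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)   # stand-in UCI-style dataset

        def score_subset(mask):
            model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
            return cross_val_score(model, X[:, mask], y, cv=10).mean()

        mask = np.zeros(X.shape[1], dtype=bool)
        mask[:10] = True                             # arbitrary example subset
        print(f"10-fold CV accuracy: {score_subset(mask):.4f}")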

  17. Gesture recognition for smart home applications using portable radar sensors.

    PubMed

    Wan, Qian; Li, Yiran; Li, Changzhi; Pal, Ranadip

    2014-01-01

    In this article, we consider the design of a human gesture recognition system based on pattern recognition of signatures from a portable smart radar sensor. Powered by AAA batteries, the smart radar sensor operates in the 2.4 GHz industrial, scientific and medical (ISM) band. We analyzed the feature space using principal components and application-specific time and frequency domain features extracted from radar signals for two different sets of gestures. We illustrate that a nearest neighbor based classifier can achieve greater than 95% accuracy for multiclass classification using 10-fold cross-validation when features are extracted based on magnitude differences and Doppler shifts, as compared to features extracted through orthogonal transformations. The reported results illustrate the potential of intelligent radars integrated with a pattern recognition system for high accuracy smart home and health monitoring purposes. PMID:25571464
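
    A minimal sketch of the evaluation reported here (nearest-neighbor classification under 10-fold cross-validation) follows; the radar-derived features are simulated, whereas real features would be magnitude differences and Doppler shifts extracted from the returns.

        # Minimal sketch (assumed feature pipeline): a nearest-neighbor classifier
        # evaluated with 10-fold cross-validation on simulated gesture features.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(42)
        n_gestures, per_class, n_features = 4, 50, 6
        X = np.vstack([rng.normal(loc=3 * g, size=(per_class, n_features))
                       for g in range(n_gestures)])      # separable toy classes
        y = np.repeat(np.arange(n_gestures), per_class)

        scores = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=10)
        print(f"10-fold CV accuracy: {scores.mean():.3f}")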

  18. Dissociating appraisals of accuracy and recollection in autobiographical remembering.

    PubMed

    Scoboria, Alan; Pascal, Lisa

    2016-07-01

    Recent studies of metamemory appraisals implicated in autobiographical remembering have established distinct roles for judgments of occurrence, recollection, and accuracy for past events. In studies involving everyday remembering, measures of recollection and accuracy correlate highly (>.85). Thus although their measures are structurally distinct, such high correspondence might suggest conceptual redundancy. This article examines whether recollection and accuracy dissociate when studying different types of autobiographical event representations. In Study 1, 278 participants described a believed memory, a nonbelieved memory, and a believed-not-remembered event and rated each on occurrence, recollection, accuracy, and related covariates. In Study 2, 876 individuals described and rated 1 of these events, as well as an event about which they were uncertain about their memory. Confirmatory structural equation modeling indicated that the measurement dissociation between occurrence, recollection and accuracy held across all types of events examined. Relative to believed memories, the relationship between recollection and belief in accuracy was meaningfully lower for the other event types. These findings support the claim that recollection and accuracy arise from distinct underlying mechanisms. (PsycINFO Database Record) PMID:26866659

  19. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    PubMed Central

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-01-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved by about 8% by the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs. PMID:27338408
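
    As a heavily simplified illustration of the filtering idea (the paper's filter has 51 states; this sketch has one), the code below estimates a single calibration parameter, a constant gyro bias, as a Kalman filter state from noisy turntable measurements; all rates and noise levels are invented.

        # One-state Kalman filter sketch: estimate a constant gyro bias b from
        # rate readings taken while a turntable spins at a known rate.
        import numpy as np

        rng = np.random.default_rng(7)
        true_rate, true_bias = 10.0, 0.05        # deg/s commanded, deg/s bias
        z = true_rate + true_bias + rng.normal(0, 0.1, 500)  # gyro readings

        b_hat, P, R = 0.0, 1.0, 0.1 ** 2         # state, covariance, meas. noise
        for zk in z:                             # constant state: no predict step
            K = P / (P + R)                      # Kalman gain
            b_hat += K * ((zk - true_rate) - b_hat)
            P *= (1 - K)

        print(f"estimated bias: {b_hat:.4f} deg/s (true: {true_bias})")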

  20. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-01-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10(-6)°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved by about 8% by the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs. PMID:27338408

  1. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy to use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate the DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetry process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy is estimated by comparison with GNSS surveys. Two parameters are tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with a high vertical accuracy (< 10 cm). The georeferencing of the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank did not allow the accuracy to be improved when the spatial resolution of the images was decreased.
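
    The vertical-accuracy comparison against GNSS checkpoints boils down to bias and RMSE over the checkpoint set, as in this sketch; the elevation values are stand-ins for real survey data.

        # Sketch of the accuracy check implied above: compare DSM elevations
        # sampled at GNSS checkpoints with the surveyed elevations.
        import numpy as np

        dsm_z  = np.array([2.31, 1.98, 2.75, 3.10, 2.54])  # DSM elevation (m)
        gnss_z = np.array([2.27, 2.03, 2.69, 3.18, 2.49])  # GNSS elevation (m)

        err  = dsm_z - gnss_z
        bias = err.mean()
        rmse = np.sqrt((err ** 2).mean())
        print(f"bias = {bias:+.3f} m, RMSE = {rmse:.3f} m")  # target here: < 0.10 m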

  2. The efficacy of bedside chest ultrasound: from accuracy to outcomes.

    PubMed

    Hew, Mark; Tay, Tunn Ren

    2016-09-01

    For many respiratory physicians, point-of-care chest ultrasound is now an integral part of clinical practice. The diagnostic accuracy of ultrasound to detect abnormalities of the pleura, the lung parenchyma and the thoracic musculoskeletal system is well described. However, the efficacy of a test extends beyond just diagnostic accuracy. The true value of a test depends on the degree to which diagnostic accuracy efficacy influences decision-making efficacy, and the subsequent extent to which this impacts health outcome efficacy. We therefore reviewed the demonstrable levels of test efficacy for bedside ultrasound of the pleura, lung parenchyma and thoracic musculoskeletal system. For bedside ultrasound of the pleura, there is evidence supporting diagnostic accuracy efficacy, decision-making efficacy and health outcome efficacy, predominantly in guiding pleural interventions. For the lung parenchyma, chest ultrasound has an impact on diagnostic accuracy and decision-making for patients presenting with acute respiratory failure or breathlessness, but there are no data as yet on actual health outcomes. For ultrasound of the thoracic musculoskeletal system, there is robust evidence only for diagnostic accuracy efficacy. We therefore outline avenues to further validate bedside chest ultrasound beyond diagnostic accuracy, with an emphasis on confirming enhanced health outcomes. PMID:27581823

  3. Art and accuracy: the drawing ability of idiot-savants.

    PubMed

    Hermelin, B; O'Connor, N

    1990-01-01

    The accuracy and the artistic merit of drawings produced by graphically gifted idiot-savants and by artistically able normal children were investigated in various conditions. Drawings had to be executed when a three- or two-dimensional model of the scene to be drawn was in view, or when it had to be remembered or drawn from another viewpoint. It was found that overall accuracy was better for the normal than for the mentally handicapped subjects. In contrast, ratings for artistic merit did not differentiate the groups. It is concluded that while the accuracy of drawings may be related to intelligence, the artistic quality of the graphic production is not. PMID:2312650

  4. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given. A listing of the computer program written to implement these techniques is given. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is given. The results of matrices from the mapping effort of the San Juan National Forest are given. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given. A proposed method for determining the reliability of change detection between two maps of the same area produced at different times is given.
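
    One of the standard error-matrix summaries used in this accuracy-assessment literature is overall accuracy together with the kappa (KHAT) statistic; the sketch below computes both from an invented 3-class confusion matrix.

        # Error-matrix summary for thematic map accuracy assessment: overall
        # accuracy and kappa (KHAT). The 3-class matrix is invented.
        import numpy as np

        cm = np.array([[65,  4,  2],   # rows: map class, columns: reference class
                       [ 6, 81,  5],
                       [ 3,  7, 60]])

        n = cm.sum()
        overall = np.trace(cm) / n
        chance  = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
        kappa   = (overall - chance) / (1 - chance)
        print(f"overall accuracy = {overall:.3f}, kappa = {kappa:.3f}")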

  5. Position determination accuracy from the microwave landing system

    NASA Technical Reports Server (NTRS)

    Cicolani, L. S.

    1973-01-01

    Analysis and results are given for the position determination accuracy obtainable from the microwave landing guidance system. Siting arrangements, coverage volumes, and accuracy standards for the azimuth, elevation, and range functions of the microwave system are discussed. Results are given for the complete coverage of the systems and are related to flight operational requirements for position estimation during flare, glide slope, and general terminal area approaches. Range rate estimation from range data is also analyzed. The distance measuring equipment accuracy required to meet the range rate estimation standards is determined, and a method of optimizing the range rate estimate is also given.

  6. Air traffic control surveillance accuracy and update rate study

    NASA Technical Reports Server (NTRS)

    Craigie, J. H.; Morrison, D. D.; Zipper, I.

    1973-01-01

    The results of an air traffic control surveillance accuracy and update rate study are presented. The objective of the study was to establish quantitative relationships between the surveillance accuracies, update rates, and the communication load associated with the tactical control of aircraft for conflict resolution. The relationships are established for typical types of aircraft, phases of flight, and types of airspace. Specific cases are analyzed to determine the surveillance accuracies and update rates required to prevent two aircraft from approaching each other too closely.

  7. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system are used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  8. Prediction of 10-fold coordinated TiO2 and SiO2 structures at multimegabar pressures

    PubMed Central

    Lyle, Matthew J.; Pickard, Chris J.; Needs, Richard J.

    2015-01-01

    We predict by first-principles methods a phase transition in TiO2 at 6.5 Mbar from the Fe2P-type polymorph to a ten-coordinated structure with space group I4/mmm. This is the first report, to our knowledge, of the pressure-induced phase transition to the I4/mmm structure among all dioxide compounds. The I4/mmm structure was found to be up to 3.3% denser across all pressures investigated. Significant differences were found in the electronic properties of the two structures, and the metallization of TiO2 was calculated to occur concomitantly with the phase transition to I4/mmm. The implications of our findings were extended to SiO2, and an analogous Fe2P-type to I4/mmm transition was found to occur at 10 TPa. This is consistent with the lower-pressure phase transitions of TiO2, which are well-established models for the phase transitions in other AX2 compounds, including SiO2. As in TiO2, the transition to I4/mmm corresponds to the metallization of SiO2. This transformation is in the pressure range reached in the interiors of recently discovered extrasolar planets and calls for a reformulation of the equations of state used to model them. PMID:25991859

  9. Prediction of 10-fold coordinated TiO2 and SiO2 structures at multimegabar pressures.

    PubMed

    Lyle, Matthew J; Pickard, Chris J; Needs, Richard J

    2015-06-01

    We predict by first-principles methods a phase transition in TiO2 at 6.5 Mbar from the Fe2P-type polymorph to a ten-coordinated structure with space group I4/mmm. This is the first report, to our knowledge, of the pressure-induced phase transition to the I4/mmm structure among all dioxide compounds. The I4/mmm structure was found to be up to 3.3% denser across all pressures investigated. Significant differences were found in the electronic properties of the two structures, and the metallization of TiO2 was calculated to occur concomitantly with the phase transition to I4/mmm. The implications of our findings were extended to SiO2, and an analogous Fe2P-type to I4/mmm transition was found to occur at 10 TPa. This is consistent with the lower-pressure phase transitions of TiO2, which are well-established models for the phase transitions in other AX2 compounds, including SiO2. As in TiO2, the transition to I4/mmm corresponds to the metallization of SiO2. This transformation is in the pressure range reached in the interiors of recently discovered extrasolar planets and calls for a reformulation of the equations of state used to model them. PMID:25991859

  10. The H50Q mutation induces a 10-fold decrease in the solubility of α-synuclein.

    PubMed

    Porcari, Riccardo; Proukakis, Christos; Waudby, Christopher A; Bolognesi, Benedetta; Mangione, P Patrizia; Paton, Jack F S; Mullin, Stephen; Cabrita, Lisa D; Penco, Amanda; Relini, Annalisa; Verona, Guglielmo; Vendruscolo, Michele; Stoppini, Monica; Tartaglia, Gian Gaetano; Camilloni, Carlo; Christodoulou, John; Schapira, Anthony H V; Bellotti, Vittorio

    2015-01-23

    The mechanism of the conversion of α-synuclein from its intrinsically disordered monomeric state into the fibrillar cross-β aggregates characteristically present in Lewy bodies is largely unknown. The investigation of α-synuclein variants causative of familial forms of Parkinson disease can provide unique insights into the conditions that promote or inhibit aggregate formation. It has been shown recently that a newly identified pathogenic mutation of α-synuclein, H50Q, aggregates faster than the wild-type. We investigate here its aggregation propensity by using a sequence-based prediction algorithm, NMR chemical shift analysis of secondary structure populations in the monomeric state, and determination of thermodynamic stability of the fibrils. Our data show that the H50Q mutation induces only a small increment in polyproline II structure around the site of the mutation and a slight increase in the overall aggregation propensity. We also find, however, that the H50Q mutation strongly stabilizes α-synuclein fibrils by 5.0 ± 1.0 kJ mol(-1), thus increasing the supersaturation of monomeric α-synuclein within the cell, and strongly favors its aggregation process. We further show that wild-type α-synuclein can decelerate the aggregation kinetics of the H50Q variant in a dose-dependent manner when coaggregating with it. These last findings suggest that the precise balance of α-synuclein synthesized from the wild-type and mutant alleles may influence the natural history and heterogeneous clinical phenotype of Parkinson disease. PMID:25505181

  11. A Monte Carlo Comparison of Measures of Relative and Absolute Monitoring Accuracy

    ERIC Educational Resources Information Center

    Nietfeld, John L.; Enders, Craig K; Schraw, Gregory

    2006-01-01

    Researchers studying monitoring accuracy currently use two different indexes to estimate accuracy: relative accuracy and absolute accuracy. The authors compared the distributional properties of two measures of monitoring accuracy using Monte Carlo procedures that fit within these categories. They manipulated the accuracy of judgments (i.e., chance…
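
    To ground the distinction drawn in this record, the sketch below computes one index from each family: absolute accuracy as the mean |confidence - outcome| and relative accuracy as a Goodman-Kruskal gamma over item pairs; the judgment data are invented.

        # Illustrative indices of monitoring accuracy: absolute accuracy
        # (calibration) and relative accuracy (gamma). Data are invented.
        import numpy as np

        conf    = np.array([0.9, 0.6, 0.8, 0.4, 0.7, 0.3])  # judged chance correct
        correct = np.array([1,   0,   1,   0,   1,   1  ])  # actual outcomes

        absolute_accuracy = np.abs(conf - correct).mean()   # 0 = perfect calibration

        concordant = discordant = 0
        for i in range(len(conf)):
            for j in range(i + 1, len(conf)):
                s = (conf[i] - conf[j]) * (correct[i] - correct[j])
                concordant += s > 0                         # tied pairs are skipped
                discordant += s < 0
        gamma = (concordant - discordant) / (concordant + discordant)
        print(f"absolute = {absolute_accuracy:.2f}, gamma = {gamma:.2f}")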

  12. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2-minute motion trials (2MT) and 12-minute multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for initial Inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same Inertial frame. Conclusions Mean accuracies obtained under the Gimbal table sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvements in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their
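
    The two error notions used here become concrete with quaternions: absolute accuracy compares a module's orientation against the optical gold standard, while relative accuracy tracks the orientation between two modules over time. A minimal sketch of the angular-difference computation (unit quaternions in [w, x, y, z] order are an assumption; this is not the authors' code):

        import numpy as np

        def quat_conj(q):
            w, x, y, z = q
            return np.array([w, -x, -y, -z])

        def quat_mul(a, b):
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return np.array([
                w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2,
            ])

        def angular_error_deg(q_a, q_b):
            """Smallest rotation angle taking orientation q_a to q_b, in degrees."""
            q_rel = quat_mul(quat_conj(q_a), q_b)
            return np.degrees(2.0 * np.arccos(min(1.0, abs(q_rel[0]))))

        # Absolute accuracy at one time step: AHRS module vs optical reference.
        q_ahrs = np.array([0.999, 0.010, 0.030, 0.020]); q_ahrs /= np.linalg.norm(q_ahrs)
        q_opti = np.array([1.0, 0.0, 0.0, 0.0])
        print("absolute error:", angular_error_deg(q_ahrs, q_opti), "deg")

    Relative accuracy would apply the same angular_error_deg to pairs of AHRS modules and track the drift of that angle over the trial.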

  13. Hydraulic servo system increases accuracy in fatigue testing

    NASA Technical Reports Server (NTRS)

    Dixon, G. V.; Kibler, K. S.

    1967-01-01

    Hydraulic servo system increases accuracy in applying fatigue loading to a specimen under test. An error sensing electronic control loop, coupled to the hydraulic proportional closed loop cyclic force generator, provides an accurately controlled peak force to the specimen.

  14. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

    An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Thus, some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate a uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.
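
    The point that polynomial-based schemes are exact at the nodes but lose accuracy inside grid cells is easy to see numerically. A small illustration (not the paper's analysis):

        import numpy as np

        # Interpolate sin(x) with a degree-6 polynomial on 7 uniform nodes.
        nodes = np.linspace(0.0, 2.0 * np.pi, 7)
        coeffs = np.polyfit(nodes, np.sin(nodes), deg=6)

        # The error vanishes (to rounding) at the nodes...
        print("max error at nodes:",
              np.max(np.abs(np.polyval(coeffs, nodes) - np.sin(nodes))))

        # ...but is many orders of magnitude larger at the cell midpoints.
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        print("max error at midpoints:",
              np.max(np.abs(np.polyval(coeffs, mids) - np.sin(mids))))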

  15. Parenting and adolescents' accuracy in perceiving parental values.

    PubMed

    Knafo, Ariel; Schwartz, Shalom H

    2003-01-01

    What determines adolescents' accuracy in perceiving parental values? The current study examined potential predictors including parental value communication, family value agreement, and parenting styles. In the study, 547 Israeli adolescents (aged 16 to 18) of diverse socioeconomic backgrounds participated with their parents. Adolescents reported the values they perceive their parents want them to hold. Parents reported their socialization values. Accuracy in perceiving parents' overall value system correlated positively with parents' actual and perceived value agreement and perceived parental warmth and responsiveness, but negatively with perceived value conflict, indifferent parenting, and autocratic parenting in all gender compositions of parent-child dyads. Other associations varied by dyad type. Findings were similar for predicting accuracy in perceiving two specific values: tradition and hedonism. The article discusses implications for the processes that underlie accurate perception, gender differences, and other potential influences on accuracy in value perception. PMID:12705575

  16. Using satellite data to increase accuracy of PMF calculations

    SciTech Connect

    Mettel, M.C.

    1992-03-01

    The accuracy of a flood severity estimate depends on the data used. The more detailed and precise the data, the more accurate the estimate. Earth observation satellites gather detailed data for determining the probable maximum flood at hydropower projects.

  17. Vibrational Spectroscopy of HD⁺ with 2-ppb Accuracy

    SciTech Connect

    Koelemeij, J. C. J.; Roth, B.; Wicht, A.; Ernsting, I.; Schiller, S.

    2007-04-27

    By measurement of the frequency of a vibrational overtone transition in the molecular hydrogen ion HD⁺, we demonstrate the first optical spectroscopy of trapped molecular ions with submegahertz accuracy. We use a diode laser, locked to a stable frequency comb, to perform resonance-enhanced multiphoton dissociation spectroscopy on sympathetically cooled HD⁺ ions at 50 mK. The achieved 2-ppb relative accuracy is a factor of 150 higher than previous results for HD⁺, and the measured transition frequency agrees well with recent high-accuracy ab initio calculations, which include high-order quantum electrodynamic effects. We also show that our method bears potential for achieving considerably higher accuracy and may, if combined with slightly improved theoretical calculations, lead to a new and improved determination of the electron-proton mass ratio.
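
    The relation between a relative accuracy in parts per billion and an absolute frequency uncertainty is simple arithmetic; taking a round 2×10^14 Hz as an assumed order of magnitude for an optical vibrational overtone (the exact frequency is not quoted in the abstract), 2 ppb lands at the sub-megahertz level stated above:

        # Convert a relative accuracy in ppb to an absolute frequency uncertainty.
        f_transition = 2.0e14    # Hz, assumed order-of-magnitude optical frequency
        rel_accuracy = 2.0e-9    # 2 ppb, from the abstract

        delta_f = rel_accuracy * f_transition
        print(f"absolute uncertainty: {delta_f / 1e6:.1f} MHz")   # 0.4 MHz, submegahertz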

  18. The Geometric Accuracy Validation of the ZY-3 Mapping Satellite

    NASA Astrophysics Data System (ADS)

    Gao, X.; Tang, X.; Zhang, G.; Zhu, X.

    2013-05-01

    ZiYuan-3 (ZY-3) is the first civilian high-resolution stereo mapping satellite of China. The satellite's objective is oriented towards plotting 1:50,000 and 1:25,000 topographic maps. This article presents a rigorous image geometry model and a Rational Function Model (RFM) for the ZY-3 mapping satellite. In addition, it uses ZY-3 imagery over regions of flatlands, hills and mountains for a block adjustment experiment. Different ground control points are selected and the accuracy is validated by check points; Digital Surface Models (DSM) and Digital Orthophoto Maps (DOM) are generated, and their accuracy is also validated by check points. The experiments reveal that the planar accuracy of the DOM and the vertical accuracy of the DSM are better than 3 m and 2 m, respectively, and demonstrate the effectiveness of the ZY-3 mapping satellite image geometry models.
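
    A Rational Function Model maps normalized ground coordinates to normalized image coordinates as ratios of polynomials. A simplified sketch of the evaluation step; operational RPCs use 20-term cubic polynomials per numerator/denominator, and the coefficients below are made up purely to show the structure:

        import numpy as np

        def poly_terms(P, L, H):
            """A truncated set of RFM polynomial terms in normalized lat/lon/height.
            Operational RPCs use 20 cubic terms; this subset only illustrates the idea."""
            return np.array([1.0, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H])

        def rfm_line(P, L, H, num_coeffs, den_coeffs):
            """Normalized image line coordinate as a ratio of two polynomials."""
            t = poly_terms(P, L, H)
            return np.dot(num_coeffs, t) / np.dot(den_coeffs, t)

        # Made-up coefficients, for illustration only.
        num = np.array([0.01, 0.9, -0.3, 0.05, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0])
        den = np.array([1.0, 0.002, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
        print(rfm_line(P=0.2, L=-0.1, H=0.05, num_coeffs=num, den_coeffs=den))

    Block adjustment then refines such models against ground control points, with independent check points quantifying the residual planar and vertical accuracy.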

  19. Accuracy of Reduced and Extended Thin-Wire Kernels

    SciTech Connect

    Burke, G J

    2008-11-24

    Results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function, illustrated on simple wire structures.

  20. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires the precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy or composition precision). The spatial accuracy of this microscope is evaluated in this paper in the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure, composed of an oxide layer of high electrical permittivity (high-k dielectric constant) that separates the metal gate and the semiconductor channel of a field-effect transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortions. An analytic model is used to interpret inaccuracy in the depth reconstruction of these devices in APT.

  1. Portable, high intensity isotopic neutron source provides increased experimental accuracy

    NASA Technical Reports Server (NTRS)

    Mohr, W. C.; Stewart, D. C.; Wahlgren, M. A.

    1968-01-01

    Small portable, high intensity isotopic neutron source combines twelve curium-americium beryllium sources. This high intensity of neutrons, with a flux which slowly decreases at a known rate, provides for increased experimental accuracy.

  2. On the accuracy of ERS-1 orbit predictions

    NASA Technical Reports Server (NTRS)

    Koenig, Rolf; Li, H.; Massmann, Franz-Heinrich; Raimondo, J. C.; Rajasenan, C.; Reigber, C.

    1993-01-01

    Since the launch of ERS-1, the D-PAF (German Processing and Archiving Facility) has regularly provided orbit predictions for the worldwide SLR (Satellite Laser Ranging) tracking network. The weekly distributed orbital elements are so-called tuned IRVs and tuned SAO elements. The tuning procedure, designed to improve the accuracy of the recovery of the orbit at the stations, is discussed based on numerical results. This shows that tuning of elements is essential for ERS-1 with the currently applied tracking procedures. The orbital elements are updated by daily distributed time bias functions. The generation of the time bias function is explained, and problems and numerical results are presented. The time bias function increases the prediction accuracy considerably. Finally, the quality assessment of ERS-1 orbit predictions is described. The accuracy has been compiled for about 250 days since launch; the average accuracy lies in the range of 50-100 ms and has improved considerably.

  3. Assessing and ensuring GOES-R magnetometer accuracy

    NASA Astrophysics Data System (ADS)

    Carter, Delano; Todirita, Monica; Kronenwetter, Jeffrey; Dahya, Melissa; Chu, Donald

    2016-05-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma error per axis. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma error per axis. Error comes both from outside the magnetometers, e.g., spacecraft fields and misalignments, and from inside, e.g., zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. With the proposed calibration regimen, both suggest that the magnetometer subsystem will meet its accuracy requirements.
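
    The accuracy definitions quoted above (absolute mean plus 3 sigma per axis in quiet conditions, plus 2 sigma during storms) translate directly into a per-axis statistic over measurement errors. A small sketch under those stated definitions, with simulated errors standing in for the mission's Monte Carlo runs:

        import numpy as np

        def per_axis_accuracy(errors_nT, n_sigma):
            """abs(mean) + n_sigma * std for each axis; errors_nT has shape (N, 3)."""
            return np.abs(errors_nT.mean(axis=0)) + n_sigma * errors_nT.std(axis=0)

        rng = np.random.default_rng(0)
        errors = rng.normal(loc=0.1, scale=0.4, size=(10000, 3))   # simulated errors, nT

        quiet = per_axis_accuracy(errors, n_sigma=3)   # quiet-time definition
        storm = per_axis_accuracy(errors, n_sigma=2)   # storm-time definition
        print("quiet:", quiet, "storm:", storm, "requirement: 1.7 nT")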

  4. 40 CFR 91.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... specifications. (1) The dynamometer test stand and other instruments for measurement of engine speed and torque... accuracy. (1) The dynamometer test stand and other instruments for measurement of engine torque and...

  5. 40 CFR 91.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... specifications. (1) The dynamometer test stand and other instruments for measurement of engine speed and torque... accuracy. (1) The dynamometer test stand and other instruments for measurement of engine torque and...

  6. 40 CFR 91.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... specifications. (1) The dynamometer test stand and other instruments for measurement of engine speed and torque... accuracy. (1) The dynamometer test stand and other instruments for measurement of engine torque and...

  7. 40 CFR 91.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... specifications. (1) The dynamometer test stand and other instruments for measurement of engine speed and torque... accuracy. (1) The dynamometer test stand and other instruments for measurement of engine torque and...

  8. Numerical planetary and lunar ephemerides - Present status, precision and accuracies

    NASA Technical Reports Server (NTRS)

    Standish, E. Myles, Jr.

    1986-01-01

    Features of the ephemeris creation process are described with attention given to the equations of motion, the numerical integration, and the least-squares fitting process. Observational data are presented and ephemeris accuracies are estimated. It is believed that radio measurements, VLBI, occultations, and the Space Telescope and Hipparcos will improve ephemerides in the near future. Limitations to accuracy are considered as well as relativity features. The export procedure, by which an outside user may obtain and use the JPL ephemerides, is discussed.

  9. Determining factors for the accuracy of DMRG in chemistry.

    PubMed

    Keller, Sebastian F; Reiher, Markus

    2014-01-01

    The Density Matrix Renormalization Group (DMRG) algorithm has been a rising star for the accurate ab initio exploration of Born-Oppenheimer potential energy surfaces in theoretical chemistry. However, owing to its iterative numerical nature, pitfalls that can affect the accuracy of DMRG energies need to be circumvented. Here, after a brief introduction into this quantum chemical method, we discuss criteria that determine the accuracy of DMRG calculations. PMID:24983596

  10. Noise limitations on monopulse accuracy in a multibeam antenna

    NASA Astrophysics Data System (ADS)

    Loraine, J.; Wallington, J. R.

    A multibeam system allowing target tracking using monopulse processing switched from beamset to beamset is considered. Attention is given to the accuracy of target angular position estimation. An analytical method is used to establish performance limits under low SNR conditions for a multibeam system. It is shown that, in order to achieve accuracies comparable to those of conventional monopulse systems, much higher SNRs are needed.
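
    The SNR dependence can be illustrated with the standard textbook rule of thumb for monopulse angle estimation error, σθ ≈ θ3dB / (km √(2·SNR·n)), where km is the monopulse error slope and n the number of integrated pulses. This generic formula is given only for illustration; the paper establishes its limits analytically for the switched multibeam case:

        import numpy as np

        def monopulse_sigma_deg(theta_3db_deg, snr_db, k_m=1.6, n_pulses=1):
            """Rule-of-thumb RMS angle error of a monopulse estimator (k_m assumed)."""
            snr = 10.0 ** (snr_db / 10.0)
            return theta_3db_deg / (k_m * np.sqrt(2.0 * snr * n_pulses))

        for snr_db in (0, 10, 20):
            print(snr_db, "dB ->",
                  monopulse_sigma_deg(theta_3db_deg=2.0, snr_db=snr_db), "deg")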

  11. A Stable and Conservative Interface Treatment of Arbitrary Spatial Accuracy

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Nordstrom, Jan; Gottlieb, David

    1998-01-01

    Stable and accurate interface conditions are derived for the linear advection-diffusion equation. The conditions are functionally independent of the spatial order of accuracy and rely only on the form of the discrete operator. We focus on high-order finite-difference operators that satisfy the summation-by-parts (SBP) property. We prove that stability is a natural consequence of the SBP operators used in conjunction with the new boundary conditions. In addition, we show that the interface treatments are conservative. New finite-difference operators of spatial accuracy up to sixth order are constructed; these operators satisfy the SBP property. Finite-difference operators are shown to admit design accuracy (pth-order global accuracy) when (p-1)th-order stencil closures are used near the boundaries, provided the physical boundary conditions are implemented to at least pth-order accuracy. Stability and accuracy are demonstrated on the nonlinear Burgers' equation for a twelve-subdomain problem with randomly distributed interfaces.
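
    The SBP property means the derivative operator factors as D = H⁻¹Q with H symmetric positive definite and Q + Qᵀ = diag(−1, 0, …, 0, 1), so the discrete operator mimics integration by parts. A minimal sketch of the classical second-order SBP first-derivative operator, with a numerical check of the property (a standard construction, not the paper's higher-order operators):

        import numpy as np

        def sbp_second_order(n, h):
            """Classical 2nd-order SBP first-derivative operator D = inv(H) @ Q."""
            H = h * np.eye(n)
            H[0, 0] = H[-1, -1] = h / 2.0          # boundary-modified norm matrix
            Q = np.zeros((n, n))
            for i in range(n - 1):                  # antisymmetric interior stencil
                Q[i, i + 1] = 0.5
                Q[i + 1, i] = -0.5
            Q[0, 0] = -0.5
            Q[-1, -1] = 0.5
            return np.linalg.inv(H) @ Q, H, Q

        D, H, Q = sbp_second_order(n=11, h=0.1)

        # SBP check: Q + Q^T should equal diag(-1, 0, ..., 0, 1).
        print(np.allclose(Q + Q.T, np.diag([-1.0] + [0.0] * 9 + [1.0])))

        # Accuracy check: the operator differentiates linear functions exactly.
        x = np.linspace(0.0, 1.0, 11)
        print(np.allclose(D @ x, np.ones_like(x)))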

  12. Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.

    PubMed

    Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L J; Maris, Gunter

    2016-01-01

    We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses. PMID:27167518

  13. Numerical accuracy of mean-field calculations in coordinate space

    NASA Astrophysics Data System (ADS)

    Ryssens, W.; Heenen, P.-H.; Bender, M.

    2015-12-01

    Background: Mean-field methods based on an energy density functional (EDF) are powerful tools used to describe many properties of nuclei in the entirety of the nuclear chart. The accuracy required of energies for nuclear physics and astrophysics applications is of the order of 500 keV and much effort is undertaken to build EDFs that meet this requirement. Purpose: Mean-field calculations have to be accurate enough to preserve the accuracy of the EDF. We study this numerical accuracy in detail for a specific numerical choice of representation for mean-field equations that can accommodate any kind of symmetry breaking. Method: The method that we use is a particular implementation of three-dimensional mesh calculations. Its numerical accuracy is governed by three main factors: the size of the box in which the nucleus is confined, the way numerical derivatives are calculated, and the distance between the points on the mesh. Results: We examine the dependence of the results on these three factors for spherical doubly magic nuclei, neutron-rich ³⁴Ne, the fission barrier of ²⁴⁰Pu, and isotopic chains around Z = 50. Conclusions: Mesh calculations offer the user extensive control over the numerical accuracy of the solution scheme. When appropriate choices for the numerical scheme are made the achievable accuracy is well below the model uncertainties of mean-field methods.
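
    The dependence of accuracy on the mesh spacing can be illustrated with finite-difference derivatives of a smooth, localized function: halving the spacing shrinks the error at the scheme's convergence order. An illustrative convergence study, not the authors' solver:

        import numpy as np

        def laplacian_error(h, L=10.0):
            """Max error of the 3-point Laplacian of a Gaussian on [-L, L], spacing h."""
            x = np.arange(-L, L + h, h)
            f = np.exp(-x**2)
            exact = (4.0 * x**2 - 2.0) * np.exp(-x**2)
            approx = (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / h**2
            return np.max(np.abs(approx - exact)[1:-1])   # ignore wrapped endpoints

        for h in (0.8, 0.4, 0.2, 0.1):
            print(f"h = {h:4.1f}  max error = {laplacian_error(h):.2e}")

    The error drops by roughly a factor of four per halving of h, the signature of a second-order scheme; box size enters once the tails of the function no longer fit in [-L, L].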

  14. Bayesian Estimation of Combined Accuracy for Tests with Verification Bias

    PubMed Central

    Broemeling, Lyle D.

    2011-01-01

    This presentation will emphasize the estimation of the combined accuracy of two or more tests when verification bias is present. Verification bias occurs when some of the subjects are not verified by the gold standard. The approach is Bayesian where the estimation of test accuracy is based on the posterior distribution of the relevant parameter. Accuracy of two combined binary tests is estimated employing either “believe the positive” or “believe the negative” rule, then the true and false positive fractions for each rule are computed for two tests. In order to perform the analysis, the missing at random assumption is imposed, and an interesting example is provided by estimating the combined accuracy of CT and MRI to diagnose lung cancer. The Bayesian approach is extended to two ordinal tests when verification bias is present, and the accuracy of the combined tests is based on the ROC area of the risk function. An example involving mammography with two readers with extreme verification bias illustrates the estimation of the combined test accuracy for ordinal tests. PMID:26859487
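
    The two combination rules have simple consequences for the true and false positive fractions of two binary tests: "believe the positive" calls the combination positive if either test is positive, "believe the negative" only if both are. A sketch under an added conditional-independence assumption, which the Bayesian analysis itself does not require:

        def believe_the_positive(se1, sp1, se2, sp2):
            """Positive if either test is positive (assumes conditional independence)."""
            tpf = 1.0 - (1.0 - se1) * (1.0 - se2)   # combined sensitivity
            fpf = 1.0 - sp1 * sp2                   # combined false positive fraction
            return tpf, fpf

        def believe_the_negative(se1, sp1, se2, sp2):
            """Positive only if both tests are positive."""
            tpf = se1 * se2
            fpf = (1.0 - sp1) * (1.0 - sp2)
            return tpf, fpf

        # Illustrative CT / MRI operating points (made-up numbers).
        print(believe_the_positive(0.80, 0.90, 0.85, 0.88))
        print(believe_the_negative(0.80, 0.90, 0.85, 0.88))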

  15. Collective animal decisions: preference conflict and decision accuracy

    PubMed Central

    Conradt, Larissa

    2013-01-01

    Social animals frequently share decisions that involve uncertainty and conflict. It has been suggested that conflict can enhance decision accuracy. In order to judge the practical relevance of such a suggestion, it is necessary to explore how general such findings are. Using a model, I examine whether conflicts between animals in a group with respect to preferences for avoiding false positives versus avoiding false negatives could, in principle, enhance the accuracy of collective decisions. I found that decision accuracy nearly always peaked when there was maximum conflict in groups in which individuals had different preferences. However, groups with no preferences were usually even more accurate. Furthermore, a relatively slight skew towards more animals with a preference for avoiding false negatives decreased the rate of expected false negatives versus false positives considerably (and vice versa), while resulting in only a small loss of decision accuracy. I conclude that in ecological situations in which decision accuracy is crucial for fitness and survival, animals cannot ‘afford’ preferences with respect to avoiding false positives versus false negatives. When decision accuracy is less crucial, animals might have such preferences. A slight skew in the number of animals with different preferences will result in the group more often avoiding the type of error that the majority of group members prefers to avoid. The model also indicated that knowing the average success rate (‘base rate’) of a decision option can be very misleading, and that animals should ignore such base rates unless further information is available. PMID:24516716
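
    The kind of model described can be sketched as threshold voting: each animal observes a noisy cue and votes "positive" when the cue exceeds its personal threshold (a low threshold expresses a preference for avoiding false negatives, a high one for avoiding false positives), and the group follows the majority. This simulation follows that logic only in spirit; it is an illustrative toy, not the paper's model:

        import numpy as np

        rng = np.random.default_rng(1)

        def group_accuracy(thresholds, n_trials=20000, signal=1.0, noise=1.0):
            """Fraction of trials in which a majority vote matches the true state."""
            n = len(thresholds)
            truth = rng.integers(0, 2, size=n_trials)            # 0/1 world state
            cues = truth[:, None] * signal + rng.normal(0.0, noise, size=(n_trials, n))
            votes = cues > np.asarray(thresholds)[None, :]       # per-animal decisions
            majority = votes.sum(axis=1) > n / 2
            return np.mean(majority == truth.astype(bool))

        mixed = [0.2] * 6 + [0.8] * 5   # slight skew towards avoiding false negatives
        uniform = [0.5] * 11            # no preferences
        print("mixed preferences:", group_accuracy(mixed))
        print("no preferences:   ", group_accuracy(uniform))

    Consistent with the abstract, the no-preference group comes out more accurate in this toy setting, while the skewed group trades a little accuracy for fewer of the errors its majority dislikes.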

  16. Knowing your own heart: distinguishing interoceptive accuracy from interoceptive awareness.

    PubMed

    Garfinkel, Sarah N; Seth, Anil K; Barrett, Adam B; Suzuki, Keisuke; Critchley, Hugo D

    2015-01-01

    Interoception refers to the sensing of internal bodily changes. Interoception interacts with cognition and emotion, making measurement of individual differences in interoceptive ability broadly relevant to neuropsychology. However, inconsistency in how interoception is defined and quantified led to a three-dimensional model. Here, we provide empirical support for dissociation between dimensions of: (1) interoceptive accuracy (performance on objective behavioural tests of heartbeat detection), (2) interoceptive sensibility (self-evaluated assessment of subjective interoception, gauged using interviews/questionnaires) and (3) interoceptive awareness (metacognitive awareness of interoceptive accuracy, e.g. confidence-accuracy correspondence). In a normative sample (N=80), all three dimensions were distinct and dissociable. Interoceptive accuracy was only partly predicted by interoceptive awareness and interoceptive sensibility. Significant correspondence between dimensions emerged only within the sub-group of individuals with greatest interoceptive accuracy. These findings set the context for defining how the relative balance of accuracy, sensibility and awareness dimensions explain cognitive, emotional and clinical associations of interoceptive ability. PMID:25451381
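
    The three dimensions invite straightforward operationalizations: accuracy from an objective heartbeat test, sensibility from questionnaire self-report, and awareness as the within-subject correspondence between trial confidence and trial accuracy. A hedged sketch using the common Schandry-style heartbeat-counting score (the paper's behavioural tests include heartbeat detection; the score below is one standard choice, not necessarily theirs):

        import numpy as np

        def counting_accuracy(recorded, counted):
            """Schandry-style score: mean of 1 - |recorded - counted| / recorded."""
            recorded = np.asarray(recorded, dtype=float)
            counted = np.asarray(counted, dtype=float)
            return np.mean(1.0 - np.abs(recorded - counted) / recorded)

        def awareness(confidence, trial_accuracy):
            """Metacognitive awareness as confidence-accuracy correlation."""
            return np.corrcoef(confidence, trial_accuracy)[0, 1]

        # Illustrative trials: true beat counts, reported counts, and confidence.
        recorded = [35, 45, 55, 25, 40]
        counted = [30, 44, 48, 25, 33]
        conf = [0.4, 0.9, 0.5, 0.8, 0.3]
        per_trial = 1.0 - np.abs(np.array(recorded) - np.array(counted)) / np.array(recorded)
        print(counting_accuracy(recorded, counted), awareness(conf, per_trial))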

  17. Distinguishing Fast and Slow Processes in Accuracy - Response Time Data

    PubMed Central

    Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L. J.; Maris, Gunter

    2016-01-01

    We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two ‘one-process’ models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a ‘two-process’ model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses. PMID:27167518

  18. Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1997-01-01

    An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. An expression is developed for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates when the output residuals are colored. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle. As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, whereas conventional parameter accuracy measures were optimistic.
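
    The core issue, that conventional standard errors are too optimistic when residuals are colored, is easy to reproduce: fit a model by ordinary least squares with AR(1) noise and compare the naive standard error against the true scatter of the estimates over repeated experiments. An illustrative Monte Carlo, not the paper's corrected estimator:

        import numpy as np

        rng = np.random.default_rng(2)
        n, n_runs, rho = 200, 2000, 0.9
        t = np.linspace(0.0, 1.0, n)
        X = np.column_stack([np.ones(n), t])          # simple linear model

        slopes, naive_ses = [], []
        for _ in range(n_runs):
            e = np.zeros(n)                           # AR(1) "colored" residuals
            for k in range(1, n):
                e[k] = rho * e[k - 1] + rng.normal(0.0, 0.1)
            y = 1.0 + 2.0 * t + e
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            r = y - X @ beta
            sigma2 = r @ r / (n - 2)
            cov = sigma2 * np.linalg.inv(X.T @ X)     # white-noise covariance formula
            slopes.append(beta[1])
            naive_ses.append(np.sqrt(cov[1, 1]))

        print("naive SE (mean):", np.mean(naive_ses))
        print("actual scatter: ", np.std(slopes))     # substantially larger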

  19. Accuracy of GIPSY PPP from a denser network

    NASA Astrophysics Data System (ADS)

    Gokhan Hayal, Adem; Ugur Sanli, Dogan

    2015-04-01

    Researchers need to know about the accuracy of GPS for the planning of their field surveys and hence to obtain reliable positions as well as deformation rates. Geophysical applications such as monitoring the development of a fault creep or of crustal motion for global sea level rise studies necessitate the use of continuous GPS, whereas applications such as determining co-seismic displacements where permanent GPS sites are sparsely scattered require the employment of episodic campaigns. Recently, real-time applications of GPS for the early warning of earthquakes and tsunamis have come into focus. Studying the static positioning accuracy of GPS has been of interest to researchers for more than a decade now. Various software packages and modeling strategies have been tested so far, and relative positioning accuracy has been compared with PPP accuracy. For relative positioning, observing session duration and the network geometry of reference stations appear to be the dominant factors on GPS accuracy, whereas observing session duration seems to be the only factor influencing PPP accuracy. We believe that the latest developments concerning the accuracy of static GPS from well-established software will form a basis for the quality of the GPS field work mentioned above, especially for real-time applications, which are referred to more frequently nowadays. To assess GPS accuracy, conventionally some 10 to 30 regionally or globally scattered networks of GPS stations are used. In this study, we enlarge the GPS network to 70 globally scattered IGS stations to observe the changes relative to our previous accuracy modeling, which employed only 13 stations. We use the latest version 6.3 of GIPSY/OASIS II software and download the data from SOPAC archives. Noting the effect of the ionosphere on our previous accuracy modeling, here we selected GPS days on which the k-index values are lower than 4. This enabled us to extend the interval of observing session duration used for the

  20. Accuracy of genomic predictions of residual feed intake and 250-day body weight in growing heifers using 625,000 single nucleotide polymorphism markers.

    PubMed

    Pryce, J E; Arias, J; Bowman, P J; Davis, S R; Macdonald, K A; Waghorn, G C; Wales, W J; Williams, Y J; Spelman, R J; Hayes, B J

    2012-04-01

    Feed makes up a large proportion of variable costs in dairying. For this reason, selection for traits associated with feed conversion efficiency should lead to greater profitability of dairying. Residual feed intake (RFI) is the difference between actual and predicted feed intakes and is a useful selection criterion for greater feed efficiency. However, measuring individual feed intakes on a large scale is prohibitively expensive. A panel of DNA markers explaining genetic variation in this trait would enable cost-effective genomic selection for this trait. With the aim of enabling genomic selection for RFI, we used data from almost 2,000 heifers measured for growth rate and feed intake in Australia (AU) and New Zealand (NZ) genotyped for 625,000 single nucleotide polymorphism (SNP) markers. Substantial variation in RFI and 250-d body weight (BW250) was demonstrated. Heritabilities of RFI and BW250 estimated using genomic relationships among the heifers were 0.22 and 0.28 in AU heifers and 0.38 and 0.44 in NZ heifers, respectively. Genomic breeding values for RFI and BW250 were derived using genomic BLUP and 2 Bayesian methods (BayesA, BayesMulti). The accuracies of genomic breeding values for RFI were evaluated using cross-validation. When 624,930 SNP were used to derive the prediction equation, the accuracies averaged 0.37 and 0.31 for RFI in AU and NZ validation data sets, respectively, and 0.40 and 0.25 for BW250 in AU and NZ, respectively. The greatest advantage of using the full 624,930 SNP over a reduced panel of 36,673 SNP (the widely used BovineSNP50 array) was when the reference population included only animals from either the AU or the NZ experiment. Finally, the Bayesian methods were also used for quantitative trait loci detection. On chromosome 14 at around 25 Mb, several SNP closest to PLAG1 (a gene believed to affect stature in humans and cattle) had an effect on BW250 in both AU and NZ populations. In addition, 8 SNP with large effects on RFI were
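
    The accuracy reported here, the correlation between genomic predictions and observations in held-out animals, can be sketched with ridge regression on SNP genotypes (the SNP-effects form of genomic BLUP) under k-fold cross-validation. A toy sketch on simulated data; the dimensions, shrinkage parameter, and simulation settings are assumptions, not the study's pipeline:

        import numpy as np

        rng = np.random.default_rng(3)
        n_animals, n_snp, n_qtl = 500, 2000, 50

        # Simulate genotypes (0/1/2), sparse true SNP effects, and phenotypes.
        X = rng.integers(0, 3, size=(n_animals, n_snp)).astype(float)
        beta = np.zeros(n_snp)
        beta[rng.choice(n_snp, n_qtl, replace=False)] = rng.normal(0.0, 0.5, n_qtl)
        g = X @ beta
        y = g + rng.normal(0.0, np.std(g), n_animals)     # heritability ~ 0.5

        # 5-fold cross-validation of ridge regression (SNP-BLUP form of GBLUP).
        lam = 500.0                                        # shrinkage parameter (assumed)
        folds = np.array_split(rng.permutation(n_animals), 5)
        accuracies = []
        for fold in folds:
            train = np.setdiff1d(np.arange(n_animals), fold)
            mu_x, mu_y = X[train].mean(axis=0), y[train].mean()
            Xt, yt = X[train] - mu_x, y[train] - mu_y
            b_hat = np.linalg.solve(Xt.T @ Xt + lam * np.eye(n_snp), Xt.T @ yt)
            pred = (X[fold] - mu_x) @ b_hat
            accuracies.append(np.corrcoef(pred, y[fold])[0, 1])

        print("cross-validated accuracy:", np.mean(accuracies))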