Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.
Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan
2011-11-01
When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNPs), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.
ERIC Educational Resources Information Center
Rowell, R. Kevin
In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil
2014-08-01
We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method, called "Patient Recursive Survival Peeling", is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation, is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
Cross-validation pitfalls when selecting and assessing regression and classification models.
Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon
2014-03-29
We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
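The repeated V-fold scheme this abstract advocates can be sketched in a few lines of pure Python. The fold counts, the toy least-squares line fit, and all function names below are illustrative assumptions, not the authors' implementation:

```python
import random
import statistics

def k_fold_splits(n, k, rng):
    """Shuffle indices and partition them into k roughly equal folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def repeated_cv_error(xs, ys, fit, k=5, repeats=10, seed=0):
    """Mean squared test error, averaged over `repeats` random k-fold splits."""
    rng = random.Random(seed)
    errors = []
    for _ in range(repeats):
        for test_idx in k_fold_splits(len(xs), k, rng):
            test_set = set(test_idx)
            train_x = [x for i, x in enumerate(xs) if i not in test_set]
            train_y = [y for i, y in enumerate(ys) if i not in test_set]
            model = fit(train_x, train_y)
            errors.append(statistics.mean(
                (model(xs[i]) - ys[i]) ** 2 for i in test_idx))
    return statistics.mean(errors)

def fit_line(xs, ys):
    """Toy model: simple least-squares line fit, y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b
```

Averaging over several random re-partitions addresses exactly the split-to-split variation in prediction performance that the authors highlight.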
Methods to compute reliabilities for genomic predictions of feed intake
USDA-ARS?s Scientific Manuscript database
For new traits without historical reference data, cross-validation is often the preferred method to validate reliability (REL). Time truncation is less useful because few animals gain substantial REL after the truncation point. Accurate cross-validation requires separating genomic gain from pedigree...
Cross-validation to select Bayesian hierarchical models in phylogenetics.
Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C
2016-05-26
Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.
Comparative assessment of three standardized robotic surgery training methods.
Hung, Andrew J; Jayaratna, Isuru S; Teruya, Kara; Desai, Mihir M; Gill, Inderbir S; Goh, Alvin C
2013-10-01
To evaluate three standardized robotic surgery training methods, inanimate, virtual reality and in vivo, for their construct validity. To explore the concept of cross-method validity, where the relative performance of each method is compared. Robotic surgical skills were prospectively assessed in 49 participating surgeons who were classified as follows: 'novice/trainee': urology residents, previous experience <30 cases (n = 38) and 'experts': faculty surgeons, previous experience ≥30 cases (n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity). Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = -0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001). We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool. 
Prediction of adult height in girls: the Beunen-Malina-Freitas method.
Beunen, Gaston P; Malina, Robert M; Freitas, Duarte L; Thomis, Martine A; Maia, José A; Claessens, Albrecht L; Gouveia, Elvio R; Maes, Hermine H; Lefevre, Johan
2011-12-01
The purpose of this study was to validate and cross-validate the Beunen-Malina-Freitas method for non-invasive prediction of adult height in girls. A sample of 420 girls aged 10-15 years from the Madeira Growth Study were measured at yearly intervals and then 8 years later. Anthropometric dimensions (lengths, breadths, circumferences, and skinfolds) were measured; skeletal age was assessed using the Tanner-Whitehouse 3 method and menarcheal status (present or absent) was recorded. Adult height was measured and predicted using stepwise, forward, and maximum R² regression techniques. Multiple correlations, mean differences, standard errors of prediction, and error boundaries were calculated. A sample of the Leuven Longitudinal Twin Study was used to cross-validate the regressions. Age-specific coefficients of determination (R²) between predicted and measured adult height varied between 0.57 and 0.96, while standard errors of prediction varied between 1.1 and 3.9 cm. The cross-validation confirmed the validity of the Beunen-Malina-Freitas method in girls aged 12-15 years, but at lower ages the cross-validation was less consistent. We conclude that the Beunen-Malina-Freitas method is valid for the prediction of adult height in girls aged 12-15 years. It is applicable to European populations or populations of European ancestry.
How to test validity in orthodontic research: a mixed dentition analysis example.
Donatelli, Richard E; Lee, Shin-Jae
2015-02-01
The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes.
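Leave-one-out cross-validation, which this study found optimal for small samples, admits a very small sketch. The mean-only predictor below is a toy stand-in for the regression model, not the study's mixed dentition analysis:

```python
def loocv_errors(xs, ys, fit):
    """Leave-one-out cross-validation: each point is predicted by a model
    trained on all the other points; returns the list of absolute errors."""
    errs = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        model = fit(train_x, train_y)
        errs.append(abs(model(xs[i]) - ys[i]))
    return errs

def mean_of(vals):
    """Arithmetic mean of a non-empty list."""
    return sum(vals) / len(vals)

# Toy predictor: ignore the inputs and predict the mean training response.
fit_mean = lambda xs, ys: (lambda x: sum(ys) / len(ys))
```

With n observations this fits n models, which is cheap for the limited sample sizes the authors consider.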
Correcting for Optimistic Prediction in Small Data Sets
Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.
2014-01-01
The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
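The leave-pair-out idea introduced above can be sketched directly from the definition of the C statistic: the probability that a model scores an event higher than a non-event. The scoring-model interface below is an illustrative assumption, not the authors' implementation:

```python
def leave_pair_out_c(x, y, fit):
    """Leave-pair-out CV estimate of the C statistic: for every
    (event, non-event) pair, train a model without that pair and check
    whether it scores the event higher. Ties count as 1/2."""
    cases = [i for i, yi in enumerate(y) if yi == 1]
    controls = [i for i, yi in enumerate(y) if yi == 0]
    wins = 0.0
    for i in cases:
        for j in controls:
            keep = [k for k in range(len(y)) if k not in (i, j)]
            model = fit([x[k] for k in keep], [y[k] for k in keep])
            si, sj = model(x[i]), model(x[j])
            wins += 1.0 if si > sj else (0.5 if si == sj else 0.0)
    return wins / (len(cases) * len(controls))
```

Because each pair is held out of its own training run, the resulting estimate is not inflated by overfitting to the evaluated pair, which is the optimism this entry is concerned with.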
An empirical assessment of validation practices for molecular classifiers
Castaldi, Peter J.; Dahabreh, Issa J.
2011-01-01
Proposed molecular classifiers may be overfit to idiosyncrasies of noisy genomic and proteomic data. Cross-validation methods are often used to obtain estimates of classification accuracy, but both simulations and case studies suggest that, when inappropriate methods are used, bias may ensue. Bias can be bypassed and generalizability can be tested by external (independent) validation. We evaluated 35 studies that have reported on external validation of a molecular classifier. We extracted information on study design and methodological features, and compared the performance of molecular classifiers in internal cross-validation versus external validation for 28 studies where both had been performed. We demonstrate that the majority of studies pursued cross-validation practices that are likely to overestimate classifier performance. Most studies were markedly underpowered to detect a 20% decrease in sensitivity or specificity between internal cross-validation and external validation [median power was 36% (IQR, 21–61%) and 29% (IQR, 15–65%), respectively]. The median reported classification performance for sensitivity and specificity was 94% and 98%, respectively, in cross-validation and 88% and 81% for independent validation. The relative diagnostic odds ratio was 3.26 (95% CI 2.04–5.21) for cross-validation versus independent validation. Finally, we reviewed all studies (n = 758) which cited those in our study sample, and identified only one instance of additional subsequent independent validation of these classifiers. In conclusion, these results document that many cross-validation practices employed in the literature are potentially biased and genuine progress in this field will require adoption of routine external validation of molecular classifiers, preferably in much larger studies than in current practice. PMID:21300697
Cross-Validating Chinese Language Mental Health Recovery Measures in Hong Kong
ERIC Educational Resources Information Center
Bola, John; Chan, Tiffany Hill Ching; Chen, Eric HY; Ng, Roger
2016-01-01
Objectives: Promoting recovery in mental health services is hampered by a shortage of reliable and valid measures, particularly in Hong Kong. We seek to cross validate two Chinese language measures of recovery and one of recovery-promoting environments. Method: A cross-sectional survey of people recovering from early episode psychosis (n = 121)…
Kaneko, Hiromasa; Funatsu, Kimito
2013-09-23
We propose predictive performance criteria for nonlinear regression models without cross-validation. The proposed criteria are the determination coefficient and the root-mean-square error for the midpoints between k-nearest-neighbor data points. These criteria can be used to evaluate predictive ability after the regression models are updated, whereas cross-validation cannot be performed in such a situation. The proposed method is effective and helpful in handling big data when cross-validation cannot be applied. By analyzing data from numerical simulations and quantitative structural relationships, we confirm that the proposed criteria enable the predictive ability of the nonlinear regression models to be appropriately quantified.
Learning to recognize rat social behavior: Novel dataset and cross-dataset application.
Lorbach, Malte; Kyriakou, Elisavet I; Poppe, Ronald; van Dam, Elsbeth A; Noldus, Lucas P J J; Veltkamp, Remco C
2018-04-15
Social behavior is an important aspect of rodent models. Automated measuring tools that make use of video analysis and machine learning are an increasingly attractive alternative to manual annotation. Because machine learning-based methods need to be trained, it is important that they are validated using data from different experiment settings. To develop and validate automated measuring tools, there is a need for annotated rodent interaction datasets. Currently, the availability of such datasets is limited to two mouse datasets. We introduce the first, publicly available rat social interaction dataset, RatSI. We demonstrate the practical value of the novel dataset by using it as the training set for a rat interaction recognition method. We show that behavior variations induced by the experiment setting can lead to reduced performance, which illustrates the importance of cross-dataset validation. Consequently, we add a simple adaptation step to our method and improve the recognition performance. Most existing methods are trained and evaluated in one experimental setting, which limits the predictive power of the evaluation to that particular setting. We demonstrate that cross-dataset experiments provide more insight in the performance of classifiers. With our novel, public dataset we encourage the development and validation of automated recognition methods. We are convinced that cross-dataset validation enhances our understanding of rodent interactions and facilitates the development of more sophisticated recognition methods. Combining them with adaptation techniques may enable us to apply automated recognition methods to a variety of animals and experiment settings.
LeDell, Erin; Petersen, Maya; van der Laan, Mark
2015-01-01
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737
Bias correction for selecting the minimal-error classifier from many machine learning models.
Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C
2014-11-15
Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate size real datasets and two large breast cancer datasets. The result showed that IPL outperforms the other methods in bias correction with smaller variance, and it has an additional advantage to extrapolate error estimates for larger sample sizes, a practical feature to recommend whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Supplementary data are available at Bioinformatics online.
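Nested cross-validation, one of the correction methods compared in this entry, can be sketched as follows. The candidate models, fold counts, and squared-error loss are illustrative choices, and this is not the authors' IPL method:

```python
import random

def nested_cv_error(xs, ys, fits, k_outer=5, k_inner=4, seed=0):
    """Nested cross-validation: an inner CV loop selects the best of several
    candidate models using only the outer-training data; the outer loop then
    scores that choice on held-out data. Because selection and assessment
    never share samples, the estimate avoids the optimistic bias of simply
    reporting the smallest cross-validation error."""
    rng = random.Random(seed)
    idx = list(range(len(xs)))
    rng.shuffle(idx)
    outer_errs = []
    for f in range(k_outer):
        test_idx = idx[f::k_outer]
        test_set = set(test_idx)
        train_idx = [i for i in idx if i not in test_set]

        def inner_err(fit):
            # Mean squared error of `fit` under k_inner-fold CV on train_idx.
            errs = []
            for g in range(k_inner):
                val = train_idx[g::k_inner]
                val_set = set(val)
                tr = [i for i in train_idx if i not in val_set]
                model = fit([xs[i] for i in tr], [ys[i] for i in tr])
                errs += [(model(xs[i]) - ys[i]) ** 2 for i in val]
            return sum(errs) / len(errs)

        best = min(fits, key=inner_err)
        model = best([xs[i] for i in train_idx], [ys[i] for i in train_idx])
        outer_errs += [(model(xs[i]) - ys[i]) ** 2 for i in test_idx]
    return sum(outer_errs) / len(outer_errs)
```

The winner-take-all step happens strictly inside each outer-training set, so the returned error reflects the whole selection procedure, not just the luckiest model.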
Piette, Elizabeth R; Moore, Jason H
2018-01-01
Machine learning methods and conventions are increasingly employed for the analysis of large, complex biomedical data sets, including genome-wide association studies (GWAS). Reproducibility of machine learning analyses of GWAS can be hampered by biological and statistical factors, particularly so for the investigation of non-additive genetic interactions. Application of traditional cross validation to a GWAS data set may result in poor consistency between the training and testing data set splits due to an imbalance of the interaction genotypes relative to the data as a whole. We propose a new cross validation method, proportional instance cross validation (PICV), that preserves the original distribution of an independent variable when splitting the data set into training and testing partitions. We apply PICV to simulated GWAS data with epistatic interactions of varying minor allele frequencies and prevalences and compare performance to that of a traditional cross validation procedure in which individuals are randomly allocated to training and testing partitions. Sensitivity and positive predictive value are significantly improved across all tested scenarios for PICV compared to traditional cross validation. We also apply PICV to GWAS data from a study of primary open-angle glaucoma to investigate a previously-reported interaction, which fails to significantly replicate; PICV however improves the consistency of testing and training results. Application of traditional machine learning procedures to biomedical data may require modifications to better suit intrinsic characteristics of the data, such as the potential for highly imbalanced genotype distributions in the case of epistasis detection. The reproducibility of genetic interaction findings can be improved by considering this variable imbalance in cross validation implementation, such as with PICV. This approach may be extended to problems in other domains in which imbalanced variable distributions are a concern.
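The core idea behind proportional splitting, preserving a variable's distribution across partitions, can be sketched as a stratified train/test split. The function name and the 25% test fraction are assumptions for illustration, not the PICV implementation:

```python
import random
from collections import defaultdict

def stratified_split(labels, test_frac=0.25, seed=0):
    """Split indices into train/test partitions while preserving, as closely
    as possible, the proportion of each label value in both partitions (the
    idea behind stratified, or proportional, splitting)."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, lab in enumerate(labels):
        by_label[lab].append(i)
    train, test = [], []
    for lab, idx in by_label.items():
        rng.shuffle(idx)
        n_test = round(len(idx) * test_frac)
        test += idx[:n_test]
        train += idx[n_test:]
    return sorted(train), sorted(test)
```

For a rare genotype, a purely random split can easily place most carriers in one partition; stratifying by that variable keeps the training and testing distributions consistent, which is the imbalance problem PICV targets.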
Statistical validation of normal tissue complication probability models.
Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis
2012-09-01
To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
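A label-permutation test of the kind recommended above can be sketched generically; `score_fn` is a hypothetical callable mapping a label vector to a model-performance score, standing in for a full model-fitting pipeline:

```python
import random

def permutation_pvalue(score_fn, labels, n_perm=200, seed=0):
    """Permutation test for a performance score: shuffle the labels many
    times and count how often the score on shuffled labels is at least as
    good as the observed score. A small p-value suggests the observed
    performance is unlikely under the null of no association between
    features and labels."""
    rng = random.Random(seed)
    observed = score_fn(labels)
    hits = sum(
        score_fn(rng.sample(labels, len(labels))) >= observed
        for _ in range(n_perm)
    )
    # Add-one correction keeps the p-value away from exactly zero.
    return (hits + 1) / (n_perm + 1)
```

In the NTCP setting, `score_fn` would refit the LASSO model on permuted outcomes and return its cross-validated performance, so the null distribution accounts for the entire model-building procedure.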
Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath
2017-01-01
The purpose of this work is to provide validation methods for evaluating the hemodynamic assessment of cerebral arteriovenous malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models for complex vessel structures. Validation of the hemodynamic assessment is based on invasive clinical measurements and on cross-validation against Philips' proprietary validated software packages Qflow and 2D Perfusion. The modeling results are validated for 30 CAVM patients at 150 vessel locations. Mean flow, diameter, and pressure were compared between modeling results and clinical/cross-validation measurements using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analysis of the relationships among vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow was performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations across cerebral regions, and the model was also cross-validated against Qflow and 2D Perfusion. Our results show that the modeling results closely match the clinical results, with only small deviations. In this article, we have validated our modeling results against clinical measurements, and we propose a new approach for cross-validation by demonstrating the accuracy of our results against a validated product in a clinical environment.
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
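One classic contrast the authors draw, resubstitution versus leave-one-out, is easy to demonstrate with a 1-nearest-neighbour classifier, whose resubstitution accuracy is always perfect regardless of the data. The scalar features and alternating labels below are a deliberately adversarial toy example:

```python
def nn_predict(train_x, train_y, x):
    """Predict with 1-nearest neighbour (absolute distance on scalars)."""
    best = min(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    return train_y[best]

def resubstitution_acc(xs, ys):
    """Train and test on the same data. For 1-NN every point is its own
    nearest neighbour, so this estimate is always 100%: the optimistic
    bias of resubstitution in its purest form."""
    hits = sum(nn_predict(xs, ys, x) == y for x, y in zip(xs, ys))
    return hits / len(xs)

def loo_acc(xs, ys):
    """Leave-one-out: each point is classified by the remaining points."""
    hits = 0
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        hits += nn_predict(tx, ty, xs[i]) == ys[i]
    return hits / len(xs)
```

The gap between the two estimates on the same data set illustrates why leave-one-out and cross-validation, applied correctly, are preferred over resubstitution for assessing CAD schemes.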
Cross-Validation of FITNESSGRAM® Health-Related Fitness Standards in Hungarian Youth
ERIC Educational Resources Information Center
Laurson, Kelly R.; Saint-Maurice, Pedro F.; Karsai, István; Csányi, Tamás
2015-01-01
Purpose: The purpose of this study was to cross-validate FITNESSGRAM® aerobic and body composition standards in a representative sample of Hungarian youth. Method: A nationally representative sample (N = 405) of Hungarian adolescents from the Hungarian National Youth Fitness Study (ages 12-18.9 years) participated in an aerobic capacity assessment…
ERIC Educational Resources Information Center
Miciak, Jeremy; Fletcher, Jack M.; Stuebing, Karla K.; Vaughn, Sharon; Tolar, Tammy D.
2014-01-01
Few empirical investigations have evaluated learning disabilities (LD) identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and cross battery assessment (XBA) method. Cognitive assessment…
Certification in Structural Health Monitoring Systems
2011-09-01
validation [3,8]. This may be accomplished by computing the sum of squares of pure error (SSPE) and its associated squared correlation [3,8]. To compute...these values, a cross-validation sample must be established. In general, if the SSPE is high, the model does not predict well on independent data...plethora of cross-validation methods, some of which are more useful for certain models than others [3,8]. When possible, a disclosure of the SSPE
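A minimal sketch of the quantities named in this excerpt, assuming SSPE is the sum of squared prediction errors on a held-out cross-validation sample; the observed/predicted values are made up.

```python
# SSPE on a held-out cross-validation sample, plus the associated squared
# correlation between observed and predicted values.
def sspe(observed, predicted):
    return sum((o - p) ** 2 for o, p in zip(observed, predicted))

def squared_correlation(observed, predicted):
    n = len(observed)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    vo = sum((o - mo) ** 2 for o in observed)
    vp = sum((p - mp) ** 2 for p in predicted)
    return cov * cov / (vo * vp)

obs = [2.0, 3.1, 4.2, 5.0, 6.1]
pred = [2.2, 3.0, 4.0, 5.3, 5.9]   # model predictions on the validation sample
low_sspe = sspe(obs, pred)
r2 = squared_correlation(obs, pred)
```

A high SSPE relative to the spread of the observations signals poor prediction on independent data.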
Alternative methods to evaluate trial level surrogacy.
Abrahantes, Josè Cortiñas; Shkedy, Ziv; Molenberghs, Geert
2008-01-01
The evaluation and validation of surrogate endpoints have been extensively studied in the last decade. Prentice [1] and Freedman, Graubard and Schatzkin [2] laid the foundations for the evaluation of surrogate endpoints in randomized clinical trials. Later, Buyse et al. [5] proposed a meta-analytic methodology, producing different methods for different settings, which was further studied by Alonso and Molenberghs [9] in their unifying approach based on information theory. In this article, we focus our attention on trial-level surrogacy and propose alternative procedures to evaluate this surrogacy measure that do not pre-specify the type of association. A promising correction based on cross-validation is investigated, as well as the construction of confidence intervals for this measure. To avoid making assumptions about the type of relationship between the treatment effects and about its distribution, a collection of alternative methods based on regression trees, bagging, random forests, and support vector machines, combined with bootstrap-based confidence intervals and, should one wish, with a cross-validation-based correction, is proposed and applied. We apply the various strategies to data from three clinical studies: in ophthalmology, in advanced colorectal cancer, and in schizophrenia. The results obtained for the three case studies are compared; they indicate that random forest and bagging models produce larger estimated values of the surrogacy measure, which are in general more stable and have narrower confidence intervals than those from linear regression and support vector regression. For the advanced colorectal cancer studies, we even found that the trial-level surrogacy is considerably different from what has been reported. In general, the alternative methods are more computationally demanding; in particular, the calculation of the confidence intervals requires more computational time than the delta-method counterpart.
First, more flexible modeling techniques can be used, allowing for other types of association. Second, when no cross-validation-based correction is applied, overly optimistic trial-level surrogacy estimates will be found; thus cross-validation is highly recommended. Third, the use of the delta method to calculate confidence intervals is not recommended, since it makes assumptions that are valid only in very large samples and may produce range-violating limits. We therefore recommend alternatives, bootstrap methods in general. The information-theoretic approach produces results comparable to the bagging and random forest approaches when the cross-validation correction is applied. It is also important to observe that, even in cases where a linear model might be a good option, bagging methods perform well and their confidence intervals are narrower.
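The recommendation to replace delta-method intervals can be illustrated with a generic percentile bootstrap. The statistic here (a simple mean) and the data are placeholders, not the surrogacy measure from the study.

```python
import random

random.seed(1)

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic, and take empirical quantiles. Unlike the delta method, the
    interval respects any range restriction that the statistic itself obeys
    (e.g. a squared correlation stays in [0, 1])."""
    stats = []
    for _ in range(n_boot):
        sample = [random.choice(data) for _ in data]
        stats.append(statistic(sample))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)

data = [random.gauss(5, 2) for _ in range(50)]   # placeholder observations
low, high = bootstrap_ci(data, mean)
```

Swapping `mean` for any estimator of interest reuses the same machinery.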
Cross validation issues in multiobjective clustering
Brusco, Michael J.; Steinley, Douglas
2018-01-01
The implementation of multiobjective programming methods in combinatorial data analysis is an emergent area of study with a variety of pragmatic applications in the behavioural sciences. Most notably, multiobjective programming provides a tool for analysts to model trade-offs among competing criteria in clustering, seriation, and unidimensional scaling tasks. Although multiobjective programming has considerable promise, the technique can produce numerically appealing results that lack empirical validity. With this issue in mind, the purpose of this paper is to briefly review viable areas of application for multiobjective programming and, more importantly, to outline the importance of cross-validation when using this method in cluster analysis. PMID:19055857
Measuring Adolescent Social and Academic Self-Efficacy: Cross-Ethnic Validity of the SEQ-C
ERIC Educational Resources Information Center
Minter, Anthony; Pritzker, Suzanne
2017-01-01
Objective: This study examines the psychometric strength, including cross-ethnic validity, of two subscales of Muris' Self-Efficacy Questionnaire for Children: Academic Self-Efficacy (ASE) and Social Self-Efficacy (SSE). Methods: A large ethnically diverse sample of 3,358 early and late adolescents completed surveys including the ASE and SSE.…
Validation of annual growth rings in freshwater mussel shells using cross-dating
Andrew L. Rypel; Wendell R. Haag; Robert H. Findlay
2009-01-01
We examined the usefulness of dendrochronological cross-dating methods for studying long-term, interannual growth patterns in freshwater mussels, including validation of annual shell ring formation. Using 13 species from three rivers, we measured increment widths between putative annual rings on shell thin sections and then removed age-related variation by...
Multifractal detrended cross-correlation analysis for two nonstationary signals.
Zhou, Wei-Xing
2008-06-01
We propose a method called multifractal detrended cross-correlation analysis to investigate the multifractal behaviors in the power-law cross-correlations between two time series or higher-dimensional quantities recorded simultaneously, which can be applied to diverse complex systems such as turbulence, finance, ecology, physiology, and geophysics. The method is validated with cross-correlated one- and two-dimensional binomial measures and multifractal random walks. As an example, we illustrate the method by analyzing two financial time series.
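A simplified sketch of the second-order (q = 2) detrended cross-correlation fluctuation underlying this family of methods; the full multifractal version generalizes to q-th order moments, and the two correlated test series here are synthetic.

```python
import random

def _detrend(vals):
    """Remove the least-squares straight line from vals."""
    n = len(vals)
    t_mean = (n - 1) / 2
    v_mean = sum(vals) / n
    num = sum((t - t_mean) * (v - v_mean) for t, v in enumerate(vals))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    return [v - (v_mean + slope * (t - t_mean)) for t, v in enumerate(vals)]

def dcca_fluctuation(x, y, s):
    """Detrended cross-covariance fluctuation F_xy(s) at box size s:
    integrate both series into profiles, split into boxes, remove a linear
    trend per box, and average the residual cross-covariance."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    X, Y, cx, cy = [], [], 0.0, 0.0
    for a, b in zip(x, y):
        cx += a - mx
        cy += b - my
        X.append(cx)
        Y.append(cy)
    f2 = []
    for b0 in range(0, len(X) - s + 1, s):
        xr = _detrend(X[b0:b0 + s])
        yr = _detrend(Y[b0:b0 + s])
        f2.append(sum(p * q for p, q in zip(xr, yr)) / s)
    return abs(sum(f2) / len(f2)) ** 0.5

random.seed(0)
common = [random.gauss(0, 1) for _ in range(1024)]   # shared driving noise
x = [c + 0.5 * random.gauss(0, 1) for c in common]
y = [c + 0.5 * random.gauss(0, 1) for c in common]
fluct = [dcca_fluctuation(x, y, s) for s in (16, 32, 64)]
```

The scaling of F_xy(s) with s is what the power-law cross-correlation analysis reads off; here the fluctuation grows with box size as expected for uncorrelated increments with a strong common component.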
Methodology and issues of integral experiments selection for nuclear data validation
NASA Astrophysics Data System (ADS)
Tatiana, Ivanova; Ivanov, Evgeny; Hill, Ian
2017-09-01
Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications [1]. Often, benchmarks are taken from international handbooks [2, 3]. Depending on the application, IEs have different degrees of usefulness in validation, and usually the use of a single benchmark is not advised; indeed, it may lead to erroneous interpretations and results [1]. This work aims at quantifying the importance of the benchmarks used in application-dependent cross section validation. The approach is based on the well-known generalized linear least-squares method (GLLSM), extended to establish biases and uncertainties for given cross sections (within a given energy interval). The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark for nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).
A statistical method (cross-validation) for bone loss region detection after spaceflight
Zhao, Qian; Li, Wenjun; Li, Caixia; Chu, Philip W.; Kornak, John; Lang, Thomas F.
2010-01-01
Astronauts experience bone loss after long spaceflight missions. Identifying the specific regions that undergo the greatest losses (e.g. the proximal femur) could reveal information about the processes of bone loss in disuse and disease. Methods for detecting such regions, however, remain an open problem. This paper focuses on statistical methods to detect such regions. We perform statistical parametric mapping to obtain t-maps of changes in images, and propose a new cross-validation method to select an optimal suprathreshold for forming clusters of pixels. Once these candidate clusters are formed, we use permutation testing of longitudinal labels to identify significant changes. PMID:20632144
Illustrating a Mixed-Method Approach for Validating Culturally Specific Constructs
ERIC Educational Resources Information Center
Hitchcock, J.H.; Nastasi, B.K.; Dai, D.Y.; Newman, J.; Jayasena, A.; Bernstein-Moore, R.; Sarkar, S.; Varjas, K.
2005-01-01
The purpose of this article is to illustrate a mixed-method approach (i.e., combining qualitative and quantitative methods) for advancing the study of construct validation in cross-cultural research. The article offers a detailed illustration of the approach using the responses 612 Sri Lankan adolescents provided to an ethnographic survey. Such…
Initial Reliability and Validity of the Perceived Social Competence Scale
ERIC Educational Resources Information Center
Anderson-Butcher, Dawn; Iachini, Aidyn L.; Amorose, Anthony J.
2008-01-01
Objective: This study describes the development and validation of a perceived social competence scale that social workers can easily use to assess children's and youth's social competence. Method: Exploratory and confirmatory factor analyses were conducted on a calibration and a cross-validation sample of youth. Predictive validity was also…
Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul
2016-01-01
Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “Cross-validation and cross-testing” improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
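For contrast, the standard "cross-validation and testing" baseline described above can be sketched as follows. The toy one-dimensional threshold classifier and its shrinkage weight `w` are hypothetical stand-ins for a real classifier and its tuning parameter.

```python
import random

random.seed(0)

# 1-D toy task: class 1 values sit about 1.5 units above class 0 values.
data = [(random.gauss(1.5 * lbl, 1.0), lbl) for lbl in (0, 1) for _ in range(60)]
random.shuffle(data)
train, test = data[:80], data[80:]   # test set is never touched during tuning

def fit(rows, w):
    """Decision threshold: the class-mean midpoint scaled by a (hypothetical)
    shrinkage weight w, the parameter being tuned."""
    m0 = [v for v, l in rows if l == 0]
    m1 = [v for v, l in rows if l == 1]
    return w * (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2

def accuracy(rows, thr):
    return sum((v > thr) == (l == 1) for v, l in rows) / len(rows)

def cv_score(rows, w, k=5):
    """k-fold cross-validation accuracy of parameter w on the training data."""
    fold = len(rows) // k
    accs = []
    for i in range(k):
        val = rows[i * fold:(i + 1) * fold]
        tr = rows[:i * fold] + rows[(i + 1) * fold:]
        accs.append(accuracy(val, fit(tr, w)))
    return sum(accs) / k

best_w = max([0.5, 1.0, 1.5], key=lambda w: cv_score(train, w))
test_acc = accuracy(test, fit(train, best_w))   # generalization estimate
```

The trade-off the paper targets is visible here: the 40 test samples buy an unbiased estimate but are unavailable for choosing `w`.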
Cross-validation of the Beunen-Malina method to predict adult height.
Beunen, Gaston P; Malina, Robert M; Freitas, Duarte I; Maia, José A; Claessens, Albrecht L; Gouveia, Elvio R; Lefevre, Johan
2010-08-01
The purpose of this study was to cross-validate the Beunen-Malina method for non-invasive prediction of adult height. Three hundred and eight boys aged 13, 14, 15 and 16 years from the Madeira Growth Study were observed at annual intervals in 1996, 1997 and 1998 and re-measured 7-8 years later. Height, sitting height and the triceps and subscapular skinfolds were measured; skeletal age was assessed using the Tanner-Whitehouse 2 method. Adult height was measured and predicted using the Beunen-Malina method. Maturity groups were classified using relative skeletal age (skeletal age minus chronological age). Pearson correlations, mean differences and standard errors of estimate (SEE) were calculated. Age-specific correlations between predicted and measured adult height vary between 0.70 and 0.85, while the age-specific SEE varies between 3.3 and 4.7 cm. The correlations and SEE are similar to those obtained in the development of the original Beunen-Malina method. The Beunen-Malina method is a valid method to predict adult height in adolescent boys and can be used in European populations or populations of European ancestry. Percentage of predicted adult height is a non-invasive, valid method to assess biological maturity.
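The agreement statistics used in this cross-validation (Pearson correlation and SEE) can be sketched as below; the heights are made-up values, and SEE is taken here simply as the root mean squared prediction error, one common definition.

```python
# Agreement between predicted and measured adult height, summarized by the
# Pearson correlation and a standard error of estimate (SEE).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def see(measured, predicted):
    """Root mean squared deviation of measured from predicted values (cm)."""
    n = len(measured)
    return (sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n) ** 0.5

measured = [172.0, 180.5, 168.2, 175.9, 183.1, 170.4]   # made-up heights, cm
predicted = [170.1, 182.0, 171.0, 174.2, 180.0, 172.5]
r = pearson_r(measured, predicted)
error = see(measured, predicted)
```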
Kim, SungHwan; Lin, Chien-Wei; Tseng, George C
2016-07-01
Supervised machine learning is widely applied to transcriptomic data to predict disease diagnosis, prognosis or survival. Robust and interpretable classifiers with high accuracy are usually favored for their clinical and translational potential. The top scoring pair (TSP) algorithm is an example that applies a simple rank-based algorithm to identify rank-altered gene pairs for classifier construction. Although many classification methods perform well in cross-validation of a single expression profile, performance usually drops greatly in cross-study validation (i.e. when the prediction model is established in a training study and applied to an independent test study) for all machine learning methods, including TSP. This failure of cross-study validation has largely diminished the potential translational and clinical value of the models. The purpose of this article is to develop a meta-analytic top scoring pair (MetaKTSP) framework that combines multiple transcriptomic studies and generates a robust prediction model applicable to independent test studies. We propose two frameworks, averaging TSP scores or combining P-values from individual studies, to select the top gene pairs for model construction. We applied the proposed methods to simulated data sets and three large-scale real applications in breast cancer, idiopathic pulmonary fibrosis and pan-cancer methylation. The results showed superior cross-study validation accuracy and biomarker selection for the new meta-analytic framework. In conclusion, combining multiple omics data sets in the public domain increases the robustness and accuracy of the classification model, which will ultimately improve disease understanding and clinical treatment decisions to benefit patients. An R package, MetaKTSP, is available online (http://tsenglab.biostat.pitt.edu/software.htm). Contact: ctseng@pitt.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
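The rank-based idea behind the TSP algorithm can be sketched compactly. The toy expression matrices below are invented, and ties between pairs are broken by iteration order; this is the single-study score, not the meta-analytic MetaKTSP extension.

```python
# Top scoring pair (TSP) idea: for a gene pair (i, j), score how differently
# the event "expression_i < expression_j" behaves in the two classes.
# Rows are samples; columns are genes.
def tsp_score(class0, class1, i, j):
    p0 = sum(s[i] < s[j] for s in class0) / len(class0)
    p1 = sum(s[i] < s[j] for s in class1) / len(class1)
    return abs(p0 - p1)

def best_pair(class0, class1):
    n_genes = len(class0[0])
    pairs = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)]
    return max(pairs, key=lambda p: tsp_score(class0, class1, *p))

class0 = [[5.0, 1.0, 3.2], [4.8, 1.3, 2.9], [5.2, 0.9, 3.0]]  # gene0 > gene1
class1 = [[1.1, 4.9, 3.1], [0.8, 5.3, 3.3], [1.2, 5.1, 2.8]]  # gene1 > gene0
pair = best_pair(class0, class1)
score = tsp_score(class0, class1, *pair)
```

Because the score depends only on within-sample rank order, it is invariant to per-sample scaling, which is one reason TSP-style rules travel across studies better than raw-intensity classifiers.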
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xiaolin; Ye, Li; Wang, Xiaoxiang
2012-12-15
Several recent reports suggested that hydroxylated polybrominated diphenyl ethers (HO-PBDEs) may disturb thyroid hormone homeostasis. To illuminate the structural features underlying the thyroid hormone activity of HO-PBDEs and the binding mode between HO-PBDEs and the thyroid hormone receptor (TR), the hormone activity of a series of HO-PBDEs toward thyroid receptor β was studied using a combination of 3D-QSAR, molecular docking, and molecular dynamics (MD) methods. Ligand- and receptor-based 3D-QSAR models were obtained using the Comparative Molecular Similarity Index Analysis (CoMSIA) method. The optimum CoMSIA model with region focusing yielded satisfactory statistical results: the leave-one-out cross-validation correlation coefficient (q²) was 0.571 and the non-cross-validation correlation coefficient (r²) was 0.951. Furthermore, the results of internal validation such as bootstrapping, leave-many-out cross-validation, and progressive scrambling, as well as external validation, indicated the rationality and good predictive ability of the best model. In addition, molecular docking elucidated the conformations of the compounds and the key amino acid residues at the docking pocket; MD simulation further characterized the binding process and validated the rationality of the docking results. Highlights: the thyroid hormone activities of HO-PBDEs were studied by 3D-QSAR; the binding modes between HO-PBDEs and TRβ were explored; 3D-QSAR, molecular docking, and molecular dynamics (MD) methods were performed.
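A minimal sketch of the leave-one-out cross-validated q² reported for such QSAR models, assuming the common definition q² = 1 − PRESS/SS with PRESS the sum of squared leave-one-out prediction errors. The one-descriptor linear model and the data are placeholders for the actual CoMSIA fields.

```python
# Leave-one-out cross-validated q^2 for a toy one-descriptor linear model:
# each activity is predicted from a least-squares line fitted to the
# remaining compounds.
def loo_q2(x, y):
    n = len(y)
    press = 0.0
    for i in range(n):
        xs = [v for k, v in enumerate(x) if k != i]
        ys = [v for k, v in enumerate(y) if k != i]
        mx, my = sum(xs) / (n - 1), sum(ys) / (n - 1)
        num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        den = sum((a - mx) ** 2 for a in xs)
        slope = num / den
        pred = my + slope * (x[i] - mx)       # prediction for left-out compound
        press += (y[i] - pred) ** 2
    my_all = sum(y) / n
    ss = sum((v - my_all) ** 2 for v in y)    # total sum of squares
    return 1 - press / ss

descriptor = [0.1, 0.4, 0.5, 0.9, 1.2, 1.5, 1.9, 2.2]   # made-up descriptor
activity = [1.0, 1.6, 1.9, 2.8, 3.4, 4.1, 5.2, 5.8]     # made-up activities
q2 = loo_q2(descriptor, activity)
```

Because PRESS uses out-of-sample errors, q² is always below the fitted r² and is the quantity QSAR practice treats as evidence of predictivity.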
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction error or another specific statistic, as well as two alternative cross-validation techniques, adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open-source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model, with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps), and generation of cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostics, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, Bryan Scott; MacQuigg, Michael Robert; Wysong, Andrew Russell
In this document, the code MCNP is validated with ENDF/B-VII.1 cross section data under the purview of ANSI/ANS-8.24-2007, for use with uranium systems. MCNP is a computer code based on Monte Carlo transport methods. While MCNP has wide-ranging capabilities in nuclear transport simulation, this validation is limited to the functionality related to neutron transport and the calculation of criticality parameters such as k_eff.
ERIC Educational Resources Information Center
Zhu, Zheng; Chen, Peijie; Zhuang, Jie
2013-01-01
Purpose: The purpose of this study was to develop and cross-validate an equation based on ActiGraph accelerometer GT3X output to predict children and youth's energy expenditure (EE) of physical activity (PA). Method: Participants were 367 Chinese children and youth (179 boys and 188 girls, aged 9 to 17 years old) who wore 1 ActiGraph GT3X…
Dynamic Time Warping compared to established methods for validation of musculoskeletal models.
Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael
2017-04-11
By means of multi-body musculoskeletal simulation, important variables such as internal joint forces and moments can be estimated that cannot be measured directly. Validation can be performed by qualitative or quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well, and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, since it can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE and normalized RMSE. This study is based on two data sets, one representing direct validation and the other indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a 6-component force-moment sensor. Measured forces and moments from the amputees' socket prostheses are compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken of seven muscles of the leg and compared to simulated muscle activations. Regarding direct validation, a positive linear relation is seen between the results of RMSE and nRMSE and those of DTW. For indirect validation, a negative linear relation exists between Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation, as it correlates well with the methods that are most suitable for each of the tasks. However, in direct validation it should be used together with methods yielding a dimensional error value, in order to make the results easier to interpret. Copyright © 2017 Elsevier Ltd. All rights reserved.
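A minimal DTW sketch (absolute-difference local cost, no step weights or windowing, unlike some production implementations):

```python
# Dynamic time warping (DTW) distance between two 1-D signals: the cost
# matrix accumulates absolute amplitude differences while allowing local
# stretching/compression of the time axis, so both amplitude and phase
# discrepancies contribute to the final score.
def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

ref = [0.0, 1.0, 2.0, 1.0, 0.0]
shifted = [0.0, 0.0, 1.0, 2.0, 1.0]   # same shape, delayed one step
```

For this pair, lockstep (sample-by-sample) comparison accumulates a large error from the one-step delay, while DTW aligns the shapes and reports only the residual amplitude mismatch.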
Chen, Yinsheng; Li, Zeju; Wu, Guoqing; Yu, Jinhua; Wang, Yuanyuan; Lv, Xiaofei; Ju, Xue; Chen, Zhongping
2018-07-01
Due to the totally different therapeutic regimens needed for primary central nervous system lymphoma (PCNSL) and glioblastoma (GBM), accurate differentiation of the two diseases by noninvasive imaging techniques is important for clinical decision-making. Thirty cases of PCNSL and 66 cases of GBM with conventional T1-contrast magnetic resonance imaging (MRI) were analyzed in this study. Convolutional neural networks were used to segment tumors automatically. A modified scale invariant feature transform (SIFT) method was utilized to extract three-dimensional local voxel arrangement information from the segmented tumors. Fisher vectors were used to normalize the dimension of the SIFT features. An improved genetic algorithm (GA) was used to extract the SIFT features able to discriminate PCNSL from GBM. The dataset was divided into a cross-validation cohort and an independent validation cohort at a ratio of 2:1. A support vector machine with leave-one-out cross-validation, based on 20 cases of PCNSL and 44 cases of GBM, was employed to build and validate the differentiation model. Among 16,384 high-throughput features, 1356 features showed significant differences between PCNSL and GBM with p < 0.05, and 420 features with p < 0.001. A total of 496 features were finally chosen by the improved GA algorithm. The proposed method differentiates PCNSL from GBM with an area under the curve (AUC) of 99.1% (98.2%), accuracy of 95.3% (90.6%), sensitivity of 85.0% (80.0%) and specificity of 100% (95.5%) on the cross-validation cohort (and independent validation cohort). Owing to the local voxel arrangement characterization provided by the SIFT features, the proposed method achieved more competitive PCNSL-GBM differentiation performance using conventional MRI than methods based on advanced MRI.
NASA Astrophysics Data System (ADS)
Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with EURO-CORDEX experiment- and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1, consisting of a Europe-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.)
were submitted, including both data (downscaled values) and metadata (characterizing different aspects of the downscaling methods). This constitutes the largest and most comprehensive intercomparison of statistical downscaling methods to date. Here, we present an overall validation, analyzing marginal and temporal aspects to assess the intrinsic performance and added value of statistical downscaling methods at both annual and seasonal levels. This validation takes into account the different properties/limitations of the different approaches and techniques (as reported in the provided metadata) in order to perform a fair comparison. It is pointed out that this experiment alone is not sufficient to evaluate the limitations of (MOS) bias correction techniques. Moreover, it also does not fully validate PP, since we do not learn whether we have the right predictors and whether the PP assumption is valid. These problems will be analyzed in the subsequent community-open VALUE experiments 2 and 3, which will be open for participation throughout the present year.
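The fold layout of Experiment 1 (consecutive 6-year blocks over 1979-2008, so temporal autocorrelation is not split across calibration and validation years) can be sketched as:

```python
# Blocked 5-fold cross-validation over calendar years, as in the described
# experiment: each fold is a consecutive 6-year period.
years = list(range(1979, 2009))                            # 30 calendar years
folds = [years[i:i + 6] for i in range(0, len(years), 6)]  # five 6-year blocks

splits = []
for test_years in folds:
    train_years = [y for y in years if y not in test_years]
    # calibrate the downscaling method on train_years, validate on test_years
    splits.append((train_years, test_years))
```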
NASA Astrophysics Data System (ADS)
Petersen, D.; Naveed, P.; Ragheb, A.; Niedieker, D.; El-Mashtoly, S. F.; Brechmann, T.; Kötting, C.; Schmiegel, W. H.; Freier, E.; Pox, C.; Gerwert, K.
2017-06-01
Endoscopy plays a major role in the early recognition of cancers that are not externally accessible, and thereby in increasing the survival rate. Raman spectroscopic fiber-optical approaches can help to decrease the impact on the patient, increase objectivity in tissue characterization, reduce expenses and provide a significant time advantage in endoscopy. In gastroenterology, early recognition of malignant and precursor lesions is relevant. Instantaneous and precise differentiation between adenomas as precursor lesions for cancer and hyperplastic polyps on the one hand, and between high- and low-risk alterations on the other hand, is important. Raman fiber-optical measurements of colon biopsy samples taken during colonoscopy were carried out during a clinical study, and samples of adenocarcinoma (22), tubular adenomas (141), hyperplastic polyps (79) and normal tissue (101) from 151 patients were analyzed. This allows us to focus on the bioinformatic analysis and to set the stage for Raman endoscopic measurements. Since spectral differences between normal and cancerous biopsy samples are small, special care has to be taken in data analysis. Using a leave-one-patient-out cross-validation scheme, three different outlier identification methods were investigated to decrease the influence of systematic errors, such as a residual risk of misplacement of the sample and spectral dilution of marker bands (especially in cancerous tissue), and thereby to optimize the experimental design. Furthermore, other validation methods, namely leave-one-sample-out and leave-one-spectrum-out cross-validation schemes, were compared with leave-one-patient-out cross-validation. High-risk lesions were differentiated from low-risk lesions with a sensitivity of 79%, specificity of 74% and an accuracy of 77%; cancer and normal tissue with a sensitivity of 79%, specificity of 83% and an accuracy of 81%. The additionally applied outlier identification enabled us to improve the recognition of neoplastic biopsy samples.
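The leave-one-patient-out scheme favored in this study can be sketched as a splitting utility; patient IDs, spectra, and labels below are placeholders.

```python
# Leave-one-patient-out cross-validation: all spectra from one patient are
# held out together, preventing leakage of within-patient correlations that
# can inflate accuracy in leave-one-spectrum-out schemes.
from collections import defaultdict

def leave_one_patient_out(samples):
    """samples: list of (patient_id, spectrum, label) tuples.
    Yields (train, test) splits, one per patient."""
    by_patient = defaultdict(list)
    for s in samples:
        by_patient[s[0]].append(s)
    for held_out in by_patient:
        test = by_patient[held_out]
        train = [s for pid, group in by_patient.items() if pid != held_out
                 for s in group]
        yield train, test

samples = [("p1", [0.1, 0.2], "normal"), ("p1", [0.2, 0.1], "normal"),
           ("p2", [0.9, 0.8], "adenoma"), ("p2", [0.8, 0.7], "adenoma"),
           ("p3", [0.5, 0.4], "normal")]
splits = list(leave_one_patient_out(samples))
```

Each split keeps the held-out patient's spectra entirely out of training, which is the property that distinguishes this scheme from the per-sample and per-spectrum alternatives compared in the abstract.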
Validating silicon polytrodes with paired juxtacellular recordings: method and dataset.
Neto, Joana P; Lopes, Gonçalo; Frazão, João; Nogueira, Joana; Lacerda, Pedro; Baião, Pedro; Aarts, Arno; Andrei, Alexandru; Musa, Silke; Fortunato, Elvira; Barquinha, Pedro; Kampff, Adam R
2016-08-01
Cross-validating new methods for recording neural activity is necessary to accurately interpret and compare the signals they measure. Here we describe a procedure for precisely aligning two probes for in vivo "paired-recordings" such that the spiking activity of a single neuron is monitored with both a dense extracellular silicon polytrode and a juxtacellular micropipette. Our new method allows for efficient, reliable, and automated guidance of both probes to the same neural structure with micrometer resolution. We also describe a new dataset of paired-recordings, which is available online. We propose that our novel targeting system, and ever expanding cross-validation dataset, will be vital to the development of new algorithms for automatically detecting/sorting single-units, characterizing new electrode materials/designs, and resolving nagging questions regarding the origin and nature of extracellular neural signals. Copyright © 2016 the American Physiological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mbah, Chamberlain, E-mail: chamberlain.mbah@ugent.be; Department of Mathematical Modeling, Statistics, and Bioinformatics, Faculty of Bioscience Engineering, Ghent University, Ghent; Thierens, Hubert
Purpose: To identify the main causes underlying the failure of prediction models for radiation therapy toxicity to replicate. Methods and Materials: Data were used from two German cohorts, Individual Radiation Sensitivity (ISE) (n=418) and Mammary Carcinoma Risk Factor Investigation (MARIE) (n=409), of breast cancer patients with similar characteristics and radiation therapy treatments. The toxicity endpoint chosen was telangiectasia. The LASSO (least absolute shrinkage and selection operator) logistic regression method was used to build a predictive model for a dichotomized endpoint (Radiation Therapy Oncology Group/European Organization for the Research and Treatment of Cancer score 0, 1, or ≥2). Internal areas under the receiver operating characteristic curve (inAUCs) were calculated by a naïve approach whereby the training data (ISE) were also used for calculating the AUC. Cross-validation was also applied to calculate the AUC within the same cohort, a second type of inAUC. Internal AUCs from cross-validation were calculated within ISE and MARIE separately. Models trained on one dataset (ISE) were applied to a test dataset (MARIE) and AUCs calculated (exAUCs). Results: Internal AUCs from the naïve approach were generally larger than inAUCs from cross-validation owing to overfitting the training data. Internal AUCs from cross-validation were also generally larger than the exAUCs, reflecting heterogeneity in the predictors between cohorts. The best models with largest inAUCs from cross-validation within both cohorts had a number of common predictors: hypertension, normalized total boost, and presence of estrogen receptors. Surprisingly, the effect (coefficient in the prediction model) of hypertension on telangiectasia incidence was positive in ISE and negative in MARIE. Other predictors were also not common between the 2 cohorts, illustrating that overcoming overfitting does not solve the problem of replication failure of prediction models completely.
Conclusions: Overfitting and cohort heterogeneity are the 2 main causes of replication failure of prediction models across cohorts. Cross-validation and similar techniques (eg, bootstrapping) cope with overfitting, but the development of validated predictive models for radiation therapy toxicity requires strategies that deal with cohort heterogeneity.
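The gap between the naïve inAUC and the cross-validated inAUC described above can be reproduced on synthetic data. This is a hedged sketch, not the study's model: a LASSO-penalized logistic regression is fit to a mock high-dimensional cohort, and the AUC is computed once on the training data (the naïve approach) and once from held-out cross-validation predictions.

```python
# Naive (resubstitution) AUC vs cross-validated AUC for an L1-penalized
# logistic regression on a synthetic, illustrative cohort.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=1)
lasso_lr = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

# Naive inAUC: score the same data that was used for fitting (optimistic).
naive_auc = roc_auc_score(y, lasso_lr.fit(X, y).decision_function(X))

# Cross-validated inAUC: each prediction comes from a fold that excluded it.
held_out = cross_val_predict(lasso_lr, X, y, cv=5,
                             method="decision_function")
cv_auc = roc_auc_score(y, held_out)
print(naive_auc, cv_auc)
```

The cohort-heterogeneity part of the abstract's argument (inAUC versus exAUC) would additionally require a second, differently distributed dataset to score the fitted model on.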
Heinig, Katja; Miya, Kazuhiro; Kamei, Tomonori; Guerini, Elena; Fraier, Daniela; Yu, Li; Bansal, Surendra; Morcos, Peter N
2016-07-01
Alectinib is a novel anaplastic lymphoma kinase (ALK) inhibitor for treatment of patients with ALK-positive non-small-cell lung cancer who have progressed on or are intolerant to crizotinib. To support clinical development, concentrations of alectinib and metabolite M4 were determined in plasma from patients and healthy subjects. LC-MS/MS methods were developed and validated in two different laboratories: Chugai used separate assays for alectinib and M4 in a pivotal Phase I/II study while Roche established a simultaneous assay for both analytes for another pivotal study and all other studies. Cross-validation assessment revealed a bias between the two bioanalytical laboratories, which was confirmed with the clinical PK data between both pivotal studies using the different bioanalytical methods.
Large scale study of multiple-molecule queries
2009-01-01
Background In ligand-based screening, as well as in other chemoinformatics applications, one seeks to search large repositories of molecules effectively in order to retrieve molecules similar to a query, typically a single lead molecule. In some cases, however, multiple molecules from the same family are available to seed the query and search for other members of that family. Multiple-molecule query methods have been studied less than single-molecule query methods, and previous studies have relied on proprietary data and sometimes have not used proper cross-validation methods to assess the results. In contrast, here we develop and compare multiple-molecule query methods using several large, publicly available data sets and backgrounds. We also create a framework based on a strict cross-validation protocol to allow unbiased benchmarking for direct comparison in future studies across several performance metrics. Results Fourteen different multiple-molecule query methods were defined and benchmarked using: (1) 41 publicly available data sets of related molecules with similar biological activity; and (2) publicly available background data sets consisting of up to 175,000 molecules randomly extracted from the ChemDB database and other sources. Eight of the fourteen methods were parameter free, and six of them fit one or two free parameters to the data using a careful cross-validation protocol. All the methods were assessed and compared for their ability to retrieve members of the same family against the background data set using several performance metrics, including the Area Under the Accumulation Curve (AUAC), Area Under the Curve (AUC), F1-measure, and BEDROC metrics. Consistent with the previous literature, the best parameter-free methods are the MAX-SIM and MIN-RANK methods, which score a molecule against a family by the maximum similarity, or minimum ranking, obtained across the family.
One new parameterized method introduced in this study and two previously defined methods, the Exponential Tanimoto Discriminant (ETD), the Tanimoto Power Discriminant (TPD), and the Binary Kernel Discriminant (BKD), outperform most other methods but are more complex, requiring one or two parameters to be fit to the data. Conclusion Fourteen methods for multiple-molecule querying of chemical databases, including the novel methods ETD and TPD, are validated using publicly available data sets, standard cross-validation protocols, and established metrics. The best results are obtained with ETD, TPD, BKD, MAX-SIM, and MIN-RANK. These results can be replicated and compared with the results of future studies using data freely downloadable from http://cdb.ics.uci.edu/. PMID:20298525
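The parameter-free MAX-SIM rule named above can be sketched in a few lines: a database molecule's score against a query family is its maximum Tanimoto similarity to any family member. The fingerprints below are random bit vectors, purely illustrative stand-ins for real chemical fingerprints.

```python
# MAX-SIM multiple-molecule querying with Tanimoto similarity on mock
# binary fingerprints; ranking the database by score retrieves candidates.
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity of two binary fingerprint vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def max_sim(candidate, family):
    """MAX-SIM score: best similarity to any molecule in the query family."""
    return max(tanimoto(candidate, member) for member in family)

rng = np.random.default_rng(2)
family = rng.integers(0, 2, size=(5, 64))      # 5 query fingerprints
database = rng.integers(0, 2, size=(100, 64))  # 100 background fingerprints
scores = [max_sim(mol, family) for mol in database]
ranked = np.argsort(scores)[::-1]              # best candidates first
print(scores[ranked[0]])
```

The MIN-RANK variant would instead rank the database separately against each family member and keep each molecule's best (minimum) rank.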
[Selection of risk and diagnosis in diabetic polyneuropathy. Validation of method of new systems].
Jurado, Jerónimo; Caula, Jacinto; Pou i Torelló, Josep Maria
2006-06-30
In a previous study we developed a specific algorithm, the polyneuropathy selection method (PSM) with 4 parameters (age, HDL-C, HbA1c, and retinopathy), to select patients at risk of diabetic polyneuropathy (DPN). We also developed a simplified method for DPN diagnosis: outpatient polyneuropathy diagnosis (OPD), with 4 variables (symptoms and 3 objective tests). To confirm the validity of conventional tests for DPN diagnosis; to validate the discriminatory power of the PSM and the diagnostic value of OPD by evaluating their relationship to electrodiagnosis studies and objective clinical neurological assessment; and to evaluate the correlation of DPN and pro-inflammatory status. Cross-sectional, crossed association for PSM validation. Paired samples for OPD validation. Primary care in 3 counties. Random sample of 75 subjects from the type-2 diabetes census for PSM evaluation. Thirty DPN patients and 30 non-DPN patients (from 2 DM2 sub-groups in our earlier study) for OPD evaluation. The gold standard for DPN diagnosis will be studied by means of a clinical neurological study (symptoms, physical examination, and sensitivity tests) and electrodiagnosis studies (sensitivity and motor EMG). Risks of neuropathy, macroangiopathy and pro-inflammatory status (PCR, TNF soluble fraction and total TGF-beta1) will be studied in every subject. Electrodiagnosis studies should confirm the validity of conventional tests for DPN diagnosis. PSM and OPD will be valid methods for selecting patients at risk and diagnosing DPN. There will be a significant relationship between DPN and pro-inflammatory tests.
Xu, Rengyi; Mesaros, Clementina; Weng, Liwei; Snyder, Nathaniel W; Vachani, Anil; Blair, Ian A; Hwang, Wei-Ting
2017-07-01
We compared three statistical methods for selecting a panel of serum lipid biomarkers for mesothelioma and asbestos exposure. Serum samples from mesothelioma patients, asbestos-exposed subjects, and controls (40 per group) were analyzed. Three variable selection methods were considered: top-ranked predictors from a univariate model, stepwise selection, and the least absolute shrinkage and selection operator. The cross-validated area under the receiver operating characteristic curve was used to compare prediction performance, and lipids with a high cross-validated area under the curve were identified. The lipid with a mass-to-charge ratio of 372.31 was selected by all three methods when comparing mesothelioma versus control; lipids with mass-to-charge ratios of 1464.80 and 329.21 were selected by two models for asbestos exposure versus control. The different methods selected a similar set of serum lipids. Combining candidate biomarkers can improve prediction.
Estimation of Sensory Analysis Cupping Test Arabica Coffee Using NIR Spectroscopy
NASA Astrophysics Data System (ADS)
Safrizal; Sutrisno; Lilik, P. E. N.; Ahmad, U.; Samsudin
2018-05-01
Flavor has become the most important coffee quality parameter nowadays; many coffee-consuming countries require certain taste scores for the coffee they order. The cupping method of appraisal currently used is the one designed by the Specialty Coffee Association of America (SCAA). Several previous studies found that near-infrared spectroscopy (NIRS) can detect the chemical composition of certain materials, including constituents associated with flavor, so it should also be applicable to coffee powder. The aim of this research is to establish a correlation between the NIRS spectrum and the cupping scores assigned by a tester, and thereby to assess the possibility of sensory testing of coffee taste using the NIRS spectrum. The coffee samples were taken from various places, altitudes and postharvest handling methods, and were prepared following the SCAA protocol; sensory analysis was done in two ways, with an expert tester and with the NIRS test. Calibration between the two showed that partial least squares (PLS) without pretreatment gave an RMSE of cross-validation of 6.14, Multiplicative Scatter Correction of the spectra gave an RMSE of cross-validation of 5.43, and the best RMSE of cross-validation, 1.73, was achieved by de-trending correction. NIRS can thus be used to predict the cupping score.
Mazzotti, M; Bartoli, I; Castellazzi, G; Marzani, A
2014-09-01
The paper aims at validating a recently proposed Semi Analytical Finite Element (SAFE) formulation coupled with a 2.5D Boundary Element Method (2.5D BEM) for the extraction of dispersion data in immersed waveguides of generic cross-section. To this end, three-dimensional vibroacoustic analyses are carried out on two waveguides of square and rectangular cross-section immersed in water using the commercial Finite Element software Abaqus/Explicit. Real wavenumber and attenuation dispersive data are extracted by means of a modified Matrix Pencil Method. It is demonstrated that the results obtained using the two techniques are in very good agreement. Copyright © 2014 Elsevier B.V. All rights reserved.
[Traceability of Wine Varieties Using Near Infrared Spectroscopy Combined with Cyclic Voltammetry].
Li, Meng-hua; Li, Jing-ming; Li, Jun-hui; Zhang, Lu-da; Zhao, Long-lian
2015-06-01
To achieve traceability of wine varieties, a method is proposed that fuses near-infrared (NIR) spectra and cyclic voltammograms (CV), which contain different information, using D-S evidence theory. NIR spectra and CV curves of three varieties of wine (cabernet sauvignon, merlot, cabernet gernischt) from seven different geographical origins were collected separately. Discriminant models were built using the PLS-DA method, and on this basis D-S evidence theory was applied to integrate the two kinds of discrimination results. After integration by D-S evidence theory, the accuracy rate for wine variety identification is 95.69% in cross-validation and 94.12% on the validation set. When only wines from Yantai are considered, the accuracy rate is 99.46% in cross-validation and 100% on the validation set. All the traceability models after fusion achieved better classification results than either individual method. These results suggest that the proposed method, which combines electrochemical information with spectral information using the D-S evidence combination formula, improves model discrimination and is a promising tool for discriminating different kinds of wines.
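A minimal sketch of the evidence-fusion step can be written with Dempster's rule of combination, assuming each classifier's class probabilities are treated as a mass function over singleton hypotheses (the three varieties). Compound hypotheses and the paper's actual PLS-DA outputs are not modeled; the numbers are illustrative.

```python
# Dempster's rule for two sources whose mass sits on singleton hypotheses:
# multiply agreeing masses, discard conflicting mass, renormalize.
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass functions defined on singleton hypotheses only."""
    joint = np.outer(m1, m2)
    agreement = np.trace(joint)   # mass where both sources name the same class
    if agreement == 0.0:          # total conflict: combination undefined
        raise ValueError("total conflict between sources")
    return np.diag(joint) / agreement

nir = np.array([0.6, 0.3, 0.1])   # mock P(variety | NIR spectrum)
volt = np.array([0.5, 0.2, 0.3])  # mock P(variety | voltammogram)
fused = dempster_combine(nir, volt)
print(fused, fused.argmax())
```

Because both sources lean toward the first variety, the fused mass on it exceeds either source's individual belief, which is the reinforcement effect the abstract attributes to fusion.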
An Evaluation of the Validity and Reliability of a Food Behavior Checklist Modified for Children
ERIC Educational Resources Information Center
Branscum, Paul; Sharma, Manoj; Kaye, Gail; Succop, Paul
2010-01-01
Objective: The objective of this study was to report the construct validity and internal consistency reliability of the Food Behavior Checklist modified for children (FBC-MC), with low-income, Youth Expanded Food and Nutrition Education Program (EFNEP)-eligible children. Methods: Using a cross-sectional research design, construct validity was…
Pat, Lucio; Ali, Bassam; Guerrero, Armando; Córdova, Atl V.; Garduza, José P.
2016-01-01
Attenuated total reflectance-Fourier transform infrared spectrometry combined with chemometric modeling was used for the determination of physicochemical properties (pH, redox potential, free acidity, electrical conductivity, moisture, total soluble solids (TSS), ash, and HMF) in honey samples. The reference values of 189 honey samples of different botanical origin were determined using Association of Official Analytical Chemists (AOAC, 1990), Codex Alimentarius (2001), and International Honey Commission (2002) methods. Multivariate calibration models were built using partial least squares (PLS) for the measurands studied. The developed models were validated using cross-validation and external validation; several statistical parameters were obtained to determine the robustness of the calibration models: the optimum number of principal components (PCs), the standard error of cross-validation (SECV), the coefficient of determination of cross-validation (R²cal), the standard error of validation (SEP), the coefficient of determination for external validation (R²val), and the coefficient of variation (CV). The prediction accuracy for pH, redox potential, electrical conductivity, moisture, TSS, and ash was good, while for free acidity and HMF it was poor. The results demonstrate that attenuated total reflectance-Fourier transform infrared spectrometry is a valuable, rapid, and nondestructive tool for the quantification of physicochemical properties of honey. PMID:28070445
A Cross-Cultural Comparison of Belgian and Vietnamese Children's Social Competence and Behavior
ERIC Educational Resources Information Center
Roskam, Isabelle; Hoang, Thi Vân; Schelstraete, Marie-Anne
2017-01-01
Children's social competence and behavioral adjustment are key issues for child development, education, and clinical research. Cross-cultural analyses are necessary to provide relevant methods of assessing them for cross-cultural research. The aim of the current study was to contribute to this important line of research by validating the 3-factor…
How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.
Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J
2014-09-01
Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
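Two of the bandwidth selectors compared above can be contrasted directly on a bimodal sample. Note one substitution: the cross-validated selector below maximizes held-out log-likelihood over a bandwidth grid, a stand-in for the least squares cross-validation variant, which scikit-learn does not provide out of the box.

```python
# Silverman's rule of thumb vs a cross-validated bandwidth for Gaussian KDE
# on a synthetic bimodal sample (one of the density shapes studied above).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])
X = x[:, None]                                 # sklearn expects 2-D input

# Silverman's rule of thumb: 0.9 * min(sd, IQR/1.349) * n^(-1/5).
iqr = np.subtract(*np.percentile(x, [75, 25]))
h_silverman = 0.9 * min(x.std(ddof=1), iqr / 1.349) * len(x) ** (-1 / 5)

# Likelihood cross-validation over a bandwidth grid.
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.linspace(0.05, 2.0, 40)}, cv=5)
h_cv = grid.fit(X).best_params_["bandwidth"]
print(h_silverman, h_cv)
```

On strongly bimodal data the rule-of-thumb bandwidth tends to oversmooth, which is why the article recommends different selectors for different sample sizes and shapes.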
Cross-cultural adaption and validation of the Persian version of the SWAL-QOL.
Tarameshlu, Maryam; Azimi, Amir Reza; Jalaie, Shohreh; Ghelichi, Leila; Ansari, Noureddin Nakhostin
2017-06-01
The aim of this study was to translate and cross-culturally adapt the swallowing quality-of-life questionnaire (SWAL-QOL) to the Persian language and to determine the validity and reliability of the Persian version (PSWAL-QOL) in patients with oropharyngeal dysphagia. A cross-sectional survey was designed to translate and cross-culturally adapt the SWAL-QOL to Persian following the steps recommended in guidelines. A total of 142 patients with dysphagia (mean age = 56.7 ± 12.22 years) were selected by a non-probability consecutive sampling method to evaluate construct validity and internal consistency. Thirty patients with dysphagia completed the PSWAL-QOL again 2 weeks later for test-retest reliability. The PSWAL-QOL was favorably accepted, with no missing items. The floor effect ranged from 0% to 21% and the ceiling effect from 0% to 16%. Construct validity was established via exploratory factor analysis. Internal consistency was confirmed with Cronbach α >0.7 for all scales except eating duration (α = 0.68). Test-retest reliability was excellent, with an intraclass correlation coefficient (ICC) ≥0.75 for all scales. The SWAL-QOL was cross-culturally adapted to Persian and demonstrated to be a valid and reliable self-report questionnaire to measure the impact of dysphagia on quality-of-life in Persian patients with oropharyngeal dysphagia.
Assessing genomic selection prediction accuracy in a dynamic barley breeding
USDA-ARS?s Scientific Manuscript database
Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...
Li, Haiquan; Dai, Xinbin; Zhao, Xuechun
2008-05-01
Membrane transport proteins play a crucial role in the import and export of ions, small molecules or macromolecules across biological membranes. Currently, there are a limited number of published computational tools which enable the systematic discovery and categorization of transporters prior to costly experimental validation. To approach this problem, we utilized a nearest neighbor method which seamlessly integrates homologous search and topological analysis into a machine-learning framework. Our approach satisfactorily distinguished 484 transporter families in the Transporter Classification Database, a curated and representative database for transporters. A five-fold cross-validation on the database achieved a positive classification rate of 72.3% on average. Furthermore, this method successfully detected transporters in seven model and four non-model organisms, ranging from archaean to mammalian species. A preliminary literature-based validation has cross-validated 65.8% of our predictions on the 11 organisms, including 55.9% of our predictions overlapping with 83.6% of the predicted transporters in TransportDB.
Mapping the Diagnosis Axis of an Interface Terminology to the NANDA International Taxonomy
Juvé Udina, Maria-Eulàlia; Gonzalez Samartino, Maribel; Matud Calvo, Cristina
2012-01-01
Background. Nursing terminologies are designed to support nursing practice but, as with any other clinical tool, they should be evaluated. Cross-mapping is a formal method for examining the validity of the existing controlled vocabularies. Objectives. The study aims to assess the inclusiveness and expressiveness of the nursing diagnosis axis of a newly implemented interface terminology by cross-mapping with the NANDA-I taxonomy. Design/Methods. The study applied a descriptive design, using a cross-sectional, bidirectional mapping strategy. The sample included 728 concepts from both vocabularies. Concept cross-mapping was carried out to identify one-to-one, negative, and hierarchical connections. The analysis was conducted using descriptive statistics. Results. Agreement of the raters' mapping achieved 97%. More than 60% of the nursing diagnosis concepts in the NANDA-I taxonomy were mapped to concepts in the diagnosis axis of the new interface terminology; 71.1% were reversely mapped. Conclusions. Main results for outcome measures suggest that the diagnosis axis of this interface terminology meets the validity criterion of cross-mapping when mapped from and to the NANDA-I taxonomy. PMID:22830046
Cross-cultural equivalence in translations of the oral health impact profile.
MacEntee, Michael I; Brondani, Mario
2016-04-01
The Oral Health Impact Profile (OHIP) has been translated for comparisons across cultural boundaries. This report on a systematic search of literature published between 1994 and 2014 aims to identify an acceptable method of translating psychometric instruments for cross-cultural equivalence, and how they were used to translate the OHIP. An electronic search used the keywords 'cultural adaptation', 'validation', 'Oral Health Impact Profile' and 'OHIP' in MEDLINE and EMBASE databases supplemented by reference links and grey literature. It included papers on methods of cross-cultural translation and translations of the OHIP for dentulous adults and adolescents, and excluded papers without translational details or limited to specific disorders. The search identified eight steps to cross-cultural equivalence, and 36 (plus three supplemental) translations of the OHIP. The steps involve assessment of (i) forward/backward translation by committee, (ii) constructs, (iii) item interpretations, (iv) interval scales, (v) convergent validity, (vi) discriminant validity, (vii) responsiveness to clinical change and (viii) pilot tests. Most (>60%) of the translations involved forward/backward translation by committee, item interpretations, interval scales, convergence, discrimination and pilot tests, but fewer assessed the underlying theory (47%) or responsiveness to clinical change (28%). An acceptable method for translating quality of life-related psychometric instruments for cross-cultural equivalence has eight procedural steps, and most of the 36 OHIP translations involved at least five of the steps. Only translations to Saudi Arabian Arabic, Chinese Mandarin, German and Japanese used all eight steps to claim cultural equivalence with the original OHIP. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
The Bland-Altman Method Should Not Be Used in Regression Cross-Validation Studies
ERIC Educational Resources Information Center
O'Connor, Daniel P.; Mahar, Matthew T.; Laughlin, Mitzi S.; Jackson, Andrew S.
2011-01-01
The purpose of this study was to demonstrate the bias in the Bland-Altman (BA) limits of agreement method when it is used to validate regression models. Data from 1,158 men were used to develop three regression equations to estimate maximum oxygen uptake (R[superscript 2] = 0.40, 0.61, and 0.82, respectively). The equations were evaluated in a…
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Hegazy, Maha A.; Mowaka, Shereen; Mohamed, Ekram Hany
2016-01-01
A comparative study of smart spectrophotometric techniques for the simultaneous determination of Omeprazole (OMP), Tinidazole (TIN) and Doxycycline (DOX) without prior separation steps is developed. These techniques consist of several consecutive steps utilizing zero-order, ratio, or derivative spectra. The proposed techniques adopt nine simple methods, namely direct spectrophotometry, dual wavelength, first derivative-zero crossing, amplitude factor, spectrum subtraction, ratio subtraction, derivative ratio-zero crossing, constant center, and successive derivative ratio. The calibration graphs are linear over the concentration ranges of 1-20 μg/mL, 5-40 μg/mL and 2-30 μg/mL for OMP, TIN and DOX, respectively. These methods were tested by analyzing synthetic mixtures of the above drugs and successfully applied to a commercial pharmaceutical preparation. The methods were validated according to the ICH guidelines; accuracy, precision, and repeatability were found to be within the acceptable limits.
Wang, Wenyi; Kim, Marlene T.; Sedykh, Alexander
2015-01-01
Purpose Experimental Blood–Brain Barrier (BBB) permeability models for drug molecules are expensive and time-consuming. As alternative methods, several traditional Quantitative Structure-Activity Relationship (QSAR) models have been developed previously. In this study, we aimed to improve the predictivity of traditional QSAR BBB permeability models by employing relevant public bio-assay data in the modeling process. Methods We compiled a BBB permeability database consisting of 439 unique compounds from various resources. The database was split into a modeling set of 341 compounds and a validation set of 98 compounds. Consensus QSAR modeling workflow was employed on the modeling set to develop various QSAR models. A five-fold cross-validation approach was used to validate the developed models, and the resulting models were used to predict the external validation set compounds. Furthermore, we used previously published membrane transporter models to generate relevant transporter profiles for target compounds. The transporter profiles were used as additional biological descriptors to develop hybrid QSAR BBB models. Results The consensus QSAR models have R2=0.638 for fivefold cross-validation and R2=0.504 for external validation. The consensus model developed by pooling chemical and transporter descriptors showed better predictivity (R2=0.646 for five-fold cross-validation and R2=0.526 for external validation). Moreover, several external bio-assays that correlate with BBB permeability were identified using our automatic profiling tool. Conclusions The BBB permeability models developed in this study can be useful for early evaluation of new compounds (e.g., new drug candidates). The combination of chemical and biological descriptors shows a promising direction to improve the current traditional QSAR models. PMID:25862462
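The pooling of chemical and transporter-profile descriptors described above amounts to column-wise concatenation before cross-validation. The sketch below uses random stand-in descriptor blocks and a random forest rather than the paper's consensus QSAR workflow; all names and arrays are illustrative assumptions.

```python
# Hybrid QSAR sketch: concatenate chemical descriptors with (mock)
# transporter-profile descriptors, then score by five-fold CV R^2.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 150
chem = rng.normal(size=(n, 40))           # mock chemical descriptors
transporters = rng.normal(size=(n, 6))    # mock transporter profiles
logBB = chem[:, 0] + 0.5 * transporters[:, 0] + rng.normal(scale=0.3, size=n)

hybrid = np.hstack([chem, transporters])  # pooled descriptor block
r2 = cross_val_score(RandomForestRegressor(n_estimators=100, random_state=0),
                     hybrid, logBB, cv=5, scoring="r2").mean()
print(round(r2, 3))
```

Comparing this score with a model trained on `chem` alone mirrors the abstract's comparison of traditional versus hybrid descriptor sets.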
Joint multifractal analysis based on wavelet leaders
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Yang, Yan-Hong; Wang, Gang-Jin; Zhou, Wei-Xing
2017-12-01
Mutually interacting components form complex systems and these components usually have long-range cross-correlated outputs. Using wavelet leaders, we propose a method for characterizing the joint multifractal nature of these long-range cross correlations; we call this method joint multifractal analysis based on wavelet leaders (MF-X-WL). We test the validity of the MF-X-WL method by performing extensive numerical experiments on dual binomial measures with multifractal cross correlations and bivariate fractional Brownian motions (bFBMs) with monofractal cross correlations. Both experiments indicate that MF-X-WL is capable of detecting cross correlations in synthetic data with acceptable estimating errors. We also apply the MF-X-WL method to pairs of series from financial markets (returns and volatilities) and online worlds (online numbers of different genders and different societies) and determine intriguing joint multifractal behavior.
Classification based upon gene expression data: bias and precision of error rates.
Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L
2007-06-01
Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
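The two-level (nested) external cross-validation recommended above can be sketched as follows. This is a minimal pure-Python illustration on synthetic data with a toy shrunken-centroid-style classifier; the dataset, classifier, and `shrink` grid are hypothetical stand-ins, not the PAMR setup used in the paper. The inner folds choose the tuning parameter; the outer folds estimate error without reusing the data that drove the selection:

```python
import random
import statistics

def k_fold_indices(n, k, seed=0):
    """Shuffle 0..n-1 and split into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def nearest_mean_error(train, test, shrink):
    """Toy classifier: class means shrunk toward 0 by `shrink`; return test error."""
    means = {}
    for label in (0, 1):
        xs = [x for x, y in train if y == label]
        means[label] = (statistics.mean(xs) if xs else 0.0) * (1.0 - shrink)
    wrong = sum(1 for x, y in test
                if min((0, 1), key=lambda c: abs(x - means[c])) != y)
    return wrong / len(test)

def nested_cv_error(data, shrink_grid, outer_k=5, inner_k=3):
    """Inner folds pick `shrink`; outer folds estimate error without reuse."""
    folds = k_fold_indices(len(data), outer_k)
    outer_errors = []
    for i, test_idx in enumerate(folds):
        train = [data[j] for f in folds[:i] + folds[i + 1:] for j in f]
        test = [data[j] for j in test_idx]
        inner_folds = k_fold_indices(len(train), inner_k, seed=1)

        def inner_err(s):
            errs = []
            for t, held in enumerate(inner_folds):
                tr = [train[j] for f in inner_folds[:t] + inner_folds[t + 1:] for j in f]
                te = [train[j] for j in held]
                errs.append(nearest_mean_error(tr, te, s))
            return statistics.mean(errs)

        best = min(shrink_grid, key=inner_err)   # tuned on the training part only
        outer_errors.append(nearest_mean_error(train, test, best))
    return statistics.mean(outer_errors)

# synthetic two-class data: class 0 centered at -1, class 1 at +1
rng = random.Random(42)
data = [(rng.gauss(-1, 1), 0) for _ in range(60)] + \
       [(rng.gauss(+1, 1), 1) for _ in range(60)]
err = nested_cv_error(data, shrink_grid=[0.0, 0.2, 0.5])
print(round(err, 3))
```

Because the tuning parameter is never chosen using the outer test fold, the resulting error estimate avoids the optimization bias the abstract describes.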
Optimal Combinations of Diagnostic Tests Based on AUC.
Huang, Xin; Qin, Gengsheng; Fang, Yixin
2011-06-01
When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimation by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, these proposed methods can be applied for variable selection to select important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples. © 2010, The International Biometric Society.
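The apparent (re-substitution) AUC of a fitted linear combination, which the abstract notes is optimistic, can be illustrated with a simple grid search over one coefficient. The two toy markers, the grid, and anchoring the first weight at 1 are assumptions of this sketch, not the paper's nonparametric procedure or its bias corrections:

```python
import itertools
import random

def empirical_auc(pos, neg):
    """Mann-Whitney estimate of AUC: P(pos score > neg score), ties count 1/2."""
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p, q in itertools.product(pos, neg))
    return wins / (len(pos) * len(neg))

def combo_auc(w, pos_xy, neg_xy):
    """AUC of the linear combination score = x + w*y (first weight anchored at 1)."""
    return empirical_auc([x + w * y for x, y in pos_xy],
                         [x + w * y for x, y in neg_xy])

rng = random.Random(7)
# two toy markers; the diseased group is shifted upward on both
pos_xy = [(rng.gauss(1.0, 1), rng.gauss(0.8, 1)) for _ in range(80)]
neg_xy = [(rng.gauss(0.0, 1), rng.gauss(0.0, 1)) for _ in range(80)]
grid = [i / 10 for i in range(-20, 21)]  # grid over the single free coefficient
best_w = max(grid, key=lambda w: combo_auc(w, pos_xy, neg_xy))
print(best_w, round(combo_auc(best_w, pos_xy, neg_xy), 3))
```

The maximized AUC here is computed on the same data used to pick the coefficient, which is exactly the upward bias the paper corrects via cross-validation.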
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
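The GCV criterion itself can be shown on a deliberately tiny problem: univariate ridge regression, where the trace of the influence matrix is available in closed form. The Lanczos/Gauss-quadrature machinery in the paper exists to approximate that trace for large imaging problems; this scalar sketch, with a hypothetical grid of regularization values, only illustrates the quantity being minimized:

```python
import random

def gcv_score(x, y, lam):
    """GCV for univariate ridge y ~ beta*x: n*RSS(lam) / (n - trace(A))^2."""
    n = len(x)
    sxx = sum(xi * xi for xi in x)
    beta = sum(xi * yi for xi, yi in zip(x, y)) / (sxx + lam)
    rss = sum((yi - beta * xi) ** 2 for xi, yi in zip(x, y))
    trace_a = sxx / (sxx + lam)  # effective degrees of freedom of the fit
    return n * rss / (n - trace_a) ** 2

rng = random.Random(3)
x = [rng.gauss(0, 1) for _ in range(200)]
y = [2.0 * xi + rng.gauss(0, 0.5) for xi in x]
grid = [10 ** (k / 2) for k in range(-6, 7)]  # candidate lambdas, 1e-3 .. 1e3
best_lam = min(grid, key=lambda lam: gcv_score(x, y, lam))
print(best_lam)
```

In real deblurring the influence matrix is enormous, which is why stochastic trace approximations are needed; the criterion, however, is the same ratio of residual to effective residual degrees of freedom.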
Knowledge discovery by accuracy maximization
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-01-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
Huang, Hui-Chuan; Shyu, Meei-Ling; Lin, Mei-Feng; Hu, Chaur-Jong; Chang, Chien-Hung; Lee, Hsin-Chien; Chi, Nai-Fang; Chang, Hsiu-Ju
2017-12-01
The objectives of this study were to develop a cross-cultural Chinese version of the Emotional and Social Dysfunction Questionnaire (ESDQ-C) and test its validity and reliability among Chinese-speaking stroke patients. Various methods were used to develop the ESDQ-C. A cross-sectional study was used to examine the validity and reliability of the developed questionnaire, which consists of 28 items belonging to six factors: anger, helplessness, emotional dyscontrol, indifference, inertia and fatigue, and euphoria. Satisfactory convergence and known-group validities were confirmed by significant correlations of the ESDQ-C with the Profile of Mood States-Short Form (p < .05) and with the Hospital Anxiety and Depression Scale (p < .05). The internal consistency was represented by Cronbach's alpha, which was .96 for the entire scale and .79 to .92 for the subscales. Appropriate application of the ESDQ-C will be helpful to identify critical adjustment-related types of distress and patients who experience difficulty coping with such distress.
Rosen, Allyson; Weitlauf, Julie C
2015-01-01
A screening measure of capacity to consent can provide an efficient method of determining the appropriateness of including individuals from vulnerable patient populations in research, particularly in circumstances in which no caregiver is available to provide surrogate consent. Seaman et al. (2015) cross-validate a measure of capacity to consent to research developed by Jeste et al. (2007). They provide data on controls, caregivers, and patients with mild cognitive impairment and dementia. The study demonstrates the importance of validating measures across disorders with different domains of incapacity, as well as the need for timely and appropriate follow-up with potential participants who yield positive screens. Ultimately, clinical measures need to adapt to the dimensional diagnostic approaches put forward in DSM-5. Integrative models of constructs, such as capacity to consent, will make this process more efficient by avoiding the need to test measures in each disorder. Until then, cross-validation studies, such as the work by Seaman et al. (2015), are critical.
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
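A generic simulated-annealing loop of the kind used above to optimize the expected cross entropy can be sketched as follows. The 1-D toy objective (a squared distance between a predicted and an observed mean, standing in for the cross-entropy utility) and the linear cooling schedule are illustrative assumptions, not the paper's formulation:

```python
import math
import random

def simulated_annealing(f, x0, step, iters=2000, t0=1.0, seed=0):
    """Minimize f over a 1-D input with a simple linear cooling schedule."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-9          # temperature decays toward 0
        cand = x + rng.gauss(0, step)
        fc = f(cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# toy utility: squared distance between a predicted and an observed mean,
# standing in for the expected cross entropy between the two distributions
target = 2.5
best, val = simulated_annealing(lambda u: (u - target) ** 2, x0=0.0, step=0.5)
print(round(best, 2))
```

In the paper the objective is the expected cross entropy between the model-prediction and experimental-output distributions, and the search is over experiment input variables rather than a single scalar.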
Predicting protein-binding regions in RNA using nucleotide profiles and compositions.
Choi, Daesik; Park, Byungkyu; Chae, Hanju; Lee, Wook; Han, Kyungsook
2017-03-14
Motivated by the increased amount of data on protein-RNA interactions and the availability of complete genome sequences of several organisms, many computational methods have been proposed to predict binding sites in protein-RNA interactions. However, most computational methods are limited to finding RNA-binding sites in proteins instead of protein-binding sites in RNAs. Predicting protein-binding sites in RNA is more challenging than predicting RNA-binding sites in proteins. Recent computational methods for finding protein-binding sites in RNAs have several drawbacks for practical use. We developed a new support vector machine (SVM) model for predicting protein-binding regions in mRNA sequences. The model uses sequence profiles constructed from log-odds scores of mono- and di-nucleotides and nucleotide compositions. The model was evaluated by standard 10-fold cross validation, leave-one-protein-out (LOPO) cross validation and independent testing. Since actual mRNA sequences have more non-binding regions than protein-binding regions, we tested the model on several datasets with different ratios of protein-binding regions to non-binding regions. The best performance of the model was obtained in a balanced dataset of positive and negative instances. 10-fold cross validation with a balanced dataset achieved a sensitivity of 91.6%, a specificity of 92.4%, an accuracy of 92.0%, a positive predictive value (PPV) of 91.7%, a negative predictive value (NPV) of 92.3% and a Matthews correlation coefficient (MCC) of 0.840. LOPO cross validation showed a lower performance than the 10-fold cross validation, but the performance remained high (87.6% accuracy and 0.752 MCC). In testing the model on independent datasets, it achieved an accuracy of 82.2% and an MCC of 0.656. Testing of our model and other state-of-the-art methods on the same dataset showed that our model is better than the others.
Sequence profiles of log-odds scores of mono- and di-nucleotides were much more powerful features than nucleotide compositions in finding protein-binding regions in RNA sequences. However, a slight performance gain was obtained when using the sequence profiles along with nucleotide compositions. These are preliminary results of ongoing research, but they demonstrate the potential of our approach as a powerful predictor of protein-binding regions in RNA. The program and supporting data are available at http://bclab.inha.ac.kr/RBPbinding .
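The cross-validation metrics reported above are all derived from a 2x2 confusion matrix. The sketch below computes them from illustrative counts chosen to be roughly consistent with the balanced-dataset figures (they are not the authors' actual confusion matrix, and independently rounded published values may differ slightly):

```python
import math

def confusion_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from a confusion matrix."""
    sens = tp / (tp + fn)                      # sensitivity (recall)
    spec = tn / (tn + fp)                      # specificity
    acc = (tp + tn) / (tp + fp + tn + fn)      # accuracy
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    mcc = (tp * tn - fp * fn) / math.sqrt(     # Matthews correlation coefficient
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"sens": sens, "spec": spec, "acc": acc,
            "ppv": ppv, "npv": npv, "mcc": mcc}

# hypothetical counts on a balanced 1000-instance dataset
m = confusion_metrics(tp=458, fp=38, tn=462, fn=42)
print({k: round(v, 3) for k, v in m.items()})
```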
Scattering Cross Section of Sound Waves by the Modal Element Method
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1994-01-01
The modal element method has been employed to determine the scattered field from a plane acoustic wave impinging on a two-dimensional body. In the modal element method, the scattering body is represented by finite elements, which are coupled to an eigenfunction expansion representing the acoustic pressure in the infinite computational domain surrounding the body. The present paper extends the previous work by developing the algorithm necessary to calculate the acoustic scattering cross section by the modal element method. The scattering cross section is the acoustical equivalent of the Radar Cross Section (RCS) in electromagnetic theory. Since the scattering cross section is evaluated at infinite distance from the body, an asymptotic approximation is used in conjunction with the standard modal element method. For validation, the scattering cross section of a rigid circular cylinder is computed for the frequency range 0.1 ≤ ka ≤ 100. Results show excellent agreement with the analytic solution.
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, the Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and the parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and the observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation, and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
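The information-criterion ranking used above for model discrimination can be illustrated with the corrected Akaike information criterion (AICc) applied to two toy least-squares models. The synthetic data and the convention of counting only mean-structure parameters in k are assumptions of this sketch; some conventions also count the error variance, which shifts both scores equally and leaves the ranking unchanged:

```python
import math
import random

def aicc(rss, n, k):
    """Corrected Akaike information criterion for a least-squares fit with k parameters."""
    return n * math.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

rng = random.Random(1)
x = [i / 10 for i in range(50)]
y = [0.5 * xi + rng.gauss(0, 0.3) for xi in x]  # truth: a line through the origin
n = len(x)

# model 1: constant mean only (k = 1)
mean_y = sum(y) / n
rss1 = sum((yi - mean_y) ** 2 for yi in y)

# model 2: straight line fit by least squares (k = 2)
mx = sum(x) / n
sxx = sum((xi - mx) ** 2 for xi in x)
b = sum((xi - mx) * (yi - mean_y) for xi, yi in zip(x, y)) / sxx
a = mean_y - b * mx
rss2 = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

print(aicc(rss1, n, 1), aicc(rss2, n, 2))  # lower is better
```

The criterion trades residual fit against parameter count, which is what lets it approximate the cross-validation ranking at a fraction of the computational cost.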
Selection of regularization parameter in total variation image restoration.
Liao, Haiyong; Li, Fang; Ng, Michael K
2009-11-01
We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic regularization parameter selection scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 x 256 in approximately 20 s in the MATLAB computing environment.
NASA Technical Reports Server (NTRS)
Simanonok, K.; Mosely, E.; Charles, J.
1992-01-01
Nine preflight variables related to fluid, electrolyte, and cardiovascular status from 64 first-time Shuttle crewmembers were differentially weighted by discriminant analysis to predict the incidence and severity of each crewmember's space sickness as rated by NASA flight surgeons. The nine variables are serum uric acid, red cell count, environmental temperature at the launch site, serum phosphate, urine osmolality, serum thyroxine, sitting systolic blood pressure, calculated blood volume, and serum chloride. Using two methods of cross-validation on the original samples (jackknife and a stratified random subsample), these variables enable the prediction of space sickness incidence (NONE or SICK) with 80 percent success, and of space sickness severity (NONE, MILD, MODERATE, or SEVERE) with 59 percent success by one method of cross-validation and 67 percent by the other. Addition of a tenth variable, hours spent in the Weightlessness Environment Training Facility (WETF), did not improve the prediction of space sickness incidence but did improve the prediction of space sickness severity to 66 percent success by the first method of cross-validation on the original samples and to 71 percent by the second method. Results to date suggest the presence of physiologic factors predisposing to space sickness that implicate a fluid-shift etiology. The data also suggest that prior exposure to fluid shifts during WETF training may produce some circulatory pre-adaptation to fluid shifts in weightlessness that results in a reduction of space sickness severity.
Translation, cross-cultural adaptation and validation of the Diabetes Empowerment Scale – Short Form
Chaves, Fernanda Figueredo; Reis, Ilka Afonso; Pagano, Adriana Silvina; Torres, Heloísa de Carvalho
2017-01-01
ABSTRACT OBJECTIVE To translate, cross-culturally adapt and validate the Diabetes Empowerment Scale – Short Form for assessment of psychosocial self-efficacy in diabetes care within the Brazilian cultural context. METHODS Assessment of the instrument’s conceptual equivalence, as well as its translation and cross-cultural adaptation, was performed following international standards. The Expert Committee’s assessment of the translated version was conducted through a web questionnaire developed and applied via the web tool e-Surv. The cross-culturally adapted version was used for the pre-test, which was carried out via phone call in a group of eleven health care service users diagnosed with type 2 diabetes mellitus. The pre-test results were examined by a group of experts, composed of health care consultants, applied linguists and statisticians, aiming at an adequate version of the instrument, which was subsequently used for test and retest in a sample of 100 users diagnosed with type 2 diabetes mellitus via phone call, their answers being recorded by the web tool e-Surv. Internal consistency and reproducibility analyses were carried out in the R statistical programming environment. RESULTS Face and content validity were attained and the Brazilian Portuguese version, entitled Escala de Autoeficácia em Diabetes – Versão Curta, was established. The scale had acceptable internal consistency, with a Cronbach’s alpha of 0.634 (95%CI 0.494–0.737), while the correlation of the total score in the two periods was considered moderate (0.47). The intraclass correlation coefficient was 0.50. CONCLUSIONS The version of the instrument translated and cross-culturally adapted into spoken Brazilian Portuguese was considered valid and reliable for assessment within the Brazilian population diagnosed with type 2 diabetes mellitus.
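Cronbach's alpha, the internal-consistency statistic reported above, is computed from the item variances and the variance of the total score. The sketch below uses hypothetical 5-point item scores for a 3-item scale, not the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from per-item score lists (one inner list per item)."""
    k = len(items)

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    n = len(items[0])
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# hypothetical 3-item scale answered by 6 respondents (not the study's data)
scores = [[3, 4, 4, 2, 5, 3],
          [3, 5, 4, 2, 4, 3],
          [2, 4, 5, 1, 5, 2]]
print(round(cronbach_alpha(scores), 3))
```

When items move together, the total-score variance dominates the summed item variances and alpha approaches 1; uncorrelated items drive it toward 0.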
The use of a web tool (e-Surv) for recording the Expert Committee responses as well as the responses in the validation tests proved to be a reliable, safe and innovative method. PMID:28355337
NASA Astrophysics Data System (ADS)
Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng
2018-02-01
A novel method, mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS) and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Qcv2) (0.9998) and coefficient of determination of the test set (Q2test) (0.9989), which demonstrated that mid-infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential for conducting a component spectral analysis.
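RMSECV, the criterion used above to compare wavelength selection methods, is simply the root mean squared error of cross-validated predictions. A leave-one-out sketch for a univariate linear calibration illustrates it; the concentrations, response slope, and noise level are made-up stand-ins, not the MIR data:

```python
import math
import random

def loo_rmsecv(x, y):
    """Leave-one-out RMSE of cross-validation for a univariate linear calibration."""
    n = len(x)
    sq = 0.0
    for i in range(n):
        xt = x[:i] + x[i + 1:]
        yt = y[:i] + y[i + 1:]
        mx = sum(xt) / (n - 1)
        my = sum(yt) / (n - 1)
        b = sum((a - mx) * (c - my) for a, c in zip(xt, yt)) / \
            sum((a - mx) ** 2 for a in xt)
        a0 = my - b * mx
        sq += (y[i] - (a0 + b * x[i])) ** 2  # error on the held-out sample
    return math.sqrt(sq / n)

rng = random.Random(5)
conc = [0.1 * i for i in range(1, 21)]                  # reference concentrations
signal = [1.8 * c + rng.gauss(0, 0.02) for c in conc]   # noisy instrument response
rmsecv = loo_rmsecv(signal, conc)  # predict concentration from signal
print(round(rmsecv, 4))
```

RMSEP is the same quantity computed on an independent prediction set rather than on held-out calibration samples.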
In-vitro Equilibrium Phosphate Binding Study of Sevelamer Carbonate by UV-Vis Spectrophotometry.
Prasaja, Budi; Syabani, M Maulana; Sari, Endah; Chilmi, Uci; Cahyaningsih, Prawitasari; Kosasih, Theresia Weliana
2018-06-12
Sevelamer carbonate is a cross-linked polymeric amine; it is the active ingredient in Renvela ® tablets. The US FDA provides a recommendation for demonstrating bioequivalence in the development of a generic product of sevelamer carbonate using an in-vitro equilibrium binding study. A simple UV-vis spectrophotometry method was developed and validated for quantification of free phosphate to determine the binding parameter constant of sevelamer. The method validation demonstrated the specificity, limit of quantification, accuracy and precision of measurements. The validated method has been successfully used to analyze samples in an in-vitro equilibrium binding study for demonstrating bioequivalence. © Georg Thieme Verlag KG Stuttgart · New York.
Genomic selection across multiple breeding cycles in applied bread wheat breeding.
Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann
2016-06-01
We evaluated genomic selection across five breeding cycles of bread wheat breeding. Bias of within-cycle cross-validation and methods for improving the prediction accuracy were assessed. The prospect of genomic selection has been frequently shown by cross-validation studies using the same genetic material across multiple environments, but studies investigating genomic selection across multiple breeding cycles in applied bread wheat breeding are lacking. We estimated the prediction accuracy of grain yield, protein content and protein yield of 659 inbred lines across five independent breeding cycles and assessed the bias of within-cycle cross-validation. We investigated the influence of outliers on the prediction accuracy and predicted protein yield by its components traits. A high average heritability was estimated for protein content, followed by grain yield and protein yield. The bias of the prediction accuracy using populations from individual cycles using fivefold cross-validation was accordingly substantial for protein yield (17-712 %) and less pronounced for protein content (8-86 %). Cross-validation using the cycles as folds aimed to avoid this bias and reached a maximum prediction accuracy of [Formula: see text] = 0.51 for protein content, [Formula: see text] = 0.38 for grain yield and [Formula: see text] = 0.16 for protein yield. Dropping outlier cycles increased the prediction accuracy of grain yield to [Formula: see text] = 0.41 as estimated by cross-validation, while dropping outlier environments did not have a significant effect on the prediction accuracy. Independent validation suggests, on the other hand, that careful consideration is necessary before an outlier correction is undertaken, which removes lines from the training population. Predicting protein yield by multiplying genomic estimated breeding values of grain yield and protein content raised the prediction accuracy to [Formula: see text] = 0.19 for this derived trait.
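Cross-validation using breeding cycles as folds, as described above, differs from random k-fold in that whole cycles are held out, so a model is never evaluated on material from a cycle it trained on. A pure-Python sketch with synthetic cycles illustrates the design; the single hypothetical "marker score" standing in for genomic predictors, and the simple linear predictor, are assumptions of this sketch, not the study's genomic prediction model:

```python
import random

def pearson(u, v):
    """Pearson correlation between two equal-length lists."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

def fit_line(x, y):
    """Least-squares line y ~ a + b*x; returns (a, b)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return my - b * mx, b

def leave_one_cycle_out(data):
    """data: dict cycle -> (x_list, y_list); mean correlation on held-out cycles."""
    rs = []
    for held in data:
        xt = [a for c in data if c != held for a in data[c][0]]
        yt = [a for c in data if c != held for a in data[c][1]]
        a0, b = fit_line(xt, yt)           # train on all other cycles
        xh, yh = data[held]
        rs.append(pearson([a0 + b * a for a in xh], yh))
    return sum(rs) / len(rs)

# five synthetic "cycles": a single marker score x predicts trait y
rng = random.Random(11)
data = {}
for c in range(5):
    x = [rng.gauss(0, 1) for _ in range(40)]
    data[c] = (x, [0.6 * a + rng.gauss(0, 1) for a in x])
r_mean = leave_one_cycle_out(data)
print(round(r_mean, 3))
```

Holding out entire cycles typically yields lower, less biased accuracy estimates than within-cycle folds, which is the bias the abstract quantifies.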
Game, Madhuri D.; Gabhane, K. B.; Sakarkar, D. M.
2010-01-01
A simple, accurate and precise spectrophotometric method has been developed for simultaneous estimation of clopidogrel bisulphate and aspirin by employing the first-order derivative zero-crossing method. The first-order derivative absorption at 232.5 nm (zero-crossing point of aspirin) was used for clopidogrel bisulphate and that at 211.3 nm (zero-crossing point of clopidogrel bisulphate) for aspirin. Both drugs obeyed linearity in the concentration range of 5.0 μg/ml to 25.0 μg/ml (correlation coefficient r2 < 1). No interference was found between the two determined constituents and those of the matrix. The method was validated statistically and recovery studies were carried out to confirm the accuracy of the method. PMID:21969765
Munkácsy, Gyöngyi; Sztupinszki, Zsófia; Herman, Péter; Bán, Bence; Pénzváltó, Zsófia; Szarvas, Nóra; Győrffy, Balázs
2016-09-27
No independent cross-validation of the success rate of studies utilizing small interfering RNA (siRNA) for gene silencing has been completed before. To assess the influence of experimental parameters like cell line, transfection technique, validation method, and type of control, we evaluated these parameters across a large set of studies. We utilized gene chip data published for siRNA experiments to assess success rate and to compare the methods used in these experiments. We searched NCBI GEO for samples with whole transcriptome analysis before and after gene silencing and evaluated the efficiency for the target and off-target genes using the array-based expression data. The Wilcoxon signed-rank test was used to assess silencing efficacy, and Kruskal-Wallis tests and Spearman rank correlation were used to evaluate study parameters. Altogether 1,643 samples representing 429 experiments published in 207 studies were evaluated. The fold change (FC) of down-regulation of the target gene was above 0.7 in 18.5% and above 0.5 in 38.7% of experiments. Silencing efficiency was lowest in MCF7 and highest in SW480 cells (FC = 0.59 and FC = 0.30, respectively, P = 9.3E-06). Studies utilizing Western blot for validation performed better than those with quantitative polymerase chain reaction (qPCR) or microarray (FC = 0.43, FC = 0.47, and FC = 0.55, respectively, P = 2.8E-04). There was no correlation between type of control, transfection method, publication year, and silencing efficiency. Although gene silencing is a robust feature successfully cross-validated in the majority of experiments, efficiency remained insufficient in a significant proportion of studies. Selection of the cell line model and validation method had the highest influence on silencing proficiency.
Miciak, Jeremy; Fletcher, Jack M.; Stuebing, Karla; Vaughn, Sharon; Tolar, Tammy D.
2014-01-01
Purpose Few empirical investigations have evaluated LD identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and the cross-battery assessment (XBA) method. Methods Cognitive assessment data for 139 adolescents demonstrating inadequate response to intervention were utilized to empirically classify participants as meeting or not meeting PSW LD identification criteria using the two approaches, permitting an analysis of: (1) LD identification rates; (2) agreement between methods; and (3) external validity. Results LD identification rates varied between the two methods depending upon the cut point for low achievement, with low agreement for LD identification decisions. Comparisons of groups that met and did not meet LD identification criteria on external academic variables were largely null, raising questions of external validity. Conclusions This study found low agreement and little evidence of validity for LD identification decisions based on PSW methods. An alternative may be to use multiple measures of academic achievement to guide intervention. PMID:24274155
Hansen, Clint; Venture, Gentiane; Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice
2014-05-07
Over the last decades a variety of research has been conducted with the goal of improving Body Segment Inertial Parameter (BSIP) estimates, but to our knowledge a real validation has never been completely successful, because no ground truth is available. The aim of this paper is to propose a validation method for a BSIP identification method (IM) and to confirm the results by comparing contact forces recalculated using inverse dynamics with those obtained from a force plate. Furthermore, the results are compared with the estimation method recently proposed by Dumas et al. (2007). Additionally, the results are cross-validated with a high-velocity overarm throwing movement. Across all conditions, higher correlations, smaller metrics and smaller RMSE were found for the proposed BSIP estimation (IM), which shows its advantage compared to recently proposed methods such as that of Dumas et al. (2007). The purpose of the paper is to validate an already proposed method and to show that this method can be of significant advantage compared to conventional methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
Dewitt, James; Capistrant, Benjamin; Kohli, Nidhi; Mitteldorf, Darryl; Merengwa, Enyinnaya; West, William
2018-01-01
Background While deduplication and cross-validation protocols have been recommended for large Web-based studies, protocols for survey response validation of smaller studies have not been published. Objective This paper reports the challenges of survey validation inherent in a small Web-based health survey research. Methods The subject population was North American, gay and bisexual, prostate cancer survivors, who represent an under-researched, hidden, difficult-to-recruit, minority-within-a-minority population. In 2015-2016, advertising on a large Web-based cancer survivor support network, using email and social media, yielded 478 completed surveys. Results Our manual deduplication and cross-validation protocol identified 289 survey submissions (289/478, 60.4%) as likely spam, most stemming from advertising on social media. The basic components of this deduplication and validation protocol are detailed. An unexpected challenge encountered was invalid survey responses evolving across the study period. This necessitated the static detection protocol be augmented with a dynamic one. Conclusions Five recommendations for validation of Web-based samples, especially with smaller difficult-to-recruit populations, are detailed. PMID:29691203
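A static deduplication pass of the kind described above can be sketched by hashing selected submission fields and flagging repeats. The field names and exact-match rule here are hypothetical; the study's actual protocol was manual and combined richer cross-validation signals, precisely because spam submissions evolved over the study period:

```python
import hashlib

def flag_duplicates(responses, key_fields=("ip", "answers")):
    """Flag later submissions whose key fields hash-match an earlier one."""
    seen, flags = set(), []
    for r in responses:
        digest = hashlib.sha256(
            repr([r[k] for k in key_fields]).encode()).hexdigest()
        flags.append(digest in seen)
        seen.add(digest)
    return flags

# hypothetical submissions; field names are illustrative only
subs = [
    {"ip": "198.51.100.7", "answers": (3, 1, 4)},
    {"ip": "203.0.113.9", "answers": (2, 2, 5)},
    {"ip": "198.51.100.7", "answers": (3, 1, 4)},  # exact repeat -> flagged
]
flags = flag_duplicates(subs)
print(flags)  # [False, False, True]
```

An exact-hash rule like this only catches verbatim repeats, which is why the paper recommends augmenting static detection with dynamic checks as invalid-response patterns change.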
NASA Astrophysics Data System (ADS)
Folkert, Michael R.; Setton, Jeremy; Apte, Aditya P.; Grkovski, Milan; Young, Robert J.; Schöder, Heiko; Thorstad, Wade L.; Lee, Nancy Y.; Deasy, Joseph O.; Oh, Jung Hun
2017-07-01
In this study, we investigate the use of imaging feature-based outcomes research (‘radiomics’) combined with machine learning techniques to develop robust predictive models for the risk of all-cause mortality (ACM), local failure (LF), and distant metastasis (DM) following definitive chemoradiation therapy (CRT). One hundred seventy-four patients with stage III-IV oropharyngeal cancer (OC) treated at our institution with CRT with retrievable pre- and post-treatment 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) scans were identified. From pre-treatment PET scans, 24 representative imaging features of FDG-avid disease regions were extracted. Using machine learning-based feature selection methods, multiparameter logistic regression models were built incorporating clinical factors and imaging features. All model building methods were tested by cross validation to avoid overfitting, and final outcome models were validated on an independent dataset from a collaborating institution. Multiparameter models were statistically significant on 5-fold cross validation with the area under the receiver operating characteristic curve (AUC) = 0.65 (p = 0.004), 0.73 (p = 0.026), and 0.66 (p = 0.015) for ACM, LF, and DM, respectively. The model for LF retained significance on the independent validation cohort with AUC = 0.68 (p = 0.029), whereas the models for ACM and DM did not reach statistical significance but resulted in predictive power comparable to the 5-fold cross validation, with AUC = 0.60 (p = 0.092) and 0.65 (p = 0.062), respectively. In the largest study of its kind to date, predictive features including increasing metabolic tumor volume, increasing image heterogeneity, and increasing tumor surface irregularity significantly correlated with mortality, LF, and DM on 5-fold cross validation in a relatively uniform single-institution cohort. The LF model also retained significance in an independent population.
Cross-cultural validity of four quality of life scales in persons with spinal cord injury
2010-01-01
Background Quality of life (QoL) in persons with spinal cord injury (SCI) has been found to differ across countries. However, comparability of measurement results between countries depends on the cross-cultural validity of the applied instruments. The study examined the metric quality and cross-cultural validity of the Satisfaction with Life Scale (SWLS), the Life Satisfaction Questionnaire (LISAT-9), the Personal Well-Being Index (PWI) and the 5-item World Health Organization Quality of Life Assessment (WHOQoL-5) across six countries in a sample of persons with spinal cord injury (SCI). Methods A cross-sectional multi-centre study was conducted and the data of 243 out-patients with SCI from study centers in Australia, Brazil, Canada, Israel, South Africa, and the United States were analyzed using Rasch-based methods. Results The analyses showed high reliability for all 4 instruments (person reliability index .78-.92). Unidimensionality of measurement was supported for the WHOQoL-5 (Chi2 = 16.43, df = 10, p = .088), partially supported for the PWI (Chi2 = 15.62, df = 16, p = .480), but rejected for the LISAT-9 (Chi2 = 50.60, df = 18, p = .000) and the SWLS (Chi2 = 78.54, df = 10, p = .000) based on overall and item-wise Chi2 tests, principal components analyses and independent t-tests. The response scales showed the expected ordering for the WHOQoL-5 and the PWI, but not for the other two instruments. Using differential item functioning (DIF) analyses potential cross-country bias was found in two items of the SWLS and the WHOQoL-5, three items of the LISAT-9 and four items of the PWI. However, applying Rasch-based statistical methods, especially subtest analyses, it was possible to identify optimal strategies to enhance the metric properties and the cross-country equivalence of the instruments post-hoc. Following the post-hoc procedures the WHOQOL-5 and the PWI worked in a consistent and expected way in all countries. 
Conclusions QoL assessment using the summary scores of the WHOQOL-5 and the PWI appeared cross-culturally valid in persons with SCI. In contrast, summary scores of the LISAT-9 and the SWLS have to be interpreted with caution. The findings of the current study can be especially helpful to select instruments for international research projects in SCI. PMID:20815864
Reliability and Validity of a Spanish Version of the Posttraumatic Growth Inventory
ERIC Educational Resources Information Center
Weiss, Tzipi; Berger, Roni
2006-01-01
Objectives. This study was designed to adapt and validate a Spanish translation of the Posttraumatic Growth Inventory (PTGI) for the assessment of positive life changes following the stressful experiences of immigration. Method. A cross-cultural equivalence model was used to pursue semantic, content, conceptual, and technical equivalence.…
PCA as a practical indicator of OPLS-DA model reliability.
Worley, Bradley; Powers, Robert
Principal Component Analysis (PCA) and Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) are powerful statistical modeling tools that provide insights into separations between experimental groups based on high-dimensional spectral measurements from NMR, MS or other analytical instrumentation. However, when used without validation, these tools may lead investigators to statistically unreliable conclusions. This danger is especially real for Partial Least Squares (PLS) and OPLS, which aggressively force separations between experimental groups. As a result, OPLS-DA is often used as an alternative method when PCA fails to expose group separation, but this practice is highly dangerous. Without rigorous validation, OPLS-DA can easily yield statistically unreliable group separation. A Monte Carlo analysis of PCA group separations and OPLS-DA cross-validation metrics was performed on NMR datasets with statistically significant separations in scores-space. A linearly increasing amount of Gaussian noise was added to each data matrix followed by the construction and validation of PCA and OPLS-DA models. With increasing added noise, the PCA scores-space distance between groups rapidly decreased and the OPLS-DA cross-validation statistics simultaneously deteriorated. A decrease in correlation between the estimated loadings (added noise) and the true (original) loadings was also observed. While the validity of the OPLS-DA model diminished with increasing added noise, the group separation in scores-space remained basically unaffected. Supported by the results of Monte Carlo analyses of PCA group separations and OPLS-DA cross-validation metrics, we provide practical guidelines and cross-validatory recommendations for reliable inference from PCA and OPLS-DA models.
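The Monte Carlo design described in this abstract can be illustrated with a small sketch (not the authors' code): add increasing Gaussian noise to a two-group dataset and watch the PCA scores-space distance between group centroids shrink. The group sizes, dimensionality, and noise levels below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "spectral" groups separated along a latent direction.
n, p = 40, 50
direction = rng.normal(size=p)
X = np.vstack([rng.normal(0, 1, (n, p)) + 3 * direction,
               rng.normal(0, 1, (n, p)) - 3 * direction])
labels = np.array([0] * n + [1] * n)

def pca_group_distance(X, labels):
    """Distance between group centroids in the first two PC scores."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:2].T
    return np.linalg.norm(scores[labels == 0].mean(0)
                          - scores[labels == 1].mean(0))

# As added noise grows, the scores-space separation decays, mirroring the
# deterioration reported above.
separations = [pca_group_distance(X + rng.normal(0, sigma, X.shape), labels)
               for sigma in (0.0, 10.0, 50.0)]
```

This only reproduces the PCA half of the analysis; the OPLS-DA cross-validation metrics in the paper require a PLS implementation and are not sketched here.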
A Machine Learning Framework for Plan Payment Risk Adjustment.
Rose, Sherri
2016-12-01
To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R². Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
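The comparison criterion in this abstract, cross-validated R², can be sketched in a few lines. This is a minimal illustration with synthetic data standing in for the MarketScan variables, comparing a full regression formula against a "screened" formula that keeps only two predictors; none of it is the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.0, 0.5, 0.0, 0.0])   # sparse "true" formula
y = X @ beta + rng.normal(scale=1.0, size=n)

def cv_r2(X, y, fit, predict, k=5):
    """k-fold cross-validated R^2 for a model given fit/predict callables."""
    idx = np.arange(len(y))
    ss_res = ss_tot = 0.0
    for f in np.array_split(idx, k):
        train = np.setdiff1d(idx, f)
        model = fit(X[train], y[train])
        ss_res += np.sum((y[f] - predict(model, X[f])) ** 2)
        ss_tot += np.sum((y[f] - y[train].mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Candidate 1: ordinary least squares on all variables.
ols_fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
ols_pred = lambda b, X: X @ b
# Candidate 2: a "screened" formula using only the first two variables.
scr_fit = lambda X, y: np.linalg.lstsq(X[:, :2], y, rcond=None)[0]
scr_pred = lambda b, X: X[:, :2] @ b

r2_full = cv_r2(X, y, ols_fit, ols_pred)
r2_screened = cv_r2(X, y, scr_fit, scr_pred)
```

With a sparse true formula, the screened candidate retains most of the full formula's cross-validated R², which is the kind of comparison the study performs at scale.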
Hettick, Justin M; Green, Brett J; Buskirk, Amanda D; Kashon, Michael L; Slaven, James E; Janotka, Erika; Blachere, Francoise M; Schmechel, Detlef; Beezhold, Donald H
2008-09-15
Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) was used to generate highly reproducible mass spectral fingerprints for 12 species of fungi of the genus Aspergillus and 5 different strains of Aspergillus flavus. Prior to MALDI-TOF MS analysis, the fungi were subjected to three 1-min bead beating cycles in an acetonitrile/trifluoroacetic acid solvent. The mass spectra contain abundant peaks in the range of 5 to 20 kDa and may be used to discriminate between species unambiguously. A discriminant analysis using all peaks from the MALDI-TOF MS data yielded error rates for classification of 0 and 18.75% for resubstitution and cross-validation methods, respectively. If a subset of 28 significant peaks is chosen, resubstitution and cross-validation error rates are 0%. Discriminant analysis of the MALDI-TOF MS data for 5 strains of A. flavus using all peaks yielded error rates for classification of 0 and 5% for resubstitution and cross-validation methods, respectively. These data indicate that MALDI-TOF MS data may be used for unambiguous identification of members of the genus Aspergillus at both the species and strain levels.
Quantitative determination and classification of energy drinks using near-infrared spectroscopy.
Rácz, Anita; Héberger, Károly; Fodor, Marietta
2016-09-01
Almost a hundred commercially available energy drink samples from Hungary, Slovakia, and Greece were collected for the quantitative determination of their caffeine and sugar content with FT-NIR spectroscopy and high-performance liquid chromatography (HPLC). Calibration models were built with partial least-squares regression (PLSR). An HPLC-UV method was used to measure the reference values for caffeine content, while sugar contents were measured with the Schoorl method. Both the nominal sugar content (as indicated on the cans) and the measured sugar concentration were used as references. Although the Schoorl method has larger error and bias, appropriate models could be developed using both references. The validation of the models was based on sevenfold cross-validation and external validation. FT-NIR analysis is a good candidate to replace the HPLC-UV method, because it is much cheaper than any chromatographic method, while it is also more time-efficient. The combination of FT-NIR with multidimensional chemometric techniques like PLSR can be a good option for the detection of low caffeine concentrations in energy drinks. Moreover, three types of energy drinks that contain (i) taurine, (ii) arginine, and (iii) none of these two components were classified correctly using principal component analysis and linear discriminant analysis. Such classifications are important for the detection of adulterated samples and for quality control, as well. In this case, more than a hundred samples were used for the evaluation. The classification was validated with cross-validation and several randomization tests (X-scrambling). Graphical Abstract The way of energy drinks from cans to appropriate chemometric models.
Computing discharge using the index velocity method
Levesque, Victor A.; Oberg, Kevin A.
2012-01-01
Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. 
Index ratings are developed by means of regression techniques in which the mean cross-sectional velocity for the standard section is related to the measured index velocity. Most ratings are simple-linear regressions, but more complex ratings may be necessary in some cases. Once the rating is established, validation measurements should be made periodically. Over time, validation measurements may provide additional definition to the rating or result in the creation of a new rating. The computation of discharge is the last step in the index velocity method, and in some ways it is the most straightforward step. This step differs little from the steps used to compute discharge records for stage-discharge gaging stations. The ratings are entered into database software used for records computation, and continuous records of discharge are computed.
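The two-rating computation described above reduces to a small calculation: fit a simple-linear index rating (mean velocity versus index velocity), evaluate a stage-area rating, and multiply the two outputs to get discharge Q = V × A. The sketch below uses invented calibration numbers and a hypothetical rectangular standard section, not data from any USGS station.

```python
import numpy as np

# Calibration measurements: index velocity (m/s) vs. measured mean velocity (m/s).
v_index = np.array([0.2, 0.5, 0.8, 1.1, 1.4])
v_mean  = np.array([0.25, 0.55, 0.9, 1.2, 1.5])
slope, intercept = np.polyfit(v_index, v_mean, 1)   # simple-linear index rating

def stage_area(stage_m):
    """Stage-area rating for a hypothetical 20 m wide rectangular section."""
    return 20.0 * stage_m

def discharge(index_velocity, stage_m):
    v = slope * index_velocity + intercept   # index rating -> mean velocity V
    a = stage_area(stage_m)                  # stage-area rating -> area A
    return v * a                             # Q = V * A

q = discharge(1.0, 2.0)   # discharge in m^3/s at 1.0 m/s index velocity, 2.0 m stage
```

In practice the stage-area rating comes from the surveyed standard cross section rather than a closed-form width, and the index rating may include stage or cross-stream velocity terms as the report notes.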
Reproducibility and validity of a semi-quantitative FFQ for trace elements.
Lee, Yujin; Park, Kyong
2016-09-01
The aim of this study was to test the reproducibility and validity of a self-administered FFQ for the Trace Element Study of Korean Adults in the Yeungnam area (SELEN). Study subjects were recruited from the SELEN cohort selected from rural and urban areas in Yeungnam, Korea. A semi-quantitative FFQ with 146 items was developed considering the dietary characteristics of cohorts in the study area. In a validation study, seventeen men and forty-eight women aged 38-62 years completed 3-d dietary records (DR) and two FFQs over a 3-month period. The validity was examined with the FFQ and DR, and the reproducibility was estimated using partial correlation coefficients, the Bland-Altman method and cross-classification. There were no significant differences between the mean intakes of selected nutrients as estimated from FFQ1, FFQ2 and DR. The median correlation coefficients for all nutrients were 0.47 and 0.56 in the reproducibility and validity tests, respectively. Bland-Altman's index and cross-classification showed acceptable agreement between FFQ1 and FFQ2 and between FFQ2 and DR. Ultimately, 78% of the subjects were classified into the same or adjacent quartiles for most nutrients. In addition, the weighted κ value indicated that the two methods agreed fairly. In conclusion, this newly developed FFQ was a suitable dietary assessment method for the SELEN cohort study.
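The quartile cross-classification used in this validation study is easy to sketch: rank subjects into quartiles under each method and report the share falling in the same or an adjacent quartile. The data below are simulated stand-ins for the FFQ and dietary-record intakes, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.normal(size=400)                     # latent "true" intake
ffq = truth + rng.normal(scale=0.7, size=400)    # FFQ estimate
dr  = truth + rng.normal(scale=0.7, size=400)    # dietary-record estimate

def quartile(x):
    """0-based quartile index of each value within its own distribution."""
    return np.searchsorted(np.quantile(x, [0.25, 0.5, 0.75]), x)

q_ffq, q_dr = quartile(ffq), quartile(dr)
same_or_adjacent = np.mean(np.abs(q_ffq - q_dr) <= 1)   # agreement rate
gross = np.mean(np.abs(q_ffq - q_dr) == 3)              # opposite-quartile rate
```

A weighted kappa on the 4x4 cross-classification table would complete the comparison; only the agreement proportions are shown here.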
Strong-potential Born calculations for 1s-1s electron capture from atoms by protons
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGuire, J.H.; Kletke, R.E.; Sil, N.C.
1985-08-01
The strong-potential Born (SPB) approximation is examined by comparing various SPB calculations of high-velocity 1s-1s electron capture cross sections with one another and with experimental data. Above about 1 MeV, calculations using the SPB method of McGuire and Sil (SPMS) (Phys. Rev. A 28, 3679 (1983)) are in good agreement with total-cross-section observations for protons on H, He, C, Ne, and Ar, as expected. For p+H and p+He, the SPB full-peaking (SPB-FP) approximation of Macek and Alston (Phys. Rev. A 26, 250 (1982)) and the SPB transverse-peaking (SPB-TP) approximation of Alston (Phys. Rev. A 27, 2342 (1982)) differ from our SPMS total cross sections by typically a factor of 2, as expected from general validity criteria. However, the differential cross sections at very forward angles (well within the Thomas angle) are the same in SPMS, SPB-FP, and SPB-TP methods in all cases. Below 1 MeV, cross sections obtained with use of various SPB methods differ considerably from one another, placing a limit of validity for these SPB calculations. We also suggest that in the gap between those energies where continuum intermediate states simply dominate, and above those energies where bound intermediate states simply dominate, detailed conceptual understanding of electron capture is incomplete.
Cheng, Feixiong; Shen, Jie; Yu, Yue; Li, Weihua; Liu, Guixia; Lee, Philip W; Tang, Yun
2011-03-01
There is an increasing need for the rapid safety assessment of chemicals by both industries and regulatory agencies throughout the world. In silico techniques are practical alternatives in environmental hazard assessment, especially for addressing the persistence, bioaccumulation and toxicity potentials of organic chemicals. Tetrahymena pyriformis toxicity is often used as a toxic endpoint. In this study, 1571 diverse unique chemicals were collected from the literature, comprising the largest diverse data set for T. pyriformis toxicity. Classification predictive models of T. pyriformis toxicity were developed by substructure pattern recognition and different machine learning methods, including support vector machine (SVM), C4.5 decision tree, k-nearest neighbors and random forest. The results of a 5-fold cross-validation showed that the SVM method performed better than the other algorithms. The overall predictive accuracies of the SVM classification model with a radial basis function kernel were 92.2% for the 5-fold cross-validation and 92.6% for the external validation set, respectively. Furthermore, several representative substructure patterns for characterizing T. pyriformis toxicity were also identified via information gain analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards
2013-01-01
Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on computational and memory requirements for working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
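The baseline cost this paper attacks is the exact l-fold cross-validation loop for kernel ridge regression, with an O(n^3) linear solve per fold per hyperparameter. The sketch below implements that naive baseline in NumPy (the circulant-matrix approximation itself is not reproduced here); the RBF kernel, data, and candidate ridge parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=120)

def rbf(A, B, gamma=1.0):
    """RBF (Gaussian) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kfold_cv_mse(X, y, lam, k=5):
    """Exact k-fold CV error for kernel ridge regression: O(n^3) per fold."""
    idx = np.arange(len(y))
    err = 0.0
    for f in np.array_split(idx, k):
        tr = np.setdiff1d(idx, f)
        K = rbf(X[tr], X[tr])
        alpha = np.linalg.solve(K + lam * np.eye(len(tr)), y[tr])  # dense solve
        pred = rbf(X[f], X[tr]) @ alpha
        err += np.sum((pred - y[f]) ** 2)
    return err / len(y)

# Hyperparameter tuning: the expensive outer loop the paper accelerates.
lams = [1e-3, 1e-1, 10.0]
best = min(lams, key=lambda lam: kfold_cv_mse(X, y, lam))
```

Replacing the dense kernel with a multi-level circulant approximation lets the per-fold solves run in O(n log n) via FFTs, which is the paper's contribution; the selection loop above is otherwise unchanged.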
Cross-validation and Peeling Strategies for Survival Bump Hunting using Recursive Peeling Methods
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
We introduce a framework to build a survival/risk bump hunting model with a censored time-to-event response. Our Survival Bump Hunting (SBH) method is based on a recursive peeling procedure that uses a specific survival peeling criterion derived from non/semi-parametric statistics such as the hazard ratio, the log-rank test or the Nelson-Aalen estimator. To optimize the tuning parameter of the model and validate it, we introduce an objective function based on survival or prediction-error statistics, such as the log-rank test and the concordance error rate. We also describe two alternative cross-validation techniques adapted to the joint task of decision-rule making by recursive peeling and survival estimation. Numerical analyses show the importance of replicated cross-validation and the differences between criteria and techniques in both low and high-dimensional settings. Although several non-parametric survival models exist, none addresses the problem of directly identifying local extrema. We show how SBH efficiently estimates extreme survival/risk subgroups unlike other models. This provides an insight into the behavior of commonly used models and suggests alternatives to be adopted in practice. Finally, our SBH framework was applied to a clinical dataset. In it, we identified subsets of patients characterized by clinical and demographic covariates with a distinct extreme survival outcome, for which tailored medical interventions could be made. An R package PRIMsrc (Patient Rule Induction Method in Survival, Regression and Classification settings) is available on CRAN (Comprehensive R Archive Network) and GitHub. PMID:27034730
Schmidt, Stine N; Wang, Alice P; Gidley, Philip T; Wooley, Allyson H; Lotufo, Guilherme R; Burgess, Robert M; Ghosh, Upal; Fernandez, Loretta A; Mayer, Philipp
2017-09-05
The Gold Standard for determining freely dissolved concentrations (Cfree) of hydrophobic organic compounds in sediment interstitial water would be in situ deployment combined with equilibrium sampling, which is generally difficult to achieve. In the present study, ex situ equilibrium sampling with multiple thicknesses of silicone and in situ pre-equilibrium sampling with low density polyethylene (LDPE) loaded with performance reference compounds were applied independently to measure polychlorinated biphenyls (PCBs) in mesocosms with (1) New Bedford Harbor sediment (MA, U.S.A.), (2) sediment and biota, and (3) activated carbon amended sediment and biota. The aim was to cross-validate the two different sampling approaches. Around 100 PCB congeners were quantified in the two sampling polymers, and the results confirmed the good precision of both methods and were in overall good agreement with recently published LDPE to silicone partition ratios. Further, the methods yielded Cfree in good agreement for all three experiments. The average ratio between Cfree determined by the two methods was a factor of 1.4 ± 0.3 (range: 0.6-2.0), and the results thus cross-validated the two sampling approaches. For future investigations, specific aims and requirements in terms of application, data treatment, and data quality should dictate the selection of the most appropriate partitioning-based sampling approach.
Jain, Meena; Tandon, Shourya; Sharma, Ankur; Jain, Vishal; Rani Yadav, Nisha
2018-01-01
Background: An appropriate scale to assess the dental anxiety of the Hindi-speaking population is lacking. This study, therefore, aims to evaluate the psychometric properties of a Hindi version of one of the oldest dental anxiety scales, Corah's Dental Anxiety Scale (CDAS), in Hindi-speaking Indian adults. Methods: A total of 348 subjects from the outpatient department of a dental hospital in India participated in this cross-sectional study. The scale was cross-culturally adapted by forward and backward translation, committee review and pretesting. The construct validity of the translated scale was explored with exploratory factor analysis. The correlation of the Hindi version of CDAS with a visual analogue scale (VAS) was used to measure convergent validity. Reliability was assessed through calculation of Cronbach's alpha and intraclass correlation; 48 forms were completed for test-retest. Results: Prevalence of dental anxiety in the sample, within the age range of 18-80 years, was 85.63% [95% CI: 0.815-0.891]. The response rate was 100%. The Kaiser-Meyer-Olkin (KMO) test value was 0.776. After factor analysis, a single factor (dental anxiety) was obtained with 4 items. The single-factor model explained 61% of the variance. The Pearson correlation coefficient between CDAS and VAS was 0.494. Test-retest showed a Cronbach's alpha value of 0.814. The test-retest intraclass correlation coefficient of the total CDAS score was 0.881 [95% CI: 0.318-0.554]. Conclusion: The Hindi version of CDAS is a valid and reliable scale to assess dental anxiety in the Hindi-speaking population. Convergent validity is well recognized, but discriminant validity is limited and requires further study. PMID:29744307
Lam, Simon C
2014-05-01
To perform detailed psychometric testing of the compliance with standard precautions scale (CSPS) in measuring compliance with standard precautions of clinical nurses and to conduct cross-cultural pilot testing and assess the relevance of the CSPS on an international platform. A cross-sectional and correlational design with repeated measures. Nursing students from a local registered nurse training university, nurses from different hospitals in Hong Kong, and experts in an international conference. The psychometric properties of the CSPS were evaluated via internal consistency, 2-week and 3-month test-retest reliability, concurrent validation, and construct validation. The cross-cultural pilot testing and relevance check was examined by experts on infection control from various developed and developing regions. Among 453 participants, 193 were nursing students, 165 were enrolled nurses, and 95 were registered nurses. The results showed that the CSPS had satisfactory reliability (Cronbach α = 0.73; intraclass correlation coefficient, 0.79 for 2-week test-retest and 0.74 for 3-month test-retest) and validity (optimum correlation with criterion measure; r = 0.76, P < .001; satisfactory results on known-group method and hypothesis testing). A total of 19 experts from 16 countries assured that most of the CSPS findings were relevant and globally applicable. The CSPS demonstrated satisfactory results on the basis of the standard international criteria on psychometric testing, which ascertained the reliability and validity of this instrument in measuring the compliance of clinical nurses with standard precautions. The cross-cultural pilot testing further reinforced the instrument's relevance and applicability in most developed and developing regions.
Validation of Yoon's Critical Thinking Disposition Instrument.
Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin
2015-12-01
The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD. Specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD using 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated, and then a group invariance test using multigroup confirmatory factor analysis was performed to confirm the measurement compatibility across groups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities. Copyright © 2015. Published by Elsevier B.V.
The Trojan Horse Method in nuclear astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spitaleri, C., E-mail: spitaleri@lns.infn.it; Mukhamedzhanov, A. M.; Blokhintsev, L. D.
2011-12-15
The study of energy production and nucleosynthesis in stars requires an increasingly precise knowledge of the nuclear reaction rates at the energies of interest. To overcome the experimental difficulties arising from the small cross sections at those energies and from the presence of the electron screening, the Trojan Horse Method has been introduced. The method provides a valid alternative path to measure unscreened low-energy cross sections of reactions between charged particles, and to retrieve information on the electron screening potential when ultra-low energy direct measurements are available.
Introduction of Total Variation Regularization into Filtered Backprojection Algorithm
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
In this paper we extend the state-of-the-art filtered backprojection (FBP) method with application of the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularizing in the FBP image reconstruction via apodizing functions. The methods are validated in terms of cross-correlation coefficient between reconstructed and real image of radioactive tracer distribution using standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values with respect to the standard FBP method.
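The validation metric named in this abstract, the cross-correlation coefficient between a reconstructed image and the true tracer distribution, is a short computation. The sketch below uses small synthetic arrays as stand-ins for the Derenzo-type phantom and its reconstructions; the "regularized" image is simply a less noisy copy for illustration.

```python
import numpy as np

def cross_correlation(a, b):
    """Pearson cross-correlation coefficient between two images."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(4)
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0   # "phantom" activity region

# Two mock reconstructions: heavier vs. lighter residual noise, standing in
# for apodized FBP vs. TV-regularized FBP output.
noisy    = truth + rng.normal(scale=0.3, size=truth.shape)
smoothed = truth + rng.normal(scale=0.1, size=truth.shape)

cc_noisy = cross_correlation(truth, noisy)
cc_smoothed = cross_correlation(truth, smoothed)
```

A reconstruction that suppresses noise while preserving structure scores a higher coefficient, which is how the paper compares TV regularization against apodizing functions.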
Lin, Ying-Tsong; Collis, Jon M; Duda, Timothy F
2012-11-01
An alternating direction implicit (ADI) three-dimensional fluid parabolic equation solution method with enhanced accuracy is presented. The method uses a square-root Helmholtz operator splitting algorithm that retains cross-multiplied operator terms that have been previously neglected. With these higher-order cross terms, the valid angular range of the parabolic equation solution is improved. The method is tested for accuracy against an image solution in an idealized wedge problem. Computational efficiency improvements resulting from the ADI discretization are also discussed.
Spanish Adaptation and Validation of the Family Quality of Life Survey
ERIC Educational Resources Information Center
Verdugo, M. A.; Cordoba, L.; Gomez, J.
2005-01-01
Background: Assessing the quality of life (QOL) for families that include a person with a disability have recently become a major emphasis in cross-cultural QOL studies. The present study examined the reliability and validity of the Family Quality of Life Survey (FQOL) on a Spanish sample. Method and Results: The sample comprised 385 families who…
Validity and Reliability of Internalized Stigma of Mental Illness (Cantonese)
ERIC Educational Resources Information Center
Young, Daniel Kim-Wan; Ng, Petrus Y. N.; Pan, Jia-Yan; Cheng, Daphne
2017-01-01
Purpose: This study aims to translate and test the reliability and validity of the Internalized Stigma of Mental Illness-Cantonese (ISMI-C). Methods: The original English version of ISMI is translated into the ISMI-C by going through forward and backward translation procedure. A cross-sectional research design is adopted that involved 295…
Cross-Validation of a PACER Prediction Equation for Assessing Aerobic Capacity in Hungarian Youth
ERIC Educational Resources Information Center
Saint-Maurice, Pedro F.; Welk, Gregory J.; Finn, Kevin J.; Kaj, Mónika
2015-01-01
Purpose: The purpose of this article was to evaluate the validity of the Progressive Aerobic Cardiovascular and Endurance Run (PACER) test in a sample of Hungarian youth. Method: Approximately 500 participants (aged 10-18 years old) were randomly selected across Hungary to complete both laboratory (maximal treadmill protocol) and field assessments…
Shafeei, Asrin; Mokhtarinia, Hamid Reza; Maleki-Ghahfarokhi, Azam; Piri, Leila
2017-08-01
Observational study. To cross-culturally translate the Orebro Musculoskeletal Pain Screening Questionnaire (OMPQ) into Persian and then evaluate its psychometric properties (reliability, validity, and ceiling and floor effects). To the authors' knowledge, prior to this study there was no validated instrument to screen the risk of chronicity in Persian-speaking patients with low back pain (LBP) in Iran. The OMPQ was specifically developed as a self-administered screening tool for assessing the risk of LBP chronicity. The forward-backward translation method was used for the translation and cross-cultural adaptation of the original questionnaire. In total, 202 patients with subacute LBP completed the OMPQ and the pain disability questionnaire (PDQ), which was used to assess convergent validity; 62 patients completed the OMPQ a week later as a retest. Slight changes were made to the OMPQ during the translation/cultural adaptation process, and face validity of the Persian version was obtained. The Persian OMPQ showed excellent test-retest reliability (intraclass correlation coefficient = 0.89). Its internal consistency was 0.71, and its convergent validity was confirmed by a good correlation between the OMPQ and PDQ total scores (r = 0.72, p < 0.05). No ceiling or floor effects were observed. The Persian version of the OMPQ is acceptable for the target society in terms of face validity, construct validity, reliability, and consistency. It is therefore considered a useful instrument for screening Iranian patients with LBP.
NASA Astrophysics Data System (ADS)
Xin, L.; Markine, V. L.; Shevtsov, I. Y.
2016-03-01
A three-dimensional (3-D) explicit dynamic finite element (FE) model is developed to simulate the impact of the wheel on the crossing nose. The model consists of a wheel set moving over the turnout crossing. Realistic wheel, wing rail and crossing geometries have been used in the model. Using this model, the dynamic responses of the system can be obtained, such as the contact forces between the wheel and the crossing, crossing nose displacements and accelerations, and stresses in the rail material as well as in the sleepers and ballast. Detailed analysis of the wheel set and crossing interaction using the local contact stress state in the rail is possible as well, which provides a good basis for predicting the long-term behaviour of the crossing (fatigue analysis). In order to tune and validate the FE model, field measurements conducted on several turnouts in the railway network in the Netherlands are used here. The parametric study performed here, including variations of the crossing nose geometry, demonstrates the capabilities of the developed model. The results of the validation and parametric study are presented and discussed.
The property distance index PD predicts peptides that cross-react with IgE antibodies
Ivanciuc, Ovidiu; Midoro-Horiuti, Terumi; Schein, Catherine H.; Xie, Liping; Hillman, Gilbert R.; Goldblum, Randall M.; Braun, Werner
2009-01-01
Similarities in the sequence and structure of allergens can explain clinically observed cross-reactivities. Distinguishing sequences that bind IgE in patient sera can be used to identify potentially allergenic protein sequences and aid in the design of hypo-allergenic proteins. The property distance index PD, incorporated in our Structural Database of Allergenic Proteins (SDAP, http://fermi.utmb.edu/SDAP/), may identify potentially cross-reactive segments of proteins based on their similarity to known IgE epitopes. We sought to obtain experimental validation of the PD index as a quantitative predictor of IgE cross-reactivity by designing peptide variants with predetermined PD scores relative to three linear IgE epitopes of Jun a 1, the dominant allergen from mountain cedar pollen. For each of the three epitopes, 60 peptides were designed with increasing PD values (decreasing physicochemical similarity) relative to the starting sequence. The peptides, synthesized on a derivatized cellulose membrane, were probed with sera from patients who were allergic to Jun a 1, and the experimental data were interpreted with a PD classification method. Peptides with low PD values relative to a given epitope were more likely to bind IgE from the sera than were those with PD values larger than 6. Control sequences, with PD values between 18 and 20 relative to all three epitopes, did not bind patient IgE, thus validating our procedure for identifying negative control peptides. The PD index is a statistically validated method for detecting discrete regions of proteins that have a high probability of cross-reacting with IgE from allergic patients. PMID:18950868
Saub, R; Locker, D; Allison, P
2008-09-01
To compare two methods of developing short forms of the Malaysian Oral Health Impact Profile (OHIP-M) measure. Cross-sectional data obtained using the long form of the OHIP-M were used to produce two types of OHIP-M short forms, derived using two different methods, namely the regression and item-frequency methods. The short version derived using the regression method is known as Reg-SOHIP(M), and that derived using the frequency method is known as Freq-SOHIP(M). Both short forms contained 14 items. These two forms were then compared in terms of their content, scores, reliability, validity and ability to distinguish between groups. Out of the 14 items, only four were common to both forms. The form derived from the frequency method contained more high-prevalence items and yielded higher scores than the form derived from the regression method. Both methods produced a reliable and valid measure. However, the frequency method produced a measure that was slightly better at distinguishing between groups. Regardless of the method used to produce the measures, both forms performed equally well when tested for their cross-sectional psychometric properties.
Castillo-Tandazo, Wilson; Flores-Fortty, Adolfo; Feraud, Lourdes; Tettamanti, Daniel
2013-01-01
Purpose: To translate, cross-culturally adapt, and validate the Questionnaire for Diabetes-Related Foot Disease (Q-DFD), originally created and validated in Australia, for use in Spanish-speaking patients with diabetes mellitus. Patients and methods: The translation and cross-cultural adaptation were based on international guidelines. The Spanish version of the survey was applied to a community-based sample (sample A) and a hospital clinic-based sample (samples B and C). Samples A and B were used to determine criterion and construct validity by comparing the survey findings with clinical evaluation and medical records, respectively, while sample C was used to determine intra- and inter-rater reliability. Results: After completing the rigorous translation process, only four items were considered problematic and required a new translation. In total, 127 patients were included in the validation study: 76 to determine criterion and construct validity and 41 to establish intra- and inter-rater reliability. For an overall diagnosis of diabetes-related foot disease, a substantial level of agreement was obtained when we compared the Q-DFD with the clinical assessment (kappa 0.77, sensitivity 80.4%, specificity 91.5%, positive likelihood ratio [LR+] 9.46, negative likelihood ratio [LR−] 0.21), while an almost perfect level of agreement was obtained when it was compared with medical records (kappa 0.88, sensitivity 87%, specificity 97%, LR+ 29.0, LR− 0.13). Survey reliability showed substantial levels of agreement, with kappa scores of 0.63 and 0.73 for intra- and inter-rater reliability, respectively. Conclusion: The translated and cross-culturally adapted Q-DFD showed good psychometric properties (validity, reproducibility, and reliability) that allow its use in Spanish-speaking diabetic populations. PMID:24039434
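The likelihood ratios in the abstract above follow the standard definitions, LR+ = sensitivity / (1 − specificity) and LR− = (1 − sensitivity) / specificity. As a sanity check, they can be recomputed from the reported sensitivity and specificity (a generic illustration, not code from the study):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Reported values for Q-DFD vs. clinical assessment: sens 80.4%, spec 91.5%
lr_pos, lr_neg = likelihood_ratios(0.804, 0.915)
print(round(lr_pos, 2), round(lr_neg, 2))  # → 9.46 0.21
```

The recomputed values match the LR+ 9.46 and LR− 0.21 reported in the abstract.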
2017-01-01
Objective: To perform a translation and cross-cultural adaptation of the Cardiac Rehabilitation Barriers Scale (CRBS) for use in Korea, followed by psychometric validation. The CRBS was developed to assess patients' perception of the degree to which patient-, provider- and health system-level barriers affect their cardiac rehabilitation (CR) participation. Methods: The CRBS consists of 21 items (barriers to adherence) rated on a 5-point Likert scale. The first phase was to translate and cross-culturally adapt the CRBS to the Korean language. After back-translation, both versions were reviewed by a committee. Face validity was assessed through semi-structured interviews in a sample of Korean patients (n=53) with a history of acute myocardial infarction who did not participate in CR. The second phase was to assess the construct and criterion validity of the Korean translation, as well as its internal reliability, through administration of the translated version to 104 patients, principal component analysis with varimax rotation, and cross-referencing against CR use, respectively. Results: The length, readability, and clarity of the questionnaire were rated well, demonstrating face validity. Analysis revealed a six-factor solution, demonstrating construct validity. Cronbach's alpha was greater than 0.65. The highest-rated barriers included not knowing about CR and not being contacted by a program. The mean CRBS score was significantly higher among non-attendees (2.71±0.26) than CR attendees (2.51±0.18) (p<0.01). Conclusion: The Korean version of the CRBS has demonstrated face, content and criterion validity, suggesting it may be useful for assessing barriers to CR utilization in Korea. PMID:29201826
Cross-cultural adaptation of instruments assessing breastfeeding determinants: a multi-step approach
2014-01-01
Background: Cross-cultural adaptation is a necessary process for effectively using existing instruments in other cultural and language settings. The process of cross-culturally adapting existing instruments, including their translation, is considered a critical step in establishing a meaningful instrument for use in another setting. Using a multi-step approach is considered best practice for achieving cultural and semantic equivalence in the adapted version. We aimed to ensure the content validity of our instruments in the cultural context of KwaZulu-Natal, South Africa. Methods: The Iowa Infant Feeding Attitudes Scale, the Breastfeeding Self-Efficacy Scale-Short Form and additional items comprise our consolidated instrument, which was cross-culturally adapted using a multi-step approach during August 2012. Cross-cultural adaptation was achieved through steps to maintain content validity and attain semantic equivalence in the target version. Specifically, Lynn's recommendation to apply an item-level content validity index score was followed. The revised instrument was translated and back-translated. To ensure semantic equivalence, Brislin's back-translation approach was used, followed by a committee review to address any discrepancies that emerged from translation. Results: Our consolidated instrument was adapted to be culturally relevant and translated to yield more reliable and valid results for use in our larger research study to measure infant feeding determinants effectively in our target cultural context. Conclusions: Undertaking rigorous steps to ensure effective cross-cultural adaptation increases our confidence that the conclusions we draw from our self-report instrument(s) will be stronger.
In this way, we achieved strong cross-cultural adaptation of our consolidated instruments while also providing a clear framework for other researchers choosing to utilize existing instruments in other cultural, geographic and population settings. PMID:25285151
The Precision Efficacy Analysis for Regression Sample Size Method.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.
The general purpose of this study was to examine the efficiency of the Precision Efficacy Analysis for Regression (PEAR) method for choosing appropriate sample sizes in regression studies used for precision. The PEAR method, which is based on the algebraic manipulation of an accepted cross-validity formula, essentially uses an effect size to…
Scale for positive aspects of caregiving experience: development, reliability, and factor structure.
Kate, N; Grover, S; Kulhara, P; Nehra, R
2012-06-01
OBJECTIVE. To develop an instrument (Scale for Positive Aspects of Caregiving Experience [SPACE]) that evaluates positive caregiving experience and assess its psychometric properties. METHODS. Available scales which assess some aspects of positive caregiving experience were reviewed and a 50-item questionnaire with a 5-point rating was constructed. In all, 203 primary caregivers of patients with severe mental disorders were asked to complete the questionnaire. Internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity were evaluated. Principal component factor analysis was run to assess the factorial validity of the scale. RESULTS. The scale developed as part of the study was found to have good internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity. Principal component factor analysis yielded a 4-factor structure, which also had good test-retest reliability and cross-language reliability. There was a strong correlation between the 4 factors obtained. CONCLUSION. The SPACE developed as part of this study has good psychometric properties.
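Internal consistency of the kind reported for the SPACE is conventionally summarized by Cronbach's alpha, computed from the respondent-by-item score matrix. A minimal sketch (illustrative only, with toy data; the study would have used a statistics package):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha from a list of respondents' item-score lists:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(scores[0])                       # number of items
    def var(xs):                             # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([resp[i] for resp in scores]) for i in range(k)]
    total_var = var([sum(resp) for resp in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Three respondents answering three perfectly consistent items:
print(round(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3]]), 9))  # → 1.0
```

Values near 1 indicate that the items measure a single underlying construct; values fall as item responses become less correlated.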
Progress toward the determination of correct classification rates in fire debris analysis.
Waddell, Erin E; Song, Emma T; Rinke, Caitlin N; Williams, Mary R; Sigman, Michael E
2013-07-01
Principal components analysis (PCA), linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA) were used to develop a multistep classification procedure for determining the presence of ignitable liquid residue in fire debris and assigning any ignitable liquid residue present into the classes defined under the American Society for Testing and Materials (ASTM) E 1618-10 standard method. The multistep classification procedure was tested by cross-validation based on model data sets comprising the time-averaged mass spectra (also referred to as total ion spectra) of commercial ignitable liquids and pyrolysis products from common building materials and household furnishings (referred to simply as substrates). Fire debris samples from laboratory-scale and field test burns were also used to test the model. The optimal model's true-positive rate was 81.3% for cross-validation samples and 70.9% for fire debris samples. The false-positive rate was 9.9% for cross-validation samples and 8.9% for fire debris samples. © 2013 American Academy of Forensic Sciences.
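The true- and false-positive rates reported above are simple ratios over the confusion matrix of a binary "residue present" decision. A generic sketch with invented predictions (not the authors' code):

```python
def tp_fp_rates(y_true, y_pred):
    """True-positive and false-positive rates for binary labels
    (1 = ignitable liquid residue present, 0 = substrate only)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)

# Toy cross-validation predictions against ground truth:
tpr, fpr = tp_fp_rates([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
print(round(tpr, 3), fpr)  # → 0.667 0.5
```

In the study, these rates are accumulated over held-out folds, so each sample contributes exactly once while never being used to train the model that scores it.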
Sex estimation from measurements of the first rib in a contemporary Polish population.
Kubicka, Anna Maria; Piontek, Janusz
2016-01-01
The aim of this study was to evaluate the accuracy of sex assessment using measurements of the first rib from computed tomography (CT) and to develop a discriminant formula. Four discriminant formulae were derived based on CT imaging of the right first rib of 85 female and 91 male Polish patients of known age and sex. In direct discriminant analysis, the first equation consisted of all first rib variables; the second included measurements of the rib body; the third comprised only two measurements of the sternal end of the first rib. The stepwise method selected the four best variables from all measurements. The discriminant function equations were then tested on a cross-validated group consisting of 23 females and 24 males. The direct discriminant analysis showed that sex was correctly assessed in 81.5% of cases in the original group and in 91.5% in the cross-validated group when all variables for the first rib were included. The average accuracy for the original group for the rib body and sternal end was 80.9 and 67.9%, respectively. The percentages of correctly assigned individuals for the functions based on the rib body and sternal end in the cross-validated group were 76.6 and 85.0%, respectively. Higher average accuracies were obtained for stepwise discriminant analysis: 83.1% for the original group and 91.2% for the cross-validated group. The exterior edge, the anterior-posterior of the sternal end, and the depth of the arc were the most reliable parameters. Our results suggest that the first rib is dimorphic and that the described method can be used for sex assessment.
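Discriminant classification of this kind assigns a specimen to the group its measurement vector lies closest to in multivariate space. A deliberately simplified nearest-centroid sketch with hypothetical rib measurements (the study's actual discriminant functions and coefficients are not reproduced here):

```python
def centroid(rows):
    """Component-wise mean of a list of measurement vectors."""
    k = len(rows[0])
    return [sum(r[i] for r in rows) / len(rows) for i in range(k)]

def classify(x, female_centroid, male_centroid):
    """Assign a specimen to the nearer group centroid (squared Euclidean)."""
    d_f = sum((a - b) ** 2 for a, b in zip(x, female_centroid))
    d_m = sum((a - b) ** 2 for a, b in zip(x, male_centroid))
    return "F" if d_f < d_m else "M"

# Hypothetical training data (e.g. rib-body length, sternal-end width, in mm):
females = [[58.0, 11.2], [61.5, 12.0], [59.3, 11.6]]
males = [[68.2, 14.1], [71.0, 14.8], [69.5, 14.3]]
cf, cm = centroid(females), centroid(males)
print(classify([60.0, 11.8], cf, cm))  # → F
print(classify([70.1, 14.5], cf, cm))  # → M
```

Cross-validated accuracy, as in the study, is obtained by classifying specimens that were withheld from the group used to derive the centroids (or discriminant coefficients).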
A General Method for Targeted Quantitative Cross-Linking Mass Spectrometry.
Chavez, Juan D; Eng, Jimmy K; Schweppe, Devin K; Cilia, Michelle; Rivera, Keith; Zhong, Xuefei; Wu, Xia; Allen, Terrence; Khurgel, Moshe; Kumar, Akhilesh; Lampropoulos, Athanasios; Larsson, Mårten; Maity, Shuvadeep; Morozov, Yaroslav; Pathmasiri, Wimal; Perez-Neut, Mathew; Pineyro-Ruiz, Coriness; Polina, Elizabeth; Post, Stephanie; Rider, Mark; Tokmina-Roszyk, Dorota; Tyson, Katherine; Vieira Parrine Sant'Ana, Debora; Bruce, James E
2016-01-01
Chemical cross-linking mass spectrometry (XL-MS) provides protein structural information by identifying covalently linked proximal amino acid residues on protein surfaces. The information gained by this technique is complementary to other structural biology methods such as x-ray crystallography, NMR and cryo-electron microscopy[1]. The extension of traditional quantitative proteomics methods with chemical cross-linking can provide information on the structural dynamics of protein structures and protein complexes. The identification and quantitation of cross-linked peptides remains challenging for the general community, requiring specialized expertise and ultimately limiting more widespread adoption of the technique. We describe a general method for targeted quantitative mass spectrometric analysis of cross-linked peptide pairs. We report the adaptation of the widely used, open-source software package Skyline for the analysis of quantitative XL-MS data, as a means for data analysis and sharing of methods. We demonstrate the utility and robustness of the method with a cross-laboratory study and present data that are supported by and validate previously published results on quantified cross-linked peptide pairs. This advance provides an easy-to-use resource so that any lab with access to an LC-MS system capable of performing targeted quantitative analysis can quickly and accurately measure dynamic changes in protein structure and protein interactions.
Duarte Bonini Campos, J A; Dias do Prado, C
2012-01-01
Cross-cultural adaptation of the Patient-Generated Subjective Global Assessment is important so that it can be used with confidence in the Portuguese language. The aims were to perform a cross-cultural adaptation of the Portuguese version of the Patient-Generated Subjective Global Assessment and to estimate its intrarater reliability. This is a validation study. Face validity was assessed by 17 health professionals and 10 Portuguese language specialists. Idiomatic, semantic, cultural and conceptual equivalences were analyzed. The questionnaire was completed by 20 patients of the Amaral Carvalho Hospital (Jaú, São Paulo, Brazil) in order to verify the comprehension index of each item. Then, 27 committee members classified each item as "essential", "useful, but not essential" or "not necessary" in order to calculate the content validity ratio. Afterwards, this version of the questionnaire was applied twice to 62 patients of the hospital cited above. The intrarater reliability of the nutritional status assessed by the Patient-Generated Subjective Global Assessment was estimated using the kappa statistic. The Portuguese version of the Patient-Generated Subjective Global Assessment presented 10 incomprehensible expressions. The items "a year ago weight" and "dry mouth symptom" presented the lowest content validity ratio. Substantial intrarater reliability (k = 0.78, p = 0.001) was observed. The cross-culturally adapted Portuguese version of the Patient-Generated Subjective Global Assessment proved simple and understandable for Brazilian patients, and this version was considered a valid and reliable method.
NASA Astrophysics Data System (ADS)
Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander
2017-04-01
Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their now more widely used non-spatial equivalents. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation). The first is the parallelized version of sperrorest(), parsperrorest(). This function features two parallel modes to greatly speed up cross-validation runs. Both parallel modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and, depending on the platform, calls parallel::mclapply() or parallel::parApply() in the background: forking is used on Unix systems, while Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package to perform parallelization, which handles cluster parallelization in a different way than the parallel package does. In summary, the robustness of parsperrorest() is increased by the implementation of two independent parallel modes. A new way of partitioning the data in sperrorest is provided by partition.factor.cv().
This function gives the user the possibility to perform cross-validation at the level of some grouping structure. For example, in remote sensing of agricultural land uses, pixels from the same field contain nearly identical information and will thus be jointly placed in either the test set or the training set. Other spatial sampling and resampling strategies are already available and can be extended by the user.
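The grouped partitioning described above, keeping all pixels of one field together, amounts to assigning whole groups rather than individual samples to cross-validation folds. A language-neutral sketch of the idea in Python (the package itself is R, and partition.factor.cv() is its real interface; this is only an illustration of the partitioning principle):

```python
from collections import defaultdict

def group_kfold(groups, k):
    """Split sample indices into k folds so that all samples sharing a
    group label (e.g. pixels from one field) land in the same fold."""
    by_group = defaultdict(list)
    for idx, g in enumerate(groups):
        by_group[g].append(idx)
    folds = [[] for _ in range(k)]
    # Greedily assign the largest groups first to keep fold sizes balanced.
    for members in sorted(by_group.values(), key=len, reverse=True):
        min(folds, key=len).extend(members)
    return folds

# Six pixels from three fields, two folds: no field is ever split.
folds = group_kfold(["a", "a", "b", "b", "c", "c"], 2)
```

Holding out entire groups prevents the optimistic bias that arises when near-duplicate samples from the same field appear in both the training and the test set.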
NASA Astrophysics Data System (ADS)
Pescarini, M.; Sinitsa, V.; Orsi, R.; Frisoni, M.
2013-03-01
This paper presents a synthesis of the ENEA-Bologna Nuclear Data Group programme dedicated to generating and validating group-wise cross section libraries for shielding and radiation damage deterministic calculations in nuclear fission reactors, following the data processing methodology recommended in the ANSI/ANS-6.1.2-1999 (R2009) American Standard. The VITJEFF311.BOLIB and VITENDF70.BOLIB fine-group coupled n-γ (199 n + 42 γ - VITAMIN-B6 structure) multi-purpose cross section libraries, based on the Bondarenko method for neutron resonance self-shielding and respectively on JEFF-3.1.1 and ENDF/B-VII.0 evaluated nuclear data, were produced in AMPX format using the NJOY-99.259 and the ENEA-Bologna 2007 Revision of the SCAMPI nuclear data processing systems. Two derived broad-group coupled n-γ (47 n + 20 γ - BUGLE-96 structure) working cross section libraries in FIDO-ANISN format for LWR shielding and pressure vessel dosimetry calculations, named BUGJEFF311.BOLIB and BUGENDF70.BOLIB, were generated by the revised version of SCAMPI through problem-dependent cross section collapsing and self-shielding from the cited fine-group libraries. The validation results for the fine-group libraries on criticality safety benchmark experiments, and the preliminary validation results for the broad-group working libraries on the PCA-Replica and VENUS-3 engineering neutron shielding benchmark experiments, are reported in synthesis.
PIPE: a protein–protein interaction passage extraction module for BioCreative challenge
Chu, Chun-Han; Su, Yu-Chen; Chen, Chien Chin; Hsu, Wen-Lian
2016-01-01
Identifying the interactions between proteins mentioned in the biomedical literature is one of the frequently discussed topics of text mining in the life sciences. In this article, we propose PIPE, an interaction pattern generation module used in the Collaborative Biocurator Assistant Task at BioCreative V (http://www.biocreative.org/) to capture frequent protein-protein interaction (PPI) patterns within text. We also present an interaction pattern tree (IPT) kernel method that integrates the PPI patterns with a convolution tree kernel (CTK) to extract PPIs. Methods were evaluated on the LLL, IEPA, HPRD50, AIMed and BioInfer corpora using cross-validation, cross-learning and cross-corpus evaluation. Empirical evaluations demonstrate that our method is effective and outperforms several well-known PPI extraction methods. Database URL: PMID:27524807
Measuring cognition in teams: a cross-domain review.
Wildman, Jessica L; Salas, Eduardo; Scott, Charles P R
2014-08-01
The purpose of this article is twofold: to provide a critical cross-domain evaluation of team cognition measurement options and to provide novice researchers with practical guidance when selecting a measurement method. A vast selection of measurement approaches exist for measuring team cognition constructs including team mental models, transactive memory systems, team situation awareness, strategic consensus, and cognitive processes. Empirical studies and theoretical articles were reviewed to identify all of the existing approaches for measuring team cognition. These approaches were evaluated based on theoretical perspective assumed, constructs studied, resources required, level of obtrusiveness, internal consistency reliability, and predictive validity. The evaluations suggest that all existing methods are viable options from the point of view of reliability and validity, and that there are potential opportunities for cross-domain use. For example, methods traditionally used only to measure mental models may be useful for examining transactive memory and situation awareness. The selection of team cognition measures requires researchers to answer several key questions regarding the theoretical nature of team cognition and the practical feasibility of each method. We provide novice researchers with guidance regarding how to begin the search for a team cognition measure and suggest several new ideas regarding future measurement research. We provide (1) a broad overview and evaluation of existing team cognition measurement methods, (2) suggestions for new uses of those methods across research domains, and (3) critical guidance for novice researchers looking to measure team cognition.
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narasimha S
2013-01-01
Studies were performed to carry out semi-empirical validation of a new measurement approach we propose for determining molecular mixing ratios. The approach is based on relative measurements in bands of O2 and other molecules and as such may be best described as cross-band relative absorption (CoBRA). The current validation studies rely upon well-verified and established theoretical and experimental databases, satellite data assimilations and modeling codes such as HITRAN, the line-by-line radiative transfer model (LBLRTM), and the modern-era retrospective analysis for research and applications (MERRA). The approach holds promise for atmospheric mixing ratio measurements of CO2 and a variety of other molecules currently under investigation for several future satellite lidar missions. One of the advantages of the method is a significant reduction of temperature sensitivity uncertainties, which is illustrated with application to the ASCENDS mission for the measurement of CO2 mixing ratios (XCO2). Additional advantages of the method include the possibility of closely matching cross-band weighting function combinations, which is harder to achieve using conventional differential absorption techniques, and the potential for additional corrections for water vapor and other interferences without using data from numerical weather prediction (NWP) models.
A Validation Study of the Revised Personal Safety Decision Scale
ERIC Educational Resources Information Center
Kim, HaeJung; Hopkins, Karen M.
2017-01-01
Objective: The purpose of this study is to examine the reliability and validity of an 11-item Personal Safety Decision Scale (PSDS) in a sample of child welfare workers. Methods: Data were derived from a larger cross-sectional online survey to a random stratified sample of 477 public child welfare workers in a mid-Atlantic State. An exploratory…
Bullinger, Monika; Quitmann, Julia; Silva, Neuza; Rohenkohl, Anja; Chaplin, John E; DeBusk, Kendra; Mimoun, Emmanuelle; Feigerlova, Eva; Herdman, Michael; Sanz, Dolores; Wollmann, Hartmut; Pleil, Andreas; Power, Michael
2014-01-01
Testing cross-cultural equivalence of patient-reported outcomes requires sufficiently large samples per country, which is difficult to achieve in rare endocrine paediatric conditions. We describe a novel approach to cross-cultural testing of the Quality of Life in Short Stature Youth (QoLISSY) questionnaire in five countries by sequentially taking one country out (TOCO) from the total sample and iteratively comparing the resulting psychometric performance. Development of the QoLISSY proceeded from focus group discussions through pilot testing to field testing in 268 short-statured patients and their parents. To explore cross-cultural equivalence, the iterative TOCO technique was used to examine and compare the validity, reliability, and convergence of patient and parent responses on the QoLISSY in the field test dataset, and to predict QoLISSY scores from clinical, socio-demographic and psychosocial variables. Validity and reliability indicators were satisfactory for each sample after iteratively omitting one country. Comparisons with the total sample revealed cross-cultural equivalence in internal consistency and construct validity for patients and parents, high inter-rater agreement, and a substantial proportion of QoLISSY variance explained by the predictors. The TOCO technique is a powerful method for overcoming the problems of country-specific testing of patient-reported outcome instruments. It provides empirical support for QoLISSY's cross-cultural equivalence and is recommended for future research.
Development of a Bayesian model to estimate health care outcomes in the severely wounded
Stojadinovic, Alexander; Eberhardt, John; Brown, Trevor S; Hawksworth, Jason S; Gage, Frederick; Tadaki, Douglas K; Forsberg, Jonathan A; Davis, Thomas A; Potter, Benjamin K; Dunne, James R; Elster, E A
2010-01-01
Background: Graphical probabilistic models have the ability to provide insights as to how clinical factors are conditionally related. These models can be used to help us understand factors influencing health care outcomes and resource utilization, and to estimate morbidity and clinical outcomes in trauma patient populations. Study design: Thirty-two combat casualties with severe extremity injuries enrolled in a prospective observational study were analyzed using step-wise machine-learned Bayesian belief network (BBN) and step-wise logistic regression (LR). Models were evaluated using 10-fold cross-validation to calculate area-under-the-curve (AUC) from receiver operating characteristics (ROC) curves. Results: Our BBN showed important associations between various factors in our data set that could not be developed using standard regression methods. Cross-validated ROC curve analysis showed that our BBN model was a robust representation of our data domain and that LR models trained on these findings were also robust: hospital-acquired infection (AUC: LR, 0.81; BBN, 0.79), intensive care unit length of stay (AUC: LR, 0.97; BBN, 0.81), and wound healing (AUC: LR, 0.91; BBN, 0.72) showed strong AUC. Conclusions: A BBN model can effectively represent clinical outcomes and biomarkers in patients hospitalized after severe wounding, and is confirmed by 10-fold cross-validation and further confirmed through logistic regression modeling. The method warrants further development and independent validation in other, more diverse patient populations. PMID:21197361
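The evaluation design described above, 10-fold cross-validation scored by AUC from ROC curves, can be illustrated with scikit-learn on synthetic data. The logistic regression merely stands in for the study's LR models; nothing here reproduces the actual casualty dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))                               # stand-in clinical/biomarker features
y = (X[:, 0] + 0.5 * rng.standard_normal(200) > 0).astype(int)  # synthetic binary outcome

# 10-fold cross-validated AUC, the evaluation scheme named in the abstract
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(LogisticRegression(), X, y, cv=cv, scoring="roc_auc").mean()
```

Note that with only 32 patients, as in the study, 10-fold estimates are necessarily noisy; the synthetic n = 200 here is chosen only to make the sketch stable.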
NASA Technical Reports Server (NTRS)
Omidvar, K.
1980-01-01
Using the method of explicit summation over the intermediate states, two-photon absorption cross sections in light and intermediate atoms, based on the simplistic frozen-core approximation and LS coupling, have been formulated. Formulas for the cross section in terms of integrals over radial wave functions are given. Two selection rules, one exact and one approximate, valid within the stated approximations, are derived. The formulas are applied to two-photon absorptions in nitrogen, oxygen, and chlorine. In evaluating the radial integrals, Hartree-Fock wave functions have been used for low-lying levels, and hydrogenic wave functions obtained by the quantum-defect method for high-lying levels. A relationship between the cross section and the oscillator strengths is derived.
Benchmarking protein classification algorithms via supervised cross-validation.
Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor
2008-04-24
Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic estimates of the classifier performance than do random cross-validation schemes. 
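Supervised cross-validation as described above, selecting test and train sets so that whole subtypes are held out, corresponds to group-aware splitting. A minimal sketch with scikit-learn's GroupKFold on synthetic data; the `families` labels are a stand-in for the database subtypes, not the SCOP hierarchy itself.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 8))        # stand-in sequence/structure features
y = rng.integers(0, 2, size=120)         # stand-in class labels
families = rng.integers(0, 6, size=120)  # hypothetical subtype/family labels

# Each test fold contains only families unseen during training, so the score
# reflects generalization to novel, distantly related subtypes rather than
# memorization of close homologs.
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups=families):
    assert set(families[train_idx]).isdisjoint(families[test_idx])
```

Random k-fold splitting, by contrast, would scatter members of the same family across train and test, which is exactly the optimism the abstract warns about.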
Spatio-temporal modeling of chronic PM 10 exposure for the Nurses' Health Study
NASA Astrophysics Data System (ADS)
Yanosky, Jeff D.; Paciorek, Christopher J.; Schwartz, Joel; Laden, Francine; Puett, Robin; Suh, Helen H.
2008-06-01
Chronic epidemiological studies of airborne particulate matter (PM) have typically characterized the chronic PM exposures of their study populations using city- or county-wide ambient concentrations, which limit the studies to areas where nearby monitoring data are available and which ignore within-city spatial gradients in ambient PM concentrations. To provide more spatially refined and precise chronic exposure measures, we used a Geographic Information System (GIS)-based spatial smoothing model to predict monthly outdoor PM10 concentrations in the northeastern and midwestern United States. This model included monthly smooth spatial terms and smooth regression terms of GIS-derived and meteorological predictors. Using cross-validation and other pre-specified selection criteria, terms for distance to road by road class, urban land use, block group and county population density, point- and area-source PM10 emissions, elevation, wind speed, and precipitation were found to be important determinants of PM10 concentrations and were included in the final model. Final model performance was strong (cross-validation R2=0.62), with little bias (-0.4 μg m-3) and high precision (6.4 μg m-3). The final model (with monthly spatial terms) performed better than a model with seasonal spatial terms (cross-validation R2=0.54). The addition of GIS-derived and meteorological predictors improved predictive performance over spatial smoothing (cross-validation R2=0.51) or inverse distance weighted interpolation (cross-validation R2=0.29) methods alone and increased the spatial resolution of predictions. The model performed well in both rural and urban areas, across seasons, and across the entire time period. The strong model performance demonstrates its suitability as a means to estimate individual-specific chronic PM10 exposures for large populations.
Schadl, Kornél; Vassar, Rachel; Cahill-Rowley, Katelyn; Yeom, Kristin W; Stevenson, David K; Rose, Jessica
2018-01-01
Advanced neuroimaging and computational methods offer opportunities for more accurate prognosis. We hypothesized that near-term regional white matter (WM) microstructure, assessed on diffusion tensor imaging (DTI), using exhaustive feature selection with cross-validation would predict neurodevelopment in preterm children. Near-term MRI and DTI obtained at 36.6 ± 1.8 weeks postmenstrual age in 66 very-low-birth-weight preterm neonates were assessed. 60/66 had follow-up neurodevelopmental evaluation with Bayley Scales of Infant-Toddler Development, 3rd-edition (BSID-III) at 18-22 months. Linear models with exhaustive feature selection and leave-one-out cross-validation computed based on DTI identified sets of three brain regions most predictive of cognitive and motor function; logistic regression models were computed to classify high-risk infants scoring one standard deviation below the mean. Cognitive impairment was predicted (100% sensitivity, 100% specificity; AUC = 1) by near-term right middle-temporal gyrus MD, right cingulate-cingulum MD, left caudate MD. Motor impairment was predicted (90% sensitivity, 86% specificity; AUC = 0.912) by left precuneus FA, right superior occipital gyrus MD, right hippocampus FA. Cognitive score variance was explained (29.6%, cross-validated R2 = 0.296) by left posterior-limb-of-internal-capsule MD, Genu RD, right fusiform gyrus AD. Motor score variance was explained (31.7%, cross-validated R2 = 0.317) by left posterior-limb-of-internal-capsule MD, right parahippocampal gyrus AD, right middle-temporal gyrus AD. Search in large DTI feature space more accurately identified neonatal neuroimaging correlates of neurodevelopment.
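The core procedure in the abstract, exhaustively scoring every three-region subset by leave-one-out cross-validation, can be sketched generically. This is a toy reconstruction on synthetic data; `best_triple` and the feature names are hypothetical, not the authors' code.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def best_triple(X, y, names):
    """Exhaustively score every 3-feature subset by leave-one-out
    cross-validated R^2 and return the best subset with its score."""
    best_feats, best_r2 = None, -np.inf
    ss_tot = np.sum((y - y.mean()) ** 2)
    for idx in combinations(range(X.shape[1]), 3):
        # out-of-sample predictions: each point predicted by a model
        # trained on all the others
        pred = cross_val_predict(LinearRegression(), X[:, list(idx)], y,
                                 cv=LeaveOneOut())
        r2 = 1 - np.sum((y - pred) ** 2) / ss_tot
        if r2 > best_r2:
            best_feats, best_r2 = [names[i] for i in idx], r2
    return best_feats, best_r2
```

With p candidate regions this fits C(p, 3) × n models, which is why exhaustive search is feasible only for small subset sizes like the triples used in the study.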
Mueller, David S.
2013-01-01
profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers’ software.
Zhang, Mengliang; Zhao, Yang; Harrington, Peter de B; Chen, Pei
2016-03-01
Two simple fingerprinting methods, flow-injection coupled to ultraviolet spectroscopy and proton nuclear magnetic resonance, were used for discriminating between Aurantii fructus immaturus and Fructus poniciri trifoliatae immaturus. Both methods were combined with partial least-squares discriminant analysis. In the flow-injection method, four data representations were evaluated: total ultraviolet absorbance chromatograms; averaged ultraviolet spectra; absorbance at 193, 205, 225, and 283 nm; and absorbance at 225 and 283 nm. Prediction rates of 100% were achieved for all data representations by partial least-squares discriminant analysis using leave-one-sample-out cross-validation. The prediction rate for the proton nuclear magnetic resonance data by partial least-squares discriminant analysis with leave-one-sample-out cross-validation was also 100%. A new validation set of data was collected by flow-injection with ultraviolet spectroscopic detection two weeks later and predicted by partial least-squares discriminant analysis models constructed by the initial data representations with no parameter changes. The classification rates were 95% with the total ultraviolet absorbance chromatograms datasets and 100% with the other three datasets. Flow-injection with ultraviolet detection and proton nuclear magnetic resonance are simple, high throughput, and low-cost methods for discrimination studies.
Bairy, Santhosh Kumar; Suneel Kumar, B V S; Bhalla, Joseph Uday Tej; Pramod, A B; Ravikumar, Muttineni
2009-04-01
c-Src kinase plays an important role in cell growth and differentiation, and its inhibitors can be useful for the treatment of various diseases, including cancer, osteoporosis, and metastatic bone disease. Three-dimensional quantitative structure-activity relationship (3D-QSAR) studies were carried out on quinazoline derivatives inhibiting c-Src kinase. Molecular field analysis (MFA) models with four different alignment techniques, namely GLIDE, GOLD, LIGANDFIT, and least-squares based methods, were developed. The GLIDE-based MFA model showed better results (leave-one-out cross-validated correlation coefficient r2(cv) = 0.923 and non-cross-validated correlation coefficient r2 = 0.958) when compared with the other models. These results help us to understand the nature of the descriptors required for activity of these compounds and thereby provide guidelines to design novel and potent c-Src kinase inhibitors.
Cross-validated detection of crack initiation in aerospace materials
NASA Astrophysics Data System (ADS)
Vanniamparambil, Prashanth A.; Cuadra, Jefferson; Guclu, Utku; Bartoli, Ivan; Kontsos, Antonios
2014-03-01
A cross-validated nondestructive evaluation approach was employed to detect in situ the onset of damage in an aluminum alloy compact tension specimen. The approach consisted of the coordinated use of primarily the acoustic emission method, combined with infrared thermography and digital image correlation. Tensile loads were applied and the specimen was continuously monitored using the nondestructive approach. Crack initiation was witnessed visually and was confirmed by the characteristic load drop accompanying the ductile fracture process. The full-field deformation map provided by the nondestructive approach validated the formation of a pronounced plasticity zone near the crack tip. At the time of crack initiation, a burst in the temperature field ahead of the crack tip as well as a sudden increase in the acoustic recordings were observed. Although such experiments have been attempted and reported before in the literature, the presented approach provides for the first time a cross-validated nondestructive dataset that can be used for quantitative analyses of the crack initiation information content. It further allows future development of automated procedures for real-time identification of damage precursors, including the rarely explored crack incubation stage in fatigue conditions.
Comparing ordinary kriging and inverse distance weighting for soil As pollution in Beijing.
Qiao, Pengwei; Lei, Mei; Yang, Sucai; Yang, Jun; Guo, Guanghui; Zhou, Xiaoyong
2018-06-01
Spatial interpolation is the basis of soil heavy metal pollution assessment and remediation. Existing indices for evaluating interpolation accuracy are not tied to the actual field situation, so the selection of an interpolation method needs to be based on the specific research purpose and the characteristics of the research object. In this paper, As (arsenic) pollution in soils of Beijing was taken as an example. The prediction accuracies of ordinary kriging (OK) and inverse distance weighting (IDW) were evaluated based on cross-validation results and the spatial distribution characteristics of influencing factors. The results showed that, under the condition of specific spatial correlation, the cross-validation results of OK and IDW for individual soil points and the prediction accuracy of the spatial distribution trend are similar. However, the prediction accuracy of OK for the maximum and minimum values is lower than that of IDW, and OK identifies fewer high-pollution areas than IDW; OK thus has difficulty identifying high-pollution areas fully, reflecting its pronounced smoothing effect. In addition, as the spatial correlation of As concentrations increases, the cross-validation errors of OK and IDW decrease, and the high-pollution areas identified by OK approach the results of IDW, which identifies high-pollution areas more comprehensively. However, because constructing the semivariogram for OK is more subjective and requires a larger number of soil samples, IDW is more suitable for spatial prediction of heavy metal pollution in these soils.
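The comparison above rests on leave-one-out cross-validation of the interpolators: each sample point is predicted from all the others and the errors are summarized. IDW is simple enough to sketch directly; ordinary kriging would come from a geostatistics library (e.g. PyKrige, as an aside, not used here). Synthetic coordinates and values only.

```python
import numpy as np

def idw_predict(coords, values, targets, power=2.0):
    """Inverse distance weighting: weighted mean with weights 1/d^power."""
    preds = []
    for t in targets:
        d = np.linalg.norm(coords - t, axis=1)
        if np.any(d == 0):                       # target coincides with a sample
            preds.append(values[np.argmin(d)])
            continue
        w = 1.0 / d ** power
        preds.append(np.sum(w * values) / np.sum(w))
    return np.array(preds)

def loo_rmse(coords, values, power=2.0):
    """Leave-one-out cross-validation error, the criterion used to compare
    interpolators such as IDW and ordinary kriging."""
    errs = []
    for i in range(len(values)):
        mask = np.arange(len(values)) != i
        pred = idw_predict(coords[mask], values[mask], coords[i:i + 1], power)
        errs.append(pred[0] - values[i])
    return float(np.sqrt(np.mean(np.square(errs))))
```

Running `loo_rmse` for each candidate method on the same sample set gives the like-for-like comparison the abstract describes; the smoothing effect of kriging shows up as under-prediction of maxima and over-prediction of minima rather than in the pointwise RMSE alone.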
Sun, Jiangming; Carlsson, Lars; Ahlberg, Ernst; Norinder, Ulf; Engkvist, Ola; Chen, Hongming
2017-07-24
Conformal prediction has been proposed as a more rigorous way to define prediction confidence than other applicability domain concepts that have earlier been used for QSAR modeling. One main advantage of such a method is that it provides a prediction region, potentially with multiple predicted labels, which contrasts with the single-valued (regression) or single-label (classification) output predictions of standard QSAR modeling algorithms. Standard conformal prediction might not be suitable for imbalanced data sets. Therefore, Mondrian cross-conformal prediction (MCCP), which combines Mondrian inductive conformal prediction with cross-fold calibration sets, has been introduced. In this study, the MCCP method was applied to 18 publicly available data sets with imbalance levels varying from 1:10 to 1:1000 (ratio of active/inactive compounds). Our results show that MCCP in general performed well on bioactivity data sets with various imbalance levels. More importantly, the method not only provides confidence of prediction and prediction regions compared to standard machine learning methods but also produces valid predictions for the minority class. In addition, a compound-similarity-based nonconformity measure was investigated. Our results demonstrate that although it gives valid predictions, its efficiency is much worse than that of model-dependent metrics.
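The MCCP construction can be sketched generically: per fold, a model is fit on the training part and nonconformity scores are computed on the calibration part; the "Mondrian" part conditions the calibration scores on the true class, so validity holds per class even under heavy imbalance; the fold-wise p-values are then averaged. This is a minimal sketch with a random forest and the common "1 - P(class)" nonconformity score, assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

def mccp_p_values(X, y, X_new, n_folds=5, seed=0):
    """Mondrian cross-conformal p-values, one per (test example, class)."""
    classes = np.unique(y)
    p = np.zeros((len(X_new), len(classes)))
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for tr, ca in skf.split(X, y):
        clf = RandomForestClassifier(random_state=seed).fit(X[tr], y[tr])
        ncal = 1 - clf.predict_proba(X[ca])   # calibration nonconformity scores
        ntest = 1 - clf.predict_proba(X_new)  # test nonconformity scores
        for j, c in enumerate(classes):
            # Mondrian: calibrate against examples whose TRUE class is c
            s = ncal[y[ca] == c, j]
            p[:, j] += (np.sum(s[None, :] >= ntest[:, j][:, None], axis=1) + 1) \
                       / (len(s) + 1)
    return p / n_folds                        # average p-value over folds
```

At significance level epsilon, the prediction region for a test example is the set of classes with p-value above epsilon, which is how multi-label (or empty) regions arise.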
Lohrer, Heinz; Nauck, Tanja
2009-01-01
Background Achilles tendinopathy is the predominant overuse injury in runners. To further investigate this overload injury in transverse and longitudinal studies, a valid, responsive and reliable outcome measure is demanded. Most questionnaires have been developed for English-speaking populations. This is also true for the VISA-A score, so far the only valid, reliable, and disease-specific questionnaire for Achilles tendinopathy. To compare research results internationally, to perform multinational studies, or to exclude bias originating from subpopulations speaking different languages within one country, an equivalent instrument is needed in different languages. The aim of this study was therefore to cross-culturally adapt and validate the VISA-A questionnaire for German-speaking Achilles tendinopathy patients. Methods According to the "guidelines for the process of cross-cultural adaptation of self-report measures", the VISA-A score was cross-culturally adapted into German (VISA-A-G) using six steps: translation, synthesis, back translation, expert committee review, pretesting (n = 77), and appraisal of the adaptation process by an advisory committee determining the adequacy of the cross-cultural adaptation. The resulting VISA-A-G was then subjected to an analysis of reliability, validity, and internal consistency in 30 Achilles tendinopathy patients and 79 asymptomatic people. Concurrent validity was tested against a generic tendon grading system (Percy and Conochie) and against a classification system for the effect of pain on athletic performance (Curwin and Stanish). Results The advisory committee judged the translation of the VISA-A-G questionnaire to be "acceptable". The VISA-A-G questionnaire showed moderate to excellent test-retest reliability (ICC = 0.60 to 0.97). Concurrent validity showed good coherence when correlated with the grading system of Curwin and Stanish (rho = -0.95) and with the Percy and Conochie grade of severity (rho = 0.95).
Internal consistency (Cronbach's alpha) for the total VISA-A-G scores of the patients was calculated to be 0.737. Conclusion The VISA-A questionnaire was successfully cross-culturally adapted and validated for use in German-speaking populations. The psychometric properties of the VISA-A-G questionnaire are similar to those of the original English version. It can therefore be recommended as a sufficiently robust tool for measuring clinical severity of Achilles tendinopathy in German-speaking patients in future studies. PMID:19878572
Document co-citation analysis to enhance transdisciplinary research
Trujillo, Caleb M.; Long, Tammy M.
2018-01-01
Specialized and emerging fields of research infrequently cross disciplinary boundaries and would benefit from frameworks, methods, and materials informed by other fields. Document co-citation analysis, a method developed by bibliometric research, is demonstrated as a way to help identify key literature for cross-disciplinary ideas. To illustrate the method in a useful context, we mapped peer-recognized scholarship related to systems thinking. In addition, three procedures for validation of co-citation networks are proposed and implemented. This method may be useful for strategically selecting information that can build consilience about ideas and constructs that are relevant across a range of disciplines. PMID:29308433
Agnihotri, Samira; Sundeep, P. V. D. S.; Seelamantula, Chandra Sekhar; Balakrishnan, Rohini
2014-01-01
Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis in combination with various distance metrics to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods also misclassified different subsets of calls and we achieved a maximum accuracy of ninety five per cent only when we combined the results of both the methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude - filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential. PMID:24603717
Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding
2013-01-01
Background In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. Results The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. Conclusions The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. 
Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies. PMID:24314298
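The indirect estimator described above is a one-liner: predictive ability is the correlation between predicted breeding values and observed phenotypes, and dividing by the square root of an estimated heritability rescales it toward predictive accuracy. A minimal sketch; the function name is an assumption and the inputs would in practice come from a cross-validation loop and a variance-component fit.

```python
import numpy as np

def predictive_accuracy(predicted_bv, phenotype, heritability):
    """Indirect estimate of predictive accuracy: predictive ability (the
    correlation between predicted breeding values and phenotypes) divided
    by the square root of an estimated heritability."""
    ability = np.corrcoef(predicted_bv, phenotype)[0, 1]
    return ability / np.sqrt(heritability)
```

As the abstract notes, the quality of this estimate hinges on the heritability estimate in the denominator: an inflated or deflated genetic variance propagates directly into the reported accuracy.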
Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.
Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter
2013-12-06
The High School & Beyond Data Set: Academic Self-Concept Measures.
ERIC Educational Resources Information Center
Strein, William
A series of confirmatory factor analyses using both LISREL VI (maximum likelihood method) and LISCOMP (weighted least squares method using covariance matrix based on polychoric correlations) and including cross-validation on independent samples were applied to items from the High School and Beyond data set to explore the measurement…
Validity of Eye Movement Methods and Indices for Capturing Semantic (Associative) Priming Effects
ERIC Educational Resources Information Center
Odekar, Anshula; Hallowell, Brooke; Kruse, Hans; Moates, Danny; Lee, Chao-Yang
2009-01-01
Purpose: The purpose of this investigation was to evaluate the usefulness of eye movement methods and indices as a tool for studying priming effects by verifying whether eye movement indices capture semantic (associative) priming effects in a visual cross-format (written word to semantically related picture) priming paradigm. Method: In the…
A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.
ERIC Educational Resources Information Center
Newman, Isadore; And Others
A Monte Carlo study was conducted to estimate the efficiency of and the relationship between five equations and the use of cross validation as methods for estimating shrinkage in multiple correlations. Two of the methods were intended to estimate shrinkage to population values and the other methods were intended to estimate shrinkage from sample…
International Harmonization and Cooperation in the Validation of Alternative Methods.
Barroso, João; Ahn, Il Young; Caldeira, Cristiane; Carmichael, Paul L; Casey, Warren; Coecke, Sandra; Curren, Rodger; Desprez, Bertrand; Eskes, Chantra; Griesinger, Claudius; Guo, Jiabin; Hill, Erin; Roi, Annett Janusch; Kojima, Hajime; Li, Jin; Lim, Chae Hyung; Moura, Wlamir; Nishikawa, Akiyoshi; Park, HyeKyung; Peng, Shuangqing; Presgrave, Octavio; Singer, Tim; Sohn, Soo Jung; Westmoreland, Carl; Whelan, Maurice; Yang, Xingfen; Yang, Ying; Zuang, Valérie
The development and validation of scientific alternatives to animal testing is important not only from an ethical perspective (implementation of 3Rs), but also to improve safety assessment decision making with the use of mechanistic information of higher relevance to humans. To be effective in these efforts, it is however imperative that validation centres, industry, regulatory bodies, academia and other interested parties ensure a strong international cooperation, cross-sector collaboration and intense communication in the design, execution, and peer review of validation studies. Such an approach is critical to achieve harmonized and more transparent approaches to method validation, peer-review and recommendation, which will ultimately expedite the international acceptance of valid alternative methods or strategies by regulatory authorities and their implementation and use by stakeholders. It also allows greater efficiency and effectiveness to be achieved by avoiding duplication of effort and leveraging limited resources. In view of achieving these goals, the International Cooperation on Alternative Test Methods (ICATM) was established in 2009 by validation centres from Europe, USA, Canada and Japan. ICATM was later joined by Korea in 2011 and currently also counts Brazil and China as observers. This chapter describes the existing differences across world regions and major efforts carried out for achieving consistent international cooperation and harmonization in the validation and adoption of alternative approaches to animal testing.
NASA Astrophysics Data System (ADS)
Cánovas-García, Fulgencio; Alonso-Sarría, Francisco; Gomariz-Castillo, Francisco; Oñate-Valdivieso, Fernando
2017-06-01
Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimation of classification accuracy based on the so-called out-of-bag cross-validation method. It is usually assumed that such estimation is not biased and may be used instead of validation based on an external data-set or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, in a training patch, pixels or objects are not independent (from a statistical point of view) of each other; however, they are split by bootstrapping into in-bag and out-of-bag as if they were really independent. We believe that putting whole patches, rather than pixels/objects, in one set or the other would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm to split training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy and has no lower predictive capability than the original. When its results are validated with an external data-set, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel and object based); in the three cases reported, the modification we propose produces a less biased accuracy estimation.
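The optimism described above can be demonstrated without modifying random forest itself: split at the pixel level and at the patch level and compare. In the synthetic setup below, pixels within a patch are near-duplicates and the patch labels carry no real signal, so pixel-level cross-validation looks near-perfect while patch-level (grouped) cross-validation reveals chance-level skill. Synthetic data; scikit-learn's GroupKFold stands in for the authors' patch-wise splitting.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)
n_patches, pixels_per_patch = 40, 25
centers = rng.standard_normal((n_patches, 4))       # one spectral signature per patch
labels = rng.integers(0, 2, size=n_patches)         # patch labels carry no real signal
X = np.repeat(centers, pixels_per_patch, axis=0) \
    + 0.01 * rng.standard_normal((n_patches * pixels_per_patch, 4))
y = np.repeat(labels, pixels_per_patch)
patch = np.repeat(np.arange(n_patches), pixels_per_patch)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
# Pixel-level splitting: near-duplicate siblings leak between train and test.
acc_pixel = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0)).mean()
# Patch-level splitting: whole patches held out, no leakage.
acc_patch = cross_val_score(clf, X, y, cv=GroupKFold(5), groups=patch).mean()
# acc_pixel looks near-perfect; acc_patch hovers near chance
```

The same contrast is what the proposed patch-wise bootstrapping restores inside the out-of-bag estimate.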
Baldwin, Carol M.; Choi, Myunghan; McClain, Darya Bonds; Celaya, Alma; Quan, Stuart F.
2012-01-01
Study Objectives: To translate, back-translate, and cross-language validate (English/Spanish) the Sleep Heart Health Study Sleep Habits Questionnaire for use with Spanish speakers in clinical and research settings. Methods: Following rigorous translation and back-translation, this cross-sectional cross-language validation study recruited bilingual participants from academic, clinic, and community-based settings (N = 50; 52% women; mean age 38.8 ± 12 years; 90% of Mexican heritage). Participants completed English and Spanish versions of the Sleep Habits Questionnaire, the Epworth Sleepiness Scale, and the Acculturation Rating Scale for Mexican Americans II one week apart in randomized order. Psychometric properties were assessed, including internal consistency, convergent validity, scale equivalence, language version intercorrelations, and exploratory factor analysis using PASW (Version 18) software. Grade-level readability of the sleep measure was evaluated. Results: All sleep categories (duration, snoring, apnea, insomnia symptoms, other sleep symptoms, sleep disruptors, restless legs syndrome) showed Cronbach α, Spearman-Brown coefficients, and intercorrelations ≥ 0.700, suggesting robust internal consistency, correlation, and agreement between language versions. The Epworth correlated significantly with snoring, apnea, sleep symptoms, restless legs, and sleep disruptors on both versions, supporting convergent validity. Items loaded on 4 factors accounting for 68% and 67% of the variance on the English and Spanish versions, respectively. Conclusions: The Spanish-language Sleep Habits Questionnaire demonstrates conceptual and content equivalency. It has appropriate measurement properties and should be useful for assessing sleep health in community-based clinics and intervention studies among Spanish-speaking Mexican Americans. Both language versions showed readability at the fifth-grade level. Further testing is needed with larger samples.
Citation: Baldwin CM; Choi M; McClain DB; Celaya A; Quan SF. Spanish translation and cross-language validation of a Sleep Habits Questionnaire for use in clinical and research settings. J Clin Sleep Med 2012;8(2):137-146. PMID:22505858
Macias, Nayeli; Alemán-Mateo, Heliodoro; Esparza-Romero, Julián; Valencia, Mauro E
2007-01-01
Background The study of body composition in specific populations by techniques such as bio-impedance analysis (BIA) requires validation based on standard reference methods. The aim of this study was to develop and cross-validate a predictive equation for bioelectrical impedance using air displacement plethysmography (ADP) as the standard method to measure body composition in Mexican adult men and women. Methods This study included 155 male and female subjects from northern Mexico, 20–50 years of age, from low, middle, and upper income levels. Body composition was measured by ADP. Body weight (BW, kg) and height (Ht, cm) were obtained by standard anthropometric techniques. Resistance, R (ohms) and reactance, Xc (ohms) were also measured. A random-split method was used to obtain two samples: one was used to derive the equation by the "all possible regressions" procedure and was cross-validated in the other sample to test predicted versus measured values of fat-free mass (FFM). Results and Discussion The final model was: FFM (kg) = 0.7374 * (Ht²/R) + 0.1763 * (BW) - 0.1773 * (Age) + 0.1198 * (Xc) - 2.4658. R² was 0.97; the square root of the mean square error (SRMSE) was 1.99 kg, and the pure error (PE) was 2.96. There was no difference between FFM predicted by the new equation (48.57 ± 10.9 kg) and that measured by ADP (48.43 ± 11.3 kg). The new equation did not differ from the line of identity, had a high R² and a low SRMSE, and showed no significant bias (0.87 ± 2.84 kg). Conclusion The new bioelectrical impedance equation based on the two-compartment model (2C) was accurate, precise, and free of bias. This equation can be used to assess body composition and nutritional status in populations similar in anthropometric and physical characteristics to this sample. PMID:17697388
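As a worked example, the reported final model can be applied directly (the subject values below are hypothetical, chosen only to illustrate the arithmetic):

```python
# The paper's final BIA prediction equation:
# FFM (kg) = 0.7374*(Ht²/R) + 0.1763*BW - 0.1773*Age + 0.1198*Xc - 2.4658
def ffm_kg(height_cm, resistance_ohm, weight_kg, age_yr, reactance_ohm):
    return (0.7374 * height_cm**2 / resistance_ohm
            + 0.1763 * weight_kg
            - 0.1773 * age_yr
            + 0.1198 * reactance_ohm
            - 2.4658)

# Hypothetical subject: 170 cm, R = 500 ohm, 70 kg, 30 y, Xc = 60 ohm
print(round(ffm_kg(170, 500, 70, 30, 60), 2))  # → 54.37
```

Note the impedance index Ht²/R carries most of the predictive weight, consistent with the high R² reported for the model.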
Evaluation and comparison of predictive individual-level general surrogates.
Gabriel, Erin E; Sachs, Michael C; Halloran, M Elizabeth
2018-07-01
An intermediate response measure that accurately predicts efficacy in a new setting at the individual level could be used both for prediction and personalized medical decisions. In this article, we define a predictive individual-level general surrogate (PIGS), which is an individual-level intermediate response that can be used to accurately predict individual efficacy in a new setting. While methods for evaluating trial-level general surrogates, which are predictors of trial-level efficacy, have been developed previously, few, if any, methods have been developed to evaluate individual-level general surrogates, and no methods have formalized the use of cross-validation to quantify the expected prediction error. Our proposed method uses existing methods of individual-level surrogate evaluation within a given clinical trial setting in combination with cross-validation over a set of clinical trials to evaluate surrogate quality and to estimate the absolute prediction error that is expected in a new trial setting when using a PIGS. Simulations show that our method performs well across a variety of scenarios. We use our method to evaluate and to compare candidate individual-level general surrogates over a set of multi-national trials of a pentavalent rotavirus vaccine.
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment of analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.
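The leave-one-out RMSECV selection rule described above can be sketched on toy data (1-D polynomial regression stands in for the paper's selection of augmented Raman signals; all data here are synthetic):

```python
# Sketch: compute leave-one-out RMSECV for each candidate model complexity
# and keep the complexity that minimises it.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 25)
y = 2 * x + 0.05 * rng.normal(size=x.size)   # toy near-linear data

def rmsecv(degree):
    """Leave-one-out root-mean-square error of cross-validation."""
    errs = []
    for i in range(x.size):
        mask = np.arange(x.size) != i             # leave sample i out
        coef = np.polyfit(x[mask], y[mask], degree)
        errs.append(np.polyval(coef, x[i]) - y[i])
    return np.sqrt(np.mean(np.square(errs)))

curve = {d: rmsecv(d) for d in range(1, 6)}       # the RMSECV curve
best = min(curve, key=curve.get)
print("chosen complexity:", best)
```

The minimum of the RMSECV curve balances underfitting (too few terms) against variance from overfitting (too many), which is exactly the role the curve plays in selecting the number of augmented signals.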
ERIC Educational Resources Information Center
Balducci, Cristian; Mnich, Eva; McKee, Kevin J.; Lamura, Giovanni; Beckmann, Anke; Krevers, Barbro; Wojszel, Z. Beata; Nolan, Mike; Prouskas, Constantinos; Bien, Barbara; Oberg, Birgitta
2008-01-01
Purpose: The present study attempts to further validate the COPE Index on a large sample of carers drawn from six European countries. Design and Methods: We used a cross-sectional survey, with approximately 1,000 carers recruited in each of six countries by means of a common standard evaluation protocol. Our saturation recruitment of a designated…
Prapamontol, Tippawan; Sutan, Kunrunya; Laoyang, Sompong; Hongsibsong, Surat; Lee, Grace; Yano, Yukiko; Hunter, Ronald Elton; Ryan, P Barry; Barr, Dana Boyd; Panuwet, Parinya
2014-01-01
We report two analytical methods for the measurement of dialkylphosphate (DAP) metabolites of organophosphate pesticides in human urine. These methods were independently developed/modified and implemented in two separate laboratories and cross-validated. The aim was to develop simple, cost-effective, and reliable methods that could use available resources and sample matrices in Thailand and the United States. While several methods already exist, we found that direct application of these methods required modification of sample preparation and chromatographic conditions to render accurate, reliable data. The problems encountered with existing methods were attributable to urinary matrix interferences and differences in the pH of urine samples and reagents used during the extraction and derivatization processes. Thus, we provide information on key parameters that require attention during method modification and execution and that affect the ruggedness of the methods. The methods presented here employ gas chromatography (GC) coupled with either flame photometric detection (FPD) or electron impact ionization-mass spectrometry (EI-MS) with isotopic dilution quantification. The limits of detection ranged from 0.10 ng/mL urine to 2.5 ng/mL urine (for GC-FPD), while the limits of quantification ranged from 0.25 ng/mL urine to 2.5 ng/mL urine (for GC-MS), for all six common DAP metabolites (i.e., dimethylphosphate, dimethylthiophosphate, dimethyldithiophosphate, diethylphosphate, diethylthiophosphate, and diethyldithiophosphate). Each method showed a relative recovery range of 94-119% (for GC-FPD) and 92-103% (for GC-MS), and relative standard deviations (RSD) of less than 20%. Cross-validation was performed on the same set of urine samples (n = 46) collected from pregnant women residing in the agricultural areas of northern Thailand.
The results from split sample analysis from both laboratories agreed well for each metabolite, suggesting that each method can produce comparable data. In addition, results from analyses of specimens from the German External Quality Assessment Scheme (G-EQUAS) suggested that the GC-FPD method produced accurate results that can be reasonably compared to other studies. Copyright © 2013 Elsevier GmbH. All rights reserved.
NASA Astrophysics Data System (ADS)
Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise
2017-11-01
The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge for fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes to train and validate empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions valid only for the data used and that are too complex to make inferences about the underlying process.
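The intuition behind NOIS — using artificial, signal-free spectra to quantify overfitting — can be sketched as follows (toy ordinary least squares in place of the paper's seven regression techniques; all names and data are illustrative):

```python
# Sketch: fit a model to artificial predictors containing no real signal;
# any apparent fit there measures pure overfitting, and it grows with
# model complexity.
import numpy as np

rng = np.random.default_rng(2)
n = 30                        # few samples relative to model size
y = rng.normal(size=n)        # "response" that carries no real signal

def apparent_r2(n_predictors):
    """Apparent R² against signal-free artificial spectra."""
    X = rng.normal(size=(n, n_predictors))     # artificial noise spectra
    Xd = np.column_stack([np.ones(n), X])      # add intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1.0 - resid.var() / y.var()

# The overfitting index tends to grow with complexity despite zero signal.
print([round(apparent_r2(k), 2) for k in (1, 5, 15, 25)])
```

A complexity level whose apparent fit on noise approaches its fit on real data is, by this logic, not supported by the data — the selection rule NOIS formalises.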
NASA Astrophysics Data System (ADS)
Moustafa, Azza Aziz; Salem, Hesham; Hegazy, Maha; Ali, Omnia
2015-02-01
Simple, accurate, and selective methods have been developed and validated for the simultaneous determination of a ternary mixture of Chlorpheniramine maleate (CPM), Pseudoephedrine HCl (PSE) and Ibuprofen (IBF) in tablet dosage form. Four univariate methods manipulating ratio spectra were applied: method A is the double divisor-ratio difference spectrophotometric method (DD-RD); method B is the double divisor-derivative ratio spectrophotometric method; method C is the derivative ratio spectrum-zero crossing method (DRZC); and method D is mean centering of ratio spectra (MCR). Two multivariate methods were also developed and validated: methods E and F are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods have the advantage of simultaneous determination of the mentioned drugs without prior separation steps. They were successfully applied to laboratory-prepared mixtures and to a commercial pharmaceutical preparation without any interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with the official methods, and no significant difference was observed regarding either accuracy or precision.
sNebula, a network-based algorithm to predict binding between human leukocyte antigens and peptides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Heng; Ye, Hao; Ng, Hui Wen
Understanding the binding between human leukocyte antigens (HLAs) and peptides is important to understand the functioning of the immune system. Since it is time-consuming and costly to measure the binding between large numbers of HLAs and peptides, computational methods including machine learning models and network approaches have been developed to predict HLA-peptide binding. However, there are several limitations to the existing methods. We developed a network-based algorithm called sNebula to address these limitations. We curated qualitative Class I HLA-peptide binding data and demonstrated the prediction performance of sNebula on this dataset using leave-one-out cross-validation and five-fold cross-validations. Furthermore, this algorithm can predict not only peptides of different lengths and different types of HLAs, but also the peptides or HLAs that have no existing binding data. We believe sNebula is an effective method to predict HLA-peptide binding and thus improve our understanding of the immune system.
Lanza, Ian R.; Bhagra, Sumit; Nair, K. Sreekumaran; Port, John D.
2011-01-01
Purpose To cross-validate skeletal muscle oxidative capacity measured by 31P-MRS with in vitro measurements of oxidative capacity in mitochondria isolated from muscle biopsies of the same muscle group in 18 healthy adults. Materials and Methods Oxidative capacity in vivo was determined from PCr recovery kinetics following a 30-s maximal isometric knee extension. State 3 respiration was measured in isolated mitochondria using high-resolution respirometry. A second cohort of 10 individuals underwent two 31P-MRS testing sessions to assess the test-retest reproducibility of the method. Results Overall, the in vivo and in vitro methods were well correlated (r = 0.66–0.72) and showed good agreement by Bland-Altman plots. Excellent reproducibility was observed for the PCr recovery rate constant (CV = 4.6%, ICC = 0.85) and calculated oxidative capacity (CV = 3.4%, ICC = 0.83). Conclusion These results indicate that 31P-MRS corresponds well with gold-standard in vitro measurements and is highly reproducible. PMID:22006551
sNebula, a network-based algorithm to predict binding between human leukocyte antigens and peptides
Luo, Heng; Ye, Hao; Ng, Hui Wen; Sakkiah, Sugunadevi; Mendrick, Donna L.; Hong, Huixiao
2016-01-01
Understanding the binding between human leukocyte antigens (HLAs) and peptides is important to understand the functioning of the immune system. Since it is time-consuming and costly to measure the binding between large numbers of HLAs and peptides, computational methods including machine learning models and network approaches have been developed to predict HLA-peptide binding. However, there are several limitations for the existing methods. We developed a network-based algorithm called sNebula to address these limitations. We curated qualitative Class I HLA-peptide binding data and demonstrated the prediction performance of sNebula on this dataset using leave-one-out cross-validation and five-fold cross-validations. This algorithm can predict not only peptides of different lengths and different types of HLAs, but also the peptides or HLAs that have no existing binding data. We believe sNebula is an effective method to predict HLA-peptide binding and thus improve our understanding of the immune system. PMID:27558848
A cross-domain communication resource scheduling method for grid-enabled communication networks
NASA Astrophysics Data System (ADS)
Zheng, Xiangquan; Wen, Xiang; Zhang, Yongding
2011-10-01
To support a wide range of grid applications in environments where various heterogeneous communication networks coexist, it is important to enable advanced capabilities for on-demand, dynamic integration and efficient co-sharing of cross-domain heterogeneous communication resources, thereby providing communication services that no single communication resource can afford. Based on plug-and-play co-sharing and soft integration of communication resources, a grid-enabled communication network is flexibly built up to provide on-demand communication services for grid applications with various quality-of-service requirements. Based on an analysis of joint job and communication resource scheduling in grid-enabled communication networks (GECN), this paper presents a cooperative scheduling method for communication resources across multiple domains and describes its main processes, such as traffic requirement resolution for communication services, cross-multi-domain negotiation on communication resources, and on-demand communication resource scheduling. The presented method provides communication service capability for cross-domain traffic delivery in GECNs. Further research towards validation and implementation of the presented method is outlined at the end.
The use of the FACT-H&N (v4) in clinical settings within a developing country: a mixed method study.
Bilal, Sobia; Doss, Jennifer Geraldine; Rogers, Simon N
2014-12-01
In the last decade there has been increasing awareness of the 'quality of life' (QOL) of cancer survivors in developing countries. The study aimed to cross-culturally adapt and validate the FACT-H&N (v4) in the Urdu language for Pakistani head and neck cancer patients. In this study the 'same language adaptation method' was used. Cognitive debriefing was done through in-depth interviews of 25 patients to assess semantic, operational and conceptual equivalence. The validation phase included 50 patients to evaluate the psychometric properties. The translated FACT-H&N was easily comprehended (100%). Cronbach's alpha for the FACT-G subscales ranged from 0.726 - 0.969. The head and neck subscale and the Pakistani questions subscale showed low internal consistency (0.426 and 0.541, respectively). The instrument demonstrated known-group validity in differentiating patients of different clinical stages, treatment status and tumor sites (p < 0.05). Most FACT summary scales correlated strongly with each other (r > 0.75) and showed convergent validity (r > 0.90), with little discriminant validity. Factor analysis revealed 6 factors explaining 85.1% of the total variance, with a very good (>0.8) Kaiser-Meyer-Olkin measure and a highly significant Bartlett's Test of Sphericity (p < 0.001). The cross-culturally adapted Urdu-language FACT-H&N showed adequate reliability and validity to be incorporated in Pakistani clinical settings for head and neck cancer patients. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.
Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi
2017-09-22
DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The current main prediction approach is based on machine learning, and its accuracy mainly depends on the feature extraction method. Therefore, using an efficient feature representation method is important to enhance classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely K-Skip-N-Grams, information theory, and sequential and structural features (SSF), is used to represent the protein sequences and improve feature representation ability. In addition, the classifier is a support vector machine. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained from a combination of the three feature extractions show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced mixed-feature technique. The feature vectors that combine SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over the single features.
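The mixed-feature idea — concatenating vectors from several extractors into one representation per sequence — can be sketched on a toy alphabet (illustrative skip-gram-style counts only; the paper uses protein sequences plus information-theoretic and structural features, feeding a support vector machine):

```python
# Sketch: build a mixed feature vector by concatenating several extractors.
from itertools import product

ALPHABET = "ACGT"  # toy 4-letter alphabet; real work uses 20 amino acids

def skip_bigrams(seq, k):
    """Counts of letter pairs separated by exactly k positions (k-skip)."""
    counts = {p: 0 for p in product(ALPHABET, repeat=2)}
    for i in range(len(seq) - k - 1):
        counts[(seq[i], seq[i + k + 1])] += 1
    return [counts[p] for p in sorted(counts)]

def composition(seq):
    """Simple letter-frequency features."""
    return [seq.count(a) / len(seq) for a in ALPHABET]

def mixed_features(seq):
    # Concatenate k-skip bigrams (k = 0, 1) with composition features.
    return skip_bigrams(seq, 0) + skip_bigrams(seq, 1) + composition(seq)

vec = mixed_features("ACGTACGT")
print(len(vec))  # → 36 (16 + 16 + 4)
```

The concatenated vector would then be the input to the classifier, optionally after dimensionality reduction as in the paper.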
Bass, Judith K; Ryder, Robert W; Lammers, Marie-Christine; Mukaba, Thibaut N; Bolton, Paul A
2008-12-01
To determine if a post-partum depression syndrome exists among mothers in Kinshasa, Democratic Republic of Congo, by adapting and validating standard screening instruments. Using qualitative interviewing techniques, we interviewed a convenience sample of 80 women living in a large peri-urban community to better understand local conceptions of mental illness. We used this information to adapt two standard depression screeners, the Edinburgh Post-partum Depression Scale and the Hopkins Symptom Checklist. In a subsequent quantitative study, we identified another 133 women with and without the local depression syndrome and used this information to validate the adapted screening instruments. Based on the qualitative data, we found a local syndrome that closely approximates the Western model of major depressive disorder. The women we interviewed, representative of the local populace, considered this an important syndrome among new mothers because it negatively affects women and their young children. Women (n = 41) identified as suffering from this syndrome had statistically significantly higher depression severity scores on both adapted screeners than women identified as not having this syndrome (n = 20; P < 0.0001). When it is unclear or unknown if Western models of psychopathology are appropriate for use in the local context, these models must be validated to ensure cross-cultural applicability. Using a mixed-methods approach we found a local syndrome similar to depression and validated instruments to screen for this disorder. As the importance of compromised mental health in developing world populations becomes recognized, the methods described in this report will be useful more widely.
Stanifer, John W.; Karia, Francis; Voils, Corrine I.; Turner, Elizabeth L.; Maro, Venance; Shimbi, Dionis; Kilawe, Humphrey; Lazaro, Matayo; Patel, Uptal D.
2015-01-01
Introduction Non-communicable diseases are a growing global burden, and structured surveys can identify critical gaps to address this epidemic. In sub-Saharan Africa, there are very few well-tested survey instruments measuring population attributes related to non-communicable diseases. To meet this need, we have developed and validated the first instrument evaluating knowledge, attitudes and practices pertaining to chronic kidney disease in a Swahili-speaking population. Methods and Results Between December 2013 and June 2014, we conducted a four-stage, mixed-methods study among adults from the general population of northern Tanzania. In stage 1, the survey instrument was constructed in English by a group of cross-cultural experts from multiple disciplines and through content analysis of focus group discussions to ensure local significance. Following translation, in stage 2, we piloted the survey through cognitive and structured interviews, and in stage 3, in order to obtain initial evidence of reliability and construct validity, we recruited and then administered the instrument to a random sample of 606 adults. In stage 4, we conducted analyses to establish test-retest reliability and known-groups validity which was informed by thematic analysis of the qualitative data in stages 1 and 2. The final version consisted of 25 items divided into three conceptual domains: knowledge, attitudes and practices. Each item demonstrated excellent test-retest reliability with established content and construct validity. Conclusions We have developed a reliable and valid cross-cultural survey instrument designed to measure knowledge, attitudes and practices of chronic kidney disease in a Swahili-speaking population of Northern Tanzania. 
This instrument may be valuable for addressing gaps in non-communicable diseases care by understanding preferences regarding healthcare, formulating educational initiatives, and directing development of chronic disease management programs that incorporate chronic kidney disease across sub-Saharan Africa. PMID:25811781
Space sickness predictors suggest fluid shift involvement and possible countermeasures
NASA Technical Reports Server (NTRS)
Simanonok, K. E.; Moseley, E. C.; Charles, J. B.
1992-01-01
Preflight data from 64 first-time Shuttle crew members were examined retrospectively to predict space sickness severity (NONE, MILD, MODERATE, or SEVERE) by discriminant analysis. From 9 input variables relating to fluid, electrolyte, and cardiovascular status, 8 variables were chosen by discriminant analysis that correctly predicted space sickness severity with 59% success by one method of cross-validation on the original sample and 67% by another method. The 8 variables, in order of their importance for predicting space sickness severity, are sitting systolic blood pressure, serum uric acid, calculated blood volume, serum phosphate, urine osmolality, environmental temperature at the launch site, red cell count, and serum chloride. These results suggest the presence of predisposing physiologic factors to space sickness that implicate a fluid shift etiology. Addition of a 10th input variable, hours spent in the Weightless Environment Training Facility (WETF), improved the prediction of space sickness severity to 66% success by the first method of cross-validation on the original sample and to 71% by the second method. The data suggest that WETF training may reduce space sickness severity.
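The general scheme — predicting a severity class from preflight variables and checking it by cross-validation on the original sample — can be sketched with synthetic stand-in data (a simple nearest-centroid rule replaces the original discriminant analysis, and all values are simulated):

```python
# Sketch: leave-one-out cross-validation of a 4-class severity classifier.
import numpy as np

rng = np.random.default_rng(3)
n, n_vars, n_classes = 64, 8, 4                 # 64 crew members, 8 predictors
severity = rng.integers(0, n_classes, size=n)   # NONE..SEVERE coded 0..3
X = rng.normal(size=(n, n_vars)) + severity[:, None]  # class-shifted features

def predict(train_X, train_y, x):
    """Nearest class centroid (a simple stand-in discriminant)."""
    centroids = [train_X[train_y == c].mean(axis=0) for c in range(n_classes)]
    return int(np.argmin([np.linalg.norm(x - c) for c in centroids]))

hits = 0
for i in range(n):                              # leave-one-out loop
    mask = np.arange(n) != i
    hits += predict(X[mask], severity[mask], X[i]) == severity[i]
print(f"leave-one-out accuracy: {hits / n:.2f}")
```

Scoring each subject with a model trained on the remaining 63 mimics how a cross-validated success rate on the original sample is obtained.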
2D-QSAR and 3D-QSAR Analyses for EGFR Inhibitors
Zhao, Manman; Zheng, Linfeng; Qiu, Chun
2017-01-01
Epidermal growth factor receptor (EGFR) is an important target for cancer therapy. In this study, EGFR inhibitors were investigated to build a two-dimensional quantitative structure-activity relationship (2D-QSAR) model and a three-dimensional quantitative structure-activity relationship (3D-QSAR) model. In the 2D-QSAR model, a support vector machine (SVM) classifier combined with a feature selection method was applied to predict whether a compound was an EGFR inhibitor. As a result, the prediction accuracy of the 2D-QSAR model was 98.99% using a tenfold cross-validation test and 97.67% using an independent set test. Then, in the 3D-QSAR model, a model with q² = 0.565 (cross-validated correlation coefficient) and r² = 0.888 (non-cross-validated correlation coefficient) was built to predict the activity of EGFR inhibitors. The mean absolute error (MAE) of the training set and test set was 0.308 log units and 0.526 log units, respectively. In addition, molecular docking was also employed to investigate the interaction between EGFR inhibitors and EGFR. PMID:28630865
Online Cross-Validation-Based Ensemble Learning
Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark
2017-01-01
Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. PMID:28474419
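The online cross-validation principle — score each candidate on a new observation before updating it, then select the candidate with the lowest cumulative loss — can be sketched with two toy streaming estimators (illustrative only, not the authors' estimator library):

```python
# Sketch: online cross-validation over a library of candidate estimators.
# Each incoming point is first used to score every candidate (prediction
# made before seeing it), and only then to update them.

class RunningMean:
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def predict(self):
        return self.mean
    def update(self, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n

class EMA:
    """Exponential moving average with smoothing factor alpha."""
    def __init__(self, alpha):
        self.alpha, self.mean = alpha, 0.0
    def predict(self):
        return self.mean
    def update(self, y):
        self.mean += self.alpha * (y - self.mean)

candidates = {"running_mean": RunningMean(), "ema_0.9": EMA(0.9)}
loss = {name: 0.0 for name in candidates}
stream = [1.0, 1.2, 0.9, 1.1, 1.0, 0.95, 1.05]   # stationary toy stream
for y in stream:
    for name, est in candidates.items():
        loss[name] += (est.predict() - y) ** 2    # score first...
        est.update(y)                             # ...then update
best = min(loss, key=loss.get)
print("online-CV selection:", best)  # → online-CV selection: running_mean
```

Because each prediction is made before the point is used for updating, the cumulative loss is an honest out-of-sample criterion, never revisiting past data.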
Gu, Xiang; Liu, Cong-Jian; Wei, Jian-Jie
2017-11-13
Given that the pathogenesis of ankylosing spondylitis (AS) remains unclear, the aim of this study was to detect potentially functional pathway cross-talk in AS to further reveal the pathogenesis of this disease. Using a microarray profile of AS and biological pathways as study objects, a Monte Carlo cross-validation method was used to identify significant pathway cross-talks. In the process of Monte Carlo cross-validation, all steps were iterated 50 times. For each run, differentially expressed genes (DEGs) between the two groups were detected, and the potentially disrupted pathways enriched by the DEGs were then extracted. Subsequently, we established a discriminating score (DS) for each pathway pair according to the distribution of gene expression levels. After that, we utilized a random forest (RF) classification model to screen out the top 10 paired pathways with the highest area under the curve (AUC) values, computed using a 10-fold cross-validation approach. After 50 bootstraps, the best pairs of pathways were identified. According to their AUC values, the pair of pathways comprising the antigen presentation pathway and fMLP signaling in neutrophils achieved the best AUC value of 1.000, which indicated that this pathway cross-talk could distinguish AS patients from normal subjects. Moreover, the paired pathways of SAPK/JNK signaling and mitochondrial dysfunction were involved in 5 bootstraps. Two paired pathways (the antigen presentation pathway and fMLP signaling in neutrophils, as well as SAPK/JNK signaling and mitochondrial dysfunction) can accurately distinguish AS and control samples. These paired pathways may be helpful to identify patients with AS for early intervention.
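The AUC criterion used to rank pathway pairs has a simple rank-statistic form: the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch with hypothetical discriminating-score values:

```python
# Sketch: AUC as a rank statistic over per-sample discriminating scores.
def auc(case_scores, control_scores):
    """P(random case score > random control score), ties count half."""
    wins = sum((c > k) + 0.5 * (c == k)
               for c in case_scores for k in control_scores)
    return wins / (len(case_scores) * len(control_scores))

cases = [2.1, 1.8, 2.5, 1.9]      # hypothetical DS values, AS patients
controls = [1.0, 1.2, 1.7, 0.9]   # hypothetical DS values, normal subjects
print(auc(cases, controls))  # → 1.0: every case outscores every control
```

An AUC of 1.000, as reported for the best pathway pair, means the two score distributions do not overlap at all.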
Rodrigues, Marcelo F; Michel-Crosato, Edgard; Cardoso, Jefferson R; Traebert, Jefferson
2009-06-01
Cross-cultural translation and psychometric testing. To translate and cross-culturally adapt the Quebec Back Pain Disability Scale (QDS) to Brazilian Portuguese and to examine its validity and reliability. Current literature shows the need to adopt reliable and internationally standardized methods for the analysis of low back pain. To our knowledge, this specific questionnaire has not been translated and validated for Portuguese-speaking patients. The translation and cross-cultural adaptation of the QDS were developed in agreement with internationally recommended methodology, and the resulting product was evaluated in this study with 54 consecutive patients. Internal consistency was obtained through Cronbach's alpha; reliability was estimated through the intraclass correlation coefficient and the Bland and Altman agreement (d = mean difference). Validity was determined by correlating the scores of the Brazil-QDS with the Brazilian version of the Roland-Morris Questionnaire and the Visual Analogue Pain Scale by means of the Spearman rank correlation coefficient. The internal consistency obtained was excellent (Cronbach's alpha = 0.97). Intraobserver and interobserver reliability were considered strong (ICC = 0.93, d = 0.68 and ICC = 0.96, d = 0.57, respectively). The correlation with the Brazilian Roland-Morris Questionnaire and with the Visual Analogue Scale was high (r = 0.857 and r = 0.758, respectively). The data showed that the process of translation and cross-cultural adaptation was successful and that the adapted instrument demonstrated excellent psychometric properties.
Assessment of the MPACT Resonance Data Generation Procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Williams, Mark L.
Currently, heterogeneous models are used to generate resonance self-shielded cross-section tables as a function of background cross section for important nuclides such as 235U and 238U, by performing the CENTRM (Continuous Energy Transport Model) slowing-down calculation with the MOC (Method of Characteristics) spatial discretization and ESSM (Embedded Self-Shielding Method) calculations to obtain background cross sections. The resonance self-shielded cross-section tables are then converted into subgroup data, which are used to estimate problem-dependent self-shielded cross sections in MPACT (Michigan Parallel Characteristics Transport Code). Although this procedure has been developed, and the resulting resonance data have been generated and validated by benchmark calculations, no assessment has been performed to verify that the resonance data are properly generated by the procedure and correctly utilized in MPACT. This study focuses on assessing the procedure and its proper use in MPACT.
Correction of sampling bias in a cross-sectional study of post-surgical complications.
Fluss, Ronen; Mandel, Micha; Freedman, Laurence S; Weiss, Inbal Salz; Zohar, Anat Ekka; Haklai, Ziona; Gordon, Ethel-Sherry; Simchen, Elisheva
2013-06-30
Cross-sectional designs are often used to monitor the proportion of infections and other post-surgical complications acquired in hospitals. However, conventional methods for estimating incidence proportions, when applied to cross-sectional data, may yield highly biased estimators, as cross-sectional designs tend to include a high proportion of patients with prolonged hospitalization. One common solution is to use sampling weights in the analysis, which adjust for the sampling bias inherent in a cross-sectional design. The current paper describes in detail a method to build weights for a national survey of post-surgical complications conducted in Israel. We use the weights to estimate the probability of surgical site infection following colon resection, and validate the results of the weighted analysis by comparing them with those obtained from a parallel study with a historically prospective design. Copyright © 2012 John Wiley & Sons, Ltd.
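The idea behind such sampling weights can be illustrated on synthetic data (a generic length-bias sketch, not the paper's actual weight construction): a cross-sectional survey is more likely to catch long-stay patients, so weighting each sampled patient by the inverse of their length of stay approximately recovers the cohort-level incidence proportion.

```python
# Illustrative length-biased sampling and inverse-weight correction.
import random

random.seed(0)

# Simulated cohort: infected patients stay longer, so a naive
# cross-sectional estimate overstates the infection proportion.
cohort = []
for _ in range(100_000):
    infected = random.random() < 0.10                            # true incidence 10%
    los = 1.0 + random.expovariate(1 / (12 if infected else 4))  # days in hospital
    cohort.append((infected, los))

# Length-biased sample: inclusion probability proportional to length of stay.
sample = random.choices(cohort, weights=[los for _, los in cohort], k=20_000)

# Naive proportion vs. the 1/length-of-stay weighted estimate.
naive = sum(inf for inf, _ in sample) / len(sample)
weights = [1.0 / los for _, los in sample]
weighted = sum(w for (inf, _), w in zip(sample, weights) if inf) / sum(weights)
```

Here the naive estimate lands near 22% while the weighted estimate returns close to the true 10%, which is the bias-correction effect the paper's survey weights are designed to achieve.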
Scoring and staging systems using cox linear regression modeling and recursive partitioning.
Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H
2006-01-01
Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.
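A toy sketch of what an amalgamation step might look like (an assumed form for illustration; the paper's algorithm for censored survival data is more involved): starting from many small ordered risk groups, repeatedly merge the adjacent pair with the most similar mean outcome until the target number of stages remains.

```python
# Hypothetical amalgamation of ordered risk groups into a target
# number of stages; group means/sizes are illustrative placeholders.

def amalgamate(group_means, group_sizes, target):
    means = list(group_means)
    sizes = list(group_sizes)
    while len(means) > target:
        # Find the adjacent pair with the most similar mean outcome.
        i = min(range(len(means) - 1), key=lambda j: abs(means[j] - means[j + 1]))
        merged_size = sizes[i] + sizes[i + 1]
        merged_mean = (means[i] * sizes[i] + means[i + 1] * sizes[i + 1]) / merged_size
        means[i:i + 2] = [merged_mean]
        sizes[i:i + 2] = [merged_size]
    return means, sizes

# Ten ordered groups collapsing into three stages.
means, sizes = amalgamate(
    [0.1, 0.12, 0.3, 0.33, 0.35, 0.6, 0.62, 0.9, 0.92, 0.95],
    [10] * 10, target=3)
```

Because each merge replaces two adjacent groups by their weighted mean, the stage ordering is preserved, which is the property a staging system needs.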
NASA Astrophysics Data System (ADS)
Breden, Maxime; Castelli, Roberto
2018-05-01
In this paper, we present and apply a computer-assisted method to study steady states of a triangular cross-diffusion system. Our approach consists of an a posteriori validation procedure based on a fixed point argument around a numerically computed solution, in the spirit of the Newton-Kantorovich theorem. It allows us to prove the existence of various non-homogeneous steady states for different parameter values. In some situations, we obtain as many as 13 coexisting steady states. We also apply the a posteriori validation procedure to study the linear stability of the obtained steady states, proving that many of them are in fact unstable.
Neutron activation analysis of certified samples by the absolute method
NASA Astrophysics Data System (ADS)
Kadem, F.; Belouadah, N.; Idiri, Z.
2015-07-01
Nuclear reaction analysis techniques are mainly based on the relative method or on the use of activation cross sections. In order to validate nuclear data for calculated cross sections evaluated from systematic studies, we used the neutron activation analysis (NAA) technique with the absolute method to determine the constituent concentrations of certified samples of animal blood, milk and hay. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental activation equation connects several physical parameters, including the cross section, which is essential for the quantitative determination of the different elements composing the sample without resorting to a standard sample. The results showed that the absolute method is as precise as the relative method, which requires a standard sample for each element to be quantified.
Tomaschewski-Barlem, Jamila Geri; Lunardi, Valéria Lerch; Barlem, Edison Luiz Devos; da Silveira, Rosemary Silva; Dalmolin, Graziele de Lima; Ramos, Aline Marcelino
2015-01-01
Objective: to adapt culturally and validate the Protective Nursing Advocacy Scale for Brazilian nurses. Method: methodological study carried out with 153 nurses from two hospitals in the South region of Brazil, one public and the other philanthropic. The cross-cultural adaptation of the Protective Nursing Advocacy Scale was performed according to international standards, and its validation was carried out for use in the Brazilian context by means of factor analysis and Cronbach's alpha as a measure of internal consistency. Results: through evaluation by a committee of experts and application of a pre-test, the face validity and content validity of the instrument were considered satisfactory. From the factor analysis, five constructs were identified: negative implications of the advocacy practice, advocacy actions, facilitators of the advocacy practice, perceptions that favor advocacy practice, and barriers to advocacy practice. The instrument showed satisfactory internal consistency, with Cronbach's alpha values ranging from 0.70 to 0.87. Conclusion: the Protective Nursing Advocacy Scale - Brazilian version is a valid and reliable instrument for use in the evaluation of beliefs and actions of health advocacy performed by Brazilian nurses in their professional practice environment. PMID:26444169
Naghdi, Soofia; Ansari, Noureddin Nakhostin; Raji, Parvin; Shamili, Aryan; Amini, Malek; Hasson, Scott
2016-01-01
To translate and cross-culturally adapt the Functional Independence Measure (FIM) into the Persian language and to test the reliability and validity of the Persian FIM (PFIM) in patients with stroke. In this cross-sectional study, carried out in an outpatient stroke rehabilitation center, 40 patients with stroke (mean age 60 years) participated. A standard forward-backward translation method with expert panel validation was followed to develop the PFIM. Two experienced occupational therapists (OTs) assessed the patients independently on all items of the PFIM in a single session for inter-rater reliability. One of the OTs reassessed the patients after 1 week for intra-rater reliability. There were no floor or ceiling effects for the PFIM. Excellent inter-rater and intra-rater reliability was noted for the PFIM total score and the motor and cognitive subscales (ICC(agreement) = 0.88-0.98). According to the Bland-Altman agreement analysis, there was no systematic bias between raters or within raters. The internal consistency of the PFIM was good, with Cronbach's alpha values ranging from 0.70 to 0.96. Principal component analysis with varimax rotation indicated a three-factor structure: (1) self-care and mobility; (2) sphincter control; and (3) cognition; these jointly accounted for 74.8% of the total variance. Construct validity was supported by a significant Pearson correlation between the PFIM and the Persian Barthel Index (r = 0.95; p < 0.001). The PFIM is a highly reliable and valid instrument for measuring the functional status of Persian patients with stroke. The Functional Independence Measure (FIM) is an outcome measure for disability based on the International Classification of Functioning, Disability and Health (ICF). The FIM was cross-culturally adapted and validated into the Persian language. The Persian version of the FIM (PFIM) is reliable and valid for assessing the functional status of patients with stroke. 
The PFIM can be used in Persian speaking countries to assess the limitations in activities of daily living of patients with stroke.
Saloheimo, T; González, S A; Erkkola, M; Milauskas, D M; Meisel, J D; Champagne, C M; Tudor-Locke, C; Sarmiento, O; Katzmarzyk, P T; Fogelholm, M
2015-01-01
Objective: The main aim of this study was to assess the reliability and validity of a food frequency questionnaire with 23 food groups (I-FFQ) among a sample of 9–11-year-old children from three countries that differ in economic development and income distribution, and to assess differences between country sites. Furthermore, we assessed factors associated with the I-FFQ's performance. Methods: This was an ancillary study of the International Study of Childhood Obesity, Lifestyle and the Environment. The reliability (n=321) and validity (n=282) components of this study had the same participants. Participation rates were 95% and 70%, respectively. Participants completed two I-FFQs with a mean interval of 4.9 weeks to assess reliability. A 3-day pre-coded food diary (PFD) was used as the reference method in the validity analyses. Wilcoxon signed-rank tests, intraclass correlation coefficients and cross-classifications were used to assess the reliability of the I-FFQ. Spearman correlation coefficients, percentage difference and cross-classifications were used to assess the validity of the I-FFQ. A logistic regression model was used to assess the relation of selected variables to the estimate of validity. Analyses based on information in the PFDs were performed to assess how participants interpreted food groups. Results: Reliability correlation coefficients ranged from 0.37 to 0.78, and gross misclassification for all food groups was <5%. Validity correlation coefficients were below 0.5 for 22/23 food groups, and they differed among country sites. For validity, gross misclassification was <5% for 22/23 food groups. Over- or underestimation did not appear for 19/23 food groups. Logistic regression showed that country of participation and parental education were associated (P⩽0.05) with the validity of the I-FFQ. Analyses of children's interpretation of food groups suggested that the meaning of most food groups was understood by the children. 
Conclusion: I-FFQ is a moderately reliable method and its validity ranged from low to moderate, depending on food group and country site. PMID:27152180
ERIC Educational Resources Information Center
Lubben, James; Blozik, Eva; Gillmann, Gerhard; Iliffe, Steve; von Renteln-Kruse, Wolfgang; Beck, John C.; Stuck, Andreas E.
2006-01-01
Purpose: There is a need for valid and reliable short scales that can be used to assess social networks and social supports and to screen for social isolation in older persons. Design and Methods: The present study is a cross-national and cross-cultural evaluation of the performance of an abbreviated version of the Lubben Social Network Scale…
ERIC Educational Resources Information Center
Shahnazari-Dorcheh, Mohammadtaghi; Roshan, Saeed
2012-01-01
Due to the lack of span test for the use in language-specific and cross-language studies, this study provides L1 and L2 researchers with a reliable language-independent span test (math span test) for the measurement of working memory capacity. It also describes the development, validation, and scoring method of this test. This test included 70…
Bernhard, Gerda; Knibbe, Ronald A.; von Wolff, Alessa; Dingoyan, Demet; Schulz, Holger; Mösko, Mike
2015-01-01
Background Cultural competence of healthcare professionals (HCPs) is recognized as a strategy to reduce cultural disparities in healthcare. However, standardised, valid and reliable instruments to assess HCPs’ cultural competence are notably lacking. The present study aims to 1) identify the core components of cultural competence from a healthcare perspective, 2) to develop a self-report instrument to assess cultural competence of HCPs and 3) to evaluate the psychometric properties of the new instrument. Methods The conceptual model and initial item pool, which were applied to the cross-cultural competence instrument for the healthcare profession (CCCHP), were derived from an expert survey (n = 23), interviews with HCPs (n = 12), and a broad narrative review on assessment instruments and conceptual models of cultural competence. The item pool was reduced systematically, which resulted in a 59-item instrument. A sample of 336 psychologists, in advanced psychotherapeutic training, and 409 medical students participated, in order to evaluate the construct validity and reliability of the CCCHP. Results Construct validity was supported by principal component analysis, which led to a 32-item six-component solution with 50% of the total variance explained. The different dimensions of HCPs’ cultural competence are: Cross-Cultural Motivation/Curiosity, Cross-Cultural Attitudes, Cross-Cultural Skills, Cross-Cultural Knowledge/Awareness and Cross-Cultural Emotions/Empathy. For the total instrument, the internal consistency reliability was .87 and the dimension’s Cronbach’s α ranged from .54 to .84. The discriminating power of the CCCHP was indicated by statistically significant mean differences in CCCHP subscale scores between predefined groups. Conclusions The 32-item CCCHP exhibits acceptable psychometric properties, particularly content and construct validity to examine HCPs’ cultural competence. 
The CCCHP with its five dimensions offers a comprehensive assessment of HCPs’ cultural competence, and has the ability to distinguish between groups that are expected to differ in cultural competence. This instrument can foster professional development through systematic self-assessment and thus contributes to improve the quality of patient care. PMID:26641876
Efficient strategies for leave-one-out cross validation for genomic best linear unbiased prediction.
Cheng, Hao; Garrick, Dorian J; Fernando, Rohan L
2017-01-01
A random multiple-regression model that simultaneously fits all allele substitution effects for additive markers or haplotypes as uncorrelated random effects was proposed for Best Linear Unbiased Prediction using whole-genome data. Leave-one-out cross validation can be used to quantify the predictive ability of a statistical model. Naive application of leave-one-out cross validation is computationally intensive because the training and validation analyses need to be repeated n times, once for each observation. Efficient leave-one-out cross validation strategies are presented here, requiring little more effort than a single analysis. The efficient strategies are 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model will increase with the number of observations.
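The flavor of the shortcut can be seen in ridge regression, which has the same form as a GBLUP-style random-effects fit: with H = X(XᵀX + λI)⁻¹Xᵀ, the leave-one-out residual is (yᵢ − ŷᵢ)/(1 − Hᵢᵢ), so all n folds follow from a single fit. This identity is standard; the paper derives analogous strategies for the mixed-model equations. The dimensions and λ below are arbitrary placeholders.

```python
# Fast leave-one-out residuals for ridge regression via the hat matrix,
# checked against the naive n-refit loop.
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 30, 50, 1.0
X = rng.normal(size=(n, p))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# One fit: hat matrix, in-sample residuals, and the LOO shortcut.
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
resid = y - H @ y
loo_fast = resid / (1.0 - np.diag(H))

# Naive LOO: refit the model n times, leaving one observation out each time.
loo_naive = np.empty(n)
for i in range(n):
    m = np.ones(n, dtype=bool)
    m[i] = False
    beta = np.linalg.solve(X[m].T @ X[m] + lam * np.eye(p), X[m].T @ y[m])
    loo_naive[i] = y[i] - X[i] @ beta
```

The two residual vectors agree to machine precision, which is why the efficient strategy costs little more than a single analysis.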
Stetter, Markus G; Zeitler, Leo; Steinhaus, Adrian; Kroener, Karoline; Biljecki, Michelle; Schmid, Karl J
2016-01-01
Grain amaranths (Amaranthus spp.) have been cultivated for thousands of years in Central and South America. Their grains are of high nutritional value, but the low yield needs to be increased by selection of superior genotypes from genetically diverse breeding populations. Amaranths are adapted to harsh conditions and can be cultivated on marginal lands although little is known about their physiology. The development of controlled growing conditions and efficient crossing methods is important for research on and improvement of this ancient crop. Grain amaranth was domesticated in the Americas and is highly self-fertilizing with a large inflorescence consisting of thousands of very small flowers. We evaluated three different crossing methods (open pollination, hot water emasculation and hand emasculation) for their efficiency in amaranth and validated them with genetic markers. We identified cultivation conditions that allow an easy control of flowering time by day length manipulation and achieved flowering times of 4 weeks and generation times of 2 months. All three different crossing methods successfully produced hybrid F1 offspring, but with different success rates. Open pollination had the lowest (10%) and hand emasculation the highest success rate (74%). Hot water emasculation showed an intermediate success rate (26%) with a maximum of 94% success. It is simple to perform and suitable for a more large-scale production of hybrids. We further evaluated 11 single nucleotide polymorphism (SNP) markers and found that they were sufficient to validate all crosses of the genotypes used in this study for intra- and interspecific hybridizations. Despite its very small flowers, crosses in amaranth can be carried out efficiently and evaluated with inexpensive SNP markers. Suitable growth conditions strongly reduce the generation time and allow the control of plant height, flowering time, and seed production. 
In combination, this enables the rapid production of segregating populations which makes amaranth an attractive model for basic plant research but also facilitates further the improvement of this ancient crop by plant breeding.
NASA Astrophysics Data System (ADS)
Wayson, Michael B.; Bolch, Wesley E.
2018-04-01
Various computational tools are currently available that facilitate patient organ dosimetry in diagnostic nuclear medicine, yet they are typically restricted to reporting organ doses to ICRP-defined reference phantoms. The present study, while remaining computational phantom based, provides straightforward tools to adjust reference phantom organ dose for both internal photon and electron sources. A wide variety of monoenergetic specific absorbed fractions were computed using radiation transport simulations for tissue spheres of varying size and separation distance. Scaling methods were then constructed for both photon and electron self-dose and cross-dose, with data validation provided from patient-specific voxel phantom simulations, as well as via comparison to the scaling methodology given in MIRD Pamphlet No. 11. Photon and electron self-dose was found to be dependent on both radiation energy and sphere size. Photon cross-dose was found to be mostly independent of sphere size. Electron cross-dose was found to be dependent on sphere size when the spheres were in close proximity, owing to differences in electron range. The validation studies showed that this dataset was more effective than the MIRD 11 method at predicting patient-specific photon doses at both high and low energies, but gave similar results at photon energies between 100 keV and 1 MeV. The MIRD 11 method for electron self-dose scaling was accurate at lower energies but began to break down at higher energies. The photon cross-dose scaling methodology developed in this study showed gains in accuracy of up to 9% for actual patient studies, and the electron cross-dose scaling methodology likewise showed gains in accuracy of up to 9% when only the bremsstrahlung component of the cross-dose was scaled. These dose scaling methods are readily available for incorporation into internal dosimetry software for diagnostic phantom-based organ dosimetry.
Flow in curved ducts of varying cross-section
NASA Astrophysics Data System (ADS)
Sotiropoulos, F.; Patel, V. C.
1992-07-01
Two numerical methods for solving the incompressible Navier-Stokes equations are compared with each other by applying them to calculate laminar and turbulent flows through curved ducts of regular cross-section. Detailed comparisons, between the computed solutions and experimental data, are carried out in order to validate the two methods and to identify their relative merits and disadvantages. Based on the conclusions of this comparative study a numerical method is developed for simulating viscous flows through curved ducts of varying cross-sections. The proposed method is capable of simulating the near-wall turbulence using fine computational meshes across the sublayer in conjunction with a two-layer k-epsilon model. Numerical solutions are obtained for: (1) a straight transition duct geometry, and (2) a hydroturbine draft-tube configuration at model scale Reynolds number for various inlet swirl intensities. The report also provides a detailed literature survey that summarizes all the experimental and computational work in the area of duct flows.
An interlaboratory transfer of a multi-analyte assay between continents.
Georgiou, Alexandra; Dong, Kelly; Hughes, Stephen; Barfield, Matthew
2015-01-01
Alex has worked at GlaxoSmithKline for the past 15 years and currently works within the bioanalytical and toxicokinetic group in the United Kingdom. Alex's role in previous years has been the in-house support of preclinical and clinical bioanalysis, from method development through to sample analysis activities as well as acting as PI for GLP bioanalysis and toxicokinetics. For the past two years, Alex has applied this analytical and regulatory experience to focus on the outsourcing of preclinical bioanalysis, toxicokinetics and clinical bioanalysis, working closely with multiple bioanalytical and in-life CRO partners worldwide. Alex works to support DMPK and Safety Assessment outsourcing activities for GSK across multiple therapeutic areas, from the first GLP study through to late stage clinical PK studies. Transfer and cross-validation of an existing analytical assay between a laboratory providing current analytical support, and a laboratory needed for new or additional support, can present the bioanalyst with numerous challenges. These challenges can be technical or logistical in nature and may prove to be significant when transferring an assay between laboratories in different continents. Part of GlaxoSmithKline's strategy to improve confidence in providing quality data, is to cross-validate between laboratories. If the cross-validation fails predefined acceptance criteria, then a subsequent investigation would follow. This may also prove to be challenging. The importance of thorough planning and good communication throughout assay transfer, cross-validation and any subsequent investigations is illustrated in this case study.
Basis Selection for Wavelet Regression
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)
1998-01-01
A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge of the smoothness (or shape of the basis functions) into the basis selection procedure. The method is demonstrated on sampled functions widely used in the wavelet regression literature, and its results are contrasted with those of other published methods.
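One simple way to choose a shrinkage threshold by cross-validation (a Nason-style even/odd split, a simplified stand-in for the full basis-and-threshold selection described above) is to denoise one half of the samples and score the result against the held-out half:

```python
# Sketch: Haar wavelet denoising with the soft threshold chosen by
# even/odd cross-validation.  The signal, grid, and split rule are
# illustrative assumptions.
import numpy as np

def haar(x):
    # Full Haar decomposition; input length must be a power of two.
    coeffs, a = [], x / 1.0
    while len(a) > 1:
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2))  # detail
        a = (a[0::2] + a[1::2]) / np.sqrt(2)             # approximation
    coeffs.append(a)
    return coeffs

def ihaar(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        s = np.empty(2 * len(a))
        s[0::2] = (a + d) / np.sqrt(2)
        s[1::2] = (a - d) / np.sqrt(2)
        a = s
    return a

def denoise(x, t):
    c = haar(x)
    c = [np.sign(d) * np.maximum(np.abs(d) - t, 0) for d in c[:-1]] + [c[-1]]
    return ihaar(c)

def cv_threshold(y, grid):
    even, odd = y[0::2], y[1::2]
    best_t, best_err = None, np.inf
    for t in grid:
        fe = denoise(even, t)                 # denoise the even half...
        pred = (fe + np.roll(fe, -1)) / 2     # ...predict odd points between
        err = np.mean((pred - odd) ** 2)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

rng = np.random.default_rng(0)
n = 256
signal = np.where(np.arange(n) < n // 2, 0.0, 1.0)  # a step function
y = signal + rng.normal(scale=0.2, size=n)
t = cv_threshold(y, grid=np.linspace(0.0, 1.0, 21))
```

The same held-out scoring loop extends naturally to comparing different bases, which is the selection problem the abstract addresses.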
Choi, Bongsam
2018-01-01
[Purpose] This study aimed to cross-culturally adapt and validate the Korean version of a physical activity measure (K-PAM) for the community-dwelling elderly. [Subjects and Methods] One hundred and thirty-eight community-dwelling elderly people, 32 male and 106 female, participated in the study. All participants were asked to fill out a fifty-one-item questionnaire measuring perceived difficulty in activities of daily living (ADL) for the elderly. A one-parameter item response theory model (Rasch analysis) was applied to determine construct validity and to inspect item-level psychometric properties of the 51 ADL items of the K-PAM. [Results] Person separation reliability (analogous to Cronbach's alpha) for internal consistency ranged from 0.93 to 0.94. A total of 16 items were misfit to the Rasch model. After deletion of the misfit items, the 35 remaining ADL items of the K-PAM were placed in an empirically meaningful hierarchy from easy to hard. The item-person map analysis showed that item difficulty was well matched to elderly respondents with moderate and low ability, apart from a ceiling effect. [Conclusion] The cross-culturally adapted K-PAM was shown to be sufficient for establishing construct validity and stable psychometric properties, as confirmed by person separation reliability and fit statistics.
Nagasaka, Kei; Mizuno, Koji; Thomson, Robert
2018-03-26
For occupant protection, it is important to understand how a car's deceleration time history in crashes can be designed through efficient energy absorption by the car body's structure. In a previous paper, the authors proposed an energy derivative method to determine each structural component's contribution to the longitudinal deceleration of a car passenger compartment in crashes. In this study, this method was extended to two dimensions in order to analyze various crash test conditions. The contribution of each structure estimated from the energy derivative method was compared to that from a conventional finite element (FE) analysis method using cross-sectional forces. A 2-dimensional energy derivative method was established. A simple FE model with a structural column connected to a rigid body was used to confirm the validity of this method and to compare with the results of the conventional cross-sectional force analysis. Applying this method to a full-width frontal impact simulation of a car FE model, the contributions and the cross-sectional forces of the front rails were compared. In addition, this method was applied to a pedestrian headform FE simulation in order to determine the influence of the structural and inertia forces of the hood structures on the deceleration of the headform undergoing planar motion. In an oblique impact of the simple column and rigid body model, the sum of the contributions of each part agrees with the rigid body deceleration, which indicates the validity of the 2-dimensional energy derivative method. Using the energy derivative method, it was observed that each part of the column contributes to the deceleration of the rigid body by collapsing in sequence from front to rear, whereas the cross-sectional force at the rear of the column cannot detect this continuous collapse. 
In the full-width impact of a car, the contributions of the front rails estimated with the energy derivative method were smaller than those obtained from the cross-sectional forces at the rear end of the front rails, due to the deformation of the passenger compartment. For a pedestrian headform impact, the inertial and structural forces of the hood contributed to peaks of the headform deceleration in the initial and latter phases, respectively. Using the 2-dimensional energy derivative method, it is possible to analyze an oblique impact or a pedestrian headform impact with large rotations. This method has advantages over the conventional approach using cross-sectional forces because the contribution of each component to system deceleration can be determined.
Using 171,173Yb(d,p) to benchmark a surrogate reaction for neutron capture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatarik, R; Bernstein, L; Burke, J
2008-08-08
Neutron capture cross sections on unstable nuclei are important for many applications in nuclear structure and astrophysics. Measuring these cross sections directly is a major challenge and often impossible. An indirect approach for measuring these cross sections is the surrogate reaction method, which makes it possible to relate the desired cross section to the cross section of an alternate reaction that proceeds through the same compound nucleus. To benchmark the validity of using the (d,pγ) reaction as a surrogate for (n,γ), the 171,173Yb(d,pγ) reactions were measured with the goal of reproducing the known neutron capture cross section ratios of these nuclei [1].
Validation of an immortalized human (hBMEC) in vitro blood-brain barrier model.
Eigenmann, Daniela Elisabeth; Jähne, Evelyn Andrea; Smieško, Martin; Hamburger, Matthias; Oufir, Mouhssin
2016-03-01
We recently established and optimized an immortalized human in vitro blood-brain barrier (BBB) model based on the hBMEC cell line. In the present work, we validated this mono-culture 24-well model with a representative series of drug substances which are known to cross or not to cross the BBB. For each individual compound, a quantitative UHPLC-MS/MS method in Ringer HEPES buffer was developed and validated according to current regulatory guidelines, with respect to selectivity, precision, and reliability. Various biological and analytical challenges were met during method validation, highlighting the importance of careful method development. The positive controls antipyrine, caffeine, diazepam, and propranolol showed mean endothelial permeability coefficients (Pe) in the range of 17-70 × 10^-6 cm/s, indicating moderate to high BBB permeability when compared to the barrier integrity marker sodium fluorescein (mean Pe 3-5 × 10^-6 cm/s). The negative controls atenolol, cimetidine, and vinblastine showed mean Pe values < 10 × 10^-6 cm/s, suggesting low permeability. In silico calculations were in agreement with the in vitro data. With the exception of quinidine (a P-glycoprotein inhibitor and substrate), BBB permeability of all control compounds was correctly predicted by this new human in vitro BBB model, which is easy and fast to set up. Addition of retinoic acid and puromycin did not increase transendothelial electrical resistance (TEER) values of the BBB model.
[Prediction equations for fat percentage from body circumferences in prepubescent children].
Gómez Campos, Rossana; De Marco, Ademir; de Arruda, Miguel; Martínez Salazar, Cristian; Margarita Salazar, Ciria; Valgas, Carmen; Fuentes, José Damián; Cossio-Bolaños, Marco Antonio
2013-01-01
The analysis of body composition through direct and indirect methods allows the study of the various components of the human body and is central to assessing nutritional status. The objective of the study was to develop equations for predicting body fat percentage from the arm, waist, and calf circumferences, and to propose percentiles for diagnosing the nutritional status of schoolchildren of both sexes aged 4-10 years. We intentionally (non-probabilistically) selected 515 children, 261 boys and 254 girls, belonging to the Program of Interaction and Development of Children and Adolescents of the State University of Campinas (Sao Paulo, Brazil). The anthropometric variables evaluated were weight, height, triceps and subscapular skinfolds, and the body circumferences of the arm, waist, and calf; fat percentage was determined by the equation proposed by Boileau, Lohman and Slaughter (1985). Two equations for predicting fat percentage from the body circumferences were generated by regression, and both were validated by the cross-validation method. The equations showed high predictive values, with R² = 64-69%. In cross-validation, there was no significant difference between the criterion and the proposed regression equations (p > 0.05), and there was a high level of agreement at a 95% CI. It is concluded that the proposed equations are valid and offer an alternative for assessing fat percentage in schoolchildren of both sexes aged 4-10 years in the region of Campinas, SP (Brazil). Copyright © AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.
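As an illustration of the general workflow, the following sketch fits a linear prediction equation on synthetic data and estimates its generalization with 10-fold cross-validation. The predictor and response values are invented stand-ins for the circumference and fat-percentage measurements; this is not the Boileau, Lohman and Slaughter equation or the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 200
# Hypothetical predictors standing in for arm, waist and calf circumferences (cm)
X = rng.normal(loc=[20.0, 55.0, 25.0], scale=[2.0, 5.0, 2.5], size=(n, 3))
# Hypothetical fat% criterion with measurement noise (illustrative coefficients)
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.5 * X[:, 2] - 20 + rng.normal(0, 2, n)

# Fit the prediction equation on all data
model = LinearRegression().fit(X, y)

# Cross-validated predictions estimate how the equation generalizes;
# the cross-validated R² is computed against these held-out predictions
y_cv = cross_val_predict(LinearRegression(), X, y, cv=10)
r2_cv = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
```

The cross-validated R² is typically somewhat lower than the calibration R², which is the shrinkage that cross-validation is designed to expose.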
NASA Astrophysics Data System (ADS)
Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa
2018-03-01
In the recent decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, in which supervised image classification techniques play a central role. Hence, using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, Bagged CART, stochastic gradient boosting, and neural network with feature extraction, were tested and compared with two prevalent methods: random forest and support vector machine with a linear kernel. To do so, each method was run ten times, and three validation techniques were used to estimate the accuracy statistics: cross-validation, independent validation, and validation with the full training data. Moreover, the statistical significance of differences between the classification methods was assessed using ANOVA and Tukey's test. In general, the results showed that random forest was the best-performing method, with a marginal difference over Bagged CART and stochastic gradient boosting, whereas based on the independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that neural network with feature extraction and linear support vector machine had better processing speed than the others.
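The run-and-compare protocol described above can be sketched as follows. The accuracy values are simulated stand-ins, not the paper's results, and SciPy's `tukey_hsd` is used in place of whatever post-hoc implementation the authors employed.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(1)
# Hypothetical accuracies from ten runs of three classifiers (illustrative values)
acc = {
    "random_forest": rng.normal(0.90, 0.01, 10),
    "bagged_cart":   rng.normal(0.88, 0.01, 10),
    "linear_svm":    rng.normal(0.84, 0.01, 10),
}

# One-way ANOVA: is there any difference among the methods at all?
f_stat, p_value = f_oneway(*acc.values())

# Tukey's HSD: which pairs of methods differ, controlling family-wise error
res = tukey_hsd(acc["random_forest"], acc["bagged_cart"], acc["linear_svm"])
```

`res.pvalue[i, j]` then gives the adjusted p-value for each pair of methods, which is how a "marginal difference" between two classifiers can be judged significant or not.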
Large-scale collision cross-section profiling on a travelling wave ion mobility mass spectrometer
Lietz, Christopher B.; Yu, Qing; Li, Lingjun
2014-01-01
Ion mobility (IM) is a gas-phase electrophoretic method that separates ions according to charge and ion-neutral collision cross-section (CCS). Herein, we attempt to apply a travelling wave (TW) IM polyalanine calibration method to shotgun proteomics and create a large peptide CCS database. Mass spectrometry methods that utilize IM, such as HDMSE, often use high transmission voltages for sensitive analysis. However, polyalanine calibration has only been demonstrated with low voltage transmission used to prevent gas-phase activation. If polyalanine ions change conformation under higher transmission voltages used for HDMSE, the calibration may no longer be valid. Thus, we aimed to characterize the accuracy of calibration and CCS measurement under high transmission voltages on a TW IM instrument using the polyalanine calibration method and found that the additional error was not significant. We also evaluated the potential error introduced by liquid chromatography (LC)-HDMSE analysis, and found it to be insignificant as well, validating the calibration method. Finally, we demonstrated the utility of building a large-population peptide CCS database by investigating the effects of terminal lysine position, via LysC or LysN digestion, on the formation of two structural sub-families formed by triply charged ions. PMID:24845359
Novianti, Putri W; Roes, Kit C B; Eijkemans, Marinus J C
2014-01-01
Classification methods used in microarray studies for gene expression are diverse in the way they deal with the underlying complexity of the data, as well as in the technique used to build the classification model. The MAQC II study on cancer classification problems has found that performance was affected by factors such as the classification algorithm, cross validation method, number of genes, and gene selection method. In this paper, we study the hypothesis that the disease under study significantly determines which method is optimal, and that additionally sample size, class imbalance, type of medical question (diagnostic, prognostic or treatment response), and microarray platform are potentially influential. A systematic literature review was used to extract the information from 48 published articles on non-cancer microarray classification studies. The impact of the various factors on the reported classification accuracy was analyzed through random-intercept logistic regression. The type of medical question and method of cross validation dominated the explained variation in accuracy among studies, followed by disease category and microarray platform. In total, 42% of the between study variation was explained by all the study specific and problem specific factors that we studied together.
[Gaussian process regression and its application in near-infrared spectroscopy analysis].
Feng, Ai-Ming; Fang, Li-Min; Lin, Min
2011-06-01
Gaussian process (GP) regression is applied in the present paper as a chemometric method to explore the complicated relationship between near-infrared (NIR) spectra and ingredient contents. After outliers were detected by the Monte Carlo cross-validation (MCCV) method and removed from the dataset, different preprocessing methods, such as multiplicative scatter correction (MSC), smoothing, and derivatives, were tried for the best performance of the models. Furthermore, uninformative variable elimination (UVE) was introduced as a variable selection technique, and the characteristic wavelengths obtained were further employed as input for modeling. A public dataset with 80 NIR spectra of corn was introduced as an example for evaluating the new algorithm. The optimal models for oil, starch, and protein were obtained by the GP regression method. The performance of the final models was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), root mean square error of prediction (RMSEP), and correlation coefficient (r). The models give good calibration ability, with r values above 0.99, and the prediction ability is also satisfactory, with r values higher than 0.96. The overall results demonstrate that the GP algorithm is an effective chemometric method and is promising for NIR analysis.
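A minimal sketch of the modeling step, assuming scikit-learn's GaussianProcessRegressor as the GP implementation and random numbers as a stand-in for the corn NIR spectra (the actual dataset, preprocessing, and UVE variable selection are not reproduced):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
# Hypothetical stand-in for NIR spectra: 60 samples x 50 wavelength channels
X = rng.normal(size=(60, 50))
# Hypothetical ingredient content depending on a few channels plus noise
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 60)

# RBF kernel for the smooth response plus a white-noise term for measurement error
kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# RMSEC: error on the calibration set itself
rmsec = np.sqrt(np.mean((gp.predict(X) - y) ** 2))
# RMSECV: error under 5-fold cross-validation
y_cv = cross_val_predict(
    GaussianProcessRegressor(kernel=kernel, normalize_y=True), X, y, cv=5
)
rmsecv = np.sqrt(np.mean((y_cv - y) ** 2))
```

Comparing RMSEC against RMSECV in this way is what separates a model that merely memorizes the calibration spectra from one that predicts new samples.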
Nakae, Ken; Ikegaya, Yuji; Ishikawa, Tomoe; Oba, Shigeyuki; Urakubo, Hidetoshi; Koyama, Masanori; Ishii, Shin
2014-01-01
Crosstalk between neurons and glia may constitute a significant part of information processing in the brain. We present a novel method of statistically identifying interactions in a neuron–glia network. We attempted to identify neuron–glia interactions from neuronal and glial activities via maximum-a-posteriori (MAP)-based parameter estimation by developing a generalized linear model (GLM) of a neuron–glia network. The interactions in our interest included functional connectivity and response functions. We evaluated the cross-validated likelihood of GLMs that resulted from the addition or removal of connections to confirm the existence of specific neuron-to-glia or glia-to-neuron connections. We only accepted addition or removal when the modification improved the cross-validated likelihood. We applied the method to a high-throughput, multicellular in vitro Ca2+ imaging dataset obtained from the CA3 region of a rat hippocampus, and then evaluated the reliability of connectivity estimates using a statistical test based on a surrogate method. Our findings based on the estimated connectivity were in good agreement with currently available physiological knowledge, suggesting our method can elucidate undiscovered functions of neuron–glia systems. PMID:25393874
2013-01-01
Chemical cross-linking of proteins combined with mass spectrometry provides an attractive and novel method for the analysis of native protein structures and protein complexes. Analysis of the data, however, is complex. Only a small number of cross-linked peptides are produced during sample preparation and must be identified against a background of more abundant native peptides. To facilitate the search and identification of cross-linked peptides, we have developed a novel software suite, named Hekate. Hekate is a suite of tools that address the challenges involved in analyzing protein cross-linking experiments when combined with mass spectrometry. The software is an integrated pipeline for the automation of the data analysis workflow and provides a novel scoring system based on principles of linear peptide analysis. In addition, it provides a tool for the visualization of identified cross-links using three-dimensional models, which is particularly useful when combining chemical cross-linking with other structural techniques. Hekate was validated by the comparative analysis of cytochrome c (bovine heart) against previously reported data [1]. Further validation was carried out on known structural elements of DNA polymerase III, the catalytic α-subunit of the Escherichia coli DNA replisome, along with new insight into the previously uncharacterized C-terminal domain of the protein. PMID:24010795
Landscape scale estimation of soil carbon stock using 3D modelling.
Veronesi, F; Corstanje, R; Mayr, T
2014-07-15
Soil C is the largest pool of carbon in the terrestrial biosphere, and yet the processes of C accumulation, transformation and loss are poorly accounted for. This is partly because soil C is not uniformly distributed through the soil depth profile, and most current landscape-level predictions of C do not adequately account for the vertical distribution of soil C. In this study, we apply a method based on simple soil-specific depth functions to map the soil C stock in three dimensions at the landscape scale. We used soil C and bulk density data from the Soil Survey for England and Wales to map an area in the West Midlands region of approximately 13,948 km^2. We applied a method which describes the variation through the soil profile and interpolates this across the landscape using well-established soil drivers such as relief, land cover and geology. The results indicate that this mapping method can effectively reproduce the observed variation in the soil profile samples. The mapping results were validated using cross-validation and an independent validation. The cross-validation resulted in an R^2 of 36% for soil C and 44% for bulk density (BULKD). These results are generally in line with previous validated studies. In addition, an independent validation was undertaken, comparing the predictions against the National Soil Inventory (NSI) dataset. The majority of the residuals of this validation are within ±5% of soil C, indicating a high level of accuracy in replicating topsoil values. In addition, the results were compared to a previous study estimating the carbon stock of the UK. We discuss the implications of our results within the context of soil C loss factors such as erosion and the impact on regional C process models. Copyright © 2014 Elsevier B.V. All rights reserved.
Some New Mathematical Methods for Variational Objective Analysis
NASA Technical Reports Server (NTRS)
Wahba, G.; Johnson, D. R.
1984-01-01
New and/or improved variational methods for simultaneously combining forecast, heterogeneous observational data, a priori climatology, and physics to obtain improved estimates of the initial state of the atmosphere for the purpose of numerical weather prediction are developed. Cross validated spline methods are applied to atmospheric data for the purpose of improved description and analysis of atmospheric phenomena such as the tropopause and frontal boundary surfaces.
Divya, O; Mishra, Ashok K
2007-05-29
Quantitative determination of kerosene fraction present in diesel has been carried out based on excitation emission matrix fluorescence (EEMF) along with parallel factor analysis (PARAFAC) and N-way partial least squares regression (N-PLS). EEMF is a simple, sensitive and nondestructive method suitable for the analysis of multifluorophoric mixtures. Calibration models consisting of varying compositions of diesel and kerosene were constructed and their validation was carried out using leave-one-out cross validation method. The accuracy of the model was evaluated through the root mean square error of prediction (RMSEP) for the PARAFAC, N-PLS and unfold PLS methods. N-PLS was found to be a better method compared to PARAFAC and unfold PLS method because of its low RMSEP values.
Design Methods for Load-bearing Elements from Crosslaminated Timber
NASA Astrophysics Data System (ADS)
Vilguts, A.; Serdjuks, D.; Goremikins, V.
2015-11-01
Cross-laminated timber is an environmentally friendly material, which possesses a decreased level of anisotropy in comparison with solid and glued timber. Cross-laminated timber could be used for load-bearing walls and slabs of multi-storey timber buildings, as well as for decking structures of pedestrian and road bridges. Design methods for cross-laminated timber elements subjected to bending and to compression with bending were considered. The presented methods were experimentally validated and verified by FEM. Two cross-laminated timber slabs were tested under static load. Pine wood was chosen as the board material. The design scheme of the considered plates was a freely supported beam with a span of 1.9 m loaded by a uniformly distributed load. The width of the plates was equal to 1 m. The considered cross-laminated timber plates were also analysed by the FEM. The comparison of the stresses acting in the edge fibres of the plate and the maximum vertical displacements shows that both considered methods can be used for engineering calculations. The difference between the results obtained experimentally and analytically is within the limits of 2 to 31%. The difference between the results obtained by the effective strength and stiffness method and the transformed sections method was not significant.
Park, So Jeong; An, Soo Min; Kim, Se Hyun
2013-03-01
(1) To translate the original English Cancer Therapy Satisfaction Questionnaire (CTSQ) into Korean and perform validation, and (2) to compare the CTSQ domains of expectations of therapy (ET), feelings about side effects (FSE), and satisfaction with therapy (SWT) by cancer therapy type. Cross-cultural adaptation was performed according to guidelines: translation, back translation, focus group, and field test. We performed validation with internal consistency by Cronbach's alpha and construct validity by exploratory factor analysis (EFA) with the varimax rotation method. We compared each CTSQ domain between traditional Korean Medicine (TKM) and integrative cancer therapy (ICT), which combines Western medicine and TKM, by two-sample t test. Cross-cultural adaptation produced no major modifications in the items and domains. A total of 102 outpatients participated. Mean age was 51.9 ± 12.4. Most patients (74.4%) had stage 4 cancer. Mean scores of ET, FSE, and SWT were 81.2 ± 15.7, 79.5 ± 22.9, and 75.7 ± 14.8, respectively. Cronbach's alphas for ET, FSE, and SWT were 0.86, 0.78, and 0.74, respectively. EFA loaded items on the three domains, very close to the structure of the original CTSQ. ET and SWT were similar between groups, but FSE was significantly higher in TKM than in ICT (87.5 ± 19.3 vs. 74.9 ± 23.5; p = 0.0054). Cross-cultural adaptation was successful, and the adapted Korean CTSQ demonstrated good internal consistency and construct validity. Expectations and satisfaction were similar between the two types of therapy, but patients' reported feelings about side effects were significantly lower in patients receiving TKM than in those receiving ICT. The Korean version of the CTSQ can be used to evaluate Korean cancer patients' experiences of receiving various cancer therapy types.
Arimura, Tatsuyuki; Hosoi, Masako; Tsukiyama, Yoshihiro; Yoshida, Toshiyuki; Fujiwara, Daiki; Tanaka, Masanori; Tamura, Ryuichi; Nakashima, Yasunori; Sudo, Nobuyuki; Kubo, Chiharu
2012-04-01
The present study aimed to develop a Japanese version of the Short-Form McGill Pain Questionnaire (SF-MPQ-J) that focuses on cross-cultural equivalence to the original English version and to test its reliability and validity. Cross-sectional design. In study 1, the SF-MPQ was translated and adapted into Japanese. This included construction of response scales equivalent to the original using a variation of the Thurstone method of equal-appearing intervals. A total of 147 undergraduate students and 44 pain patients participated in the development of the Japanese response scales. To measure the equivalence of pain descriptors, 62 pain patients in four diagnostic groups were asked to choose pain descriptors that described their pain. In study 2, chronic pain patients (N=126) completed the SF-MPQ-J, the Long-Form McGill Pain Questionnaire Japanese version (LF-MPQ-J), and the 11-point numerical rating scale of pain intensity. Correlation analysis examined the construct validity of the SF-MPQ-J. The results from study 1 were used to develop the SF-MPQ-J, which is linguistically equivalent to the original questionnaire. Response scales from the SF-MPQ-J represented the original scale values. All pain descriptors, except one, were used by >33% in at least one of the four diagnostic groups. Study 2 exhibited adequate internal consistency and test-retest reliability, with the construct validity of the SF-MPQ-J comparable to the original. These findings suggested that the SF-MPQ-J is reliable, valid, and cross-culturally equivalent to the original questionnaire. Researchers might consider using this scale in multicenter, multi-ethnical trials or cross-cultural studies that include Japanese-speaking patients. Wiley Periodicals, Inc.
Shape control of an adaptive wing for transonic drag reduction
NASA Astrophysics Data System (ADS)
Austin, Fred; Van Nostrand, William C.
1995-05-01
Theory and experiments on controlling the static shape of flexible structures by employing internal translational actuators are summarized, and plans to extend the work to adaptive wings are presented. Significant reductions in shock-induced drag are achievable during transonic cruise by small adaptive modifications to the wing cross-sectional profile. Actuators are employed as truss elements of active ribs to deform the wing cross section. An adaptive-rib model was constructed, and experiments validated the shape-control theory. Plans for future development under an ARPA/AFWAL contract include payoff assessments of the method on an actual aircraft, the development of inchworm TERFENOL-D actuators, and the development of a method to optimize the wing cross-sectional shapes by direct drag measurements.
Maters, Gemma A.; Sanderman, Robbert; Kim, Aimee Y.; Coyne, James C.
2013-01-01
Objective The Hospital Anxiety and Depression Scale (HADS) is widely used to screen for anxiety and depression. A large literature can be cited in support of its validity, but difficulties are increasingly being identified, such as inexplicably discrepant optimal cutpoints and inconsistent factor structures. This article examines whether these problems could be due to the construction of the HADS, which poses difficulties for translation and cross-cultural use. Methods Authors' awareness of difficulties in translating the HADS was assessed by examining 20% of studies using the HADS, obtained by a systematic literature search in Pubmed and PsycINFO in May 2012. Reports of the use of translations and of validation studies were recorded for papers from non-English-speaking countries. Narrative and systematic reviews were examined for how authors dealt with different translations. Results Of 417 papers from non-English-speaking countries, only 45% indicated whether a translation was used. Studies validating translations were cited in 54%. Seventeen reviews incorporating data from diverse translated versions were examined. Only seven mentioned issues of language and culture, and none indicated insurmountable problems in integrating results from different translations. Conclusion Initial decisions concerning item content and response options likely leave the HADS difficult to translate, but we failed to find an acknowledgment of problems in articles involving its translation and cross-cultural use. Investigators' lack of awareness of these issues can lead to anomalous results and difficulties in the interpretation and integration of these results. Reviews tend to overlook these issues, and most reviews indiscriminately integrate results from studies performed in different countries. Cross-culturally valid, but literally translated, versions of the HADS may not be attainable, and specific cutpoints may not be valid across cultures and languages.
Claims about rates of anxiety and depression based on integrating cross-cultural data or using the same cutpoint across languages and culture should be subject to critical scrutiny. PMID:23976969
Cross-validation of resting metabolic rate prediction equations
USDA-ARS's Scientific Manuscript database
Background: Knowledge of the resting metabolic rate (RMR) is necessary for determining individual total energy requirements. Measurement of RMR is time consuming and requires specialized equipment. Prediction equations provide an easy method to estimate RMR; however, the accuracy of these equations...
Zhang, Zhaoyang; Fang, Hua; Wang, Honggang
2016-06-01
Web-delivered trials are an important component of eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal and high dimensional, with missing values. Unsupervised learning methods have been widely applied in this area; however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering method, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, and more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods, as well as the specific Xie and Beni index for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate that the MI-based Xie and Beni index for fuzzy clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could easily be adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services.
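The Xie and Beni index mentioned above has a simple closed form: membership-weighted cluster compactness divided by the minimum separation between cluster centers, with lower values indicating a better clustering. A sketch with synthetic two-cluster data and standard fuzzy c-means memberships (not the MIfuzzy/MIV implementation):

```python
import numpy as np

def xie_beni(X, centers, U, m=2.0):
    # Compactness: membership-weighted squared distances of points to each center
    dist2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (n, c)
    compactness = np.sum((U ** m) * dist2)
    # Separation: squared distance between the two closest cluster centers
    sep = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(sep, np.inf)
    return compactness / (X.shape[0] * sep.min())

def fcm_memberships(X, centers):
    # Standard fuzzy c-means membership update for fuzzifier m = 2
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    inv = 1.0 / d2
    return inv / inv.sum(axis=1, keepdims=True)

rng = np.random.default_rng(4)
# Two tight, well-separated synthetic clusters
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])

good = np.array([[0.0, 0.0], [5.0, 5.0]])   # centers matching the true structure
bad = np.array([[2.0, 2.0], [3.0, 3.0]])    # poorly placed centers
xb_good = xie_beni(X, good, fcm_memberships(X, good))
xb_bad = xie_beni(X, bad, fcm_memberships(X, bad))
```

Evaluating the index across candidate numbers of clusters and taking the minimum is the basic selection rule that the MIV framework extends to multiply imputed data.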
Ávila, Christiane Wahast; Riegel, Barbara; Pokorski, Simoni Chiarelli; Camey, Suzi; Silveira, Luana Claudia Jacoby; Rabelo-Silva, Eneida Rejane
2013-01-01
Objective. To adapt and evaluate the psychometric properties of the Brazilian version of the SCHFI v 6.2. Methods. With the approval of the original author, we conducted a complete cross-cultural adaptation of the instrument (translation, synthesis, back translation, synthesis of back translation, expert committee review, and pretesting). The adapted version was named Brazilian version of the self-care of heart failure index v 6.2. The psychometric properties assessed were face validity and content validity (by expert committee review), construct validity (convergent validity and confirmatory factor analysis), and reliability. Results. Face validity and content validity were indicative of semantic, idiomatic, experimental, and conceptual equivalence. Convergent validity was demonstrated by a significant though moderate correlation (r = −0.51) on comparison with equivalent question scores of the previously validated Brazilian European heart failure self-care behavior scale. Confirmatory factor analysis supported the original three-factor model as having the best fit, although similar results were obtained for inadequate fit indices. The reliability of the instrument, as expressed by Cronbach's alpha, was 0.40, 0.82, and 0.93 for the self-care maintenance, self-care management, and self-care confidence scales, respectively. Conclusion. The SCHFI v 6.2 was successfully adapted for use in Brazil. Nevertheless, further studies should be carried out to improve its psychometric properties. PMID:24163765
Lao, Wan-li; He, Yu-chan; Li, Gai-yun; Zhou, Qun
2016-01-01
The biomass-to-plastic ratio in wood plastic composites (WPCs) greatly affects their physical and mechanical properties and price. Fast and accurate evaluation of the biomass-to-plastic ratio is important for the further development of WPCs. Quantitative analysis of the main WPC composition currently relies primarily on thermo-analytical methods. However, these methods have some inherent disadvantages, being time-consuming, error-prone, and sophisticated, which severely limits their applications. Therefore, in this study, Fourier transform infrared (FTIR) spectroscopy in combination with partial least squares (PLS) was used for rapid prediction of the bamboo and polypropylene (PP) content in bamboo/PP composites. The bamboo powders were used as filler after being dried at 105 degrees C for 24 h, PP was used as the matrix material, and some chemical reagents were used as additives. Then 42 WPC samples with different ratios of bamboo and PP were prepared by extrusion. FTIR spectral data of the 42 WPC samples were collected by means of the KBr pellet technique. The model for bamboo and PP content prediction was developed by PLS-2 and full cross-validation. Results of internal cross-validation showed that the first-derivative spectra in the range of 1800-800 cm^-1 corrected by standard normal variate (SNV) yielded the optimal model. For both the bamboo and PP calibrations, the coefficients of determination (R²) were 0.955. The standard errors of calibration (SEC) were 1.872 for bamboo content and 1.848 for PP content, respectively. For both the bamboo and PP validations, the R² values were 0.950. The standard errors of cross-validation (SECV) were 1.927 for bamboo content and 1.950 for PP content, respectively, and the ratios of performance to deviation (RPD) were 4.45 for both the biomass and PP determinations. The results of external validation showed that the relative prediction deviations for both biomass and PP contents were lower than ±6%.
FTIR combined with PLS can be used for rapid and accurate determination of bamboo and PP content in bamboo/PP composites.
Diagnostic accuracy of eye movements in assessing pedophilia.
Fromberger, Peter; Jordan, Kirsten; Steinkrauss, Henrike; von Herder, Jakob; Witzel, Joachim; Stolpmann, Georg; Kröner-Herwig, Birgit; Müller, Jürgen Leo
2012-07-01
Given that recurrent sexual interest in prepubescent children is one of the strongest single predictors for pedosexual offense recidivism, valid and reliable diagnosis of pedophilia is of particular importance. Nevertheless, current assessment methods still fail to fulfill psychometric quality criteria. The aim of the study was to evaluate the diagnostic accuracy of eye-movement parameters in regard to pedophilic sexual preferences. Eye movements were measured while 22 pedophiles (according to ICD-10 F65.4 diagnosis), 8 non-pedophilic forensic controls, and 52 healthy controls simultaneously viewed the picture of a child and the picture of an adult. Fixation latency was assessed as a parameter for automatic attentional processes and relative fixation time to account for controlled attentional processes. Receiver operating characteristic (ROC) analyses, which are based on calculated age-preference indices, were carried out to determine the classifier performance. Cross-validation using the leave-one-out method was used to test the validity of classifiers. Pedophiles showed significantly shorter fixation latencies and significantly longer relative fixation times for child stimuli than either of the control groups. Classifier performance analysis revealed an area under the curve (AUC) = 0.902 for fixation latency and an AUC = 0.828 for relative fixation time. The eye-tracking method based on fixation latency discriminated between pedophiles and non-pedophiles with a sensitivity of 86.4% and a specificity of 90.0%. Cross-validation demonstrated good validity of eye-movement parameters. Despite some methodological limitations, measuring eye movements seems to be a promising approach to assess deviant pedophilic interests. Eye movements, which represent automatic attentional processes, demonstrated high diagnostic accuracy. © 2012 International Society for Sexual Medicine.
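The validation logic above (ROC analysis on a preference index, then leave-one-out cross-validation of a threshold classifier) can be sketched as follows. The index values, group sizes beyond the abstract's counts, and the Youden-style threshold rule are illustrative assumptions, not the study's exact procedure.

```python
# Sketch: ROC/AUC on a simulated preference index, plus leave-one-out CV where
# the decision threshold is chosen on each training fold and applied to the
# held-out case. Simulated data, not the study's measurements.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)
y = np.array([1] * 22 + [0] * 60)        # 1 = patient group, 0 = controls
index = np.where(y == 1,
                 rng.normal(1.0, 0.5, y.size),
                 rng.normal(-0.5, 0.5, y.size))

auc = roc_auc_score(y, index)
print("AUC:", round(auc, 3))

correct = 0
for train, test in LeaveOneOut().split(index):
    thresholds = np.unique(index[train])
    # Youden's J on the training fold: sensitivity + specificity - 1
    j = [((index[train][y[train] == 1] >= t).mean()
          + (index[train][y[train] == 0] < t).mean() - 1) for t in thresholds]
    t_best = thresholds[int(np.argmax(j))]
    correct += int((index[test][0] >= t_best) == y[test][0])
print("LOO accuracy:", correct / y.size)
```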
Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements
NASA Astrophysics Data System (ADS)
Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.
2012-12-01
The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle, and there have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted the widespread attention of researchers and managers. However, remote sensing technology still carries many uncertainties arising from the model mechanism, model inputs, parameterization schemes, and scaling issues in regional estimation. Obtaining remotely sensed evapotranspiration (RS_ET) estimates of known accuracy is necessary but difficult. It is therefore essential to develop validation methods that quantitatively assess the accuracy and error sources of regional RS_ET estimates. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both basin and local scales, and is appropriate for validating RS_ET at diverse resolutions and time scales. An independent RS_ET validation using this method over the Hai River Basin, China, for 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land-use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared with multi-scale evapotranspiration measurements from the eddy covariance (EC) and large aperture scintillometer (LAS) systems, respectively, using the footprint model over three typical landscapes.
Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also consider the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC and 4 LAS instruments were acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith-based model, a Priestley-Taylor-based model, and a complementary-relationship-based model, were used to perform an intercomparison. The results from both RS_ET validation cases showed that the proposed validation methods are reasonable and feasible.
2012-01-01
Background The purpose of this study was to examine the internal consistency, test-retest reliability, construct validity, and predictive validity of a new German self-report instrument to assess the influence of social support and the physical environment on physical activity in adolescents. Methods Based on theoretical considerations, short scales on social support and the physical environment were developed and cross-validated in two independent study samples of 9- to 17-year-old girls and boys. The longitudinal sample of Study I (n = 196) was recruited from a German comprehensive school, and subjects in this study completed the questionnaire twice with a between-test interval of seven days. Cronbach’s alphas were computed to determine the internal consistency of the factors. Test-retest reliability of the latent factors was assessed using intra-class correlation coefficients. Factorial validity of the scales was assessed using principal component analysis. Construct validity was determined using a cross-validation technique, performing confirmatory factor analysis on the independent nationwide cross-sectional sample of Study II (n = 430). Correlations between the factors and three measures of physical activity (objectively measured moderate-to-vigorous physical activity (MVPA), self-reported habitual MVPA, and self-reported recent MVPA) were calculated to determine the predictive validity of the instrument. Results Construct validity of the social support scale (two factors: parental support and peer support) and the physical environment scale (four factors: convenience, public recreation facilities, safety, and private sport providers) was shown. Both scales had moderate test-retest reliability. The factors of the social support scale also had good internal consistency and predictive validity. Internal consistency and predictive validity of the physical environment scale were low to acceptable.
Conclusions The results of this study indicate moderate to good reliability and construct validity of the social support and physical environment scales. Predictive validity was confirmed only for the social support scale, not for the physical environment scale. Hence, it remains unclear whether a person’s physical environment has a direct effect, an indirect effect, or a moderating function on physical activity behavior. PMID:22928865
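The internal-consistency statistic used in the abstract, Cronbach's alpha, is a short formula: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch on simulated item scores (not the questionnaire's data):

```python
# Minimal sketch of Cronbach's alpha on simulated questionnaire items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(2)
trait = rng.normal(size=(196, 1))                      # shared latent factor
scores = trait + rng.normal(scale=0.8, size=(196, 5))  # five correlated items
alpha_val = cronbach_alpha(scores)
print("alpha:", round(alpha_val, 2))
```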
Automated Cross-Sectional Measurement Method of Intracranial Dural Venous Sinuses.
Lublinsky, S; Friedman, A; Kesler, A; Zur, D; Anconina, R; Shelef, I
2016-03-01
MRV is an important blood vessel imaging and diagnostic tool for the evaluation of stenosis, occlusions, or aneurysms. However, an accurate image-processing tool for vessel comparison is unavailable. The purpose of this study was to develop and test an automated technique for vessel cross-sectional analysis. An algorithm for vessel cross-sectional analysis was developed that included 7 main steps: 1) image registration, 2) masking, 3) segmentation, 4) skeletonization, 5) cross-sectional planes, 6) clustering, and 7) cross-sectional analysis. Phantom models were used to validate the technique. The method was also tested on a control subject and a patient with idiopathic intracranial hypertension (4 large sinuses tested: right and left transverse sinuses, superior sagittal sinus, and straight sinus). The cross-sectional area and shape measurements were evaluated before and after lumbar puncture in patients with idiopathic intracranial hypertension. The vessel-analysis algorithm had a high degree of stability, with <3% of cross-sections manually corrected. All investigated principal cranial blood sinuses had a significant cross-sectional area increase after lumbar puncture (P ≤ .05). The average triangularity of the transverse sinuses was increased, and the mean circularity of the sinuses was decreased by 6% ± 12% after lumbar puncture. Comparison of phantom and real data showed that all computed errors were <1 voxel unit, which confirmed that the method provided a very accurate solution. In this article, we present a novel automated imaging method for cross-sectional vessel analysis. The method can provide efficient quantitative detection of abnormalities in the dural sinuses. © 2016 by American Journal of Neuroradiology.
Prediction of functional aerobic capacity without exercise testing
NASA Technical Reports Server (NTRS)
Jackson, A. S.; Blair, S. N.; Mahar, M. T.; Wier, L. T.; Ross, R. M.; Stuteville, J. E.
1990-01-01
The purpose of this study was to develop functional aerobic capacity prediction models that do not require exercise tests (N-Ex) and to compare their accuracy with Astrand single-stage submaximal prediction methods. The data of 2,009 subjects (9.7% female) were randomly divided into validation (N = 1,543) and cross-validation (N = 466) samples. The validation sample was used to develop two N-Ex models to estimate VO2peak. Gender, age, body composition, and self-reported activity were used to develop the two N-Ex prediction models: one estimated percent fat from skinfolds (N-Ex %fat) and the other used body mass index (N-Ex BMI) to represent body composition. The multiple correlations for the developed models were R = 0.81 (SE = 5.3 ml.kg-1.min-1) and R = 0.78 (SE = 5.6 ml.kg-1.min-1). This accuracy was confirmed when the models were applied to the cross-validation sample. The N-Ex models were more accurate than the Astrand prediction models, whose SEs ranged from 5.5-9.7 ml.kg-1.min-1. The N-Ex models were also cross-validated on 59 men on hypertensive medication and 71 men found to have a positive exercise ECG; with these subjects the SEs of the N-Ex models ranged from 4.6-5.4 ml.kg-1.min-1. (ABSTRACT TRUNCATED AT 250 WORDS).
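The validation/cross-validation design above (fit on a random subsample, check the standard error on the held-out subsample) can be sketched as follows. The predictors, coefficients, and sample values are simulated placeholders, not the study's equations.

```python
# Sketch of a validation / cross-validation split for a non-exercise regression
# model. All data are simulated; only the split-and-check design is illustrated.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2009
X = np.column_stack([
    rng.integers(0, 2, n),        # sex indicator
    rng.uniform(20, 60, n),       # age (years)
    rng.uniform(10, 40, n),       # percent body fat
    rng.integers(0, 8, n),        # self-report activity code
])
vo2peak = (55 - 0.3 * X[:, 1] - 0.5 * X[:, 2] + 2.0 * X[:, 3]
           - 5.0 * X[:, 0] + rng.normal(scale=5.0, size=n))

# 466 subjects held out for cross-validation, as in the abstract
X_val, X_cv, y_val, y_cv = train_test_split(X, vo2peak, test_size=466,
                                            random_state=0)
model = LinearRegression().fit(X_val, y_val)

se_cv = np.sqrt(((y_cv - model.predict(X_cv)) ** 2).mean())
print("cross-validation SE (ml.kg-1.min-1):", round(se_cv, 1))
```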
Vijayaraj, Ramadoss; Devi, Mekapothula Lakshmi Vasavi; Subramanian, Venkatesan; Chattaraj, Pratim Kumar
2012-06-01
A three-dimensional quantitative structure-activity relationship (3D-QSAR) study has been carried out on the Escherichia coli DHFR inhibitors 2,4-diamino-5-(substituted-benzyl)pyrimidine derivatives to understand the structural features responsible for their improved potency. To construct highly predictive 3D-QSAR models, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) methods were used. The resulting models show statistically significant cross-validated (r2cv) and non-cross-validated (r2ncv) correlation coefficients. The final 3D-QSAR models were validated using structurally diverse test set compounds. Analysis of the contour maps generated from the CoMFA and CoMSIA methods reveals that the substitution of electronegative groups at the first and second positions, along with an electropositive group at the third position of the R2 substituent, significantly increases the potency of the derivatives. The results obtained from the CoMFA and CoMSIA study delineate the substituents on the trimethoprim analogues responsible for the enhanced potency and also provide valuable directions for the design of new trimethoprim analogues with improved affinity. © 2012 John Wiley & Sons A/S.
Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods
NASA Astrophysics Data System (ADS)
Pervez, M.; Henebry, G. M.
2010-12-01
In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during 2000-2008. Two univariate methods (inverse distance weighting, and spline in its regularized and tension forms) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh spanning an area of 143,000 sq km. Elevation and topographic index were used as covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation, and the quality of the methods was assessed through Pearson and Spearman correlations and root mean square error (RMSE) in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely because the point measurements were sampled relatively densely and the correlation between rainfall and the covariates is weak at daily scales in this region. Inverse distance weighting produced better results than the spline. For days with extreme or high rainfall (spatially and quantitatively), the correlation between observed and interpolated estimates was high (r2 ~ 0.6, RMSE ~ 10 mm), although for low-rainfall days the correlations were poor (r2 ~ 0.1, RMSE ~ 3 mm). The performance of these methods was influenced by the density of the sample point measurements, the quantity and spatial extent of the observed rainfall, and an appropriate search radius defining the neighboring points. The results indicate that interpolated rainfall estimates at daily scales may introduce uncertainties into subsequent hydrometeorological analyses. Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.
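The best-performing method at daily scales, inverse distance weighting with leave-one-out cross-validation, can be sketched as follows. The gauge coordinates and rainfall values are synthetic; the network size (221) echoes the abstract but nothing else is the study's data.

```python
# Sketch of inverse-distance-weighted (IDW) interpolation evaluated by
# leave-one-out cross-validation over synthetic rain-gauge data.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """IDW estimate at query points from known points (weights = 1/d^power)."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.maximum(d, 1e-12)              # avoid division by zero
    w = d ** -power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(4)
xy = rng.uniform(0, 100, size=(221, 2))               # 221 synthetic "gauges"
rain = 10 + 0.2 * xy[:, 0] + rng.normal(scale=1.0, size=221)

# Leave-one-out cross-validation: predict each gauge from all the others
pred = np.array([
    idw(np.delete(xy, i, axis=0), np.delete(rain, i), xy[i:i + 1])[0]
    for i in range(len(rain))
])
rmse = np.sqrt(((rain - pred) ** 2).mean())
r = np.corrcoef(rain, pred)[0, 1]
print(f"LOO RMSE = {rmse:.2f} mm, r = {r:.2f}")
```

A production version would restrict the sum to gauges within a search radius, which the abstract notes is itself a tuning choice.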
An HMM model for coiled-coil domains and a comparison with PSSM-based predictions.
Delorenzi, Mauro; Speed, Terry
2002-04-01
Large-scale sequence data require methods for the automated annotation of protein domains. Many of the predictive methods are based either on a Position Specific Scoring Matrix (PSSM) of fixed length or on a window-less Hidden Markov Model (HMM). The performance of the two approaches is tested for Coiled-Coil Domains (CCDs). The prediction of CCDs is used frequently, and its optimization seems worthwhile. We have conceived MARCOIL, an HMM for the recognition of proteins with a CCD on a genomic scale. A cross-validated study suggests that MARCOIL improves predictions compared to the traditional PSSM algorithm, especially for some protein families and for short CCDs. The study was designed to reveal differences inherent in the two methods. Potential confounding factors such as differences in the dimension of parameter space and in the parameter values were avoided by using the same amino acid propensities and by keeping the transition probabilities of the HMM constant during cross-validation. The prediction program and the databases are available at http://www.wehi.edu.au/bioweb/Mauro/Marcoil
Cross-validation of recent and longstanding resting metabolic rate prediction equations
USDA-ARS?s Scientific Manuscript database
Resting metabolic rate (RMR) measurement is time consuming and requires specialized equipment. Prediction equations provide an easy method to estimate RMR; however, their accuracy likely varies across individuals. Understanding the factors that influence predicted RMR accuracy at the individual lev...
Prediction of energy expenditure and physical activity in preschoolers
USDA-ARS?s Scientific Manuscript database
Accurate, nonintrusive, and feasible methods are needed to predict energy expenditure (EE) and physical activity (PA) levels in preschoolers. Herein, we validated cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on accelerometry and heart rate (HR) ...
ERIC Educational Resources Information Center
Acar, Tülin
2014-01-01
In literature, it has been observed that many enhanced criteria are limited by factor analysis techniques. Besides examinations of statistical structure and/or psychological structure, such validity studies as cross validation and classification-sequencing studies should be performed frequently. The purpose of this study is to examine cross…
Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.
Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G
2017-09-01
To investigate whether ensemble learning algorithms improve physical activity recognition accuracy compared with single-classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high recognition accuracy; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
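The weighted-majority-vote fusion idea can be sketched as follows. The base learners, the weighting rule (training accuracy), and the synthetic data are illustrative assumptions; the study's actual features, learners, and fusion weights differ.

```python
# Sketch of a custom ensemble: several base classifiers combined by weighted
# majority vote, with weights taken from each classifier's training accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

learners = [DecisionTreeClassifier(random_state=0),
            KNeighborsClassifier(),
            SVC()]
weights, votes = [], []
for clf in learners:
    clf.fit(X_tr, y_tr)
    weights.append(clf.score(X_tr, y_tr))   # weight = training accuracy
    votes.append(clf.predict(X_te))

votes = np.array(votes)                     # (n_learners, n_test)
weights = np.array(weights)
n_classes = 3
scores = np.zeros((votes.shape[1], n_classes))
for k in range(n_classes):
    # each learner adds its weight to the class it voted for
    scores[:, k] = (weights[:, None] * (votes == k)).sum(axis=0)
ensemble_pred = scores.argmax(axis=1)
acc = (ensemble_pred == y_te).mean()
print("ensemble accuracy:", round(acc, 3))
```

In the study, the cross-validation loop would wrap this whole procedure subject by subject (leave-one-subject-out), which is omitted here.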
Computer-aided diagnosis system: a Bayesian hybrid classification method.
Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J
2013-10-01
A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach that combines pairwise comparison, Bayesian regression, and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is used iteratively to improve the results until a certain accuracy level is achieved; then the learning process is finished and new classifications can be performed automatically. The method has been applied in two biomedical contexts, following the same cross-validation schemes as the original studies. The first refers to cancer diagnosis, leading to an accuracy of 77.35% versus the 66.37% obtained originally. The second considers the diagnosis of pathologies of the vertebral column, where the original method achieves accuracies ranging from 76.5% to 96.7% and from 82.3% to 97.1% in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases; using a supervised framework, the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed to optimize classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite-sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the extended BIC (EBIC). The model selected with the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC, or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
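The CV-AUC tuning idea can be sketched as follows. Scikit-learn has no MCP penalty, so an L1-penalized logistic regression stands in for MCP-logistic regression here; the grid, data, and penalty are illustrative assumptions, while the selection logic (pick the penalty strength maximizing cross-validated AUC) matches the abstract.

```python
# Sketch of tuning-parameter selection by cross-validated AUC, with L1-penalized
# logistic regression standing in for the MCP penalty.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# sparse, high-dimensional binary-outcome data (10 informative of 500 features)
X, y = make_classification(n_samples=200, n_features=500, n_informative=10,
                           random_state=0)

lambdas = np.logspace(-2, 1, 10)               # candidate penalty strengths
cv_auc = [
    cross_val_score(
        LogisticRegression(penalty="l1", C=1.0 / lam, solver="liblinear"),
        X, y, cv=5, scoring="roc_auc").mean()
    for lam in lambdas
]
best = lambdas[int(np.argmax(cv_auc))]
print(f"best lambda = {best:.3g}, CV-AUC = {max(cv_auc):.3f}")
```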
A closer look at cross-validation for assessing the accuracy of gene regulatory networks and models.
Tabe-Bordbar, Shayan; Emad, Amin; Zhao, Sihai Dave; Sinha, Saurabh
2018-04-26
Cross-validation (CV) is a technique to assess the generalizability of a model to unseen data. The technique relies on assumptions that may not be satisfied when studying genomics datasets. For example, random CV (RCV) assumes that a randomly selected set of samples, the test set, represents unseen data well. This assumption does not hold when samples are obtained from different experimental conditions and the goal is to learn regulatory relationships among the genes that generalize beyond the observed conditions. In this study, we investigated how the CV procedure affects the assessment of supervised learning methods used to learn gene regulatory networks (and in other applications). We compared the performance of a regression-based method for gene expression prediction estimated using RCV with that estimated using a clustering-based CV (CCV) procedure. Our analysis illustrates that RCV can produce over-optimistic estimates of the model's generalizability compared with CCV. Next, we defined the 'distinctness' of the test set from the training set and showed that this measure is predictive of the regression method's performance. Finally, we introduced a simulated annealing method to construct partitions with gradually increasing distinctness and showed that the performance of different gene expression prediction methods can be better evaluated using this method.
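The contrast between random CV and clustering-based CV can be sketched as follows: cluster the samples first, then hold out whole clusters, so each test fold is more "distinct" from its training data. The condition clusters, the k-NN regressor, and the data are illustrative stand-ins for the paper's gene expression setting.

```python
# Sketch: random CV vs clustering-based CV (hold out whole KMeans clusters).
# A k-NN regressor, which extrapolates poorly to unseen conditions, makes the
# over-optimism of random CV visible. Synthetic data, not a genomics dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import GroupKFold, KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(5)
# five synthetic "experimental conditions", each shifting the feature distribution
centers = rng.normal(scale=3.0, size=(5, 20))
X = np.vstack([c + rng.normal(size=(40, 20)) for c in centers])
y = X @ rng.normal(size=20) + rng.normal(scale=0.5, size=200)

model = KNeighborsRegressor(n_neighbors=5)
rcv = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0),
                      scoring="r2").mean()

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
ccv = cross_val_score(model, X, y, cv=GroupKFold(n_splits=5), groups=clusters,
                      scoring="r2").mean()
print(f"random CV r2 = {rcv:.2f}, clustering-based CV r2 = {ccv:.2f}")
```

Random CV scores far higher because each test sample has same-condition neighbors in the training folds, which is exactly the optimism the abstract describes.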
An evidence based method to calculate pedestrian crossing speeds in vehicle collisions (PCSC).
Bastien, C; Wellings, R; Burnett, B
2018-06-07
Pedestrian accident reconstruction is necessary to establish the cause of death, i.e. to determine the vehicle collision speed as well as the circumstances leading to the pedestrian being impacted, and to determine the culpability of those involved for subsequent court enquiry. Understanding the complexity of the pedestrian's attitude during an accident investigation is necessary to ascertain the causes leading to the tragedy. A generic new method, named the Pedestrian Crossing Speed Calculator (PCSC), based on vector algebra, is proposed to compute the pedestrian's crossing speed at the moment of impact. PCSC uses vehicle damage and pedestrian anthropometric dimensions to establish a combination of head projection angles against the windscreen, which is then compared against the combined-velocities angle created from the vehicle speed and the pedestrian crossing speed at the time of impact. The method has been verified using one accident fatality case in which the exact vehicle and pedestrian crossing speeds were known from Police forensic video analysis. PCSC was then applied to two other accident scenarios and correctly corroborated the witness statements regarding the pedestrians' crossing behaviours. The implications of PCSC could be significant once it is fully validated against further accident data, as the method is reversible, allowing the computation of vehicle impact velocity from pedestrian crossing speed as well as verification of witness accounts. Copyright © 2018 Elsevier Ltd. All rights reserved.
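The core vector-algebra relation can be illustrated with a simplified sketch. Assuming the pedestrian crosses perpendicular to the vehicle's travel, the combined-velocity vector makes an angle theta = atan(v_ped / v_vehicle) with the vehicle axis, so either speed can be recovered from the other plus the angle; this is the reversibility the abstract mentions. The specific speeds and angle below are illustrative, not the paper's case data, and the real PCSC also incorporates vehicle damage and anthropometric measurements.

```python
# Simplified sketch of the combined-velocities geometry behind PCSC:
# theta = atan(v_ped / v_vehicle) for a perpendicular crossing.
import math

def crossing_speed(v_vehicle_ms: float, theta_deg: float) -> float:
    """Pedestrian crossing speed from vehicle speed and combined-velocity angle."""
    return v_vehicle_ms * math.tan(math.radians(theta_deg))

def vehicle_speed(v_ped_ms: float, theta_deg: float) -> float:
    """The reverse computation: vehicle speed from pedestrian speed and angle."""
    return v_ped_ms / math.tan(math.radians(theta_deg))

v_ped = crossing_speed(13.9, 6.0)    # ~50 km/h vehicle, 6 degree angle (illustrative)
print(f"pedestrian crossing speed ~ {v_ped:.2f} m/s")
```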
Cross-cultural adaptation and validation of the teamwork climate scale
Silva, Mariana Charantola; Peduzzi, Marina; Sangaleti, Carine Teles; da Silva, Dirceu; Agreli, Heloise Fernandes; West, Michael A; Anderson, Neil R
2016-01-01
ABSTRACT OBJECTIVE To adapt and validate the Team Climate Inventory scale, of teamwork climate measurement, for the Portuguese language, in the context of primary health care in Brazil. METHODS Methodological study with quantitative approach of cross-cultural adaptation (translation, back-translation, synthesis, expert committee, and pretest) and validation with 497 employees from 72 teams of the Family Health Strategy in the city of Campinas, SP, Southeastern Brazil. We verified reliability by the Cronbach’s alpha, construct validity by the confirmatory factor analysis with SmartPLS software, and correlation by the job satisfaction scale. RESULTS We problematized the overlap of items 9, 11, and 12 of the “participation in the team” factor and the “team goals” factor regarding its definition. The validation showed no overlapping of items and the reliability ranged from 0.92 to 0.93. The confirmatory factor analysis indicated suitability of the proposed model with distribution of the 38 items in the four factors. The correlation between teamwork climate and job satisfaction was significant. CONCLUSIONS The version of the scale in Brazilian Portuguese was validated and can be used in the context of primary health care in the Country, constituting an adequate tool for the assessment and diagnosis of teamwork. PMID:27556966
Validity of body composition methods across ethnic population groups.
Deurenberg, P; Deurenberg-Yap, M
2003-10-01
Most in vivo body composition methods rely on assumptions that may vary among different population groups as well as within the same population group. These assumptions are based on in vitro body composition (carcass) analyses. The majority of body composition studies were performed on Caucasians, and much of the information on the validity of methods and assumptions is available only for this ethnic group; it is simply assumed that these assumptions are also valid for other ethnic groups. However, if apparent differences across ethnic groups in body composition 'constants' and body composition 'rules' are not taken into account, biased body composition information will result. This in turn may lead to misclassification of obesity or underweight at the individual as well as the population level. There is a need for more cross-ethnic population studies on body composition. Such studies should be carried out carefully, with adequate methodology and standardization, for the obtained information to be valuable.
Kalderstam, Jonas; Edén, Patrik; Ohlsson, Mattias
2015-01-01
We investigate a new method to place patients into risk groups in censored survival data. Properties such as median survival time and end survival rate are implicitly improved by optimizing the area under the survival curve. Artificial neural networks (ANN) are trained to either maximize or minimize this area using a genetic algorithm, and combined into an ensemble to predict one of low, intermediate, or high risk groups. Estimated patient risk can influence treatment choices, and is important for study stratification. A common approach is to sort the patients according to a prognostic index and then group them along the quartile limits; the Cox proportional hazards model (Cox) is one example of this approach. Another method of risk grouping is recursive partitioning (Rpart), which constructs a decision tree in which each branch point maximizes the statistical separation between the groups. ANN, Cox, and Rpart are compared on five publicly available data sets with varying properties. Both cross-validation and separate test sets are used to validate the models. Results on the test sets show comparable performance, except for the smallest data set, where Rpart's predicted risk groups turn out to be inverted, an example of crossing survival curves. Cross-validation shows that all three models exhibit crossing of some survival curves on this small data set, but that the ANN model manages the best separation of groups in terms of median survival time before such crossings. The conclusion is that optimizing the area under the survival curve is a viable approach to identify risk groups. Training ANNs to optimize this area combines two key strengths of prognostic indices and Rpart: first, a desired minimum group size can be specified, as for a prognostic index; second, non-linear effects among the covariates can be utilized, as Rpart is also able to do.
Link, William; Sauer, John R.
2016-01-01
The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in the context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion (BPIC) and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion (WAIC). We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
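The WAIC mentioned above has a compact form given posterior samples of pointwise log-likelihoods: the log pointwise predictive density (lppd) minus an effective-parameter penalty equal to the posterior variance of the log-likelihood. A minimal sketch on a toy normal model (the posterior draws are an illustrative approximation, not output of a real sampler):

```python
# Sketch of WAIC from a (draws x observations) matrix of log-likelihoods:
# WAIC = -2 * (lppd - p_waic), with p_waic the summed posterior variance.
import numpy as np

def waic(loglik: np.ndarray) -> float:
    """loglik: (n_posterior_draws, n_observations) pointwise log-likelihoods.
    Returns WAIC on the deviance scale."""
    lppd = np.log(np.exp(loglik).mean(axis=0)).sum()
    p_waic = loglik.var(axis=0, ddof=1).sum()
    return -2.0 * (lppd - p_waic)

# Toy check: normal data, approximate posterior draws of the mean
rng = np.random.default_rng(6)
y = rng.normal(size=50)
mu_draws = rng.normal(scale=1 / np.sqrt(50), size=1000)
ll = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2
w = waic(ll)
print("WAIC:", round(w, 1))
```

For models with very small likelihoods, the `np.exp` average should be replaced by a log-sum-exp computation for numerical stability.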
Online cross-validation-based ensemble learning.
Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark
2018-01-30
Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example in which we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
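The online cross-validation idea can be sketched as follows: each incoming batch is first predicted by every candidate online learner (an out-of-sample score, since the learner has not yet seen the batch), then used to update the learners; the selector tracks the candidate with the lowest cumulative loss. The two SGD candidates and the simulated stream are illustrative, not the paper's estimator library.

```python
# Sketch of discrete online cross-validation over a stream of batches:
# score each candidate on the new batch BEFORE updating it, accumulate loss,
# and select the candidate with the smallest cumulative out-of-sample loss.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(7)
w_true = rng.normal(size=5)

candidates = {
    "small_step": SGDRegressor(learning_rate="constant", eta0=0.001,
                               random_state=0),
    "large_step": SGDRegressor(learning_rate="constant", eta0=0.05,
                               random_state=0),
}
cum_loss = {name: 0.0 for name in candidates}

for t in range(200):                       # stream of 200 batches
    Xb = rng.normal(size=(10, 5))
    yb = Xb @ w_true + rng.normal(scale=0.1, size=10)
    for name, est in candidates.items():
        if t > 0:                          # score before updating: online CV
            cum_loss[name] += ((est.predict(Xb) - yb) ** 2).mean()
        est.partial_fit(Xb, yb)            # then absorb the batch

best = min(cum_loss, key=cum_loss.get)
print("online-CV selected learner:", best)
```

The ensemble extension in the abstract would additionally learn convex weights over the candidates rather than picking a single winner.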
Triacylglycerol stereospecific analysis and linear discriminant analysis for milk speciation.
Blasi, Francesca; Lombardi, Germana; Damiani, Pietro; Simonetti, Maria Stella; Giua, Laura; Cossignani, Lina
2013-05-01
Product authenticity is an important topic in the dairy sector. Dairy products sold for public consumption must be accurately labelled in accordance with the milk species they contain. Linear discriminant analysis (LDA), a common chemometric procedure, has been applied to percentage fatty acid composition to classify pure milk samples (cow, ewe, buffalo, donkey, goat). All original grouped cases were correctly classified, while 90% of cross-validated grouped cases were correctly classified. Another objective of this research was the characterisation of cow-ewe milk mixtures in order to reveal a common fraud in the dairy field, namely the addition of cow milk to ewe milk. Stereospecific analysis of triacylglycerols (TAG), a method based on chemical-enzymatic procedures coupled with chromatographic techniques, was carried out to detect fraudulent milk additions, in particular 1%, 3% and 5% cow milk added to ewe milk. When only TAG composition data were used, 75% of original grouped cases were correctly classified, while all samples were correctly classified when both total and intrapositional TAG data were used. The cross-validation results were also better when TAG stereospecific analysis data were used as LDA variables; in particular, 100% of cross-validated grouped cases were correctly classified when 5% cow milk mixtures were considered.
Vanthomme, Hadrien; Kolowski, Joseph; Nzamba, Brave S; Alonso, Alfonso
2015-10-01
The active field of connectivity conservation has provided numerous methods to identify wildlife corridors with the aim of reducing the ecological effect of fragmentation. Nevertheless, these methods often rely on untested hypotheses of animal movements, usually fail to generate fine-scale predictions of road crossing sites, and do not allow managers to prioritize crossing sites for implementing road fragmentation mitigation measures. We propose a new method that addresses these limitations. We illustrate this method with data from southwestern Gabon (central Africa). We used stratified random transect surveys conducted in two seasons to model the distribution of African forest elephant (Loxodonta cyclotis), forest buffalo (Syncerus caffer nanus), and sitatunga (Tragelaphus spekii) in a mosaic landscape along a 38.5 km unpaved road scheduled for paving. Using a validation data set of recorded crossing locations, we evaluated the performance of three types of models (local suitability, local least-cost movement, and regional least-cost movement) in predicting actual road crossings for each species, and developed a unique and flexible scoring method for prioritizing road sections for the implementation of road fragmentation mitigation measures. With a data set collected in <10 weeks of fieldwork, the method was able to identify seasonal changes in animal movements for buffalo and sitatunga that shift from a local exploitation of the site in the wet season to movements through the study site in the dry season, whereas elephants use the entire study area in both seasons. These three species highlighted the need to use species- and season-specific modeling of movement. From these movement models, the method ranked road sections for their suitability for implementing fragmentation mitigation efforts, allowing managers to adjust priority thresholds based on budgets and management goals. 
The method relies on data that can be obtained in a period compatible with environmental impact assessment constraints, and is flexible enough to incorporate other potential movement models and scoring criteria. This approach improves upon available methods and can help inform prioritization of road and other linear infrastructure segments that require impact mitigation methods to ensure long-term landscape connectivity.
Novel Screening Tool for Stroke Using Artificial Neural Network.
Abedi, Vida; Goyal, Nitin; Tsivgoulis, Georgios; Hosseinichimeh, Niyousha; Hontecillas, Raquel; Bassaganya-Riera, Josep; Elijovich, Lucas; Metter, Jeffrey E; Alexandrov, Anne W; Liebeskind, David S; Alexandrov, Andrei V; Zand, Ramin
2017-06-01
The timely diagnosis of stroke at the initial examination is extremely important given the disease morbidity and narrow time window for intervention. The goal of this study was to develop a supervised learning method to recognize acute cerebral ischemia (ACI) and differentiate it from stroke mimics in an emergency setting. Consecutive patients presenting to the emergency department with stroke-like symptoms, within 4.5 hours of symptom onset, in 2 tertiary care stroke centers were randomized for inclusion in the model. We developed an artificial neural network (ANN) model. The learning algorithm was based on backpropagation. To validate the model, we used a 10-fold cross-validation method. A total of 260 patients (equal numbers of stroke mimics and ACIs) were enrolled for the development and validation of our ANN model. Our analysis indicated that the average sensitivity and specificity of the ANN for the diagnosis of ACI based on the 10-fold cross-validation analysis were 80.0% (95% confidence interval, 71.8-86.3) and 86.2% (95% confidence interval, 78.7-91.4), respectively. The median precision of the ANN for the diagnosis of ACI was 92% (95% confidence interval, 88.7-95.3). Our results show that an ANN can be an effective tool for the recognition of ACI and its differentiation from stroke mimics at the initial examination. © 2017 American Heart Association, Inc.
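As a generic sketch of the 10-fold cross-validation scheme used to obtain pooled sensitivity and specificity (illustrative only; the toy threshold classifier and synthetic data below stand in for the ANN and the patient data):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and deal them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validated_sens_spec(xs, ys, fit, predict, k=10):
    """Pool held-out predictions over k folds; return (sensitivity, specificity)."""
    tp = tn = fp = fn = 0
    for fold in k_fold_indices(len(xs), k):
        held_out = set(fold)
        train = [i for i in range(len(xs)) if i not in held_out]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        for i in fold:
            p = predict(model, xs[i])
            tp += (p == 1 and ys[i] == 1)
            tn += (p == 0 and ys[i] == 0)
            fp += (p == 1 and ys[i] == 0)
            fn += (p == 0 and ys[i] == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy stand-in for the ANN: threshold a single feature at the training-set mean.
fit = lambda X, y: sum(X) / len(X)
predict = lambda thr, x: 1 if x > thr else 0
xs = [i / 20 for i in range(20)]
ys = [1 if x > 0.5 else 0 for x in xs]
sens, spec = cross_validated_sens_spec(xs, ys, fit, predict, k=10)
```

Pooling the confusion-matrix counts across folds (rather than averaging per-fold rates) avoids undefined rates in folds that happen to contain no positives.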
Koláčková, Pavla; Růžičková, Gabriela; Gregor, Tomáš; Šišperová, Eliška
2015-08-30
Calibration models for the Fourier transform-near infrared (FT-NIR) instrument were developed for quick and non-destructive determination of oil and fatty acids in whole achenes of milk thistle. Samples with a range of oil and fatty acid levels were collected and their transmittance spectra were obtained with the FT-NIR instrument. Based on these spectra and on data obtained by the reference methods - Soxhlet extraction and gas chromatography (GC) - calibration models were created by means of partial least squares (PLS) regression analysis. Precision and accuracy of the calibration models were verified by cross-validation against validation samples whose spectra were not part of the calibration model, and by the root mean square error of prediction (RMSEP), root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV) and the validation coefficient of determination (R²). R² values for whole seeds were 0.96, 0.96, 0.83 and 0.67, and the RMSEP values were 0.76, 1.68, 1.24 and 0.54 for oil, linoleic (C18:2), oleic (C18:1) and palmitic (C16:0) acids, respectively. The calibration models are appropriate for the non-destructive determination of oil and fatty acid levels in whole seeds of milk thistle. © 2014 Society of Chemical Industry.
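The three quoted error statistics (RMSEC, RMSECV, RMSEP) are the same root-mean-square-of-residuals formula applied to different sample sets; a minimal helper, for illustration:

```python
def rmse(predicted, observed):
    """Root mean square error of predictions against reference values.
    Applied to calibration, cross-validation, or independent prediction
    samples, this yields RMSEC, RMSECV, or RMSEP respectively."""
    residuals = [(p - o) ** 2 for p, o in zip(predicted, observed)]
    return (sum(residuals) / len(residuals)) ** 0.5
```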
Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline
2014-01-01
Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel.
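The output-smoothing step can be illustrated with a sliding-window vote (a generic sketch, not the authors' code; their moving-average filter operates on the classifier's output scores, while this toy version majority-votes hard labels over 1-min windows):

```python
from collections import Counter

def smooth_labels(labels, window=3):
    """Sliding-window majority vote over a sequence of predicted labels.
    Removes short, isolated misclassifications; window should be odd."""
    half = window // 2
    out = []
    for i in range(len(labels)):
        segment = labels[max(0, i - half):i + half + 1]
        out.append(Counter(segment).most_common(1)[0][0])
    return out

# Hypothetical per-minute predictions with one spurious "bike" minute
raw = ["walk", "walk", "bike", "walk", "walk", "vehicle", "vehicle", "vehicle"]
smoothed = smooth_labels(raw)  # the isolated "bike" minute is voted away
```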
NASA Astrophysics Data System (ADS)
Mueller, David S.
2013-04-01
Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user with tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2013-09-01
Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully in the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods is investigated in the range of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.
Liang, Yunyun; Liu, Sanyang; Zhang, Shengli
2016-12-01
Apoptosis, or programmed cell death, plays a central role in the development and homeostasis of an organism. Obtaining information on the subcellular location of apoptosis proteins is very helpful for understanding the apoptosis mechanism. The prediction of the subcellular localization of an apoptosis protein is still a challenging task, and existing methods are based mainly on protein primary sequences. In this paper, we introduce a new position-specific scoring matrix (PSSM)-based method using the detrended cross-correlation (DCCA) coefficient of non-overlapping windows. A 190-dimensional (190D) feature vector is then constructed on two widely used datasets, CL317 and ZD98, and a support vector machine is adopted as the classifier. To evaluate the proposed method, objective and rigorous jackknife cross-validation tests are performed on the two datasets. The results show that our approach offers a novel and reliable PSSM-based tool for prediction of apoptosis protein subcellular localization. Copyright © 2016 Elsevier Inc. All rights reserved.
Validation of a self-administered web-based 24-hour dietary recall among pregnant women.
Savard, Claudia; Lemieux, Simone; Lafrenière, Jacynthe; Laramée, Catherine; Robitaille, Julie; Morisset, Anne-Sophie
2018-04-23
The use of valid dietary assessment methods is crucial to analyse adherence to dietary recommendations among pregnant women. This study aims to assess the relative validity of a self-administered Web-based 24-h dietary recall, the R24W, against a pen-paper 3-day food record (FR) among pregnant women. Sixty pregnant women recruited at 9.3 ± 0.7 weeks of pregnancy in Quebec City completed, at each trimester, three R24W recalls and a 3-day FR. Mean energy and nutrient intakes reported by both tools were compared using paired Student t-tests. Pearson correlations were used to analyze the association between both methods. Agreement between the two methods was evaluated using cross-classification analyses, weighted kappa coefficients and Bland-Altman analyses. Pearson correlation coefficients were all significant, except for vitamin B12 (r = 0.03; p = 0.83), and ranged from 0.27 to 0.76 (p < 0.05). Differences between mean intakes assessed by the R24W and the FR did not exceed 10% for 19 variables and were not significant for 16 out of 26 variables. In cross-classification analyses, the R24W ranked, on average, 79.1% of participants in the same or adjacent quartiles as the FR. Compared to a 3-day FR, the R24W is a valid method to assess intakes of energy and most nutrients but may be less accurate in the evaluation of intakes of fat (as a proportion of energy intake), vitamin D, zinc and folic acid. During pregnancy, the R24W was a more accurate tool at a group level than at an individual level and should, therefore, be used in an epidemiological rather than a clinical setting. The R24W may be particularly valuable as a tool used in cohort studies to provide valid information on pregnant women's dietary intakes and facilitate evaluation of associations between diet and adverse pregnancy outcomes.
NASA Astrophysics Data System (ADS)
Gruy, Frédéric
2014-02-01
Depending on the range of size and the refractive index value, an optically soft particle follows the Rayleigh-Debye-Gans (RDG) approximation or the Van de Hulst approximation. In practice, the first is valid for small particles whereas the second works for large particles. Klett and Sutherland (Klett JD, Sutherland RA. Appl. Opt. 1992;31:373) proved that the Wentzel-Kramers-Brillouin (WKB) approximation leads to accurate values of the differential scattering cross section of a sphere and a cylinder over a wide range of sizes. In this paper we extend the work of Klett and Sutherland by proposing a method allowing fast calculation of the differential scattering cross section for any shape of particle with a given orientation and illuminated by unpolarized light. Our method is based on a geometrical approximation of the particle, replacing each geometrical cross section by an ellipse and then exactly evaluating the differential scattering cross section of the newly generated body. The latter contains only two single integrals.
A cross-validation package driving Netica with Python
Fienen, Michael N.; Plant, Nathaniel G.
2014-01-01
Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross-validation is a technique to avoid the overfitting that results from overly complex BNs; overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented, due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and to read, rebuild, and learn BNs from data. Insights gained from cross-validation, and their implications for prediction versus description, are illustrated with a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, incurring computational costs as well as a reduction in prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).
Kennicutt, A R; Morkowchuk, L; Krein, M; Breneman, C M; Kilduff, J E
2016-08-01
A quantitative structure-activity relationship was developed to predict the efficacy of carbon adsorption as a control technology for endocrine-disrupting compounds, pharmaceuticals, and components of personal care products, as a tool for water quality professionals to protect public health. Here, we expand previous work to investigate a broad spectrum of molecular descriptors including subdivided surface areas, adjacency and distance matrix descriptors, electrostatic partial charges, potential energy descriptors, conformation-dependent charge descriptors, and Transferable Atom Equivalent (TAE) descriptors that characterize the regional electronic properties of molecules. We compare the efficacy of linear (Partial Least Squares) and non-linear (Support Vector Machine) machine learning methods to describe a broad chemical space and produce a user-friendly model. We employ cross-validation, y-scrambling, and external validation for quality control. The recommended Support Vector Machine model trained on 95 compounds having 23 descriptors offered a good balance between good performance statistics, low error, and low probability of over-fitting while describing a wide range of chemical features. The cross-validated model using a log-uptake (qe) response calculated at an aqueous equilibrium concentration (Ce) of 1 μM described the training dataset with an r(2) of 0.932, had a cross-validated r(2) of 0.833, and an average residual of 0.14 log units.
NASA Astrophysics Data System (ADS)
Abdel-Jaber, H.; Glisic, B.
2014-07-01
Structural health monitoring (SHM) consists of the continuous or periodic measurement of structural parameters and their analysis with the aim of deducing information about the performance and health condition of a structure. The significant increase in the construction of prestressed concrete bridges motivated this research on an SHM method for the on-site determination of the distribution of prestressing forces along prestressed concrete beam structures. The estimation of the distribution of forces is important as it can give information regarding the overall performance and structural integrity of the bridge. An inadequate transfer of the designed prestressing forces to the concrete cross-section can lead to a reduced capacity of the bridge and consequently malfunction or failure at lower loads than predicted by design. This paper researches a universal method for the determination of the distribution of prestressing forces along concrete beam structures at the time of transfer of the prestressing force (e.g., at the time of prestressing or post-tensioning). The method is based on the use of long-gauge fiber optic sensors, and the sensor network is similar (practically identical) to the one used for damage identification. The method encompasses the determination of prestressing forces at both healthy and cracked cross-sections, and for the latter it can yield information about the condition of the cracks. The method is validated on-site by comparison to design forces through the application to two structures: (1) a deck-stiffened arch and (2) a curved continuous girder. The uncertainty in the determination of prestressing forces was calculated and the comparison with the design forces has shown very good agreement in most of the structures’ cross-sections, but also helped identify some unusual behaviors. The method and its validation are presented in this paper.
Combined 3D-QSAR modeling and molecular docking study on azacycles CCR5 antagonists
NASA Astrophysics Data System (ADS)
Ji, Yongjun; Shu, Mao; Lin, Yong; Wang, Yuanqiang; Wang, Rui; Hu, Yong; Lin, Zhihua
2013-08-01
The beta chemokine receptor 5 (CCR5) is an attractive target for the pharmaceutical industry in the HIV-1, inflammation and cancer therapeutic areas. In this study, we have developed quantitative structure-activity relationship (QSAR) models for a series of 41 azacycle CCR5 antagonists using comparative molecular field analysis (CoMFA), comparative molecular similarity indices analysis (CoMSIA), and Topomer CoMFA methods. The cross-validated coefficient q² values of the 3D-QSAR (CoMFA, CoMSIA, and Topomer CoMFA) methods were 0.630, 0.758, and 0.852, respectively; the non-cross-validated R² values were 0.979, 0.978, and 0.990, respectively. Docking studies were also employed to determine the most probable binding mode. 3D contour maps and docking results suggested that bulky groups and electron-withdrawing groups on the core part would decrease antiviral activity. Furthermore, docking results indicated that H-bonds and π bonds were favorable for antiviral activity. Finally, a set of novel derivatives with predicted activities were designed.
Zhang, Hui; Ren, Ji-Xia; Kang, Yan-Li; Bo, Peng; Liang, Jun-Yu; Ding, Lan; Kong, Wei-Bao; Zhang, Ji
2017-08-01
Toxicological testing associated with developmental toxicity endpoints is very expensive, time consuming and labor intensive. Thus, developing alternative approaches for developmental toxicity testing is an important and urgent task in the drug development field. In this investigation, the naïve Bayes classifier was applied to develop a novel prediction model for developmental toxicity. The established prediction model was evaluated by internal 5-fold cross-validation and an external test set. The overall prediction accuracies for the internal 5-fold cross-validation of the training set and the external test set were 96.6% and 82.8%, respectively. In addition, four simple descriptors and some representative substructures of developmental toxicants were identified. We therefore hope the established in silico prediction model can be used as an alternative method for toxicological assessment, and that the molecular information obtained affords a deeper understanding of developmental toxicants and provides guidance for medicinal chemists working in drug discovery and lead optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
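A naïve Bayes classifier of the kind used here can be sketched in a few lines (a generic Bernoulli naïve Bayes over binary substructure fingerprints; the features, labels and smoothing choice below are invented for illustration):

```python
import math

def fit_bernoulli_nb(X, y, alpha=1.0):
    """Fit a Bernoulli naive Bayes on binary feature vectors,
    with Laplace smoothing (alpha) on the per-feature probabilities."""
    n_feat = len(X[0])
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        log_prior = math.log(len(rows) / len(X))
        probs = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                 for j in range(n_feat)]
        model[c] = (log_prior, probs)
    return model

def predict_nb(model, x):
    """Return the class with the highest posterior log-score for x."""
    def score(c):
        log_prior, probs = model[c]
        return log_prior + sum(math.log(p if xi else 1 - p)
                               for xi, p in zip(x, probs))
    return max(model, key=score)

# Invented fingerprints: presence/absence of three substructures
X = [[1, 0, 1], [1, 1, 1], [0, 0, 0], [0, 1, 0]]
y = ["toxic", "toxic", "safe", "safe"]
m = fit_bernoulli_nb(X, y)
```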
Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.
Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe
2012-01-01
Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them into a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an ongoing failing regimen. Our aim was also to investigate the impact of using different cross-validation strategies and different loss functions. Four different splits between training and validation sets were tested through two loss functions. Six statistical methods were compared. We assess performance by evaluating R² values and accuracy by calculating the rates of patients being correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided similar results to those of this new predictor. A slight discrepancy arises between the two loss functions investigated, and a slight difference also arises between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients. The difference between the lower and higher rates is around 10 percentage points. The number of mutations retained by the different learners also varies from one to 41. Conclusions. The more recent Super Learner methodology, combining the predictions of many learners, provided good performance on our small dataset.
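The distinction between the discrete Super Learner (pick the single learner with the lowest cross-validated risk) and the combining Super Learner (pick the best weighted combination of learners) can be sketched as follows (all predictions and outcomes below are invented, and a grid search stands in for the convex optimization):

```python
def mse(pred, ys):
    """Squared-error loss, the typical risk for a continuous response."""
    return sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys)

ys = [0.1, 0.4, 0.35, 0.8, 0.9]          # invented outcomes
pred_a = [0.0, 0.5, 0.3, 0.7, 1.0]       # invented CV predictions, learner A
pred_b = [0.2, 0.2, 0.5, 0.9, 0.8]       # invented CV predictions, learner B

# Discrete Super Learner: the single learner with the lowest CV risk
best = min([("A", pred_a), ("B", pred_b)], key=lambda t: mse(t[1], ys))[0]

# Combining Super Learner: best convex combination, found on a weight grid
w = min((i / 100 for i in range(101)),
        key=lambda w: mse([w * a + (1 - w) * b
                           for a, b in zip(pred_a, pred_b)], ys))
```

Because the grid contains the endpoints w = 0 and w = 1, the combined predictor can never have a higher cross-validated risk than the better of the two individual learners.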
Cross-Validation of easyCBM Reading Cut Scores in Washington: 2009-2010. Technical Report #1109
ERIC Educational Resources Information Center
Irvin, P. Shawn; Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald
2011-01-01
This technical report presents results from a cross-validation study designed to identify optimal cut scores when using easyCBM[R] reading tests in Washington state. The cross-validation study analyzes data from the 2009-2010 academic year for easyCBM[R] reading measures. A sample of approximately 900 students per grade, randomly split into two…
Jet production in the CoLoRFulNNLO method: Event shapes in electron-positron collisions
NASA Astrophysics Data System (ADS)
Del Duca, Vittorio; Duhr, Claude; Kardos, Adam; Somogyi, Gábor; Szőr, Zoltán; Trócsányi, Zoltán; Tulipánt, Zoltán
2016-10-01
We present the CoLoRFulNNLO method to compute higher order radiative corrections to jet cross sections in perturbative QCD. We apply our method to the computation of event shape observables in electron-positron collisions at NNLO accuracy and validate our code by comparing our predictions to previous results in the literature. We also calculate for the first time jet cone energy fraction at NNLO.
Liu, A; Byrne, N M; Ma, G; Nasreddine, L; Trinidad, T P; Kijboonchoo, K; Ismail, M N; Kagawa, M; Poh, B K; Hills, A P
2011-12-01
To develop and cross-validate bioelectrical impedance analysis (BIA) prediction equations of total body water (TBW) and fat-free mass (FFM) for Asian pre-pubertal children from China, Lebanon, Malaysia, the Philippines and Thailand. Height, weight, age, gender, resistance and reactance measured by BIA were collected from 948 Asian children (492 boys and 456 girls) aged 8-10 years from the five countries. The deuterium dilution technique was used as the criterion method for the estimation of TBW and FFM. The BIA equations were developed using stepwise multiple regression analysis and cross-validated using the Bland-Altman approach. The BIA prediction equation for the estimation of TBW was: TBW = 0.231 × height²/resistance + 0.066 × height + 0.188 × weight + 0.128 × age + 0.500 × sex - 0.316 × ethnicity - 4.574 (R² = 88.0%, root mean square error (RMSE) = 1.3 kg), and for the estimation of FFM was: FFM = 0.299 × height²/resistance + 0.086 × height + 0.245 × weight + 0.260 × age + 0.901 × sex - 0.415 × ethnicity - 6.952 (R² = 88.3%, RMSE = 1.7 kg), where ethnicity = 1 for Thai children and 0 otherwise. No significant difference between measured and predicted values was found for the whole cross-validation sample. However, the prediction equations tended to overestimate TBW/FFM at lower levels and underestimate them at higher levels of TBW/FFM. Accuracy of the general equations for TBW and FFM was also valid within each body mass index category. Ethnicity influences the relationship between BIA and body composition in Asian pre-pubertal children. The newly developed BIA prediction equations are valid for use in Asian pre-pubertal children.
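The two reported prediction equations can be transcribed directly into code (a sketch of the published formulas; the abstract does not state the 0/1 coding of the sex indicator, so treating 1 = boy is an assumption, as is the example child below):

```python
def predict_tbw(height_cm, weight_kg, age_yr, resistance_ohm, sex, thai):
    """TBW (kg) equation as reported in the abstract.
    sex: assumed 1 = boy, 0 = girl (coding not stated in the abstract).
    thai: 1 for Thai ethnicity, 0 otherwise (as stated for the FFM equation)."""
    return (0.231 * height_cm ** 2 / resistance_ohm + 0.066 * height_cm
            + 0.188 * weight_kg + 0.128 * age_yr
            + 0.500 * sex - 0.316 * thai - 4.574)

def predict_ffm(height_cm, weight_kg, age_yr, resistance_ohm, sex, thai):
    """FFM (kg) equation as reported in the abstract (same coding assumed)."""
    return (0.299 * height_cm ** 2 / resistance_ohm + 0.086 * height_cm
            + 0.245 * weight_kg + 0.260 * age_yr
            + 0.901 * sex - 0.415 * thai - 6.952)

# Hypothetical 9-year-old non-Thai boy: 135 cm, 30 kg, resistance 700 ohm
tbw = predict_tbw(135, 30, 9, 700, sex=1, thai=0)
ffm = predict_ffm(135, 30, 9, 700, sex=1, thai=0)
```

For plausible inputs the predicted FFM exceeds the predicted TBW, consistent with water making up only part of fat-free mass.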
Vanderploeg, Rodney D; Cooper, Douglas B; Belanger, Heather G; Donnell, Alison J; Kennedy, Jan E; Hopewell, Clifford A; Scott, Steven G
2014-01-01
To develop and cross-validate internal validity scales for the Neurobehavioral Symptom Inventory (NSI). Four existing data sets were used: (1) outpatient clinical traumatic brain injury (TBI)/neurorehabilitation database from a military site (n = 403), (2) National Department of Veterans Affairs TBI evaluation database (n = 48 175), (3) Florida National Guard nonclinical TBI survey database (n = 3098), and (4) a cross-validation outpatient clinical TBI/neurorehabilitation database combined across 2 military medical centers (n = 206). Secondary analysis of existing cohort data to develop (study 1) and cross-validate (study 2) internal validity scales for the NSI. The NSI, Mild Brain Injury Atypical Symptoms, and Personality Assessment Inventory scores. Study 1: Three NSI validity scales were developed, composed of 5 unusual items (Negative Impression Management [NIM5]), 6 low-frequency items (LOW6), and the combination of 10 nonoverlapping items (Validity-10). Cut scores maximizing sensitivity and specificity on these measures were determined, using a Mild Brain Injury Atypical Symptoms score of 8 or more as the criterion for invalidity. Study 2: The same validity scale cut scores again resulted in the highest classification accuracy and optimal balance between sensitivity and specificity in the cross-validation sample, using a Personality Assessment Inventory Negative Impression Management scale with a T score of 75 or higher as the criterion for invalidity. The NSI is widely used in the Department of Defense and Veterans Affairs as a symptom-severity assessment following TBI, but is subject to symptom overreporting or exaggeration. This study developed embedded NSI validity scales to facilitate the detection of invalid response styles. The NSI Validity-10 scale appears to hold considerable promise for validity assessment when the NSI is used as a population-screening tool.
Li, Shun-Lai; He, Mao-Yu; Du, Hong-Guang
2011-01-01
The active metabolite of the novel immunosuppressive agent leflunomide has been shown to inhibit the enzyme dihydroorotate dehydrogenase (DHODH). This enzyme catalyzes the fourth step in de novo pyrimidine biosynthesis. Self-organizing molecular field analysis (SOMFA), a simple three-dimensional quantitative structure-activity relationship (3D-QSAR) method, is used to study the correlation between the molecular properties and the biological activities of a series of analogues of the active metabolite. The statistical results, a cross-validated r²CV of 0.664 and a non-cross-validated r² of 0.687, show good predictive ability. The final SOMFA model provides a better understanding of DHODH inhibitor-enzyme interactions, and may be useful for further modification and improvement of inhibitors of this important enzyme. PMID:21686163
exprso: an R-package for the rapid implementation of machine learning algorithms.
Quinn, Thomas; Tylee, Daniel; Glatt, Stephen
2016-01-01
Machine learning plays a major role in many scientific investigations. However, non-expert programmers may struggle to implement the elaborate pipelines necessary to build highly accurate and generalizable models. We introduce exprso, a new R package that is an intuitive machine learning suite designed specifically for non-expert programmers. Built initially for the classification of high-dimensional data, exprso uses an object-oriented framework to encapsulate a number of common analytical methods into a series of interchangeable modules. This includes modules for feature selection, classification, high-throughput parameter grid-searching, elaborate cross-validation schemes (e.g., Monte Carlo and nested cross-validation), ensemble classification, and prediction. In addition, exprso also supports multi-class classification (through the 1-vs-all generalization of binary classifiers) and the prediction of continuous outcomes.
Gupta, Meenal; Moily, Nagaraj S; Kaur, Harpreet; Jajodia, Ajay; Jain, Sanjeev; Kukreti, Ritushree
2013-08-01
Atypical antipsychotic (AAP) drugs are the preferred choice of treatment for schizophrenia patients. Patients who do not show favorable response to AAP monotherapy are subjected to random prolonged therapeutic treatment with AAP multitherapy, typical antipsychotics or a combination of both. Therefore, prior identification of patients' response to drugs can be an important step in providing efficacious and safe therapeutic treatment. We thus attempted to elucidate a genetic signature which could predict patients' response to AAP monotherapy. Our logistic regression analyses indicated the probability that 76% of patients carrying a combination of four SNPs will not show favorable response to AAP therapy. The robustness of this prediction model was assessed using a repeated 10-fold cross-validation method, and the results across n-fold cross-validations (mean accuracy = 71.91%; 95% CI = 71.47-72.35) suggest high accuracy and reliability of the prediction model. Further validations of these results in large sample sets are likely to establish their clinical applicability. Copyright © 2013 Elsevier Inc. All rights reserved.
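A repeated k-fold scheme like the one described can be sketched as follows (illustrative Python; the nearest-centroid classifier and synthetic two-class data are stand-ins, not the study's logistic regression model or SNP data):

```python
import numpy as np

def nearest_centroid_predict(Xtr, ytr, Xte):
    # Stand-in classifier: assign each test point to the class whose
    # training-set mean (centroid) is closest in Euclidean distance.
    cents = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
    classes = sorted(cents)
    d = np.stack([np.linalg.norm(Xte - cents[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

def repeated_kfold_accuracy(X, y, k=10, repeats=10, seed=0):
    """Repeated k-fold CV: reshuffle, split into k folds, score each fold,
    then summarize fold accuracies with a normal-approximation 95% CI."""
    rng = np.random.default_rng(seed)
    accs = []
    n = len(y)
    for _ in range(repeats):
        idx = rng.permutation(n)
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            pred = nearest_centroid_predict(X[train], y[train], X[fold])
            accs.append(np.mean(pred == y[fold]))
    accs = np.array(accs)
    half = 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))
    return accs.mean(), (accs.mean() - half, accs.mean() + half)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(1.5, 1, (60, 4))])
y = np.repeat([0, 1], 60)
mean_acc, ci = repeated_kfold_accuracy(X, y)
print(f"mean accuracy {mean_acc:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")
```

Averaging over 10 repeats x 10 folds is what narrows the interval around the mean accuracy, analogous to the tight 71.47-72.35 interval reported.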
Poster - 18: New features in EGSnrc for photon cross sections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Elsayed; Mainegra-Hing, Ernesto; Rogers, Davi
2016-08-15
Purpose: To implement two new features in the EGSnrc Monte Carlo system. The first is an option to account for photonuclear attenuation, which can contribute a few percent to the total cross section at the higher end of the energy range of interest to medical physics. The second is an option to use exact NIST XCOM photon cross sections. Methods: For the first feature, the photonuclear total cross sections are generated from the IAEA evaluated data. In the current, first-order implementation, after a photonuclear event, there is no energy deposition or secondary particle generation. The implementation is validated against deterministicmore » calculations and experimental measurements of transmission signals. For the second feature, before this work, if the user explicitly requested XCOM photon cross sections, EGSnrc still used its own internal incoherent scattering cross sections. These differ by up to 2% from XCOM data between 30 keV and 40 MeV. After this work, exact XCOM incoherent scattering cross sections are an available option. Minor interpolation artifacts in pair and triplet XCOM cross sections are also addressed. The default for photon cross section in EGSnrc is XCOM except for the new incoherent scattering cross sections, which have to be explicitly requested. The photonuclear, incoherent, pair and triplet data from this work are available for elements and compounds for photon energies from 1 keV to 100 GeV. Results: Both features are implemented and validated in EGSnrc.Conclusions: The two features are part of the standard EGSnrc distribution as of version 4.2.3.2.« less
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification.
Therefore, the accuracy of the presented method may be much higher in reality.
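The sequential combination described above, similarity-based prediction first with a composition-based fallback for proteins lacking homologues, can be sketched as follows (all function and variable names here are hypothetical illustrations, not the authors' code):

```python
def combined_predict(protein, similarity_db, composition_model):
    """Sketch of a sequential combination: prefer a homology-based call,
    fall back to a composition-based model when no homologue is found.
    `protein` is a dict with hypothetical keys 'homologue' and 'sequence'."""
    hit = similarity_db.get(protein.get("homologue"))
    if hit is not None:
        return hit  # similarity-based prediction: accurate but partial coverage
    # composition-based fallback: covers proteins with no homologues
    return composition_model(protein["sequence"])

# Toy usage: a one-entry "database" and a trivial composition rule.
db = {"P1": "kinase"}
comp = lambda seq: "binding" if seq.count("K") > 2 else "enzyme"
print(combined_predict({"homologue": "P1", "sequence": "MKK"}, db, comp))
print(combined_predict({"homologue": None, "sequence": "MKKKK"}, db, comp))
```

The design point is exactly the trade-off in the abstract: the first branch is more accurate where it applies, and the fallback guarantees coverage.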
Stoyanova, Rumyana; Dimova, Rositsa; Tarnovska, Miglena; Boeva, Tatyana
2018-05-20
Patient safety (PS) is one of the essential elements of health care quality and a priority of healthcare systems in most countries. Thus the creation of validated instruments and the implementation of systems that measure patient safety are considered to be of great importance worldwide. The present paper aims to illustrate the process of linguistic validation, cross-cultural verification and adaptation of the Bulgarian version of the Hospital Survey on Patient Safety Culture (B-HSOPSC) and its test-retest reliability. The study design is cross-sectional. The HSOPSC questionnaire consists of 42 questions, grouped in 12 different subscales that measure patient safety culture. Internal consistency was assessed using Cronbach's alpha. The Wilcoxon signed-rank test and the split-half method were used; the Spearman-Brown coefficient was calculated. The overall Cronbach's alpha for B-HSOPSC is 0.918. Subscales 7 (Staffing) and 12 (Overall perceptions of safety) had the lowest coefficients. The high reliability of the instrument was confirmed by the split-half method (0.97) and the ICC coefficient (0.95). The lowest values of the Spearman-Brown coefficient were found in items A13 and A14. The study offers an analysis of the results of the linguistic validation of the B-HSOPSC and its test-retest reliability. The psychometric characteristics of the questions revealed good validity and reliability, except for two questions. In the future, the instrument will be administered to the target population in the main study so that the psychometric properties of the instrument can be verified.
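For reference, Cronbach's alpha and the Spearman-Brown step-up used in this kind of reliability analysis are short computations (illustrative Python on synthetic item scores, not the B-HSOPSC data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (respondents x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def spearman_brown(r_half):
    """Step up a split-half correlation to full-test reliability."""
    return 2 * r_half / (1 + r_half)

# Synthetic 6-item scale: each item = shared trait + item-specific noise.
rng = np.random.default_rng(2)
trait = rng.normal(size=(200, 1))
items = trait + rng.normal(scale=0.8, size=(200, 6))
alpha = cronbach_alpha(items)
# Odd-even split-half correlation, then Spearman-Brown correction.
odd, even = items[:, 0::2].sum(axis=1), items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
print(round(alpha, 3), round(spearman_brown(r_half), 3))
```

With items that share a common trait, both estimates land close together, which is why studies like this one report alpha and split-half figures side by side.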
Assessing cross-cultural differences through use of multiple-group invariance analyses.
Stein, Judith A; Lee, Jerry W; Jones, Patricia S
2006-12-01
The use of structural equation modeling in cross-cultural personality research has become a popular method for testing measurement invariance. In this report, we present an example of testing measurement invariance using the Sense of Coherence Scale of Antonovsky (1993) in 3 ethnic groups: Chinese, Japanese, and Whites. In a series of increasingly restrictive constraints on the measurement models of the 3 groups, we demonstrate how to assess differences among the groups. We also provide an example of construct validation.
On the Relation Between Spherical Harmonics and Simplified Spherical Harmonics Methods
NASA Astrophysics Data System (ADS)
Coppa, G. G. M.; Giusti, V.; Montagnini, B.; Ravetto, P.
2010-03-01
The purpose of the paper is, first, to recall the proof that the AN method and, therefore, the SP2N-1 method (of which AN was shown to be a variant) are equivalent to the odd order P2N-1, at least for a particular class of multi-region problems; namely the problems for which the total cross section has the same value for all the regions and the scattering is supposed to be isotropic. By virtue of the introduction of quadrature formulas representing first collision probabilities, this class is then enlarged in order to encompass the systems in which the regions may have different total cross sections. Some examples are reported to numerically validate the procedure.
Cross-Validation of easyCBM Reading Cut Scores in Oregon: 2009-2010. Technical Report #1108
ERIC Educational Resources Information Center
Park, Bitnara Jasmine; Irvin, P. Shawn; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald
2011-01-01
This technical report presents results from a cross-validation study designed to identify optimal cut scores when using easyCBM[R] reading tests in Oregon. The cross-validation study analyzes data from the 2009-2010 academic year for easyCBM[R] reading measures. A sample of approximately 2,000 students per grade, randomly split into two groups of…
Castro-Vega, Iciar; Veses Martín, Silvia; Cantero Llorca, Juana; Barrios Marta, Cristina; Bañuls, Celia; Hernández-Mijares, Antonio
2018-03-09
Nutritional screening allows for the detection of nutritional risk. Validated tools should be implemented, and their usefulness should be contrasted with a gold standard. The aim of this study is to discover the validity, efficacy and reliability of 3 nutritional screening tools in relation to complete nutritional assessment. A sub-analysis of a cross-sectional and descriptive study on the prevalence of disease-related malnutrition. The sample was selected from outpatients, hospitalized and institutionalized patients. MUST, MNAsf and MST screening were employed. A nutritional assessment of all the patients was undertaken. The SENPE-SEDOM consensus was used for the diagnosis. In the outpatients, both MUST and MNAsf have a similar validity in relation to the nutritional assessment (AUC 0.871 and 0.883, respectively). In the institutionalized patients, the MUST screening method is the one that shows the greatest validity (AUC 0.815), whereas in the hospitalized patients, the most valid methods are both MUST and MST (AUC 0.868 and 0.853, respectively). It is essential to use nutritional screening to invest the available resources wisely. Based on our results, MUST is the most suitable screening method in hospitalized and institutionalized patients. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
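Screening validity against a gold standard is summarized above by the AUC. A minimal rank-based AUC computation (illustrative Python on synthetic screening scores and diagnoses, not the study's data) looks like this:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
    Assumes continuous scores (no ties); labels are 0/1 with 1 = case."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Synthetic cohort: 70 well-nourished (label 0), 30 malnourished (label 1),
# with higher screening scores among the cases.
rng = np.random.default_rng(3)
labels = np.repeat([0, 1], [70, 30])
scores = np.concatenate([rng.normal(1.0, 1, 70), rng.normal(2.5, 1, 30)])
print(round(auc(scores, labels), 3))
```

An AUC near 0.85, like the MUST figures quoted, means a randomly chosen case outscores a randomly chosen non-case about 85% of the time.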
NASA Astrophysics Data System (ADS)
Reynders, Edwin P. B.; Langley, Robin S.
2018-08-01
The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.
NASA Astrophysics Data System (ADS)
Battistella, C.; Robinson, D.; McQuarrie, N.; Ghoshal, S.
2017-12-01
Multiple valid balanced cross sections can be produced from mapped surface and subsurface data. By integrating low-temperature thermochronologic data, we are better able to predict subsurface geometries. Existing valid balanced cross sections for far western Nepal are few (Robinson et al., 2006) and do not incorporate thermochronologic data because the data did not exist. The data published along the Simikot cross section along the Karnali River since then include muscovite Ar, zircon U-Th/He and apatite fission track. We present new mapping and a new valid balanced cross section that takes into account the new field data as well as the limitations that thermochronologic data place on the kinematics of the cross section. Additional constraints include some new geomorphology data acquired since 2006 that indicate areas of increased vertical uplift, which mark the locations of buried ramps in the Main Himalayan thrust and guide the placement of Lesser Himalayan ramps in the balanced cross section. Future work will include flexural modeling, new low-temperature thermochronometric data, and 2-D thermokinematic models from sequentially forward modeled balanced cross sections in far western Nepal.
Darwish, Hany W; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A
2013-09-01
Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLSs). The proposed methods were applied successfully in the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods is investigated in the range of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.
Indicators of Ecological Change
2005-03-01
[Species trait table fragment: e.g., Gymnopogon ambiguus (Graminae, cryptophyte/geophyte grass, beard grass); Haplopappus divaricatus (Asteraceae).] The cross-validation analysis determines the percentage of observations correctly classified.
How to determine an optimal threshold to classify real-time crash-prone traffic conditions?
Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang
2018-08-01
One of the proactive approaches to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: it provides the cut-off point applied to the posterior probability of a crash occurring given a specific traffic condition (the output of a crash risk evaluation model), separating potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to determine an optimal threshold effectively; the few studies that touch on it chose thresholds subjectively when discussing the predictive performance of their models. Subjective methods cannot automatically identify the optimal thresholds under different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold value is needed to avoid subjective judgments. The purpose of this study is to provide such a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across all roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate crash risk. Cross-entropy, between-class variance and other theories were investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model obtains good performance, and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria.
This method is well suited to identifying thresholds automatically in crash prediction: it minimizes the cross-entropy between the original dataset, with its continuous probabilities of a crash occurring, and the binarized dataset obtained by applying the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
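A sketch of minimum cross-entropy threshold selection in the spirit described, Li's criterion applied to predicted crash probabilities, might look like this (illustrative Python; the bimodal synthetic probabilities stand in for model output, and this is a generic formulation, not the authors' exact implementation):

```python
import numpy as np

def min_cross_entropy_threshold(p, candidates=None):
    """Li's minimum cross-entropy criterion: choose the cut-off t whose
    two-class summary (below-threshold mean mu0, above-threshold mean mu1)
    is closest, in cross-entropy, to the original continuous values p.
    Minimizing -sum(lo)*log(mu0) - sum(hi)*log(mu1) is equivalent to
    minimizing KL(p || binarized summary), since sum(p*log p) is constant."""
    p = np.asarray(p, dtype=float)
    if candidates is None:
        candidates = np.unique(p)[1:-1]
    best_t, best_ce = None, np.inf
    for t in candidates:
        lo, hi = p[p < t], p[p >= t]
        if len(lo) == 0 or len(hi) == 0:
            continue
        mu0, mu1 = lo.mean(), hi.mean()
        ce = -(lo.sum() * np.log(mu0) + hi.sum() * np.log(mu1))
        if ce < best_ce:
            best_t, best_ce = t, ce
    return best_t

# Synthetic posterior crash probabilities: mostly low-risk, a high-risk tail.
rng = np.random.default_rng(4)
p = np.concatenate([rng.uniform(0.01, 0.1, 80), rng.uniform(0.4, 0.8, 20)])
thr = min_cross_entropy_threshold(p)
print(round(thr, 3))
```

On clearly bimodal probabilities the criterion lands at the boundary between the low-risk and high-risk clusters, which is the behavior a data-driven replacement for a subjective cut-off should have.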
Precision Efficacy Analysis for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.
When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross- validity approach to select sample sizes…
An Efficient Method for Classifying Perfectionists
ERIC Educational Resources Information Center
Rice, Kenneth G.; Ashby, Jeffrey S.
2007-01-01
Multiple samples of university students (N = 1,537) completed the Almost Perfect Scale-Revised (APS-R; R. B. Slaney, M. Mobley, J. Trippi, J. Ashby, & D. G. Johnson, 1996). Cluster analyses, cross-validated discriminant function analyses, and receiver operating characteristic curves for sensitivity and specificity of APS-R scores were used to…
Multivariate pattern analysis for MEG: A comparison of dissimilarity measures.
Guggenmos, Matthias; Sterzer, Philipp; Cichy, Radoslaw Martin
2018-06-01
Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis - LDA, Support Vector Machine - SVM, Weighted Robust Distance - WeiRD, Gaussian Naïve Bayes - GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. 
Overall, we strongly advise the use of multivariate noise normalisation as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA. Copyright © 2018 Elsevier Inc. All rights reserved.
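The cross-validated Euclidean distance recommended above has a compact form: the pattern difference estimated from one data partition is multiplied by the same difference estimated from an independent partition. A sketch (illustrative Python with synthetic activation patterns, not MEG data):

```python
import numpy as np

def cv_euclidean_sq(a1, b1, a2, b2):
    """Cross-validated squared Euclidean distance between condition patterns
    a and b, estimated from two independent data partitions (1 and 2).
    Noise is independent across partitions, so its contribution cancels in
    expectation: the estimate is unbiased, and may legitimately dip below
    zero when the true distance is zero."""
    return float((a1 - b1) @ (a2 - b2))

rng = np.random.default_rng(5)
true_a, true_b = rng.normal(size=50), rng.normal(size=50)
noisy = lambda v: v + rng.normal(size=50)  # fresh measurement noise per call

# Different conditions: large positive distance; identical conditions: near zero
# even though every individual pattern estimate is noisy.
d_ab = cv_euclidean_sq(noisy(true_a), noisy(true_b), noisy(true_a), noisy(true_b))
d_aa = cv_euclidean_sq(noisy(true_a), noisy(true_a), noisy(true_a), noisy(true_a))
print(round(d_ab, 1), round(d_aa, 1))
```

A naive (non-cross-validated) squared distance between two noisy estimates of the same condition would be strictly positive; the cross-validated version centers on zero, which is what makes it an unbiased default for RSA.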
Varabyova, Yauheniya; Müller, Julia-Maria
2016-03-01
There has been an ongoing interest in the analysis and comparison of the efficiency of health care systems using nonparametric and parametric applications. The objective of this study was to review the current state of the literature and to synthesize the findings on health system efficiency in OECD countries. We systematically searched five electronic databases through August 2014 and identified 22 studies that analyzed the efficiency of health care production at the country level. We summarized these studies with view on their sample, methods, and utilized variables. We developed and applied a checklist of 14 items to assess the quality of the reviewed studies along four dimensions: reporting, external validity, bias, and power. Moreover, to examine the internal validity of findings we meta-analyzed the efficiency estimates reported in 35 models from ten studies. The qualitative synthesis of the literature indicated large differences in study designs and methods. The meta-analysis revealed low correlations between country rankings suggesting a lack of internal validity of the efficiency estimates. In conclusion, methodological problems of existing cross-country comparisons of the efficiency of health care systems draw into question the ability of these comparisons to provide meaningful guidance to policy-makers. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
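Internal validity assessed through correlations between country rankings, as meta-analyzed above, typically rests on Spearman's rank correlation. A minimal sketch (illustrative Python; the two efficiency rankings are invented):

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks
    (equivalent to the classic 1 - 6*sum(d^2)/(n(n^2-1)) formula when
    there are no ties)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

# Efficiency ranks of six hypothetical countries under two different models:
model1 = np.array([1, 2, 3, 4, 5, 6])
model2 = np.array([2, 1, 4, 3, 6, 5])  # adjacent swaps only
print(round(spearman_rho(model1, model2), 3))
```

Even mild reshuffling between models drops the correlation below 1; the low cross-model correlations the review reports are what undermine the internal validity of the efficiency estimates.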
Cross-Validation of Predictor Equations for Armor Crewman Performance
1980-01-01
Technical Report 447: Cross-Validation of Predictor Equations for Armor Crewman Performance. Anthony J. Maitland, Newell K. Eaton, and Janet F. Neft.
Methodological considerations when translating “burnout”
Squires, Allison; Finlayson, Catherine; Gerchow, Lauren; Cimiotti, Jeannie P.; Matthews, Anne; Schwendimann, Rene; Griffiths, Peter; Busse, Reinhard; Heinen, Maude; Brzostek, Tomasz; Moreno-Casbas, Maria Teresa; Aiken, Linda H.; Sermeus, Walter
2014-01-01
No study has systematically examined how researchers address cross-cultural adaptation of burnout. We conducted an integrative review to examine how researchers had adapted the instruments to the different contexts. We reviewed the Content Validity Indexing scores for the Maslach Burnout Inventory-Human Services Survey from the 12-country comparative nursing workforce study, RN4CAST. In the integrative review, multiple issues related to translation were found in existing studies. In the cross-cultural instrument analysis, 7 out of 22 items on the instrument received an extremely low kappa score. Investigators may need to employ more rigorous cross-cultural adaptation methods when attempting to measure burnout. PMID:25343131
Waples, Robin S
2010-07-01
Recognition of the importance of cross-validation ('any technique or instance of assessing how the results of a statistical analysis will generalize to an independent dataset'; Wiktionary, en.wiktionary.org) is one reason that the U.S. Securities and Exchange Commission requires all investment products to carry some variation of the disclaimer, 'Past performance is no guarantee of future results.' Even a cursory examination of financial behaviour, however, demonstrates that this warning is regularly ignored, even by those who understand what an independent dataset is. In the natural sciences, an analogue to predicting future returns for an investment strategy is predicting the power of a particular algorithm to perform with new data. Once again, the key to developing an unbiased assessment of future performance is through testing with independent data--that is, data that were in no way involved in developing the method in the first place. A 'gold-standard' approach to cross-validation is to divide the data into two parts, one used to develop the algorithm, the other used to test its performance. Because this approach substantially reduces the sample size that can be used in constructing the algorithm, researchers often try other variations of cross-validation to accomplish the same ends. As illustrated by Anderson in this issue of Molecular Ecology Resources, however, not all attempts at cross-validation produce the desired result. Anderson used simulated data to evaluate performance of several software programs designed to identify subsets of loci that can be effective for assigning individuals to population of origin based on multilocus genetic data. Such programs are likely to become increasingly popular as researchers seek ways to streamline routine analyses by focusing on small sets of loci that contain most of the desired signal.
Anderson found that although some of the programs made an attempt at cross-validation, all failed to meet the 'gold standard' of using truly independent data and therefore produced overly optimistic assessments of the power of the selected set of loci--a phenomenon known as 'high-grading bias.'
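The high-grading bias described here is easy to demonstrate: select loci and then test on the same data, and accuracy is inflated even when no signal exists at all. A sketch (illustrative Python; the genotypes, selection rule, and classifier are toy stand-ins, not the evaluated software):

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_loci = 200, 500
geno = rng.integers(0, 3, size=(n, n_loci)).astype(float)  # toy 0/1/2 genotypes
pop = np.repeat([0, 1], n // 2)  # two "populations" with NO real genetic signal

def top_loci_score(train_idx, test_idx, k=10):
    """Pick the k loci with the largest between-population mean difference
    in the training set, then score a naive mean-difference classifier
    on the test set."""
    g_tr, g_te = geno[train_idx], geno[test_idx]
    p_tr, p_te = pop[train_idx], pop[test_idx]
    diff = g_tr[p_tr == 1].mean(axis=0) - g_tr[p_tr == 0].mean(axis=0)
    loci = np.argsort(np.abs(diff))[-k:]
    w = np.sign(diff[loci])
    score = g_te[:, loci] @ w
    pred = (score > score.mean()).astype(int)
    return np.mean(pred == p_te)

idx = rng.permutation(n)
half = n // 2
biased = top_loci_score(idx, idx)                # select AND test on the same data
honest = top_loci_score(idx[:half], idx[half:])  # gold standard: independent test half
print(round(biased, 2), round(honest, 2))
```

Because the labels carry no information, the honest estimate hovers near chance while the biased one climbs well above it: the selection step has already memorized the noise.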
Cross-Validation of the Africentrism Scale.
ERIC Educational Resources Information Center
Kwate, Naa Oyo A.
2003-01-01
Cross-validated the Africentrism Scale, investigating the relationship between Africentrism and demographic variables in a diverse sample of individuals of African descent. Results indicated that the scale demonstrated solid internal consistency and convergent validity. Age and education related to Africentrism, with younger and less educated…
Benchmark of Machine Learning Methods for Classification of a SENTINEL-2 Image
NASA Astrophysics Data System (ADS)
Pirotti, F.; Sunar, F.; Piragnolo, M.
2016-06-01
Thanks to mainly ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue since land cover of a specific class may present a large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi layered perceptron, multi layered perceptron ensemble, ctree, boosting, logarithmic regression. The validation is carried out using a control dataset which consists of an independent classification in 11 land-cover classes of an area about 60 km2, obtained by manual visual interpretation of high resolution images (20 cm ground sampling distance) by experts. In this study five out of the eleven classes are used since the others have too few samples (pixels) for testing and validating subsets. The classes used are the following: (i) urban (ii) sowable areas (iii) water (iv) tree plantations (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold) and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and control values over three sets of data: the training dataset (train), the whole control dataset (full) and with k-fold cross-validation (kfold) with ten folds. 
Results from validation of predictions on the whole dataset (full) show the random forests method with the highest values; the kappa index ranges from 0.55 to 0.42 with the most and the fewest training pixels, respectively. The two neural networks (multi layered perceptron and its ensemble) and the support vector machines - with default radial basis function kernel - methods follow closely with comparable performance.
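The kappa index used for accuracy assessment above corrects raw agreement for the chance agreement implied by the confusion-matrix margins. A minimal computation (illustrative Python on a toy 3-class confusion, not the Sentinel-2 results):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    classes = np.unique(np.concatenate([y_true, y_pred]))
    k = len(classes)
    cm = np.zeros((k, k))
    for t, p in zip(y_true, y_pred):
        cm[np.searchsorted(classes, t), np.searchsorted(classes, p)] += 1
    n = cm.sum()
    po = np.trace(cm) / n                  # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n ** 2  # chance agreement from the margins
    return (po - pe) / (1 - pe)

# Toy reference (true) vs predicted land-cover labels for 8 pixels:
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 0])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 0])
print(round(cohens_kappa(y_true, y_pred), 3))
```

Kappa of 1 means perfect agreement and 0 means chance-level agreement, which is why values like 0.42-0.55 are reported alongside plain accuracy.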
Cross-Link Guided Molecular Modeling with ROSETTA
Leitner, Alexander; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars
2013-01-01
Chemical cross-links identified by mass spectrometry generate distance restraints that reveal low-resolution structural information on proteins and protein complexes. The technology to reliably generate such data has become mature and robust enough to shift the focus to the question of how these distance restraints can be best integrated into molecular modeling calculations. Here, we introduce three workflows for incorporating distance restraints generated by chemical cross-linking and mass spectrometry into ROSETTA protocols for comparative and de novo modeling and protein-protein docking. We demonstrate that the cross-link validation and visualization software Xwalk facilitates successful cross-link data integration. Besides the protocols we introduce XLdb, a database of chemical cross-links from 14 different publications with 506 intra-protein and 62 inter-protein cross-links, where each cross-link can be mapped on an experimental structure from the Protein Data Bank. Finally, we demonstrate on a protein-protein docking reference data set the impact of virtual cross-links on protein docking calculations and show that an inter-protein cross-link can reduce on average the RMSD of a docking prediction by 5.0 Å. The methods and results presented here provide guidelines for the effective integration of chemical cross-link data in molecular modeling calculations and should advance the structural analysis of particularly large and transient protein complexes via hybrid structural biology methods. PMID:24069194
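Two of the quantities above, the RMSD of a docking prediction and whether a model satisfies a cross-link distance restraint, are simple to compute once coordinates are in hand (illustrative Python; the coordinates are toy values, and the 30 Å maximum span is an assumed bound for a DSS-like linker, not a value from the paper):

```python
import numpy as np

def rmsd(P, Q):
    """Root-mean-square deviation between two (n_atoms x 3) coordinate sets,
    assumed already superposed (no fitting or rotation performed here)."""
    return float(np.sqrt(np.mean(np.sum((P - Q) ** 2, axis=1))))

def satisfies_crosslink(coords, i, j, max_dist=30.0):
    """Check one cross-link restraint: residues i and j must lie within the
    linker's assumed maximum span (max_dist, in the model's distance units)."""
    return bool(np.linalg.norm(coords[i] - coords[j]) <= max_dist)

ref = np.zeros((4, 3))
model = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 1.0, 1.0]])
print(round(rmsd(model, ref), 3))  # sqrt((1+1+1+3)/4)
print(satisfies_crosslink(model, 0, 3))
```

In a restraint-guided docking run, candidate poses violating cross-link restraints are penalized or discarded, which is how a single inter-protein cross-link can pull the average RMSD down as the abstract reports.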
The composite dynamic method as evidence for age-specific waterfowl mortality
Burnham, Kenneth P.; Anderson, David R.
1979-01-01
For the past 25 years estimation of mortality rates for waterfowl has been based almost entirely on the composite dynamic life table. We examined the specific assumptions for this method and derived a valid goodness of fit test. We performed this test on 45 data sets representing a cross section of banded samples for various waterfowl species, geographic areas, banding periods, and age/sex classes. We found that: (1) the composite dynamic method was rejected (P < 0.001) in 37 of the 45 data sets (in fact, 29 were rejected at P < 0.00001) and (2) recovery and harvest rates are year-specific (a critical violation of the necessary assumptions). We conclude that the restrictive assumptions required for the composite dynamic method to produce valid estimates of mortality rates are not met in waterfowl data. We also demonstrate that even when the required assumptions are met, the method produces very biased estimates of age-specific mortality rates. We believe the composite dynamic method should not be used in the analysis of waterfowl banding data. Furthermore, the composite dynamic method does not provide valid evidence for age-specific mortality rates in waterfowl.
Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).
Thatcher, R W; North, D; Biver, C
2005-01-01
This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two-tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution in the range of 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, with a range over the 2,394 gray matter pixels of 27.66% to 0.11%. At P < .01, parametric Z score cross-validation false positives averaged 0.26% and ranged from 6.65% to 0%. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives.
The nonparametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives over the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved with the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
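The leave-one-out scheme described in this abstract can be sketched in Python; the data, pixel count, and Z threshold below are illustrative stand-ins, not the Key Institute's actual pipeline:

```python
import numpy as np

def loo_false_positive_rate(data, z_crit=1.96):
    """Leave-one-out cross-validation: withdraw each normal subject,
    build the normative mean/SD from the remaining subjects, and flag
    the withdrawn subject as abnormal if any pixel's |Z| exceeds
    z_crit (P < .05, two-tailed)."""
    n = len(data)
    flagged = 0
    for i in range(n):
        rest = np.delete(data, i, axis=0)
        mu = rest.mean(axis=0)
        sd = rest.std(axis=0, ddof=1)
        z = (data[i] - mu) / sd
        if np.any(np.abs(z) > z_crit):
            flagged += 1
    return flagged / n

rng = np.random.default_rng(0)
normals = rng.normal(size=(43, 5))  # 43 subjects x 5 hypothetical pixels
rate = loo_false_positive_rate(normals)
```

Since every subject is classified against statistics built without that subject, the flagged fraction is an honest estimate of the false positive rate under the chosen threshold.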
Sierakowska, Matylda; Sierakowski, Stanisław; Sierakowska, Justyna; Horton, Mike; Ndosi, Mwidimi
2015-03-01
To undertake cross-cultural adaptation and validation of the educational needs assessment tool (ENAT) for use with people with rheumatoid arthritis (RA) and systemic sclerosis (SSc) in Poland. The study involved two main phases: (1) cross-cultural adaptation of the ENAT from English into Polish and (2) cross-cultural validation of the Polish Educational Needs Assessment Tool (Pol-ENAT). The first phase followed an established process of cross-cultural adaptation of self-report measures. The second phase involved completion of the Pol-ENAT by patients and subjecting the data to Rasch analysis to assess the construct validity, unidimensionality, internal consistency and cross-cultural invariance. An adequate conceptual equivalence was achieved following the adaptation process. The dataset for validation comprised a total of 278 patients, 237 (85.3%) of whom were female. In each disease group (145 RA and 133 SSc), the 7 domains of the Pol-ENAT were found to fit the Rasch model, χ²(df) = 16.953(14), p = 0.259 and 8.132(14), p = 0.882 for RA and SSc, respectively. Internal consistency of the Pol-ENAT was high (patient separation index = 0.85 and 0.89 for SSc and RA, respectively), and unidimensionality was confirmed. Cross-cultural differential item functioning (DIF) was detected in some subscales, and DIF-adjusted conversion tables were calibrated to enable cross-cultural comparison of data between Poland and the UK. Using a standard process of cross-cultural adaptation, conceptual equivalence was achieved between the original (UK) ENAT and the adapted Pol-ENAT. Fit to the Rasch model confirmed that the construct validity, unidimensionality and internal consistency of the ENAT have been preserved.
Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models
Rice, John D.; Taylor, Jeremy M. G.
2016-01-01
One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this threshold is given a priori, it is sensible to incorporate it into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
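A one-step version of this idea can be sketched as follows. The Gaussian kernel form, the bandwidth, and the use of `sample_weight` in a second maximum-likelihood fit are illustrative simplifications of the paper's iterative locally weighted score equations:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def threshold_weighted_logistic(X, y, threshold=0.3, bandwidth=0.2):
    """One-step sketch: fit an ordinary MLE logistic model, then refit
    with Gaussian kernel weights centred at the classification threshold,
    so observations whose fitted probability lies near the threshold
    dominate the estimation."""
    base = LogisticRegression(max_iter=1000).fit(X, y)
    p_hat = base.predict_proba(X)[:, 1]
    w = np.exp(-0.5 * ((p_hat - threshold) / bandwidth) ** 2)
    return LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = threshold_weighted_logistic(X, y)
high_risk = model.predict_proba(X)[:, 1] >= 0.3  # classify at the threshold
```

In the paper the bandwidth would itself be chosen by cross-validating a hybrid loss; here it is fixed for brevity.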
Internal Consistency, Retest Reliability, and their Implications For Personality Scale Validity
McCrae, Robert R.; Kurtz, John E.; Yamagata, Shinji; Terracciano, Antonio
2010-01-01
We examined data (N = 34,108) on the differential reliability and validity of facet scales from the NEO Inventories. We evaluated the extent to which (a) psychometric properties of facet scales are generalizable across ages, cultures, and methods of measurement; and (b) validity criteria are associated with different forms of reliability. Composite estimates of facet scale stability, heritability, and cross-observer validity were broadly generalizable. Two estimates of retest reliability were independent predictors of the three validity criteria; none of three estimates of internal consistency was. Available evidence suggests the same pattern of results for other personality inventories. Internal consistency of scales can be useful as a check on data quality, but appears to be of limited utility for evaluating the potential validity of developed scales, and it should not be used as a substitute for retest reliability. Further research on the nature and determinants of retest reliability is needed. PMID:20435807
Jackson, Lauren S; Al-Taher, Fadwa M; Moorman, Mark; DeVries, Jonathan W; Tippett, Roger; Swanson, Katherine M J; Fu, Tong-Jen; Salter, Robert; Dunaif, George; Estes, Susan; Albillos, Silvia; Gendel, Steven M
2008-02-01
Food allergies affect an estimated 10 to 12 million people in the United States. Some of these individuals can develop life-threatening allergic reactions when exposed to allergenic proteins. At present, the only successful method to manage food allergies is to avoid foods containing allergens. Consumers with food allergies rely on food labels to disclose the presence of allergenic ingredients. However, undeclared allergens can be inadvertently introduced into a food via cross-contact during manufacturing. Although allergen removal through cleaning of shared equipment or processing lines has been identified as one of the critical points for effective allergen control, there is little published information on the effectiveness of cleaning procedures for removing allergenic materials from processing equipment. There also is no consensus on how to validate or verify the efficacy of cleaning procedures. The objectives of this review were (i) to study the incidence and cause of allergen cross-contact, (ii) to assess the science upon which the cleaning of food contact surfaces is based, (iii) to identify best practices for cleaning allergenic foods from food contact surfaces in wet and dry manufacturing environments, and (iv) to present best practices for validating and verifying the efficacy of allergen cleaning protocols.
Sharma, Ram C; Hara, Keitarou; Hirayama, Hidetake
2017-01-01
This paper presents the performance and evaluation of a number of machine learning classifiers for the discrimination between vegetation physiognomic classes using satellite-based time-series of surface reflectance data. Six vegetation physiognomic classes were considered: Evergreen Coniferous Forest, Evergreen Broadleaf Forest, Deciduous Coniferous Forest, Deciduous Broadleaf Forest, Shrubs, and Herbs. Rich-feature data were prepared from time-series of the satellite data for the discrimination and cross-validation of the vegetation physiognomic types using a machine learning approach. A set of machine learning experiments comprising a number of supervised classifiers with different model parameters was conducted to assess how the discrimination of vegetation physiognomic classes varies with classifiers, input features, and ground truth data size. The performance of each experiment was evaluated using the 10-fold cross-validation method. The experiment using the Random Forests classifier provided the highest overall accuracy (0.81) and kappa coefficient (0.78). However, accuracy metrics did not vary much with experiments. Accuracy metrics were found to be very sensitive to input features and the size of ground truth data. The results obtained in the research are expected to be useful for improving vegetation physiognomic mapping in Japan.
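The evaluation loop described here can be sketched with scikit-learn; the synthetic six-class data below stand in for the satellite time-series features, and the forest settings are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score, make_scorer
from sklearn.model_selection import cross_val_score

# Six hypothetical vegetation physiognomic classes, 12 reflectance features.
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

# Score each of the 10 folds by the kappa coefficient, as in the paper.
kappa = make_scorer(cohen_kappa_score)
scores = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0),
    X, y, cv=10, scoring=kappa)
mean_kappa = scores.mean()
```

Swapping the classifier object lets the same loop compare the other supervised methods under identical folds.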
Zumpano, Camila Eugênia; Mendonça, Tânia Maria da Silva; Silva, Carlos Henrique Martins da; Correia, Helena; Arnold, Benjamin; Pinto, Rogério de Melo Costa
2017-01-23
This study aimed to perform the cross-cultural adaptation and validation of the Patient-Reported Outcomes Measurement Information System (PROMIS) Global Health scale in the Portuguese language. The ten Global Health items were cross-culturally adapted by the method proposed in the Functional Assessment of Chronic Illness Therapy (FACIT). The instrument's final version in Portuguese was self-administered by 1,010 participants in Brazil. The scale's precision was verified by floor and ceiling effects analysis, internal consistency reliability, and test-retest reliability. Exploratory and confirmatory factor analyses were used to assess the construct's validity and the instrument's dimensionality. Calibration of the items used the Graded Response Model proposed by Samejima. Four global items required adjustments after the pretest. Analysis of the psychometric properties showed that the Global Health scale has good reliability, with Cronbach's alpha of 0.83 and intra-class correlation of 0.89. Exploratory and confirmatory factor analyses showed good fit in the previously established two-dimensional model. The Global Physical Health and Global Mental Health scales showed good latent trait coverage according to the Graded Response Model. The PROMIS Global Health items showed equivalence in Portuguese compared to the original version and satisfactory psychometric properties for application in clinical practice and research in the Brazilian population.
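Internal-consistency reliability of the kind reported above (Cronbach's alpha = 0.83) can be computed directly from an item-score matrix; the responses below are simulated for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=(100, 1))                  # latent global health
items = trait + 0.5 * rng.normal(size=(100, 10))   # 10 correlated items
alpha = cronbach_alpha(items)
```

Because the simulated items share a common latent trait, alpha comes out high; uncorrelated items would drive it toward zero.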
Prediction of resource volumes at untested locations using simple local prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2006-01-01
This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
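The resampling step for the regional total can be sketched as a percentile bootstrap; the per-site predictions here are simulated stand-ins, not the Antrim Shale estimates, and the paper additionally feeds jackknife replicates into this step:

```python
import numpy as np

def bootstrap_total_ci(site_preds, n_boot=2000, alpha=0.10, seed=1):
    """Percentile-bootstrap confidence bounds for the regional total:
    resample the per-site predictions with replacement, sum each
    resample, and read off the (alpha/2, 1 - alpha/2) percentiles."""
    rng = np.random.default_rng(seed)
    totals = np.array([
        rng.choice(site_preds, size=site_preds.size, replace=True).sum()
        for _ in range(n_boot)
    ])
    return np.percentile(totals, [100 * alpha / 2, 100 * (1 - alpha / 2)])

rng = np.random.default_rng(0)
site_preds = rng.lognormal(mean=2.0, sigma=0.5, size=50)  # hypothetical volumes
lo, hi = bootstrap_total_ci(site_preds)
```

The width of the interval reflects the spread of the per-site estimates, which is exactly the information the economic analyses downstream need.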
Medin, Anine Christine; Carlsen, Monica Hauger; Andersen, Lene Frost
2016-12-01
Objective: To validate estimated intakes of carotenoid-rich foods from a web-based food recall (WebFR), using carotenoids in blood as an objective reference method. Design: Cross-sectional validation study using carotenoids in plasma to evaluate estimated intakes of selected carotenoid-rich foods. Participants recorded their food intake in the WebFR, and plasma concentrations of β-carotene, α-carotene, β-cryptoxanthin, lycopene, lutein and zeaxanthin were measured. Setting: Schools and homes of families in a suburb of the capital of Norway. Subjects: A total of 261 participants in the age groups 8-9 and 12-14 years. Results: Spearman's rank correlation coefficients ranged from 0·30 to 0·44, and cross-classification showed that 71·6-76·6 % of the participants were correctly classified when comparing the reported intakes of carotenoid-rich foods and concentrations of the corresponding carotenoids in plasma, not including lutein and zeaxanthin. Conclusions: Correlations were acceptable and cross-classification analyses demonstrated that the WebFR was able to rank participants according to their reported intake of foods rich in α-carotene, β-carotene, β-cryptoxanthin and lycopene. The WebFR is a promising tool for dietary assessment among children and adolescents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tommasi, J.; Ruggieri, J. M.; Lebrat, J. F.
The latest release (2.1) of the ERANOS code system, using JEF-2.2, JEFF-3.1 and ENDF/B-VI r8 multigroup cross-section libraries is currently being validated on fast reactor critical experiments at CEA-Cadarache (France). This paper briefly presents the library effect studies and the detailed best-estimate validation studies performed up to now as part of the validation process. The library effect studies are performed over a wide range of experimental configurations, using simple model and method options. They yield global trends about the shift from JEF-2.2 to JEFF-3.1 cross-section libraries, that can be related to individual sensitivities and cross-section changes. The more detailed, best-estimate calculations have been performed up to now over three experimental configurations carried out in the MASURCA critical facility at CEA-Cadarache: two cores with a softened spectrum due to large amounts of graphite (MAS1A' and MAS1B), and a core representative of sodium-cooled fast reactors (CIRANO ZONA2A). Calculated values have been compared to measurements, and discrepancies analyzed in detail using perturbation theory. Values calculated with JEFF-3.1 were found to be within 3 standard deviations of the measured values, and at least of the same quality as the JEF-2.2 based results. (authors)
NASA Astrophysics Data System (ADS)
Karemore, Gopal; Nielsen, Mads; Karssemeijer, Nico; Brandt, Sami S.
2014-11-01
It is well understood nowadays that changes in the mammographic parenchymal pattern are an indicator of a risk of breast cancer, and we have developed a statistical method that estimates the mammogram regions where the parenchymal changes due to breast cancer occur. This region of interest is computed from a score map by utilising the anatomical breast coordinate system developed in our previous work. The method also makes an automatic scale selection to avoid overfitting, while the region estimates are computed by a nested cross-validation scheme. In this way, it is possible to recover those mammogram regions that show a significant difference in classification scores between the cancer and the control group. Our experiments suggested that the most significant mammogram region is the region behind the nipple, which can be justified by previous findings from other research groups. This result was obtained on the basis of cross-validation experiments on independent training, validation and testing sets from a case-control study of 490 women, of which 245 women were diagnosed with breast cancer within a period of 2-4 years after the baseline mammograms. We additionally generalised the estimated region to another study, mini-MIAS, and showed that the transferred region estimate gives at least a similar classification result when compared to the case where the whole breast region is used. In all, following our method is likely to improve both preclinical and follow-up breast cancer screening, but a larger study population will be required to test this hypothesis.
Real-time sensor data validation
NASA Technical Reports Server (NTRS)
Bickmore, Timothy W.
1994-01-01
This report describes the status of an on-going effort to develop software capable of detecting sensor failures on rocket engines in real time. This software could be used in a rocket engine controller to prevent the erroneous shutdown of an engine due to sensor failures which would otherwise be interpreted as engine failures by the control software. The approach taken combines analytical redundancy with Bayesian belief networks to provide a solution which has well defined real-time characteristics and well-defined error rates. Analytical redundancy is a technique in which a sensor's value is predicted by using values from other sensors and known or empirically derived mathematical relations. A set of sensors and a set of relations among them form a network of cross-checks which can be used to periodically validate all of the sensors in the network. Bayesian belief networks provide a method of determining if each of the sensors in the network is valid, given the results of the cross-checks. This approach has been successfully demonstrated on the Technology Test Bed Engine at the NASA Marshall Space Flight Center. Current efforts are focused on extending the system to provide a validation capability for 100 sensors on the Space Shuttle Main Engine.
Tipton, John; Hooten, Mevin B.; Goring, Simon
2017-01-01
Scientific records of temperature and precipitation have been kept for several hundred years, but for many areas, only a shorter record exists. To understand climate change, there is a need for rigorous statistical reconstructions of the paleoclimate using proxy data. Paleoclimate proxy data are often sparse, noisy, indirect measurements of the climate process of interest, making each proxy uniquely challenging to model statistically. We reconstruct spatially explicit temperature surfaces from sparse and noisy measurements recorded at historical United States military forts and other observer stations from 1820 to 1894. One common method for reconstructing the paleoclimate from proxy data is principal component regression (PCR). With PCR, one learns a statistical relationship between the paleoclimate proxy data and a set of climate observations that are used as patterns for potential reconstruction scenarios. We explore PCR in a Bayesian hierarchical framework, extending classical PCR in a variety of ways. First, we model the latent principal components probabilistically, accounting for measurement error in the observational data. Next, we extend our method to better accommodate outliers that occur in the proxy data. Finally, we explore alternatives to the truncation of lower-order principal components using different regularization techniques. One fundamental challenge in paleoclimate reconstruction efforts is the lack of out-of-sample data for predictive validation. Cross-validation is of potential value, but is computationally expensive and potentially sensitive to outliers in sparse data scenarios. To overcome the limitations that a lack of out-of-sample records presents, we test our methods using a simulation study, applying proper scoring rules including a computationally efficient approximation to leave-one-out cross-validation using the log score to validate model performance. 
The result of our analysis is a spatially explicit reconstruction of spatio-temporal temperature from a very sparse historical record.
Sayed, Abdul-Rauf; le Grange, Cynthia; Bhagwan, Susheela; Manga, Nayna
2015-01-01
Background: Measuring primary care is important for health sector reform. The Primary Care Assessment Tool (PCAT) measures performance of elements essential for cost-effective care. Following minor adaptations prior to use in Cape Town in 2011, a few findings indicated a need to improve the content and cross-cultural validity for wider use in South Africa (SA). Aim: This study aimed to validate the United States of America-developed PCAT before it was used in a baseline measure of primary care performance prior to major reform. Setting: Public sector primary care clinics, users, practitioners and managers in urban and rural districts in the Western Cape Province. Methods: Face value evaluation of item phrasing and a combination of Delphi and Nominal Group Technique (NGT) methods with an expert panel and user focus group were used to obtain consensus on content relevant to SA. Original and new domains and items with ≥ 70% agreement were included in the South African version – ZA PCAT. Results: All original PCAT domains achieved consensus on inclusion. One new domain, the primary healthcare (PHC) team, was added. Three of 95 original items achieved < 70% agreement, that is, consensus to exclude them as not relevant to SA; 19 new items were added. A few items needed minor rephrasing with local healthcare jargon. The demographic section was adapted to local socio-economic conditions. The adult PCAT was translated into isiXhosa and Afrikaans. Conclusion: The PCAT is a valid measure of primary care performance in SA. The PHC team domain is an important addition, given its emphasis in PHC re-engineering. A combination of Delphi and NGT methods succeeded in obtaining consensus on a multi-domain, multi-item instrument in a resource-constrained environment. PMID:26245610
Breast cancer detection via Hu moment invariant and feedforward neural network
NASA Astrophysics Data System (ADS)
Zhang, Xiaowei; Yang, Jiquan; Nguyen, Elijah
2018-04-01
One in eight women will develop breast cancer during her lifetime. This study used Hu moment invariants and a feedforward neural network to diagnose breast cancer. With the help of K-fold cross validation, we can test the out-of-sample accuracy of our method. Finally, we found that our method can improve the accuracy of detecting breast cancer and reduce the difficulty of diagnosis.
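The first two Hu invariants can be computed from normalised central moments with NumPy; this small demonstration (the shapes and sizes are arbitrary) checks translation invariance, the property that makes Hu moments useful as image features:

```python
import numpy as np

def hu_first_two(img):
    """First two Hu moment invariants of a 2-D grayscale image,
    built from translation- and scale-normalised central moments."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):  # normalised central moment eta_pq
        mu_pq = (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
        return mu_pq / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

img = np.zeros((32, 32))
img[8:16, 10:20] = 1.0                       # a bright rectangle
shifted = np.roll(img, (5, 3), axis=(0, 1))  # same shape, translated
```

In the paper's pipeline, invariants like these would be the input features of the feedforward network, evaluated under K-fold cross-validation.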
ERIC Educational Resources Information Center
Benítez, Isabel; Padilla, José-Luis
2014-01-01
Differential item functioning (DIF) can undermine the validity of cross-lingual comparisons. While a lot of efficient statistics for detecting DIF are available, few general findings have been found to explain DIF results. The objective of the article was to study DIF sources by using a mixed method design. The design involves a quantitative phase…
XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-08-01
XDGMM uses Gaussian mixtures to do density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.
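The pattern of extending scikit-learn's BaseEstimator so that cross-validation utilities work is sketched below with a toy density model; the wrapper class and the data are illustrative, not XDGMM's actual API:

```python
import numpy as np
from sklearn.base import BaseEstimator
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import cross_val_score

class GMMDensity(BaseEstimator):
    """Minimal density estimator: because it subclasses BaseEstimator and
    exposes fit/score, scikit-learn tools such as cross_val_score can
    clone it, fit it on each training fold, and score the held-out fold."""
    def __init__(self, n_components=2):
        self.n_components = n_components

    def fit(self, X, y=None):
        self.gmm_ = GaussianMixture(n_components=self.n_components,
                                    random_state=0).fit(X)
        return self

    def score(self, X, y=None):
        return self.gmm_.score(X)  # mean per-sample log-likelihood

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
scores = cross_val_score(GMMDensity(n_components=2), X, cv=5)
```

Cross-validating the held-out log-likelihood in this way is a standard route to choosing the number of mixture components.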
Using deep learning for detecting gender in adult chest radiographs
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2018-03-01
In this paper, we present a method for automatically identifying the gender of an imaged person using their frontal chest x-ray images. Our work is motivated by the need to determine missing gender information in some datasets. The proposed method employs convolutional neural network (CNN) based deep learning and transfer learning to overcome the challenge of developing handcrafted features from limited data. Specifically, the method consists of four main steps: pre-processing, CNN feature extraction, feature selection, and classification. The method is tested on a combined dataset obtained from several sources with varying acquisition quality, with different pre-processing steps applied for each. For feature extraction, we tested and compared four CNN architectures, viz., AlexNet, VggNet, GoogLeNet, and ResNet. We applied a feature selection technique, since the feature length is larger than the number of images. Two popular classifiers, SVM and Random Forest, are used and compared. We evaluated the classification performance by cross-validation and used seven performance measures. The best performer is the VggNet-16 feature extractor with the SVM classifier, with an accuracy of 86.6% and an ROC area of 0.932 for 5-fold cross validation. We also discuss several misclassified cases and describe future work for performance improvement.
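The final stages of such a pipeline (feature selection followed by an SVM, evaluated by cross-validation) can be sketched as below; random vectors stand in for the CNN-extracted features, and the selector and kernel choices are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in for CNN features: more features than images, as in the paper,
# which is why a feature selection step precedes the classifier.
X, y = make_classification(n_samples=150, n_features=512, n_informative=20,
                           random_state=0)

clf = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
```

Putting the selector inside the pipeline matters: it is refit on each training fold, so the held-out fold never leaks into feature selection.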
Zhou, Jiyun; Lu, Qin; Xu, Ruifeng; He, Yulan; Wang, Hongpeng
2017-08-29
Prediction of DNA-binding residues is important for understanding the protein-DNA recognition mechanism. Many computational methods have been proposed for this prediction, but most of them do not consider the relationships of evolutionary information between residues. In this paper, we first propose a novel residue encoding method, referred to as the Position Specific Score Matrix (PSSM) Relation Transformation (PSSM-RT), to encode residues by utilizing the relationships of evolutionary information between residues. PDNA-62 and PDNA-224 are used to evaluate PSSM-RT and two existing PSSM encoding methods by five-fold cross-validation. Performance evaluations indicate that PSSM-RT is more effective than previous methods, validating the point that the relationship of evolutionary information between residues is indeed useful in DNA-binding residue prediction. An ensemble learning classifier (EL_PSSM-RT) is also proposed by combining an ensemble learning model and PSSM-RT to better handle the imbalance between binding and non-binding residues in datasets. EL_PSSM-RT is evaluated by five-fold cross-validation using PDNA-62 and PDNA-224 as well as two independent datasets, TS-72 and TS-61. Performance comparisons with existing predictors on the four datasets demonstrate that EL_PSSM-RT is the best-performing method, with improvements of 0.02-0.07 in MCC, 4.18-21.47% in ST and 0.013-0.131 in AUC. Furthermore, we analyze the importance of the pair-relationships extracted by PSSM-RT, and the results validate the usefulness of PSSM-RT for encoding DNA-binding residues. In summary, we propose a novel prediction method for DNA-binding residues that includes the relationship of evolutionary information and ensemble learning.
Performance evaluation shows that the relationship of evolutionary information between residues is indeed useful in DNA-binding residue prediction, and that ensemble learning can be used to address the data imbalance between binding and non-binding residues. A web service of EL_PSSM-RT (http://hlt.hitsz.edu.cn:8080/PSSM-RT_SVM/) is provided for free access by the biological research community.
RRegrs: an R package for computer-aided model selection with multiple regression models.
Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L
2015-01-01
Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best model criteria, as they all affect the accuracy and efficiency of the produced predictive models, thereby raising model reproducibility and comparison issues. Cheminformatics and bioinformatics make extensive use of predictive modelling and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tests several simple and complex regression models and validation schemes, produces unified reports, and offers the option to be integrated into more extensive studies. Additionally, such methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields.
Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxides descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance, as well as its adaptability in terms of parameter optimization, could make RRegrs a popular framework to assist the initial exploration of predictive models, and with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for R multiple regression models; it is a fully validated procedure with application to QSAR modelling.
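One of the validation schemes RRegrs offers, repeated 10-fold cross-validation of a regression model, can be sketched in Python (RRegrs itself is an R package; the Lasso settings and the synthetic data here are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=120, n_features=10, noise=5.0, random_state=0)

# Repeat the 10-fold split three times with different shuffles, so the
# reported performance is averaged over 30 held-out folds.
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=0)
scores = cross_val_score(Lasso(alpha=1.0), X, y, cv=cv, scoring="r2")
summary = (scores.mean(), scores.std())
```

Repetition reduces the dependence of the estimate on any single random fold assignment, which is one of the reproducibility concerns the abstract raises.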
Identifying Wrist Fracture Patients with High Accuracy by Automatic Categorization of X-ray Reports
de Bruijn, Berry; Cranney, Ann; O’Donnell, Siobhan; Martin, Joel D.; Forster, Alan J.
2006-01-01
The authors performed this study to determine the accuracy of several text classification methods to categorize wrist x-ray reports. We randomly sampled 751 textual wrist x-ray reports. Two expert reviewers rated the presence (n = 301) or absence (n = 450) of an acute fracture of wrist. We developed two information retrieval (IR) text classification methods and a machine learning method using a support vector machine (TC-1). In cross-validation on the derivation set (n = 493), TC-1 outperformed the two IR based methods and six benchmark classifiers, including Naive Bayes and a Neural Network. In the validation set (n = 258), TC-1 demonstrated consistent performance with 93.8% accuracy; 95.5% sensitivity; 92.9% specificity; and 87.5% positive predictive value. TC-1 was easy to implement and superior in performance to the other classification methods. PMID:16929046
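A minimal Python sketch of the TC-1 idea, a support-vector text classifier scored by cross-validation; the toy "reports" and labels below are invented, not the study's 751 radiology reports:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Invented mini-corpus of wrist x-ray report snippets (illustrative only)
reports = [
    "acute fracture of the distal radius with dorsal angulation",
    "transverse fracture through the ulnar styloid, acute",
    "comminuted intraarticular fracture of the distal radius",
    "acute scaphoid fracture suspected on lateral view",
    "no acute fracture identified, soft tissue swelling only",
    "normal alignment, no fracture or dislocation seen",
    "degenerative changes, no evidence of acute fracture",
    "unremarkable wrist series, no bony abnormality",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = acute fracture present

# TF-IDF features feeding a linear SVM, evaluated by cross-validation
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
scores = cross_val_score(clf, reports, labels, cv=4)
```

On real report corpora, the same pipeline would be trained on a derivation set and then evaluated once on a held-out validation set, as in the study.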
Rudolph, Abby E; Bazzi, Angela Robertson; Fish, Sue
2016-10-01
Analyses with geographic data can be used to identify "hot spots" and "health service deserts", examine associations between proximity to services and their use, and link contextual factors with individual-level data to better understand how environmental factors influence behaviors. Technological advancements in methods for collecting this information can improve the accuracy of contextually-relevant information; however, they have outpaced the development of ethical standards and guidance, particularly for research involving populations engaging in illicit/stigmatized behaviors. Thematic analysis identified ethical considerations for collecting geographic data using different methods and the extent to which these concerns could influence study compliance and data validity. In-depth interviews with 15 Baltimore residents (6 recruited via flyers and 9 via peer-referral) reporting recent drug use explored comfort with and ethics of three methods for collecting geographic information: (1) surveys collecting self-reported addresses/cross-streets, (2) surveys using web-based maps to find/confirm locations, and (3) geographical momentary assessments (GMA), which collect spatiotemporally referenced behavioral data. Survey methods for collecting geographic data (i.e., addresses/cross-streets and web-based maps) were generally acceptable; however, participants raised confidentiality concerns regarding exact addresses for illicit/stigmatized behaviors. Concerns specific to GMA included burden of carrying/safeguarding phones and responding to survey prompts, confidentiality, discomfort with being tracked, and noncompliance with study procedures. Overall, many felt that confidentiality concerns could influence the accuracy of location information collected for sensitive behaviors and study compliance. 
Concerns raised by participants could result in differential study participation and/or study compliance and questionable accuracy/validity of location data for sensitive behaviors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wang, Xiao-Lan; Zhan, Ting-Ting; Zhan, Xian-Cheng; Tan, Xiao-Ying; Qu, Xiao-You; Wang, Xin-Yue; Li, Cheng-Rong
2014-01-01
The osmotic pressure of ammonium sulfate solutions has been measured by well-established freezing point osmometry in dilute solutions and, as we recently reported, by air humidity osmometry over a much wider range of concentrations. Air humidity osmometry cross-validated the theoretical calculations of osmotic pressure based on the Pitzer model at high concentrations by two one-sided tests (TOST) of equivalence with multiple testing corrections, where no other experimental method could serve as a reference for comparison. Although stricter equivalence criteria were established between the measurements of freezing point osmometry and the calculations based on the Pitzer model at low concentrations, air humidity osmometry is the only currently available osmometry applicable to high concentrations and serves as an economical addition to standard osmometry.
Küçükdeveci, Ayse A; Sahin, Hülya; Ataman, Sebnem; Griffiths, Bridget; Tennant, Alan
2004-02-15
Guidelines have been established for cross-cultural adaptation of outcome measures. However, invariance across cultures must also be demonstrated through analysis of Differential Item Functioning (DIF). This is tested in the context of a Turkish adaptation of the Health Assessment Questionnaire (HAQ). Internal construct validity of the adapted HAQ is assessed by Rasch analysis; reliability, by internal consistency and the intraclass correlation coefficient; external construct validity, by association with impairments and American College of Rheumatology functional stages. Cross-cultural validity is tested through DIF by comparison with data from the UK version of the HAQ. The adapted version of the HAQ demonstrated good internal construct validity through fit of the data to the Rasch model (mean item fit 0.205; SD 0.998). Reliability was excellent (alpha = 0.97) and external construct validity was confirmed by expected associations. DIF for culture was found in only 1 item. Cross-cultural validity was found to be sufficient for use in international studies between the UK and Turkey. Future adaptation of instruments should include analysis of DIF at the field testing stage in the adaptation process.
NASA Astrophysics Data System (ADS)
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows a Chi-squared distribution. Furthermore, a posteriori noise variance factor is derived from the quadratic form of CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of CVEs and the estimated noise standard deviation are reduced by about 51% and 59%, respectively. In addition, the RMS of LSC prediction error at data points and the RMS of estimated noise of observations are decreased by 39% and 67%, respectively. However, the RMS of LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, which is a consequence of the sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, the RMS of this type of errors is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model.
The advantage of the proposed method is the reduction in estimated noise levels for the groups with fewer noisy data points.
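The core idea of obtaining the whole vector of cross-validation errors from a single fit has a classical analogue in ordinary least squares, where the leave-one-out residuals follow directly from the hat matrix as e_i / (1 - h_ii). A sketch on simulated data (OLS, not least squares collocation, so this is an analogue of the paper's direct formula, not the formula itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=n)

# Single fit: residuals and hat-matrix diagonal give all LOO errors at once
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
H = X @ np.linalg.inv(X.T @ X) @ X.T
e_cv_direct = resid / (1.0 - np.diag(H))

# Brute-force check: refit n times, each time leaving one point out
e_cv_loop = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    b_i, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    e_cv_loop[i] = y[i] - X[i] @ b_i
```

The direct vector matches the n-refit loop exactly, which is why such formulas are much faster than element-wise CVE computation.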
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curchod, Basile F. E.; Martínez, Todd J., E-mail: toddjmartinez@gmail.com; SLAC National Accelerator Laboratory, Menlo Park, California 94025
2016-03-14
Full multiple spawning is a formally exact method to describe the excited-state dynamics of molecular systems beyond the Born-Oppenheimer approximation. However, it has been limited until now to the description of radiationless transitions taking place between electronic states with the same spin multiplicity. This Communication presents a generalization of the full and ab initio multiple spawning methods to both internal conversion (mediated by nonadiabatic coupling terms) and intersystem crossing events (triggered by spin-orbit coupling matrix elements) based on a spin-diabatic representation. The results of two numerical applications, a model system and the deactivation of thioformaldehyde, validate the presented formalism and its implementation.
Curchod, Basile F. E.; Rauer, Clemens; Marquetand, Philipp; ...
2016-03-11
Full Multiple Spawning is a formally exact method to describe the excited-state dynamics of molecular systems beyond the Born-Oppenheimer approximation. However, it has been limited until now to the description of radiationless transitions taking place between electronic states with the same spin multiplicity. This Communication presents a generalization of the full and ab initio Multiple Spawning methods to both internal conversion (mediated by nonadiabatic coupling terms) and intersystem crossing events (triggered by spin-orbit coupling matrix elements) based on a spin-diabatic representation. Lastly, the results of two numerical applications, a model system and the deactivation of thioformaldehyde, validate the presented formalism and its implementation.
Model selection and assessment for multi-species occupancy models
Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.
2016-01-01
While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.
NASA Astrophysics Data System (ADS)
Cao, Guangxi; Zhang, Minjia; Li, Qingchen
2017-04-01
This study focuses on multifractal detrended cross-correlation analysis of the different volatility intervals of Mainland China, US, and Hong Kong stock markets. A volatility-constrained multifractal detrended cross-correlation analysis (VC-MF-DCCA) method is proposed to study the volatility conductivity of Mainland China, US, and Hong Kong stock markets. Empirical results indicate that fluctuation may be related to important activities in real markets. The Hang Seng Index (HSI) stock market is more influential than the Shanghai Composite Index (SCI) stock market. Furthermore, the SCI stock market is more influential than the Dow Jones Industrial Average stock market. The conductivity between the HSI and SCI stock markets is the strongest. HSI was the most influential market in the large fluctuation interval of 1991 to 2014. The autoregressive fractionally integrated moving average method is used to verify the validity of VC-MF-DCCA. Results show that VC-MF-DCCA is effective.
Gutiérrez Sánchez, Daniel; Cuesta-Vargas, Antonio I
2018-04-01
Many measurements have been developed to assess the quality of death (QoD). Among these, the Quality of Dying and Death Questionnaire (QODD) is the most widely studied and best validated. Informal carers and health professionals who care for the patient during their last days of life can complete this assessment tool. The aim of the study was to carry out a cross-cultural adaptation and a psychometric analysis of the QODD for the Spanish population. The translation was performed using a double forward and backward method. An expert panel evaluated the content validity. The questionnaire was tested in a sample of 72 Spanish-speaking adult carers of deceased cancer patients. A psychometric analysis was performed to evaluate internal consistency, divergent criterion-related validity with the Mini-Suffering State Examination (MSSE) and concurrent criterion-related validity with the Palliative Outcome Scale (POS). Some items were deleted and modified to create the Spanish version of the QODD (QODD-ESP-26). The instrument was readable and acceptable. The content validity index was 0.96, suggesting that all items are relevant for the measurement of the QoD. The questionnaire showed high internal consistency (Cronbach's α coefficient = 0.88). Divergent validity with the MSSE (r = -0.64) and convergent validity with the POS (r = -0.61) were also demonstrated. The QODD-ESP-26 is a valid and reliable instrument for the assessment of the QoD of deceased cancer patients that can be used in clinical and research settings. Copyright © 2018 Elsevier Ltd. All rights reserved.
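Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from an item-response matrix. A sketch on simulated responses (the respondent count and item structure are illustrative assumptions, not the QODD-ESP-26 data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated questionnaire: 8 items all driven by one shared latent trait,
# so the items are strongly correlated and alpha should be high
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
items = latent + 0.5 * rng.normal(size=(200, 8))
alpha = cronbach_alpha(items)
```

Items measuring unrelated constructs would instead drive `alpha` toward zero, which is why the statistic is read as evidence of internal consistency.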
Development of the Persian version of the Vertigo Symptom Scale: Validity and reliability
Kamalvand, Atefeh; Ghahraman, Mansoureh Adel; Jalaie, Shohreh
2017-01-01
Background: The Vertigo Symptom Scale (VSS) is a proper instrument for assessing patient status, clarifying the symptoms, and examining the relative impact of vertigo and anxiety on reported handicap. Our aim was the translation and cross-cultural adaptation of the VSS into the Persian language (VSS-P) and the investigation of its validity and reliability in patients with peripheral vestibular disorders. Materials and Methods: The VSS was translated into Persian. Cross-cultural adaptation was carried out on 101 patients with peripheral vestibular disorders and 34 participants with no history of vertigo. They completed the Persian versions of the VSS, the Dizziness Handicap Inventory (DHI), and the Beck Anxiety Inventory (BAI). Internal, discriminant, and convergent validities, internal consistency, and test-retest reliability were determined. Results: The VSS-P showed good face validity. Internal validity was confirmed and demonstrated the presence of two subscales: vertigo (VSS-VER) and autonomic-anxiety (VSS-AA). A significant difference between the median scores of the patient and healthy groups was found for discriminant validity (P < 0.001). Convergent validity revealed high correlations of both the BAI and the DHI with the VSS-P. There was high test-retest reliability, with intraclass correlation coefficients of 0.89, 0.86, and 0.91 for VSS-AA, VSS-VER, and VSS-P, respectively. The internal consistency was good, with Cronbach's alpha of 0.90 for the VSS-VER subscale, 0.86 for the VSS-AA subscale, and 0.92 for the overall VSS-P. Conclusion: The Persian version of the VSS can be used clinically as a valid and reliable tool. Thus, it is a key instrument to focus on the symptoms associated with dizziness. PMID:28616045
The Cross Validation of the Attitudes toward Mainstreaming Scale (ATMS).
ERIC Educational Resources Information Center
Berryman, Joan D.; Neal, W. R. Jr.
1980-01-01
Reliability and factorial validity of the Attitudes Toward Mainstreaming Scale was supported in a cross-validation study with teachers. Three factors emerged: learning capability, general mainstreaming, and traditional limiting disabilities. Factor intercorrelations varied from .42 to .55; correlations between total scores and individual factors…
Simplified Model to Predict Deflection and Natural Frequency of Steel Pole Structures
NASA Astrophysics Data System (ADS)
Balagopal, R.; Prasad Rao, N.; Rokade, R. P.
2018-04-01
Steel pole structures are a suitable alternative to transmission line towers, given the difficulty of finding land for new rights of way for the installation of new lattice towers. Steel poles have a tapered cross section and are generally used for communication, power transmission and lighting purposes. Determination of the deflection of a steel pole is important to verify its functional requirements: excessive deflection may cause signal attenuation and short-circuiting problems in communication/transmission poles. In this paper, a simplified method is proposed to determine both primary and secondary deflection based on the dummy unit load/moment method. The deflection predicted by the proposed method is validated against full-scale experimental investigations conducted on 8 m and 30 m high lighting masts and on 132 and 400 kV transmission poles, and is found to be in close agreement. Determination of the natural frequency is an important criterion for examining dynamic sensitivity. A simplified semi-empirical method using the static deflection from the proposed method is formulated to determine the natural frequency. The natural frequency predicted by the proposed method is validated against FE analysis results. Further, the predicted results are validated against experimental results available in the literature.
Huet, S; Marie, J P; Gualde, N; Robert, J
1998-12-15
Multidrug resistance (MDR) associated with overexpression of the MDR1 gene and of its product, P-glycoprotein (Pgp), plays an important role in limiting cancer treatment efficacy. Many studies have investigated Pgp expression in clinical samples of hematological malignancies but have failed to give a definitive conclusion on its usefulness. One convenient method for fluorescent detection of Pgp in malignant cells is flow cytometry, which however gives variable results from one laboratory to another, partly due to the lack of a rigorously tested reference method. The purpose of this technical note is to describe each step of a reference flow cytometric method. The guidelines for sample handling, staining and analysis have been established both for Pgp detection with monoclonal antibodies directed against extracellular epitopes (MRK16, UIC2 and 4E3), and for Pgp functional activity measurement with Rhodamine 123 as a fluorescent probe. Both methods have been validated on cultured cell lines and clinical samples by 12 laboratories of the French Drug Resistance Network. This cross-validated multicentric study points out steps crucial to the accuracy and reproducibility of the results, such as cell viability, data analysis and expression of results.
Fazio, Tatiana Tatit; Singh, Anil Kumar; Kedor-Hackmann, Erika Rosa Maria; Santoro, Maria Inês Rocha Miritello
2007-03-12
Cleaning validation is an integral part of current good manufacturing practices in any pharmaceutical industry. Nowadays, azathioprine and several other pharmacologically potent pharmaceuticals are manufactured in the same production area. A carefully designed cleaning validation and its evaluation can ensure that residues of azathioprine will not carry over and cross-contaminate the subsequent product. The aim of this study was to validate a simple analytical method for verification of residual azathioprine on equipment used in the production area and to confirm the efficiency of the cleaning procedure. The HPLC method was validated on an LC system using a Nova-Pak C18 column (3.9 mm x 150 mm, 4 microm) and methanol-water-acetic acid (20:80:1, v/v/v) as mobile phase at a flow rate of 1.0 mL min(-1). UV detection was made at 280 nm. The calibration curve was linear over a concentration range from 2.0 to 22.0 microg mL(-1) with a correlation coefficient of 0.9998. The detection limit (DL) and quantitation limit (QL) were 0.09 and 0.29 microg mL(-1), respectively. The intra-day and inter-day precision expressed as relative standard deviation (R.S.D.) were below 2.0%. The mean recovery of the method was 99.19%. The mean extraction recovery from manufacturing equipment was 83.5%. The developed UV spectrophotometric method could only be used as a limit method to qualify or reject the cleaning procedure in the production area. Nevertheless, the simplicity of the spectrophotometric method makes it useful for routine analysis of azathioprine residues on cleaned surfaces and as an alternative to the proposed HPLC method.
Lopes, F B; Wu, X-L; Li, H; Xu, J; Perkins, T; Genho, J; Ferretti, R; Tait, R G; Bauck, S; Rosa, G J M
2018-02-01
Reliable genomic prediction of breeding values for quantitative traits requires the availability of a sufficient number of animals with genotypes and phenotypes in the training set. As of 31 October 2016, there were 3,797 Brangus animals with genotypes and phenotypes. These Brangus animals were genotyped using different commercial SNP chips. Of them, the largest group consisted of 1,535 animals genotyped by the GGP-LDV4 SNP chip. The remaining 2,262 genotypes were imputed to the SNP content of the GGP-LDV4 chip, so that the number of animals available for training the genomic prediction models was more than doubled. The present study showed that the pooling of animals with either original or imputed 40K SNP genotypes substantially increased genomic prediction accuracies on the ten traits. By supplementing imputed genotypes, the relative gains in genomic prediction accuracies on estimated breeding values (EBV) were from 12.60% to 31.27%, and the relative gains in genomic prediction accuracies on de-regressed EBV were slightly smaller (i.e. 0.87%-18.75%). The present study also compared the performance of five genomic prediction models and two cross-validation methods. The five genomic models predicted EBV and de-regressed EBV of the ten traits similarly well. Of the two cross-validation methods, leave-one-out cross-validation maximized the number of animals available at the training stage for genomic prediction. Genomic prediction accuracy (GPA) on the ten quantitative traits was validated in 1,106 newly genotyped Brangus animals based on the SNP effects estimated in the previous set of 3,797 Brangus animals, and the accuracies were slightly lower than GPA in the original data. The present study was the first to leverage currently available genotype and phenotype resources in order to harness genomic prediction in Brangus beef cattle. © 2018 Blackwell Verlag GmbH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, H; Liu, T; Xu, X
Purpose: There are clinical decision challenges in selecting optimal treatment positions for left-sided breast cancer patients: supine free breathing (FB), supine Deep Inspiration Breath Hold (DIBH) and prone free breathing (prone). Physicians often make the decision based on experience and trials, which might not always result in optimal OAR doses. We herein propose a mathematical model to predict the lowest OAR doses among these three positions, providing a quantitative tool for the corresponding clinical decision. Methods: Patients were scanned in FB, DIBH, and prone positions under an IRB-approved protocol. Tangential beam plans were generated for each position, and OAR doses were calculated. The position with the least OAR doses is defined as the optimal position. The following features were extracted from each scan to build the model: heart, ipsilateral lung, and breast volume; in-field heart and ipsilateral lung volume; distance between heart and target; laterality of heart; and dose to heart and ipsilateral lung. Principal Components Analysis (PCA) was applied to remove the co-linearity of the input data and also to lower the data dimensionality. Feature selection, another method to reduce dimensionality, was applied as a comparison. A Support Vector Machine (SVM) was then used for classification. Thirty-seven patients' data were acquired; up to now, five patient plans were available. K-fold cross-validation was used to validate the accuracy of the classifier model with a small training size. Results: The classification results and K-fold cross-validation demonstrated that the model is capable of predicting the optimal position for patients. The accuracy of K-fold cross-validation reached 80%. Compared to PCA, feature selection allows the causal features of dose to be determined, which provides more clinical insight. Conclusion: The proposed classification system appeared to be feasible.
We are generating plans for the rest of the 37 patient images, and more statistically significant results are to be presented.
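The pipeline the abstract describes, PCA for dimensionality reduction feeding an SVM validated by k-fold cross-validation, can be sketched as follows; the features and labels are synthetic stand-ins for the anatomical features (volumes, distances, doses) and the three-position outcome, not the study's data:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, KFold

# 37 synthetic "patients", 9 features, 3 classes (FB / DIBH / prone)
X, y = make_classification(n_samples=37, n_features=9, n_informative=4,
                           n_classes=3, random_state=0)

# PCA removes co-linearity and lowers dimensionality before the SVM
pipe = make_pipeline(PCA(n_components=4), SVC(kernel="rbf", C=1.0))

# K-fold cross-validation to estimate accuracy with a small training size
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv)
mean_acc = scores.mean()
```

Fitting PCA inside the pipeline (rather than on all data up front) keeps the projection from leaking test-fold information into training, which matters with so few samples.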
Liang, Ja-Der; Ping, Xiao-Ou; Tseng, Yi-Ju; Huang, Guan-Tarn; Lai, Feipei; Yang, Pei-Ming
2014-12-01
Recurrence of hepatocellular carcinoma (HCC) is an important issue despite effective treatments with tumor eradication. Identification of patients who are at high risk for recurrence may provide more efficacious screening and detection of tumor recurrence. The aim of this study was to develop recurrence predictive models for HCC patients who received radiofrequency ablation (RFA) treatment. From January 2007 to December 2009, 83 newly diagnosed HCC patients receiving RFA as their first treatment were enrolled. Five feature selection methods including genetic algorithm (GA), simulated annealing (SA) algorithm, random forests (RF) and hybrid methods (GA+RF and SA+RF) were utilized for selecting an important subset of features from a total of 16 clinical features. These feature selection methods were combined with support vector machine (SVM) for developing predictive models with better performance. Five-fold cross-validation was used to train and test SVM models. The developed SVM-based predictive models with hybrid feature selection methods and 5-fold cross-validation had averages of the sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the ROC curve as 67%, 86%, 82%, 69%, 90%, and 0.69, respectively. The SVM derived predictive model can provide suggestive high-risk recurrent patients, who should be closely followed up after complete RFA treatment. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
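One of the hybrid strategies above, an importance-based feature ranking feeding an SVM scored by 5-fold cross-validation, can be sketched as follows. Only the random-forest ranking step is shown (the GA/SA searches are omitted), and the 83 "patients" with 16 features are synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for 83 patients with 16 clinical features
X, y = make_classification(n_samples=83, n_features=16, n_informative=5,
                           random_state=0)

# Rank features by random-forest importance and keep the top 6
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:6]

# Train and test the SVM on the selected subset with 5-fold cross-validation
scores = cross_val_score(SVC(), X[:, top], y, cv=5)
mean_acc = scores.mean()
```

A stricter variant would re-run the ranking inside each fold so the selection itself is also cross-validated.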
Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models
ERIC Educational Resources Information Center
Shieh, Gwowen
2009-01-01
In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…
Białek, Michał; Markiewicz, Łukasz; Sawicki, Przemysław
2015-01-01
Delayed lotteries are much more common in everyday life than pure lotteries. Usually, we need to wait to find out the outcome of a risky decision (e.g., investing in a stock market, engaging in a relationship). However, most research has studied time discounting and probability discounting in isolation, using methodologies designed specifically to track changes in one parameter. The most commonly used method is adjusting, but its reported validity and time stability in research on discounting are suboptimal. The goal of this study was to introduce a novel method for analyzing delayed lotteries, conjoint analysis, which is hypothetically more suitable for analyzing individual preferences in this area. A set of two studies compared conjoint analysis with adjusting. The results suggest that individual parameters of discounting strength estimated with conjoint analysis have higher predictive value (Studies 1 and 2) and are more stable over time (Study 2) compared to adjusting. Despite the exploratory character of the reported studies, we discuss these findings by suggesting that future research on delayed lotteries should be cross-validated using both methods.
Knowledge of the Effects of Indoor Air Quality on Health among Women in Jordan
ERIC Educational Resources Information Center
Madanat, Hala; Barnes, Michael D.; Cole, Eugene C.
2008-01-01
Objective: To assess the extent of knowledge about symptoms relating to respiratory illnesses and home environments among a random sample of 200 urban Jordanian women. Method: This customized, validated, cross-sectional questionnaire evaluated the knowledge of these women about the association between the indoor environment and health, the…
FDDS: A Cross Validation Study.
ERIC Educational Resources Information Center
Sawyer, Judy Parsons
The Family Drawing Depression Scale (FDDS) was created by Wright and McIntyre to provide a clear and reliable scoring method for the Kinetic Family Drawing as a procedure for detecting depression. A study was conducted to confirm the value of the FDDS as a systematic tool for interpreting family drawings with populations of depressed individuals.…
ERIC Educational Resources Information Center
Roberts, Simon J.; Fairclough, Stuart J.; Ridgers, Nicola D.; Porteous, Conor
2013-01-01
Objective: The purpose of the present study was to assess children's physical activity, social play behaviour, activity type and social interactions during elementary school recess using a pre-validated systematic observation system. Design: Cross-sectional. Setting: Two elementary schools located in Merseyside, England. Method: Fifty-six…
SERS quantitative urine creatinine measurement of human subject
NASA Astrophysics Data System (ADS)
Wang, Tsuei Lian; Chiang, Hui-hua K.; Lu, Hui-hsin; Hung, Yung-da
2005-03-01
The SERS method for biomolecular analysis has several potential advantages over traditional biochemical approaches, including less specimen contact, non-destructiveness to the specimen, and multiple-component analysis. Urine is an easily available body fluid for monitoring the metabolites and renal function of the human body. We developed a surface-enhanced Raman scattering (SERS) technique using 50 nm gold colloidal particles for quantitative human urine creatinine measurements. This paper shows that the SERS shifts of creatinine (104 mg/dl) in artificial urine lie between 1400 cm-1 and 1500 cm-1, a region which was analyzed for quantitative creatinine measurement. Ten human urine samples were obtained from ten healthy persons and analyzed by the SERS technique. The partial least squares cross-validation (PLSCV) method was utilized to obtain the estimated creatinine concentration in the clinically relevant (55.9 mg/dl to 208 mg/dl) concentration range. The root-mean-square error of cross-validation (RMSECV) is 26.1 mg/dl. This research demonstrates the feasibility of using SERS for human urine creatinine detection, and establishes the SERS platform technique for bodily fluid measurement.
Rosowsky, Erlene; Young, Alexander S; Malloy, Mary C; van Alphen, S P J; Ellison, James M
2018-03-01
The Delphi method is a consensus-building technique using expert opinion to formulate a shared framework for understanding a topic with limited empirical support. This cross-validation study replicates one completed in the Netherlands and Belgium, and explores US experts' views on the diagnosis and treatment of older adults with personality disorders (PD). Twenty-one geriatric PD experts participated in a Delphi survey addressing diagnosis and treatment of older adults with PD. The European survey was translated and administered electronically. First-round consensus was reached for 16 out of 18 items relevant to diagnosis and specific mental health programs for personality disorders in older adults. Experts agreed on the usefulness of establishing criteria for specific types of treatments. The majority of psychologists did not initially agree on the usefulness of pharmacotherapy. Expert consensus was reached following two subsequent rounds after clarification addressing medication use. Study results suggest consensus among US experts regarding psychosocial treatments. Limited acceptance among US psychologists of the suitability of pharmacotherapy for late-life PDs contrasted with the views expressed by experts surveyed in the Netherlands and Belgium studies.
On the accuracy of aerosol photoacoustic spectrometer calibrations using absorption by ozone
NASA Astrophysics Data System (ADS)
Davies, Nicholas W.; Cotterell, Michael I.; Fox, Cathryn; Szpek, Kate; Haywood, Jim M.; Langridge, Justin M.
2018-04-01
In recent years, photoacoustic spectroscopy has emerged as an invaluable tool for the accurate measurement of light absorption by atmospheric aerosol. Photoacoustic instruments require calibration, which can be achieved by measuring the photoacoustic signal generated by known quantities of gaseous ozone. Recent work has questioned the validity of this approach at short visible wavelengths (404 nm), indicating systematic calibration errors of the order of a factor of 2. We revisit this result and test the validity of the ozone calibration method using a suite of multipass photoacoustic cells operating at wavelengths 405, 514 and 658 nm. Using aerosolised nigrosin with mobility-selected diameters in the range 250-425 nm, we demonstrate excellent agreement between measured and modelled ensemble absorption cross sections at all wavelengths, thus demonstrating the validity of the ozone-based calibration method for aerosol photoacoustic spectroscopy at visible wavelengths.
[Validation of three screening tests used for early detection of cervical cancer].
Rodriguez-Reyes, Esperanza Rosalba; Cerda-Flores, Ricardo M; Quiñones-Pérez, Juan M; Cortés-Gutiérrez, Elva I
2008-01-01
To evaluate the validity (sensitivity, specificity, and accuracy) of three screening methods used in the early detection of cervical carcinoma against the histopathology diagnosis. A selected sample of 107 women attended in the Opportune Detection of Cervicouterine Cancer Program at the Hospital de Zona 46, Instituto Mexicano del Seguro Social in Durango, during 2003 was included. The Papanicolaou test, acetic acid test, molecular detection of human papillomavirus, and histopathology diagnosis were performed for all patients at the time of the gynecological exam. Detection and typing of human papillomavirus were performed by polymerase chain reaction (PCR) and restriction fragment length polymorphism (RFLP) analysis. Histopathology diagnosis was considered the gold standard. Validity was evaluated by the Bayesian method for diagnostic tests. The positive cases for the acetic acid test, Papanicolaou, and PCR were 47, 22, and 19, respectively. The accuracy values were 0.70, 0.80 and 0.99, respectively. Since the molecular method showed the greatest validity in the early detection of cervical carcinoma, we consider its implementation in suitable Opportune Detection of Cervicouterine Cancer programs in Mexico to be of vital importance. However, in order to validate this conclusion, cross-sectional studies in different regions of the country must be carried out.
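The sensitivity, specificity, and accuracy used above follow from the standard 2x2 table against a gold standard; a minimal sketch (the counts below are invented for illustration, not the study's data):

```python
def diagnostic_validity(tp, fp, fn, tn):
    """Classical validity measures of a screening test vs. a gold standard."""
    sensitivity = tp / (tp + fn)                # true positives among the diseased
    specificity = tn / (tn + fp)                # true negatives among the healthy
    accuracy = (tp + tn) / (tp + fp + fn + tn)  # overall agreement with gold standard
    return sensitivity, specificity, accuracy

# Hypothetical counts for one screening test evaluated against histopathology
sens, spec, acc = diagnostic_validity(tp=18, fp=4, fn=2, tn=83)
```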
2D data-space cross-gradient joint inversion of MT, gravity and magnetic data
NASA Astrophysics Data System (ADS)
Pak, Yong-Chol; Li, Tonglin; Kim, Gang-Sop
2017-08-01
We have developed a data-space multiple cross-gradient joint inversion algorithm, validated it through synthetic tests, and applied it to magnetotelluric (MT), gravity and magnetic datasets acquired along a 95 km profile in the Benxi-Ji'an area of northeastern China. To begin, we discuss a generalized cross-gradient joint inversion for multiple datasets and model parameter sets, and formulate it in data space. The Lagrange multiplier required for the structural coupling in the data-space method is determined using an iterative solver to avoid calculating the inverse matrix when solving the large system of equations. Next, using the model-space and data-space methods, we inverted the synthetic data and field data. Based on our results, the data-space joint inversion not only delineates geological bodies more clearly than separate inversion, but also yields results nearly equal to those of the model-space method while consuming much less memory.
Guidelines To Validate Control of Cross-Contamination during Washing of Fresh-Cut Leafy Vegetables.
Gombas, D; Luo, Y; Brennan, J; Shergill, G; Petran, R; Walsh, R; Hau, H; Khurana, K; Zomorodi, B; Rosen, J; Varley, R; Deng, K
2017-02-01
The U.S. Food and Drug Administration requires food processors to implement and validate processes that will result in significantly minimizing or preventing the occurrence of hazards that are reasonably foreseeable in food production. During production of fresh-cut leafy vegetables, microbial contamination that may be present on the product can spread throughout the production batch when the product is washed, thus increasing the risk of illnesses. The use of antimicrobials in the wash water is a critical step in preventing such water-mediated cross-contamination; however, many factors can affect antimicrobial efficacy in the production of fresh-cut leafy vegetables, and the procedures for validating this key preventive control have not been articulated. Producers may consider three options for validating antimicrobial washing as a preventive control for cross-contamination. Option 1 involves the use of a surrogate for the microbial hazard and the demonstration that cross-contamination is prevented by the antimicrobial wash. Option 2 involves the use of antimicrobial sensors and the demonstration that a critical antimicrobial level is maintained during worst-case operating conditions. Option 3 validates the placement of the sensors in the processing equipment with the demonstration that a critical antimicrobial level is maintained at all locations, regardless of operating conditions. These validation options developed for fresh-cut leafy vegetables may serve as examples for validating processes that prevent cross-contamination during washing of other fresh produce commodities.
A new hybrid double divisor ratio spectra method for the analysis of ternary mixtures
NASA Astrophysics Data System (ADS)
Youssef, Rasha M.; Maher, Hadir M.
2008-10-01
A new spectrophotometric method was developed for the simultaneous determination of ternary mixtures, without prior separation steps. This method is based on convolution of the double divisor ratio spectra, obtained by dividing the absorption spectrum of the ternary mixture by a standard spectrum of two of the three compounds in the mixture, using combined trigonometric Fourier functions. The magnitude of the Fourier function coefficients, at either maximum or minimum points, is related to the concentration of each drug in the mixture. The mathematical explanation of the procedure is illustrated. The method was applied for the assay of a model mixture consisting of isoniazid (ISN), rifampicin (RIF) and pyrazinamide (PYZ) in synthetic mixtures, commercial tablets and human urine samples. The developed method was compared with the double divisor ratio spectra derivative method (DDRD) and derivative ratio spectra-zero-crossing method (DRSZ). Linearity, validation, accuracy, precision, limits of detection, limits of quantitation, and other aspects of analytical validation are included in the text.
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies
2010-01-01
Background All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. Results The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprised of 103 AM and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. Conclusions This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study is limited, and consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered.
The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general. PMID:20144194
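The LOO cross-validation protocol described above can be sketched with a Gaussian naive Bayes classifier in scikit-learn; the toy numeric features below are an assumption standing in for real sequence-derived descriptors, not the study's data:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
# Toy numeric features standing in for sequence-derived descriptors
X = np.vstack([rng.normal(0.0, 1.0, (30, 4)),   # class 0: non-amyloidogenic
               rng.normal(1.5, 1.0, (30, 4))])  # class 1: amyloidogenic
y = np.array([0] * 30 + [1] * 30)

# Leave-one-out CV: train on n-1 samples, test on the held-out one, repeat n times
loo_accuracy = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut()).mean()
```

With n samples this fits the model n times, which is why LOO accuracy is sensitive to training set size, as the abstract notes.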
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies.
David, Maria Pamela C; Concepcion, Gisela P; Padlan, Eduardo A
2010-02-08
Prediction of psychosis across protocols and risk cohorts using automated language analysis.
Corcoran, Cheryl M; Carrillo, Facundo; Fernández-Slezak, Diego; Bedi, Gillinder; Klim, Casimir; Javitt, Daniel C; Bearden, Carrie E; Cecchi, Guillermo A
2018-02-01
Language and speech are the primary source of data for psychiatrists to diagnose and treat mental disorders. In psychosis, the very structure of language can be disturbed, including semantic coherence (e.g., derailment and tangentiality) and syntactic complexity (e.g., concreteness). Subtle disturbances in language are evident in schizophrenia even prior to first psychosis onset, during prodromal stages. Using computer-based natural language processing analyses, we previously showed that, among English-speaking clinical (e.g., ultra) high-risk youths, baseline reduction in semantic coherence (the flow of meaning in speech) and in syntactic complexity could predict subsequent psychosis onset with high accuracy. Herein, we aimed to cross-validate these automated linguistic analytic methods in a second larger risk cohort, also English-speaking, and to discriminate speech in psychosis from normal speech. We identified an automated machine-learning speech classifier - comprising decreased semantic coherence, greater variance in that coherence, and reduced usage of possessive pronouns - that had an 83% accuracy in predicting psychosis onset (intra-protocol), a cross-validated accuracy of 79% of psychosis onset prediction in the original risk cohort (cross-protocol), and a 72% accuracy in discriminating the speech of recent-onset psychosis patients from that of healthy individuals. The classifier was highly correlated with previously identified manual linguistic predictors. Our findings support the utility and validity of automated natural language processing methods to characterize disturbances in semantics and syntax across stages of psychotic disorder. The next steps will be to apply these methods in larger risk cohorts to further test reproducibility, also in languages other than English, and identify sources of variability. 
This technology has the potential to improve prediction of psychosis outcome among at-risk youths and identify linguistic targets for remediation and preventive intervention. More broadly, automated linguistic analysis can be a powerful tool for diagnosis and treatment across neuropsychiatry. © 2018 World Psychiatric Association.
Pattarino, Franco; Piepel, Greg; Rinaldi, Maurizio
2018-03-03
A paper by Foglio Bonda et al. published previously in this journal (2016, Vol. 83, pp. 175–183) discussed the use of mixture experiment design and modeling methods to study how the proportions of three components in an extemporaneous oral suspension affected the mean diameter of drug particles (Z ave). The three components were itraconazole (ITZ), Tween 20 (TW20), and Methocel® E5 (E5). This commentary addresses some errors and other issues in the previous paper, and also discusses an improved model relating proportions of ITZ, TW20, and E5 to Z ave. The improved model contains six of the 10 terms in the full-cubic mixture model, which were selected using a different cross-validation procedure than used in the previous paper. In conclusion, compared to the four-term model presented in the previous paper, the improved model fit the data better, had excellent cross-validation performance, and the predicted Z ave of a validation point was within model uncertainty of the measured value.
Near infrared spectroscopy for prediction of antioxidant compounds in the honey.
Escuredo, Olga; Seijo, M Carmen; Salvador, Javier; González-Martín, M Inmaculada
2013-12-15
The selection of antioxidant variables in honey is considered for the first time using the near infrared (NIR) spectroscopic technique. A total of 60 honey samples were used to develop calibration models using the modified partial least squares (MPLS) regression method, and 15 samples were used for external validation. Calibration models on the honey matrix for the estimation of phenols, flavonoids, vitamin C, antioxidant capacity (DPPH), oxidation index and copper using NIR spectroscopy have been satisfactorily obtained. These models were optimised by cross-validation, and the best model was evaluated according to the multiple correlation coefficient (RSQ), standard error of cross-validation (SECV), ratio performance deviation (RPD) and root mean standard error (RMSE) in the prediction set. These statistics suggest that the equations developed could be used for rapid determination of antioxidant compounds in honey. This work shows that near infrared spectroscopy can be considered a rapid tool for the nondestructive measurement of antioxidant constituents such as phenols, flavonoids, vitamin C and copper, as well as the antioxidant capacity of honey. Copyright © 2013 Elsevier Ltd. All rights reserved.
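The SECV and RPD statistics used to judge such calibration models are simple functions of the reference values and their cross-validated predictions; a sketch under the common definitions (RPD = standard deviation of reference values / SECV; the numbers below are invented):

```python
import numpy as np

# Hypothetical reference values and their cross-validated predictions
y_ref = np.array([12.1, 15.4, 9.8, 20.3, 17.6, 11.2, 14.9, 18.8])
y_cv  = np.array([12.9, 14.6, 10.5, 19.1, 18.4, 10.6, 15.8, 17.9])

residuals = y_ref - y_cv
secv = np.sqrt(np.sum(residuals ** 2) / (len(y_ref) - 1))  # standard error of CV
rpd = np.std(y_ref, ddof=1) / secv                         # ratio performance deviation
rmse = np.sqrt(np.mean(residuals ** 2))                    # root mean square error
```

An RPD well above 1 indicates the model predicts substantially better than simply using the mean of the reference values.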
Testing and Validating Machine Learning Classifiers by Metamorphic Testing☆
Xie, Xiaoyuan; Ho, Joshua W. K.; Murphy, Christian; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh
2011-01-01
Machine learning algorithms have provided core functionality to many application domains, such as bioinformatics and computational linguistics. However, it is difficult to detect faults in such applications because often there is no "test oracle" to verify the correctness of the computed outputs. To help address software quality, in this paper we present a technique for testing the implementations of machine learning classification algorithms which support such applications. Our approach is based on the technique "metamorphic testing", which has been shown to be effective in alleviating the oracle problem. Also presented are a case study on a real-world machine learning application framework, and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method is highly effective in killing mutants, and that observing an expected cross-validation result alone is not sufficiently effective to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program. PMID:21532969
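A metamorphic relation sidesteps the missing test oracle by checking a property that must hold between two runs of the program. A minimal sketch for one common relation (scikit-learn's kNN classifier is an illustrative choice here, not the framework studied in the paper):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
X_train = rng.normal(size=(30, 3))
y_train = (X_train[:, 0] > 0).astype(int)
X_test = rng.normal(size=(10, 3))

pred = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train).predict(X_test)

# Metamorphic relation: permuting the order of the training examples must not
# change the predictions of a deterministic classifier.
perm = rng.permutation(len(y_train))
pred_perm = (KNeighborsClassifier(n_neighbors=3)
             .fit(X_train[perm], y_train[perm])
             .predict(X_test))
relation_holds = np.array_equal(pred, pred_perm)
```

No oracle for the "correct" predictions is needed: a violation of the relation alone signals a fault.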
Longobardi, Francesco; Innamorato, Valentina; Di Gioia, Annalisa; Ventrella, Andrea; Lippolis, Vincenzo; Logrieco, Antonio F; Catucci, Lucia; Agostiano, Angela
2017-12-15
Lentil samples coming from two different countries, i.e. Italy and Canada, were analysed using untargeted 1H NMR fingerprinting in combination with chemometrics in order to build models able to classify them according to their geographical origin. To this end, Soft Independent Modelling of Class Analogy (SIMCA), k-Nearest Neighbor (k-NN), Principal Component Analysis followed by Linear Discriminant Analysis (PCA-LDA) and Partial Least Squares-Discriminant Analysis (PLS-DA) were applied to the NMR data and the results were compared. The best combination of average recognition (100%) and cross-validation prediction ability (96.7%) was obtained for PCA-LDA. All the statistical models were validated both by using a test set and by carrying out a Monte Carlo Cross-Validation: the obtained performances were found to be satisfactory for all the models, with prediction abilities higher than 95%, demonstrating the suitability of the developed methods. Finally, the metabolites that contributed most to the lentil discrimination are indicated. Copyright © 2017 Elsevier Ltd. All rights reserved.
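Monte Carlo cross-validation draws many random train/test splits rather than using fixed folds; a sketch of a PCA-LDA pipeline evaluated this way (the toy two-class data stands in for the NMR fingerprints, with invented dimensions and separation):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import ShuffleSplit, cross_val_score

rng = np.random.default_rng(3)
# Toy spectra for two geographical origins (stand-in for NMR fingerprints)
X = np.vstack([rng.normal(0.0, 1.0, (40, 20)),   # origin A
               rng.normal(0.8, 1.0, (40, 20))])  # origin B
y = np.array([0] * 40 + [1] * 40)

pca_lda = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
# Monte Carlo CV: 50 random 75/25 splits instead of fixed folds
mccv = ShuffleSplit(n_splits=50, test_size=0.25, random_state=0)
mean_accuracy = cross_val_score(pca_lda, X, y, cv=mccv).mean()
```

Fitting PCA inside the pipeline ensures the dimensionality reduction is re-estimated on each training split, avoiding information leakage into the test portion.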
Atkinson, Mark J; Lohs, Jan; Kuhagen, Ilka; Kaufman, Julie; Bhaidani, Shamsu
2006-01-01
Objectives This proof of concept (POC) study was designed to evaluate the use of an Internet-based bulletin board technology to aid parallel cross-cultural development of thematic content for a new set of patient-reported outcome measures (PROs). Methods The POC study, conducted in Germany and the United States, utilized Internet Focus Groups (IFGs) to assure the validity of new PRO items across the two cultures – all items were designed to assess the impact of excess facial oil on individuals' lives. The on-line IFG activities were modeled after traditional face-to-face focus groups and organized by a common 'Topic' Guide designed with input from thought leaders in dermatology and health outcomes research. The two sets of IFGs were professionally moderated in the native language of each country. IFG moderators coded the thematic content of transcripts, and a frequency analysis of code endorsement was used to identify areas of content similarity and difference between the two countries. Based on this information, draft PRO items were designed and a majority (80%) of the original participants returned to rate the relative importance of the newly designed questions. Findings The use of parallel cross-cultural content analysis of IFG transcripts permitted identification of the major content themes in each country as well as exploration of the possible reasons for any observed differences between the countries. Results from coded frequency counts and transcript reviews informed the design and wording of the test questions for the future PRO instrument(s). Subsequent ratings of item importance also deepened our understanding of potential areas of cross-cultural difference, differences that would be explored over the course of future validation studies involving these PROs. Conclusion The use of IFGs for cross-cultural content development received positive reviews from participants and was found to be both cost and time effective. 
The novel thematic coding methodology provided an empirical platform on which to develop culturally sensitive questionnaire content using the natural language of participants. Overall, the IFG responses and thematic analyses provided a thorough evaluation of similarities and differences in cross-cultural themes, which in turn acted as a sound base for the development of new PRO questionnaires. PMID:16995935
Sivan, Sree Kanth; Manga, Vijjulatha
2012-02-01
Multiple receptor conformation docking (MRCD) and clustering of dock poses allow seamless incorporation of the receptor binding conformations of molecules across a wide range of ligands with varied structural scaffolds. The accuracy of the approach was tested on a set of 120 cyclic urea molecules having HIV-1 protease inhibitory activity, using 12 high-resolution X-ray crystal structures and one NMR-resolved conformation of HIV-1 protease extracted from the Protein Data Bank. A cross-validation was performed on 25 non-cyclic urea HIV-1 protease inhibitors having varied structures. The comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) models were generated using 60 molecules in the training set by applying the leave-one-out cross-validation method; r²(loo) values of 0.598 and 0.674, and non-cross-validated regression coefficient r² values of 0.983 and 0.985, were obtained for CoMFA and CoMSIA, respectively. The predictive ability of these models was determined using a test set of 60 cyclic urea molecules, which gave predictive correlations (r²(pred)) of 0.684 and 0.64 for CoMFA and CoMSIA respectively, indicating good internal predictive ability. Based on this information, 25 non-cyclic urea molecules were taken as a test set to check the external predictive ability of these models. This gave a remarkable outcome, with r²(pred) of 0.61 and 0.53 for CoMFA and CoMSIA, respectively. The results invariably show that this method is useful for performing 3D QSAR analysis on molecules having different structural motifs.
Reliable Digit Span: A Systematic Review and Cross-Validation Study
ERIC Educational Resources Information Center
Schroeder, Ryan W.; Twumasi-Ankrah, Philip; Baade, Lyle E.; Marshall, Paul S.
2012-01-01
Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these…
NASA Astrophysics Data System (ADS)
Choiri, S.; Ainurofiq, A.; Ratri, R.; Zulmi, M. U.
2018-03-01
Nifedipine (NIF) is a photo-labile drug that degrades easily when exposed to sunlight. This research aimed to develop an analytical method using high-performance liquid chromatography, implementing a quality-by-design approach to obtain an effective, efficient, and validated analytical method for NIF and its degradants. A 2² full factorial design with a curvature as a center point was applied to optimize the analytical conditions for NIF and its degradants. Mobile phase composition (MPC) and flow rate (FR) were the factors examined against the system suitability parameters. The selected condition was validated by cross-validation using a leave-one-out technique. Alteration of MPC significantly affected retention time. Furthermore, an increase in FR reduced the tailing factor. In addition, the interaction of both factors increased the theoretical plates and the resolution of NIF and its degradants. The selected analytical condition for NIF and its degradants was validated over the range 1 – 16 µg/mL and showed good linearity, precision and accuracy, and is efficient, with an analysis time within 10 min.
Kim, Jin Goo; Lee, Joong Yub; Seo, Seung Suk; Choi, Choong Hyeok; Lee, Myung Chul
2013-01-01
Purpose To perform a cross-cultural adaptation and to test the measurement properties of the Korean version of the International Knee Documentation Committee (K-IKDC) Subjective Knee Form. Materials and Methods According to the guidelines for cross-cultural adaptation, translation and backward translation of the English version of the IKDC Subjective Knee Form were performed. After translation into Korean, 150 patients who had knee-related problems were asked to complete the K-IKDC, Lysholm score, and Short Form-36 (SF-36). Of these patients, 126 were retested 2 weeks later to evaluate test-retest reliability, and 104 were recruited 3 months later to evaluate responsiveness. Construct validity was analyzed by investigating the correlation with the Lysholm score and SF-36; content validity was also evaluated. The standardized mean response was calculated to evaluate responsiveness. Results The test-retest reliability proved excellent, with a high value for the intraclass correlation coefficient (r=0.94). The internal consistency was strong (Cronbach's α=0.91). Good content validity with absence of floor and ceiling effects, and good convergent and divergent validity, were observed. Moderate responsiveness was shown (standardized mean response=0.689). Conclusions The K-IKDC demonstrated good measurement properties. We suggest that this is an excellent evaluation instrument that can be used for Korean patients with knee-related injuries. PMID:24032098
Rational selection of training and test sets for the development of validated QSAR models
NASA Astrophysics Data System (ADS)
Golbraikh, Alexander; Shen, Min; Xiao, Zhiyan; Xiao, Yun-De; Lee, Kuo-Hsiung; Tropsha, Alexander
2003-02-01
Quantitative Structure-Activity Relationship (QSAR) models are used increasingly to screen chemical databases and/or virtual chemical libraries for potentially bioactive molecules. These developments emphasize the importance of rigorous model validation to ensure that the models have acceptable predictive power. Using the k nearest neighbors (kNN) variable selection QSAR method for the analysis of several datasets, we have demonstrated recently that the widely accepted leave-one-out (LOO) cross-validated R2 (q2) is an inadequate characteristic to assess the predictive ability of the models [Golbraikh, A., Tropsha, A. Beware of q2! J. Mol. Graphics Mod. 20, 269-276, (2002)]. Herein, we provide additional evidence that there exists no correlation between the values of q2 for the training set and the accuracy of prediction (R2) for the test set, and argue that this observation is a general property of any QSAR model developed with LOO cross-validation. We suggest that external validation using rationally selected training and test sets provides a means to establish a reliable QSAR model. We propose several approaches to the division of experimental datasets into training and test sets and apply them in QSAR studies of 48 functionalized amino acid anticonvulsants and a series of 157 epipodophyllotoxin derivatives with antitumor activity. We formulate a set of general criteria for the evaluation of the predictive power of QSAR models.
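The distinction between internal q² and external R² can be sketched as follows (kNN regression on synthetic descriptors; the split and descriptor choices are illustrative, and a simple random split stands in for the paper's rational selection schemes):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict, train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 8))                       # synthetic descriptors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

knn = KNeighborsRegressor(n_neighbors=5)

# Internal validation: LOO cross-validated q2 on the training set only
y_loo = cross_val_predict(knn, X_tr, y_tr, cv=LeaveOneOut()).ravel()
q2 = 1 - np.sum((y_tr - y_loo) ** 2) / np.sum((y_tr - y_tr.mean()) ** 2)

# External validation: R2 on the held-out test set
r2_external = knn.fit(X_tr, y_tr).score(X_te, y_te)
```

The paper's point is that a high q² computed this way does not by itself guarantee a high external R², which is why the held-out test set is indispensable.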
Blind system identification of two-thermocouple sensor based on cross-relation method.
Li, Yanfeng; Zhang, Zhijie; Hao, Xiaojian
2018-03-01
In dynamic temperature measurement, the dynamic characteristics of the sensor affect the accuracy of the measurement results. Thermocouples are widely used for temperature measurement in harsh conditions due to their low cost, robustness, and reliability, but because of their thermal inertia there is a dynamic error in dynamic temperature measurement. In order to eliminate this dynamic error, a two-thermocouple sensor was used in this paper to measure dynamic gas temperature in constant-velocity flow environments. Blind system identification of the two-thermocouple sensor based on a cross-relation method was carried out. A particle swarm optimization algorithm was used to estimate the time constants of the two thermocouples and compared with a grid-based search method. The method was validated on experimental equipment built using a high-temperature furnace, and the input dynamic temperature was reconstructed using the output data of the thermocouple with the smaller time constant.
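The cross-relation idea can be sketched for two first-order sensors: if y1 and y2 are the two thermocouple outputs, passing y1 through sensor 2's model and y2 through sensor 1's model must yield equal signals exactly when the assumed time constants are correct. A grid-search sketch on simulated data (the step input, sampling interval, and time constants are invented, and grid search stands in for the paper's particle swarm optimization):

```python
import numpy as np

def first_order(u, tau, dt):
    """Discrete first-order lag: y[n] = a*y[n-1] + (1-a)*u[n], a = exp(-dt/tau)."""
    a = np.exp(-dt / tau)
    y = np.zeros_like(u)
    for n in range(1, len(u)):
        y[n] = a * y[n - 1] + (1 - a) * u[n]
    return y

dt = 0.01
t = np.arange(0.0, 5.0, dt)
u = np.where(t > 0.5, 100.0, 0.0)   # hypothetical step in gas temperature
y1 = first_order(u, 0.2, dt)        # thermocouple 1, true tau = 0.2 s
y2 = first_order(u, 0.6, dt)        # thermocouple 2, true tau = 0.6 s

def cross_relation_cost(tau1, tau2):
    # Zero (up to floating point) when (tau1, tau2) match the true time constants,
    # because cascaded LTI filters commute: G2(G1 u) == G1(G2 u).
    e = first_order(y1, tau2, dt) - first_order(y2, tau1, dt)
    return np.sum(e ** 2)

grid = np.arange(0.05, 1.0, 0.05)
cost, tau1_hat, tau2_hat = min((cross_relation_cost(a, b), a, b)
                               for a in grid for b in grid if a < b)
```

Note that the method is blind: the true input u never enters the cost, only the two sensor outputs.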
Li, Qi-Gang; He, Yong-Han; Wu, Huan; Yang, Cui-Ping; Pu, Shao-Yan; Fan, Song-Qing; Jiang, Li-Ping; Shen, Qiu-Shuo; Wang, Xiao-Xiong; Chen, Xiao-Qiong; Yu, Qin; Li, Ying; Sun, Chang; Wang, Xiangting; Zhou, Jumin; Li, Hai-Peng; Chen, Yong-Bin; Kong, Qing-Peng
2017-01-01
Heterogeneity in transcriptional data hampers the identification of differentially expressed genes (DEGs) and understanding of cancer, essentially because current methods rely on cross-sample normalization and/or distribution assumptions, both sensitive to heterogeneous values. Here, we developed a new method, Cross-Value Association Analysis (CVAA), which overcomes this limitation and is more robust to heterogeneous data than the other methods. Applying CVAA to a more complex pan-cancer dataset containing 5,540 transcriptomes discovered numerous new DEGs and many previously rarely explored pathways/processes; some of them were validated, both in vitro and in vivo, to be crucial in tumorigenesis, e.g., alcohol metabolism (ADH1B), chromosome remodeling (NCAPH) and complement system (Adipsin). Together, we present a sharper tool to navigate large-scale expression data and gain new mechanistic insights into tumorigenesis.
Tsang, B; Stothers, L; Macnab, A; Lazare, D; Nigro, M
2016-03-01
Validated questionnaires are increasingly the preferred method used to obtain historical information. Specialized questionnaires exist validated for patients with neurogenic disease including neurogenic bladder. Those currently available are systematically reviewed and their potential for clinical and research use is described. A systematic search was conducted via Medline and PubMed using the key terms questionnaire(s) crossed with Multiple Sclerosis (MS) and Spinal Cord Injury (SCI) for the years 1946 to January 22, 2014 inclusive. Additional articles were selected from review of references in the publications identified. Only peer-reviewed articles published in English were included. Eighteen questionnaires exist validated for patients with neurogenic bladder: 14 related to MS, 3 for SCI, and 1 for neurogenic bladder in general, with 4 cross-validated in both MS and SCI. All 18 are validated for both male and female patients; 59% are available only in English. The domains of psychological impact and physical function are represented in 71% and 76% of questionnaires, respectively. None for the female population included elements to measure symptoms of prolapse. The last decade has seen an expansion of validated questionnaires to document bladder symptoms in neurogenic disease. Disease-specific instruments are available for incorporation into the clinical setting for MS and SCI patients with neurogenic bladder. The availability of caregiver and interview options enhances suitability in clinical practice as they can be adapted to various extents of disability. Future developments should include expanded language validation to the top 10 global languages reported by the World Health Organization. © 2015 Wiley Periodicals, Inc.
The Arthroscopic Surgical Skill Evaluation Tool (ASSET).
Koehler, Ryan J; Amsdell, Simon; Arendt, Elizabeth A; Bisson, Leslie J; Braman, Jonathan P; Butler, Aaron; Cosgarea, Andrew J; Harner, Christopher D; Garrett, William E; Olson, Tyson; Warme, Winston J; Nicandri, Gregg T
2013-06-01
Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability when used to assess the technical ability of surgeons performing diagnostic knee arthroscopic surgery on cadaveric specimens. Cross-sectional study; Level of evidence, 3. Content validity was determined by a group of 7 experts using the Delphi method. Intra-articular performance of a right and left diagnostic knee arthroscopic procedure was recorded for 28 residents and 2 sports medicine fellowship-trained attending surgeons. Surgeon performance was assessed by 2 blinded raters using the ASSET. Concurrent criterion-oriented validity, interrater reliability, and test-retest reliability were evaluated. Content validity: The content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: Significant differences in the total ASSET score (P < .05) between novice, intermediate, and advanced experience groups were identified. Interrater reliability: The ASSET scores assigned by each rater were strongly correlated (r = 0.91, P < .01), and the intraclass correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: There was a significant correlation between ASSET scores for both procedures attempted by each surgeon (r = 0.79, P < .01). The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopic surgery in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live operating room and other simulated environments.
NASA Astrophysics Data System (ADS)
Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin
2017-01-01
We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
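The correlation-based feature selection described above can be sketched as pruning weight vectors that are highly correlated with ones already kept. The exact criterion in the paper may differ; the vectors and the 0.9 threshold below are invented for illustration.

```python
def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def select_decorrelated(weights, threshold=0.9):
    """Greedily keep a weight vector only if |r| with every kept vector is below threshold."""
    kept = []
    for w in weights:
        if all(abs(pearson(w, k)) < threshold for k in kept):
            kept.append(w)
    return kept

# Three hypothetical hidden-unit weight vectors; the second duplicates the first.
W = [[0.1, 0.5, -0.3, 0.2],
     [0.1, 0.5, -0.3, 0.2],     # perfectly correlated with the first -> pruned
     [0.9, -0.2, 0.4, -0.7]]
print(len(select_decorrelated(W)))  # -> 2
```

Greedy pruning like this keeps the feature count (and hence the cost of convolving large images with the learned filters) roughly proportional to the number of genuinely distinct hidden units.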
Bilek, Edda; Ruf, Matthias; Schäfer, Axel; Akdeniz, Ceren; Calhoun, Vince D; Schmahl, Christian; Demanuele, Charmaine; Tost, Heike; Kirsch, Peter; Meyer-Lindenberg, Andreas
2015-04-21
Social interactions are fundamental for human behavior, but the quantification of their neural underpinnings remains challenging. Here, we used hyperscanning functional MRI (fMRI) to study information flow between brains of human dyads during real-time social interaction in a joint attention paradigm. In a hardware setup enabling immersive audiovisual interaction of subjects in linked fMRI scanners, we characterize cross-brain connectivity components that are unique to interacting individuals, identifying information flow between the sender's and receiver's temporoparietal junction. We replicate these findings in an independent sample and validate our methods by demonstrating that cross-brain connectivity relates to a key real-world measure of social behavior. Together, our findings support a central role of human-specific cortical areas in the brain dynamics of dyadic interactions and provide an approach for the noninvasive examination of the neural basis of healthy and disturbed human social behavior with minimal a priori assumptions.
Triaging Patient Complaints: Monte Carlo Cross-Validation of Six Machine Learning Classifiers
Cooper, William O; Catron, Thomas F; Karrass, Jan; Zhang, Zhe; Singh, Munindar P
2017-01-01
Background: Unsolicited patient complaints can be a useful service recovery tool for health care organizations. Some patient complaints contain information that may necessitate further action on the part of the health care organization and/or the health care professional. Current approaches depend on the manual processing of patient complaints, which can be costly, slow, and challenging in terms of scalability. Objective: The aim of this study was to evaluate automatic patient triage, which can potentially improve response time and provide much-needed scale, thereby enhancing opportunities to encourage physicians to self-regulate. Methods: We implemented a comparison of several well-known machine learning classifiers to detect whether a complaint was associated with a physician or his/her medical practice. We compared these classifiers using a real-life dataset containing 14,335 patient complaints associated with 768 physicians that was extracted from patient complaints collected by the Patient Advocacy Reporting System developed at Vanderbilt University and associated institutions. We conducted a 10-split Monte Carlo cross-validation to validate our results. Results: We achieved an accuracy of 82% and F-score of 81% in correctly classifying patient complaints with sensitivity and specificity of 0.76 and 0.87, respectively. Conclusions: We demonstrate that natural language processing methods based on modeling patient complaint text can be effective in identifying those patient complaints requiring physician action. PMID:28760726
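Monte Carlo cross-validation, as used in the study above, repeats random train/test splits rather than partitioning the data once. A minimal sketch, assuming only a toy 1-D threshold "classifier" and synthetic data (the real study used text features and standard classifiers):

```python
import random

def monte_carlo_cv(data, labels, fit, score, n_splits=10, test_frac=0.3, seed=0):
    """Repeated random train/test splits; returns the per-split scores."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    scores = []
    for _ in range(n_splits):
        rng.shuffle(idx)
        cut = int(len(idx) * (1 - test_frac))
        train, test = idx[:cut], idx[cut:]
        model = fit([data[i] for i in train], [labels[i] for i in train])
        scores.append(score(model, [data[i] for i in test],
                            [labels[i] for i in test]))
    return scores

# Synthetic, separable 1-D data (invented for illustration).
X = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9] * 10
y = [0, 0, 0, 0, 1, 1, 1, 1] * 10

def fit(Xtr, ytr):
    # Deliberately simple model: threshold at the midpoint of the class means.
    m0 = sum(x for x, t in zip(Xtr, ytr) if t == 0) / max(1, ytr.count(0))
    m1 = sum(x for x, t in zip(Xtr, ytr) if t == 1) / max(1, ytr.count(1))
    return (m0 + m1) / 2

def score(thr, Xte, yte):
    preds = [1 if x > thr else 0 for x in Xte]
    return sum(p == t for p, t in zip(preds, yte)) / len(yte)

accs = monte_carlo_cv(X, y, fit, score)
print(sum(accs) / len(accs))  # mean accuracy over the 10 random splits
```

Unlike k-fold, the number of splits and the test fraction are chosen independently, and test sets may overlap across splits; averaging over splits estimates how stable the classifier's accuracy is.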
Budget Online Learning Algorithm for Least Squares SVM.
Jian, Ling; Shen, Shuqian; Li, Jundong; Liang, Xijun; Li, Lei
2017-09-01
Batch-mode least squares support vector machine (LSSVM) learning is often associated with an unbounded number of support vectors (SVs), making it unsuitable for applications involving large-scale streaming data. Limited-scale LSSVM, which allows efficient updating, seems to be a good solution to tackle this issue. In this paper, to train the limited-scale LSSVM dynamically, we present a budget online LSSVM (BOLSSVM) algorithm. Methodologically, by setting a fixed budget for SVs, we are able to update the LSSVM model according to the updated SV set dynamically without retraining from scratch. In particular, when a new small chunk of SVs substitutes for the old ones, the proposed algorithm employs a low-rank correction technique and the Sherman-Morrison-Woodbury formula to compute the inverse of the saddle point matrix derived from the LSSVM's Karush-Kuhn-Tucker (KKT) system, which, in turn, updates the LSSVM model efficiently. In this way, the proposed BOLSSVM algorithm is especially useful for online prediction tasks. Another merit of the proposed BOLSSVM is that it can be used for k-fold cross validation. Specifically, compared with batch-mode learning methods, the computational complexity of the proposed BOLSSVM method is significantly reduced from O(n^4) to O(n^3) for leave-one-out cross validation with n training samples. The experimental results of classification and regression on benchmark data sets and real-world applications show the validity and effectiveness of the proposed BOLSSVM algorithm.
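The Sherman-Morrison-Woodbury identity behind this kind of update can be sketched for the rank-1 case: given A⁻¹, the inverse of A + u vᵀ costs O(n²) instead of a fresh O(n³) inversion. The matrices and vectors below are invented, and the LSSVM-specific saddle-point structure is omitted.

```python
def matmul(A, B):
    """Plain list-of-lists matrix product (for the check only)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def sherman_morrison(Ainv, u, v):
    """Inverse of (A + u v^T) computed from Ainv in O(n^2)."""
    n = len(u)
    Au = [sum(Ainv[i][j] * u[j] for j in range(n)) for i in range(n)]   # Ainv @ u
    vA = [sum(v[i] * Ainv[i][j] for i in range(n)) for j in range(n)]   # v^T @ Ainv
    denom = 1.0 + sum(v[i] * Au[i] for i in range(n))                   # must be nonzero
    return [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(n)]
            for i in range(n)]

# Invented example: A = 2*I, so Ainv = 0.5*I; then apply a rank-1 update.
n = 3
Ainv = [[0.5 if i == j else 0.0 for j in range(n)] for i in range(n)]
u, v = [1.0, 0.0, 2.0], [0.5, 1.0, 0.0]
new_inv = sherman_morrison(Ainv, u, v)

# Check: new_inv @ (A + u v^T) should be the identity.
A_upd = [[(2.0 if i == j else 0.0) + u[i] * v[j] for j in range(n)] for i in range(n)]
I = matmul(new_inv, A_upd)
print([[round(x, 10) for x in row] for row in I])  # identity matrix
```

Replacing a chunk of support vectors is a low-rank change to the KKT matrix, so applying updates of this form (or the block Woodbury version) avoids refactorizing the whole system; the same trick underlies the reported O(n³) leave-one-out cost.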
Scott, Jan; Geoffroy, Pierre Alexis; Sportiche, Sarah; Brichant-Petit-Jean, Clara; Gard, Sebastien; Kahn, Jean-Pierre; Azorin, Jean-Michel; Henry, Chantal; Etain, Bruno; Bellivier, Frank
2017-01-15
It is increasingly recognised that reliable and valid assessments of lithium response are needed in order to target more efficiently the use of this medication in bipolar disorders (BD) and to identify genotypes, endophenotypes and biomarkers of response. In a large, multi-centre, clinically representative sample of 300 cases of BD, we assess external clinical validators of lithium response phenotypes as defined using three different recommended approaches to scoring the Alda lithium response scale. The scale comprises an A scale (rating lithium response) and a B scale (assessing confounders). Analysis of the two continuous scoring methods (A scale score minus the B scale score, or A scale score in those with a low B scale score) demonstrated that 21-23% of the explained variance in lithium response was accounted for by a positive family history of BD I and the early introduction of lithium. Categorical definitions of response suggest poor response is also associated with a positive history of alcohol and/or substance use comorbidities. High B scale scores were significantly associated with longer duration of illness prior to receiving lithium and the presence of psychotic symptoms. The original sample was not recruited specifically to study lithium response. The Alda scale is designed to assess response retrospectively. This cross-validation study identifies different clinical phenotypes of lithium response when defined by continuous or categorical measures. Future clinical, genetic and biomarker studies should report both the findings and the method employed to assess lithium response according to the Alda scale. Copyright © 2016 Elsevier B.V. All rights reserved.
Digitized Spiral Drawing: A Possible Biomarker for Early Parkinson’s Disease
San Luciano, Marta; Wang, Cuiling; Ortega, Roberto A.; Yu, Qiping; Boschung, Sarah; Soto-Valencia, Jeannie; Bressman, Susan B.; Lipton, Richard B.; Pullman, Seth; Saunders-Pullman, Rachel
2016-01-01
Introduction: Pre-clinical markers of Parkinson’s Disease (PD) are needed, and to be relevant in pre-clinical disease, they should be quantifiably abnormal in early disease as well. Handwriting is impaired early in PD and can be evaluated using computerized analysis of drawn spirals, capturing kinematic, dynamic, and spatial abnormalities and calculating indices that quantify motor performance and disability. Digitized spiral drawing correlates with motor scores and may be more sensitive in detecting early changes than subjective ratings. However, whether changes in spiral drawing are abnormal compared with controls and whether changes are detected in early PD are unknown. Methods: 138 PD subjects (50 with early PD) and 150 controls drew spirals on a digitizing tablet, generating x, y, z (pressure) coordinates and time. Derived indices corresponded to overall spiral execution (severity), shape and kinematic irregularity (second order smoothness, first order zero-crossing), tightness, mean speed and variability of spiral width. Linear mixed effect adjusted models comparing these indices and cross-validation were performed. Receiver operating characteristic analysis was applied to examine discriminative validity of combined indices. Results: All indices were significantly different between PD cases and controls, except for zero-crossing. A model using all indices had high discriminative validity (sensitivity = 0.86, specificity = 0.81). Discriminative validity was maintained in patients with early PD. Conclusion: Spiral analysis accurately discriminates subjects with PD and early PD from controls, supporting a role as a promising quantitative biomarker. Further assessment is needed to determine whether spiral changes are PD specific compared with other disorders and if present in pre-clinical PD. PMID:27732597
Vork, L; Keszthelyi, D; Mujagic, Z; Kruimel, J W; Leue, C; Pontén, I; Törnblom, H; Simrén, M; Albu-Soda, A; Aziz, Q; Corsetti, M; Holvoet, L; Tack, J; Rao, S S; van Os, J; Quetglas, E G; Drossman, D A; Masclee, A A M
2018-03-01
End-of-day questionnaires, which are considered the gold standard for assessing abdominal pain and other gastrointestinal (GI) symptoms in irritable bowel syndrome (IBS), are influenced by recall and ecological bias. The experience sampling method (ESM) is characterized by random and repeated assessments in the natural state and environment of a subject, and thereby overcomes these limitations. This report describes the development of a patient-reported outcome measure (PROM) based on the ESM principle, taking into account content validity and cross-cultural adaptation. Focus group interviews with IBS patients and expert meetings with international experts in the fields of neurogastroenterology & motility and pain were performed in order to select the items for the PROM. Forward-and-back translation and cognitive interviews were performed to adapt the instrument for use in different countries and to ensure patients' understanding of the final items. Focus group interviews revealed 42 items, categorized into five domains: physical status, defecation, mood and psychological factors, context and environment, and nutrition and drug use. Experts reduced the number of items to 32, and cognitive interviewing after translation resulted in a few slight adjustments regarding linguistic issues, but not regarding content of the items. An ESM-based PROM, suitable for momentary assessment of IBS symptom patterns, was developed, taking into account content validity and cross-cultural adaptation. This PROM will be implemented in a specifically designed smartphone application, and further validation in a multicenter setting will follow. © 2017 John Wiley & Sons Ltd.
Psychiatric diagnosis – is it universal or relative to culture?
Canino, Glorisa; Alegría, Margarita
2009-01-01
Background: There is little consensus on the extent to which psychiatric disorders or syndromes are universal or the extent to which they differ on their core definitions and constellation of symptoms as a result of cultural or contextual factors. This controversy continues due to the lack of biological markers, imprecise measurement and the lack of a gold standard for validating most psychiatric conditions. Method: Empirical studies were used to present evidence in favor of or against a universalist or relativistic view of child psychiatric disorders using a model developed by Robins and Guze to determine the validity of psychiatric disorders. Results: The prevalence of some of the most common specific disorders and syndromes, as well as their risk and protective factors, varies across cultures, yet comorbid patterns and response to treatments vary little across cultures. Cross-cultural longitudinal data on outcomes is equivocal. Conclusions: The cross-cultural validity of child disorders may vary drastically depending on the disorder, but empirical evidence that attests to the cross-cultural validity of diagnostic criteria for each child disorder is lacking. There is a need for studies that investigate the extent to which gene–environment interactions are related to specific disorders across cultures. Clinicians are urged to consider culture and context in determining the way in which children’s psychopathology may be manifested independent of their views. Recommendations for the upcoming classificatory system are provided so that practical or theoretical considerations are addressed about how culture and ethnic issues affect the assessment or treatment of specific disorders in children. PMID:18333929
Zhou, Zhen; Wang, Jian-Bao; Zang, Yu-Feng; Pan, Gang
2018-01-01
Classification approaches have been increasingly applied to differentiate patients and normal controls using resting-state functional magnetic resonance imaging data (RS-fMRI). Although most previous classification studies have reported promising accuracy within individual datasets, achieving high levels of accuracy with multiple datasets remains challenging for two main reasons: high dimensionality, and high variability across subjects. We used two independent RS-fMRI datasets (n = 31, 46, respectively) both with eyes closed (EC) and eyes open (EO) conditions. For each dataset, we first reduced the number of features to a small number of brain regions with paired t-tests, using the amplitude of low frequency fluctuation (ALFF) as a metric. Second, we employed a new method for feature extraction, named the PAIR method, examining EC and EO as paired conditions rather than independent conditions. Specifically, for each dataset, we obtained EC minus EO (EC-EO) maps of ALFF from half of subjects (n = 15 for dataset-1, n = 23 for dataset-2) and obtained EO-EC maps from the other half (n = 16 for dataset-1, n = 23 for dataset-2). A support vector machine (SVM) method was used for classification of EC RS-fMRI mapping and EO mapping. The mean classification accuracy of the PAIR method was 91.40% for dataset-1, and 92.75% for dataset-2 in the conventional frequency band of 0.01–0.08 Hz. For cross-dataset validation, we applied the classifier from dataset-1 directly to dataset-2, and vice versa. The mean accuracy of cross-dataset validation was 94.93% for dataset-1 to dataset-2 and 90.32% for dataset-2 to dataset-1 in the 0.01–0.08 Hz range. For the UNPAIR method, classification accuracy was substantially lower (mean 69.89% for dataset-1 and 82.97% for dataset-2), and was much lower for cross-dataset validation (64.69% for dataset-1 to dataset-2 and 64.98% for dataset-2 to dataset-1) in the 0.01–0.08 Hz range.
In conclusion, for within-group design studies (e.g., paired conditions or follow-up studies), we recommend the PAIR method for feature extraction. In addition, dimensionality reduction with strong prior knowledge of specific brain regions should also be considered for feature selection in neuroimaging studies. PMID:29375288
Broadband computation of the scattering coefficients of infinite arbitrary cylinders.
Blanchard, Cédric; Guizal, Brahim; Felbacq, Didier
2012-07-01
We employ a time-domain method to compute the near field on a contour enclosing infinitely long cylinders of arbitrary cross section and constitution. We therefore recover the cylindrical Hankel coefficients of the expansion of the field outside the circumscribed circle of the structure. The recovered coefficients enable the wideband analysis of complex systems, e.g., the determination of the radar cross section becomes straightforward. The prescription for constructing such a numerical tool is provided in great detail. The method is validated by computing the scattering coefficients for a homogeneous circular cylinder illuminated by a plane wave, a problem for which an analytical solution exists. Finally, some radiation properties of an optical antenna are examined by employing the proposed technique.
Beckerman, Bernardo S; Jerrett, Michael; Serre, Marc; Martin, Randall V; Lee, Seung-Jae; van Donkelaar, Aaron; Ross, Zev; Su, Jason; Burnett, Richard T
2013-07-02
Airborne fine particulate matter exhibits spatiotemporal variability at multiple scales, which presents challenges to estimating exposures for health effects assessment. Here we created a model to predict ambient particulate matter less than 2.5 μm in aerodynamic diameter (PM2.5) across the contiguous United States to be applied to health effects modeling. We developed a hybrid approach combining a land use regression model (LUR) selected with a machine learning method, and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals. The PM2.5 data set included 104,172 monthly observations at 1464 monitoring locations with approximately 10% of locations reserved for cross-validation. LUR models were based on remote sensing estimates of PM2.5, land use and traffic indicators. Normalized cross-validated R² values for LUR were 0.63 and 0.11 with and without remote sensing, respectively, suggesting remote sensing is a strong predictor of ground-level concentrations. In the models including the BME interpolation of the residuals, cross-validated R² values were 0.79 for both configurations; the model without remotely sensed data described more fine-scale variation than the model including remote sensing. Our results suggest that our modeling framework can predict ground-level concentrations of PM2.5 at multiple scales over the contiguous U.S.
ERIC Educational Resources Information Center
Awang-Hashim, Rosna; Thaliah, Rajaletchumi; Kaur, Amrita
2017-01-01
Purpose: The cross-cultural significance of autonomy within self-determination theory is divisive on universal significance. This paper aims to report a sequential exploratory mixed methods study conducted to construct and validate a scale to investigate how, in Malaysian context, the construct of autonomy is conceptualized in comparison with the…
Comparing simple respiration models for eddy flux and dynamic chamber data
Andrew D. Richardson; Bobby H. Braswell; David Y. Hollinger; Prabir Burman; Eric A. Davidson; Robert S. Evans; Lawrence B. Flanagan; J. William Munger; Kathleen Savage; Shawn P. Urbanski; Steven C. Wofsy
2006-01-01
Selection of an appropriate model for respiration (R) is important for accurate gap-filling of CO2 flux data, and for partitioning measurements of net ecosystem exchange (NEE) to respiration and gross ecosystem exchange (GEE). Using cross-validation methods and a version of Akaike's Information Criterion (AIC), we evaluate a wide range of...
ERIC Educational Resources Information Center
Heo, K. H.; Squires, J.; Yovanoff, P.
2008-01-01
Background: Accurate and efficient developmental screening measures are critical for early identification of developmental problems; however, few reliable and valid tests are available in Korea as well as other countries outside the USA. The Ages and Stages Questionnaires (ASQ) was chosen for study with young children in Korea. Methods: The ASQ…
Cross-Validation of Mental Health Recovery Measures in a Hong Kong Chinese Sample
ERIC Educational Resources Information Center
Ye, Shengquan; Pan, Jia-Yan; Wong, Daniel Fu Keung; Bola, John Robert
2013-01-01
Objectives: The concept of recovery has begun shifting mental health service delivery from a medical perspective toward a client-centered recovery orientation. This shift is also beginning in Hong Kong, but its development is hampered by a dearth of available measures in Chinese. Method: This article translates two measures of recovery (mental…
Rates of Physical Activity among Appalachian Adolescents in Ohio
ERIC Educational Resources Information Center
Hortz, Brian; Stevens, Emily; Holden, Becky; Petosa, R. Lingyak
2009-01-01
Purpose: The purpose of this study was to describe the physical activity behavior of high school students living in the Appalachian region of Ohio. Methods: A cross-sectional sample of 1,024 subjects from 11 schools in Appalachian Ohio was drawn. Previously validated instruments were used to measure physical activity behavior over 7 days.…
ERIC Educational Resources Information Center
Hitchcock, John H.; Sarkar, Sreeroopa; Nastasi, Bonnie; Burkholder, Gary; Varjas, Kristen; Jayasena, Asoka
2006-01-01
Despite on-going calls for developing cultural competency among mental health practitioners, few assessment instruments consider cultural variation in psychological constructs. To meet the challenge of developing measures for minority and international students, it is necessary to account for the influence culture may have on the latent constructs…
ERIC Educational Resources Information Center
Jordans, M. J. D.; Komproe, I. H.; Tol, W. A.; De Jong, J. T. V. M.
2009-01-01
Background: Large-scale psychosocial interventions in complex emergencies call for a screening procedure to identify individuals at risk. To date there are no screening instruments that are developed within low- and middle-income countries and validated for that purpose. The present study assesses the cross-cultural validity of the brief,…
Lippolis, Vincenzo; Ferrara, Massimo; Cervellieri, Salvatore; Damascelli, Anna; Epifani, Filomena; Pascale, Michelangelo; Perrone, Giancarlo
2016-02-02
The availability of rapid diagnostic methods for monitoring ochratoxigenic species during the seasoning processes for dry-cured meats is crucial and constitutes a key stage in order to prevent the risk of ochratoxin A (OTA) contamination. A rapid, easy-to-perform and non-invasive method using an electronic nose (e-nose) based on metal oxide semiconductors (MOS) was developed to discriminate dry-cured meat samples in two classes based on the fungal contamination: class P (samples contaminated by OTA-producing Penicillium strains) and class NP (samples contaminated by OTA non-producing Penicillium strains). Two OTA-producing strains of Penicillium nordicum and two OTA non-producing strains of Penicillium nalgiovense and Penicillium salamii were tested. The feasibility of this approach was initially evaluated by e-nose analysis of 480 samples of both yeast extract sucrose (YES) and meat-based agar media inoculated with the tested Penicillium strains and incubated up to 14 days. The high recognition percentages (higher than 82%) obtained by Discriminant Function Analysis (DFA), both in calibration and in cross-validation (leave-more-out approach), for both YES and meat-based samples demonstrated the validity of the approach. The e-nose method was subsequently developed and validated for the analysis of dry-cured meat samples. A total of 240 e-nose analyses were carried out using inoculated sausages, seasoned by a laboratory-scale process and sampled at 5, 7, 10 and 14 days. DFA provided calibration models that permitted discrimination of dry-cured meat samples after only 5 days of seasoning, with mean recognition percentages in calibration and cross-validation of 98 and 88%, respectively. A further validation of the developed e-nose method was performed using 60 dry-cured meat samples produced by an industrial-scale seasoning process, showing a total recognition percentage of 73%.
The pattern of volatile compounds of dry-cured meat samples was identified and characterized by a developed HS-SPME/GC-MS method. Seven volatile compounds (2-methyl-1-butanol, octane, 1R-α-pinene, d-limonene, undecane, tetradecanal, 9-(Z)-octadecenoic acid methyl ester) allowed discrimination between dry-cured meat samples of classes P and NP. These results demonstrate that MOS-based electronic nose can be a useful tool for a rapid screening in preventing OTA contamination in the cured meat supply chain. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.
2017-12-01
Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. The unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary Kriging (OK), were implemented. The four methods were first assessed through a cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good rainfall interpolation, while the NN method led to the worst result. In terms of the impact on hydrological prediction, the IDW method yielded the streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations.
The OK method performed second best according to streamflow predictions at the five gauges in the calibration period (01/01/2007–31/12/2011) and four gauges during the validation period (01/01/2012–30/06/2014). However, NN produced the worst prediction at the outlet of the catchment in the validation period, indicating low robustness. While IDW exhibited the best performance in the study catchment in terms of accuracy, robustness and efficiency, more general recommendations on the selection of rainfall interpolation methods remain to be explored.
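As a rough sketch of the gauge-interpolation-plus-cross-validation workflow the abstract describes, the snippet below implements inverse distance weighting and a leave-one-out error estimate. The gauge coordinates and rainfall values are invented for illustration, not Warwick catchment data:

```python
import math

def idw(x, y, stations, power=2.0):
    """Inverse distance weighting: estimate rainfall at (x, y) from
    gauges given as (sx, sy, value) triples."""
    num = den = 0.0
    for sx, sy, v in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return v  # target point coincides with a gauge
        w = d ** -power
        num += w * v
        den += w
    return num / den

def loo_rmse(stations, power=2.0):
    """Leave-one-out cross-validation: predict each gauge from the
    others and return the RMSE of the prediction errors."""
    errs = [idw(sx, sy, stations[:i] + stations[i + 1:], power) - v
            for i, (sx, sy, v) in enumerate(stations)]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# hypothetical gauges: (x, y, rainfall in mm)
gauges = [(0.0, 0.0, 10.0), (1.0, 0.0, 12.0), (0.0, 1.0, 11.0), (1.0, 1.0, 13.0)]
print(round(idw(0.5, 0.5, gauges), 2), round(loo_rmse(gauges), 2))
```

Ordinary kriging would replace the inverse-distance weights with weights derived from a fitted variogram; the leave-one-out loop stays the same.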
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gehin, Jess C; Godfrey, Andrew T; Evans, Thomas M
The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications, including a core simulation capability called VERA-CS. A key milestone for this endeavor is to validate VERA against measurements from operating nuclear power reactors. The first step in validation against plant data is to determine the ability of VERA to accurately simulate the initial startup physics tests for Watts Bar Nuclear Power Station, Unit 1 (WBN1) cycle 1. VERA-CS calculations were performed with the Insilico code developed at ORNL, using cross-section processing from the SCALE system and the transport capabilities within the Denovo transport code using the SPN method. The calculations were performed with ENDF/B-VII.0 cross sections in 252 groups (collapsed to 23 groups for the 3D transport solution). The key results of the comparison of calculations with measurements include initial criticality, critical control rod configurations, control rod worth, differential boron worth, and the isothermal temperature reactivity coefficient (ITC). The VERA results for these parameters show good agreement with measurements, with the exception of the ITC, which requires additional investigation. Results are also compared to those obtained with Monte Carlo methods and a current industry core simulator.
Mortality risk score prediction in an elderly population using machine learning.
Rose, Sherri
2013-03-01
Standard practice for prediction often relies on parametric regression methods. Interesting new methods from the machine learning literature have been introduced in epidemiologic studies, such as random forest and neural networks. However, a priori, an investigator will not know which algorithm to select and may wish to try several. Here I apply the super learner, an ensembling machine learning approach that combines multiple algorithms into a single algorithm and returns a prediction function with the best cross-validated mean squared error. Super learning is a generalization of stacking methods. I used super learning in the Study of Physical Performance and Age-Related Changes in Sonomans (SPPARCS) to predict death among 2,066 residents of Sonoma, California, aged 54 years or more during the period 1993-1999. The super learner for predicting death (risk score) improved upon all single algorithms in the collection of algorithms, although its performance was similar to that of several algorithms. Super learner outperformed the worst algorithm (neural networks) by 44% with respect to estimated cross-validated mean squared error and had an R2 value of 0.201. The improvement of super learner over random forest with respect to R2 was approximately 2-fold. Alternatives for risk score prediction include the super learner, which can provide improved performance.
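The discrete form of super learning, which picks the candidate algorithm with the lowest cross-validated mean squared error, can be sketched with a toy library of two learners. The data set and the learner library below are invented; the full super learner additionally fits an optimal weighted combination of the candidates rather than selecting a single winner:

```python
import random

def kfold_cv_mse(xs, ys, fit, k=5, seed=0):
    """Estimate mean squared error of a learner by k-fold cross-validation.
    `fit(train_x, train_y)` must return a prediction function."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    sq_err, n = 0.0, 0
    for fold in folds:
        hold = set(fold)
        tx = [xs[i] for i in idx if i not in hold]
        ty = [ys[i] for i in idx if i not in hold]
        model = fit(tx, ty)
        for i in fold:
            sq_err += (model(xs[i]) - ys[i]) ** 2
            n += 1
    return sq_err / n

def fit_mean(tx, ty):            # baseline: always predict the training mean
    m = sum(ty) / len(ty)
    return lambda x: m

def fit_linear(tx, ty):          # least-squares line through the training data
    n = len(tx)
    mx, my = sum(tx) / n, sum(ty) / n
    b = sum((x - mx) * (y - my) for x, y in zip(tx, ty)) / \
        sum((x - mx) ** 2 for x in tx)
    return lambda x: my + b * (x - mx)

xs = list(range(20))
ys = [2.0 * x + 1.0 for x in xs]   # invented, exactly linear outcome
library = {"mean": fit_mean, "linear": fit_linear}
scores = {name: kfold_cv_mse(xs, ys, f) for name, f in library.items()}
best = min(scores, key=scores.get)
print(best)
```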
Estimation of geotechnical parameters on the basis of geophysical methods and geostatistics
NASA Astrophysics Data System (ADS)
Brom, Aleksander; Natonik, Adrianna
2017-12-01
The paper presents a possible implementation of ordinary cokriging and geophysical investigation on humidity data acquired in geotechnical studies. The authors describe the concept of geostatistics, the terminology of geostatistical modelling, spatial correlation functions, the principles of solving cokriging systems, the advantages of (co-)kriging over other interpolation methods, and the obstacles to this type of approach. Cross-validation and a discussion of the results were performed, with an indication of the prospects for applying similar procedures in other research.
Zarei, Kobra; Atabati, Morteza; Ahmadi, Monire
2017-05-04
The bee algorithm (BA) is an optimization algorithm inspired by the natural foraging behaviour of honey bees to find an optimal solution, and it can be applied to feature selection. In this paper, shuffling cross-validation-BA (CV-BA) was applied to select the best descriptors describing the retention factor (log k) in the biopartitioning micellar chromatography (BMC) of 79 heterogeneous pesticides. Six descriptors were obtained using BA, and the selected descriptors were then used for model development with multiple linear regression (MLR). Descriptor selection was also performed using stepwise, genetic algorithm and simulated annealing methods, with MLR applied for model development, and the results were compared with those obtained from shuffling CV-BA. The results showed that shuffling CV-BA can be applied as a powerful descriptor selection method. Support vector machine (SVM) was also applied for model development using the six descriptors selected by BA. The statistical results obtained using SVM were better than those obtained using MLR: the root mean square error (RMSE) and correlation coefficient (R) for the whole data set (training and test) using shuffling CV-BA-MLR were 0.1863 and 0.9426, respectively, while the corresponding values for shuffling CV-BA-SVM were 0.0704 and 0.9922.
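The two summary statistics the abstract reports, RMSE and the Pearson correlation coefficient R, can be computed as below; the observed and predicted log k values are made up for illustration:

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two value lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

obs = [1.2, 0.8, 1.5, 0.9, 1.1]    # hypothetical observed log k
pred = [1.1, 0.9, 1.4, 1.0, 1.2]   # hypothetical model predictions
print(round(rmse(obs, pred), 4), round(pearson_r(obs, pred), 4))
```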
Muscle synergies during bench press are reliable across days.
Kristiansen, Mathias; Samani, Afshin; Madeleine, Pascal; Hansen, Ernst Albin
2016-10-01
Muscle synergies have been investigated during different types of human movement using nonnegative matrix factorization. However, no reports are available on the reliability of the method. To evaluate between-day reliability, 21 subjects performed bench press in two test sessions separated by approximately 7 days. The movement consisted of 3 sets of 8 repetitions at 60% of the three repetition maximum in bench press. Muscle synergies were extracted from electromyography data of 13 muscles, using nonnegative matrix factorization. To evaluate between-day reliability, we performed a cross-correlation analysis and a cross-validation analysis, in which the synergy components extracted in the first test session were recomputed using the fixed synergy components from the second test session. Two muscle synergies accounted for >90% of the total variance, reflecting the concentric and eccentric phase, respectively. The cross-correlation values were strong to very strong (r-values between 0.58 and 0.89), while the cross-validation values ranged from substantial to almost perfect (ICC3,1 values between 0.70 and 0.95). The present findings revealed that the same general structure of the muscle synergies was present across days, and the extraction of muscle synergies is thus deemed reliable. Copyright © 2016 Elsevier Ltd. All rights reserved.
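A minimal sketch of synergy extraction by nonnegative matrix factorization, using Lee-Seung multiplicative updates on a toy "EMG" matrix built from two known synergies. The data, rank, and iteration count are invented; real analyses factor 13-muscle EMG envelopes:

```python
import random

def nmf(V, k, iters=500, seed=0):
    """Factor a nonnegative matrix V (m x n) into W (m x k) and H (k x n)
    by Lee-Seung multiplicative updates. Rows of V are muscles, columns
    are time samples; W holds synergy weights, H their activations."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        WH = [[sum(W[i][r] * H[r][j] for r in range(k)) for j in range(n)]
              for i in range(m)]
        # H update: H <- H * (W^T V) / (W^T W H)
        for r in range(k):
            for j in range(n):
                num = sum(W[i][r] * V[i][j] for i in range(m))
                den = sum(W[i][r] * WH[i][j] for i in range(m)) + 1e-12
                H[r][j] *= num / den
        WH = [[sum(W[i][r] * H[r][j] for r in range(k)) for j in range(n)]
              for i in range(m)]
        # W update: W <- W * (V H^T) / (W H H^T)
        for i in range(m):
            for r in range(k):
                num = sum(V[i][j] * H[r][j] for j in range(n))
                den = sum(WH[i][j] * H[r][j] for j in range(n)) + 1e-12
                W[i][r] *= num / den
    return W, H

# toy rank-2 "EMG" matrix: row 3 is the sum of two underlying synergies
V = [[2.0, 0.0, 2.0, 0.0],
     [0.0, 3.0, 0.0, 3.0],
     [2.0, 3.0, 2.0, 3.0]]
W, H = nmf(V, k=2)
recon = [[sum(W[i][r] * H[r][j] for r in range(2)) for j in range(4)]
         for i in range(3)]
err = sum((V[i][j] - recon[i][j]) ** 2 for i in range(3) for j in range(4))
print(round(err, 4))
```

The cross-validation the authors describe would then fix H (or W) from one session and recompute the other factor from the second session's data.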
Aghayev, Emin; Staub, Lukas; Dirnhofer, Richard; Ambrose, Tony; Jackowski, Christian; Yen, Kathrin; Bolliger, Stephan; Christe, Andreas; Roeder, Christoph; Aebi, Max; Thali, Michael J
2008-04-01
Recent developments in clinical radiology have resulted in additional developments in the field of forensic radiology. After the implementation of cross-sectional radiology and optical surface documentation in forensic medicine, difficulties in the validation and analysis of the acquired data were experienced. To address this problem, and for the comparison of autopsy and radiological data, a centralized database using internet technology for forensic cases was created. The main goals of the database are (1) creation of a digital and standardized documentation tool for forensic-radiological and pathological findings; (2) establishing a basis for validation of forensic cross-sectional radiology as a non-invasive examination method in forensic medicine, that is, comparing and evaluating the radiological and autopsy data and analyzing the accuracy of such data; and (3) providing a conduit for continuing research and education in forensic medicine. Considering the infrequent availability of CT or MRI for forensic institutions and the heterogeneous nature of case material in forensic medicine, an evaluation of the benefits and limitations of cross-sectional imaging concerning certain forensic features by a single institution may be of limited value. A centralized database permitting international forensic and cross-disciplinary collaborations may provide important support for forensic-radiological casework and research.
Measurement of stream channel habitat using sonar
Flug, Marshall; Seitz, Heather; Scott, John
1998-01-01
An efficient and low cost technique using a sonar system was evaluated for describing channel geometry and quantifying inundated area in a large river. The boat-mounted portable sonar equipment was used to record water depths and river width measurements for direct storage on a laptop computer. The field data collected from repeated traverses at a cross-section were evaluated to determine the precision of the system and field technique. Results from validation at two different sites showed average sample standard deviations (S.D.s) of 0.12 m for these complete cross-sections, with coefficients of variation of 10%. Validation using only the mid-channel river cross-section data yielded an average sample S.D. of 0.05 m, with a coefficient of variation below 5%, at a stable and gauged river site using only measurements of water depths greater than 0.6 m. Accuracy of the sonar system was evaluated by comparison to traditionally surveyed transect data from a regularly gauged site. We observed an average mean squared deviation of 46.0 cm2, considering only that portion of the cross-section inundated by more than 0.6 m of water. Our procedure proved to be a reliable, accurate, safe, quick, and economic method to record river depths, discharges, bed conditions, and substratum composition necessary for stream habitat studies.
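The precision figures quoted above (sample S.D. and coefficient of variation of repeated traverses) come from standard formulas; the repeated depth readings below are hypothetical:

```python
import math

def sample_sd(xs):
    """Sample standard deviation (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def coeff_var(xs):
    """Coefficient of variation: sample S.D. as a fraction of the mean."""
    return sample_sd(xs) / (sum(xs) / len(xs))

# hypothetical repeated depth readings (m) at one cross-section point
depths = [1.18, 1.22, 1.25, 1.15, 1.20]
print(round(sample_sd(depths), 4), round(coeff_var(depths) * 100, 2))
```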
NASA Astrophysics Data System (ADS)
Nauleau, Pierre; Minonzio, Jean-Gabriel; Chekroun, Mathieu; Cassereau, Didier; Laugier, Pascal; Prada, Claire; Grimal, Quentin
2016-07-01
Our long-term goal is to develop an ultrasonic method to characterize the thickness, stiffness and porosity of the cortical shell of the femoral neck, which could enhance hip fracture risk prediction. To this purpose, we proposed to adapt a technique based on the measurement of guided waves. We previously evidenced the feasibility of measuring circumferential guided waves in a bone-mimicking phantom of a circular cross-section of even thickness. The goal of this study is to investigate the impact of the complex geometry of the femoral neck on the measurement of guided waves. Two phantoms of an elliptical cross-section and one phantom of a realistic cross-section were investigated. A 128-element array was used to record the inter-element response matrix of these waveguides. This experiment was simulated using a custom-made hybrid code. The response matrices were analyzed using a technique based on the physics of wave propagation. This method yields portions of dispersion curves of the waveguides which were compared to reference dispersion curves. For the elliptical phantoms, three portions of dispersion curves were determined with a good agreement between experiment, simulation and theory. The method was thus validated. The characteristic dimensions of the shell were found to influence the identification of the circumferential wave signals. The method was then applied to the signals backscattered by the superior half of constant thickness of the realistic phantom. A cut-off frequency and some portions of modes were measured, with a good agreement with the theoretical curves of a plate waveguide. We also observed that the method cannot be applied directly to the signals backscattered by the lower half of varying thicknesses of the phantom. The proposed approach could then be considered to evaluate the properties of the superior part of the femoral neck, which is known to be a clinically relevant site.
Gaspardo, B; Del Zotto, S; Torelli, E; Cividino, S R; Firrao, G; Della Riccia, G; Stefanon, B
2012-12-01
Fourier transform near infrared (FT-NIR) spectroscopy is an analytical procedure generally used to detect organic compounds in food. In this work, the ability to predict fumonisin B1+B2 contents in corn meal using an FT-NIR spectrophotometer equipped with an integration sphere was assessed. A total of 143 corn meal samples were collected in the Friuli Venezia Giulia Region (Italy) and used to define a 15-principal-component regression model, applying the partial least squares regression algorithm with full cross-validation as internal validation. External validation was performed on 25 unknown samples. The coefficient of correlation, root mean square error and standard error of calibration were 0.964, 0.630 and 0.632, respectively, and the external validation confirmed a fair potential of the model in predicting FB1+FB2 concentration. Results suggest that FT-NIR analysis is a suitable method to detect FB1+FB2 in corn meal and to discriminate safe meals from contaminated ones. Copyright © 2012 Elsevier Ltd. All rights reserved.
Determining blood and plasma volumes using bioelectrical response spectroscopy
NASA Technical Reports Server (NTRS)
Siconolfi, S. F.; Nusynowitz, M. L.; Suire, S. S.; Moore, A. D. Jr; Leig, J.
1996-01-01
We hypothesized that an electric field (inductance) produced by charged blood components passing through the many branches of arteries and veins could assess total blood volume (TBV) or plasma volume (PV). Individual (N = 29) electrical circuits (inductors, two resistors, and a capacitor) were determined from bioelectrical response spectroscopy (BERS) using a Hewlett Packard 4284A Precision LCR Meter. Inductance, capacitance, and resistance from the circuits of 19 subjects modeled TBV (sum of PV and computed red cell volume) and PV (based on 125I-albumin). Each model (N = 10, cross validation group) had good validity based on 1) mean differences (-2.3 to 1.5%) between the methods that were not significant and less than the propagated errors (+/- 5.2% for TBV and PV), 2) high correlations (r > 0.92) with low SEE (< 7.7%) between dilution and BERS assessments, and 3) Bland-Altman pairwise comparisons that indicated "clinical equivalency" between the methods. Given the limitation of this study (10 validity subjects), we concluded that BERS models accurately assessed TBV and PV. Further evaluations of the models' validities are needed before they are used in clinical or research settings.
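The Bland-Altman pairwise comparison used above to argue "clinical equivalency" computes the bias and limits of agreement between two measurement methods; the paired volume readings here are invented:

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman limits of agreement: the bias (mean paired difference)
    and bias +/- 1.96 sample S.D. of the paired differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical paired total-blood-volume estimates (litres)
dilution = [5.0, 5.1, 4.9, 5.2, 5.0]   # reference dilution method
bers = [5.1, 5.0, 5.0, 5.1, 4.9]       # model-based assessment
bias, lo, hi = bland_altman(dilution, bers)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

Two methods are judged interchangeable when the limits of agreement are narrower than the clinically acceptable difference.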
2012-01-01
Background An integrative theoretical framework, developed for cross-disciplinary implementation and other behaviour change research, has been applied across a wide range of clinical situations. This study tests the validity of this framework. Methods Validity was investigated by behavioural experts sorting 112 unique theoretical constructs using closed and open sort tasks. The extent of replication was tested by Discriminant Content Validation and Fuzzy Cluster Analysis. Results There was good support for a refinement of the framework comprising 14 domains of theoretical constructs (average silhouette value 0.29): ‘Knowledge’, ‘Skills’, ‘Social/Professional Role and Identity’, ‘Beliefs about Capabilities’, ‘Optimism’, ‘Beliefs about Consequences’, ‘Reinforcement’, ‘Intentions’, ‘Goals’, ‘Memory, Attention and Decision Processes’, ‘Environmental Context and Resources’, ‘Social Influences’, ‘Emotions’, and ‘Behavioural Regulation’. Conclusions The refined Theoretical Domains Framework has a strengthened empirical base and provides a method for theoretically assessing implementation problems, as well as professional and other health-related behaviours as a basis for intervention development. PMID:22530986
Robust Smoothing: Smoothing Parameter Selection and Applications to Fluorescence Spectroscopy
Lee, Jong Soo; Cox, Dennis D.
2009-01-01
Fluorescence spectroscopy has emerged in recent years as an effective way to detect cervical cancer. Investigation of the data preprocessing stage uncovered a need for a robust smoothing to extract the signal from the noise. Various robust smoothing methods for estimating fluorescence emission spectra are compared and data driven methods for the selection of smoothing parameter are suggested. The methods currently implemented in R for smoothing parameter selection proved to be unsatisfactory, and a computationally efficient procedure that approximates robust leave-one-out cross validation is presented. PMID:20729976
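A crude stand-in for the approach described above: selecting the smoothing parameter of a robust (running-median) smoother by leave-one-out cross-validation with an absolute-error loss. The toy "spectrum" with two outlier spikes is invented, and the smoother is far simpler than the paper's:

```python
import statistics

def median_smooth(ys, h):
    """Running-median smoother with half-window h (robust to spikes)."""
    n = len(ys)
    return [statistics.median(ys[max(0, i - h):min(n, i + h + 1)])
            for i in range(n)]

def loo_robust_score(ys, h):
    """Leave-one-out CV with a robust (absolute-error) loss: predict each
    point from a window that excludes it, average |error|."""
    n = len(ys)
    total = 0.0
    for i in range(n):
        window = ys[max(0, i - h):i] + ys[i + 1:min(n, i + h + 1)]
        total += abs(statistics.median(window) - ys[i])
    return total / n

# toy "spectrum": smooth ramp plus two outlier spikes
ys = [0.1 * i for i in range(30)]
ys[7] += 5.0
ys[21] -= 5.0
best_h = min(range(1, 8), key=lambda h: loo_robust_score(ys, h))
print(best_h)
```

The absolute-error loss keeps the two spikes from dominating the bandwidth choice, which is the point of using a robust criterion here.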
Lee, Shin-Young; Lee, Eunice E
2015-02-01
The purpose of this study was to report the instrument modification and validation processes to make existing health belief model scales culturally appropriate for Korean Americans (KAs) regarding colorectal cancer (CRC) screening utilization. Instrument translation, individual interviews using cognitive interviewing, and expert reviews were conducted during the instrument modification phase, and a pilot test and a cross-sectional survey were conducted during the instrument validation phase. Data analyses of the cross-sectional survey included internal consistency and construct validity using exploratory and confirmatory factor analysis. The main issues identified during the instrument modification phase were (a) cultural and linguistic translation issues and (b) newly developed items reflecting Korean cultural barriers. Cross-sectional survey analyses during the instrument validation phase revealed that all scales demonstrate good internal consistency reliability (Cronbach's alpha=.72~.88). Exploratory factor analysis showed that susceptibility and severity loaded on the same factor, which may indicate a threat variable. Items with low factor loadings in the confirmatory factor analysis may relate to (a) lack of knowledge about fecal occult blood testing and (b) multiple dimensions of the subscales. Methodological, sequential processes of instrument modification and validation, including translation, individual interviews, expert reviews, pilot testing and a cross-sectional survey, were provided in this study. The findings indicate that existing instruments need to be examined for CRC screening research involving KAs.
Martin, Lelia Gonçalves Rocha; Gaidzinski, Raquel Rapone
2014-01-01
Objective: To construct and validate an instrument for measuring the time spent by nursing staff on interventions/activities in outpatient oncology and hematology, with interventions based on the Nursing Interventions Classification (NIC) for the core areas of Pediatric Oncology and Oncology Nursing. Methods: Cross-sectional study divided into two steps: (1) construction of an instrument to measure the nursing interventions/activities and (2) validation of this instrument. Results: We selected 32 essential interventions from the NIC for the Pediatric Oncology and Oncology Nursing areas. The judges agreed with removing 13 and including 6 interventions in the instrument, beyond personal activity. Conclusion: The choice of essential interventions from the NIC is justified by the time gained in the research. PMID:25295454
Empirical evaluation of data normalization methods for molecular classification.
Huang, Huei-Chung; Qin, Li-Xuan
2018-01-01
Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated in an independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.
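The over-optimism described above arises when preprocessing parameters are estimated on the pooled data before cross-validation. A minimal sketch of the leak-free alternative, with simple mean-centering standing in for normalization and an invented 4-sample, 2-feature data set:

```python
def feature_means(rows):
    """Per-feature means; in leak-free CV these are estimated on the
    training fold only, never on the pooled data."""
    return [sum(r[j] for r in rows) / len(rows) for j in range(len(rows[0]))]

def center(rows, means):
    """Subtract the given per-feature means from every row."""
    return [[v - m for v, m in zip(r, means)] for r in rows]

# invented 4-sample, 2-feature data set
data = [[1.0, 10.0], [2.0, 12.0], [3.0, 14.0], [4.0, 16.0]]
k = 2
fold_results = []
for fold in range(k):
    test = [r for i, r in enumerate(data) if i % k == fold]
    train = [r for i, r in enumerate(data) if i % k != fold]
    m = feature_means(train)       # normalization fitted on train only
    train_n = center(train, m)     # exactly centered by construction
    test_n = center(test, m)       # shifted with TRAIN parameters: no leakage
    fold_results.append((train_n, test_n))
print(feature_means(fold_results[0][0]))
```

Normalizing the pooled data first would let the test samples influence the parameters applied to the training fold, which is one route to the biased accuracy estimates the authors report.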
Cross-cultural validation of Lupus Impact Tracker in five European clinical practice settings.
Schneider, Matthias; Mosca, Marta; Pego-Reigosa, José-Maria; Gunnarsson, Iva; Maurel, Frédérique; Garofano, Anna; Perna, Alessandra; Porcasi, Rolando; Devilliers, Hervé
2017-05-01
The aim was to evaluate the cross-cultural validity of the Lupus Impact Tracker (LIT) in five European countries and to assess its acceptability and feasibility from the patient and physician perspectives. A prospective, observational, cross-sectional and multicentre validation study was conducted in clinical settings. Before the visit, patients completed LIT, Short Form 36 (SF-36) and care satisfaction questionnaires. During the visit, physicians assessed disease activity [Safety of Estrogens in Lupus Erythematosus National Assessment (SELENA)-SLEDAI], organ damage [SLICC/ACR damage index (SDI)] and flare occurrence. Cross-cultural validity was assessed using the Differential Item Functioning method. Five hundred and sixty-nine SLE patients were included by 25 specialists; 91.7% were outpatients and 89.9% female, with mean age 43.5 (13.0) years. Disease profile was as follows: 18.3% experienced flares; mean SELENA-SLEDAI score 3.4 (4.5); mean SDI score 0.8 (1.4); and SF-36 mean physical and mental component summary scores: physical component summary 42.8 (10.8) and mental component summary 43.0 (12.3). Mean LIT score was 34.2 (22.3) (median: 32.5), indicating that lupus moderately impacted patients' daily life. A cultural Differential Item Functioning of negligible magnitude was detected across countries (pseudo-R2 difference of 0.01-0.04). Differences were observed between LIT scores and Physician Global Assessment, SELENA-SLEDAI, SDI scores = 0 (P < 0.035) and absence of flares (P = 0.004). The LIT showed a strong association with SF-36 physical and social role functioning, vitality, bodily pain and mental health (P < 0.001). The LIT was well accepted by patients and physicians. It was reliable, with Cronbach α coefficients ranging from 0.89 to 0.92 among countries. The LIT is validated in the five participating European countries. The results show its reliability and cultural invariability across countries.
They suggest that LIT can be used in routine clinical practice to evaluate and follow patient-reported outcomes in order to improve patient-physician interaction. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com
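The Cronbach α reliability reported above can be computed as follows; the item scores are invented:

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of
    the total score). `items` holds one score list per scale item, aligned
    across respondents."""
    k, n = len(items), len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# hypothetical two-item scale, four respondents, strongly correlated items
items = [[1, 2, 3, 4], [2, 4, 6, 8]]
print(round(cronbach_alpha(items), 3))
```

Values around 0.9, as in the study, indicate high internal consistency of the scale items.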
Genomic prediction of reproduction traits for Merino sheep.
Bolormaa, S; Brown, D J; Swan, A A; van der Werf, J H J; Hayes, B J; Daetwyler, H D
2017-06-01
Economically important reproduction traits in sheep, such as number of lambs weaned and litter size, are expressed only in females and later in life after most selection decisions are made, which makes them ideal candidates for genomic selection. Accurate genomic predictions would lead to greater genetic gain for these traits by enabling accurate selection of young rams with high genetic merit. The aim of this study was to design and evaluate the accuracy of a genomic prediction method for female reproduction in sheep using daughter trait deviations (DTD) for sires and ewe phenotypes (when individual ewes were genotyped) for three reproduction traits: number of lambs born (NLB), litter size (LSIZE) and number of lambs weaned. Genomic best linear unbiased prediction (GBLUP), BayesR and pedigree BLUP analyses of the three reproduction traits measured on 5340 sheep (4503 ewes and 837 sires) with real and imputed genotypes for 510 174 SNPs were performed. The prediction of breeding values using both sire and ewe trait records was validated in Merino sheep. Prediction accuracy was evaluated by across sire family and random cross-validations. Accuracies of genomic estimated breeding values (GEBVs) were assessed as the mean Pearson correlation adjusted by the accuracy of the input phenotypes. The addition of sire DTD into the prediction analysis resulted in higher accuracies compared with using only ewe records in genomic predictions or pedigree BLUP. Using GBLUP, the average accuracy based on the combined records (ewes and sire DTD) was 0.43 across traits, but the accuracies varied by trait and type of cross-validations. The accuracies of GEBVs from random cross-validations (range 0.17-0.61) were higher than were those from sire family cross-validations (range 0.00-0.51). The GEBV accuracies of 0.41-0.54 for NLB and LSIZE based on the combined records were amongst the highest in the study. 
Although BayesR was not significantly different from GBLUP in prediction accuracy, it identified several candidate genes which are known to be associated with NLB and LSIZE. The approach provides a way to make use of all data available in genomic prediction for traits that have limited recording. © 2017 Stichting International Foundation for Animal Genetics.
Validity of contents of a paediatric critical comfort scale using mixed methodology.
Bosch-Alcaraz, A; Jordan-Garcia, I; Alcolea-Monge, S; Fernández-Lorenzo, R; Carrasquer-Feixa, E; Ferrer-Orona, M; Falcó-Pegueroles, A
Critical illness in paediatric patients includes acute conditions in a healthy child as well as exacerbations of chronic disease, and these situations must therefore be clinically managed in critical care units. The role of the paediatric nurse is to ensure the comfort of these critically ill patients. To that end, instruments are required that correctly assess critical comfort. The aim was to describe the process of validating the content of a paediatric critical comfort scale using mixed-methods research. Initially, a cross-cultural adaptation of the Comfort Behavior Scale from English to Spanish was performed using the translation and back-translation method. Its content was then evaluated using mixed-methods research. This second step was divided into a quantitative stage, in which an ad hoc questionnaire was used to assess the relevance and wording of each scale item, and a qualitative stage comprising two meetings with health professionals, patients and a family member, following the Delphi method recommendations. All scale items obtained a content validity index >0.80, except the relevance of the physical movement item, which obtained 0.76. Global content validity of the scale was 0.87 (high). During the qualitative stage, items from each of the scale domains were reformulated or eliminated in order to make the scale more comprehensible and applicable. The use of a mixed-methods approach during the scale content validation phase allows the design of a richer and more assessment-sensitive instrument. Copyright © 2017 Sociedad Española de Enfermería Intensiva y Unidades Coronarias (SEEIUC). Publicado por Elsevier España, S.L.U. All rights reserved.
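The content validity index used above is a simple proportion; a sketch with hypothetical ratings from five experts on four items, where a rating of 3 or 4 on a 4-point relevance scale counts as relevant:

```python
def item_cvi(ratings):
    """Item-level content validity index: share of experts rating the
    item 3 or 4 on a 4-point relevance scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi(all_ratings):
    """Scale-level CVI (averaging method): mean of the item CVIs."""
    return sum(item_cvi(r) for r in all_ratings) / len(all_ratings)

ratings = [        # hypothetical: 4 items x 5 expert ratings
    [4, 4, 3, 4, 4],
    [3, 4, 4, 4, 3],
    [2, 3, 4, 3, 3],
    [4, 2, 3, 4, 4],
]
print([round(item_cvi(r), 2) for r in ratings], round(scale_cvi(ratings), 2))
```

A common rule of thumb treats item CVIs above 0.78-0.80 as acceptable, which matches the >0.80 threshold the study uses.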
Cross-Cultural Validation of the Five-Factor Structure of Social Goals: A Filipino Investigation
ERIC Educational Resources Information Center
King, Ronnel B.; Watkins, David A.
2012-01-01
The aim of the present study was to test the cross-cultural validity of the five-factor structure of social goals that Dowson and McInerney proposed. Using both between-network and within-network approaches to construct validation, 1,147 Filipino high school students participated in the study. Confirmatory factor analysis indicated that the…
Simultaneous ocean cross-section and rainfall measurements from space with a nadir-pointing radar
NASA Technical Reports Server (NTRS)
Meneghini, R.; Atlas, D.
1984-01-01
A method to determine simultaneously the rainfall rate and the normalized backscattering cross section of the surface was evaluated. The method is based on the mirror reflected power, P_m, which corresponds to the portion of the incident power scattered from the surface to the precipitation, intercepted by the precipitation, and again returned to the surface, where it is scattered a final time back to the antenna. Two approximations are obtained for P_m, depending on whether the field of view at the surface is much greater or much less than the height of the reflection layer. Since the dependence of P_m on the backscattering cross section of the surface differs in the two cases, two algorithms are given by which the path-averaged rain rate and normalized cross section are deduced. The detectability of P_m, the relative strength of other contributions to the return power arriving simultaneously with P_m, and the validity of the approximations used in deriving P_m are discussed.
NASA Astrophysics Data System (ADS)
Bak, S.; Smith, J. M.; Hesser, T.; Bryant, M. A.
2016-12-01
Near-coast wave models are generally validated with relatively small data sets that focus on analytical solutions, specialized experiments, or intense storms. Prior studies have compiled testbeds that include a few dozen experiments or storms to validate models (e.g., Ris et al. 2002), but few examples exist that allow for continued model evaluation in the nearshore and surf zone in near-real time. The limited nature of these validation sets is driven by a lack of high spatial and temporal resolution in situ wave measurements and the difficulty in maintaining these instruments on the active profile over long periods of time. The US Army Corps of Engineers Field Research Facility (FRF) has initiated a Coastal Model Test-Bed (CMTB), which is an automated system that continually validates wave models (with morphological and circulation models to follow) utilizing the rich data set of oceanographic and bathymetric measurements collected at the FRF. The FRF's cross-shore wave array provides wave measurements along a cross-shore profile from 26 m of water depth to the shoreline, utilizing various instruments including wave-rider buoys, AWACs, Aquadopps, pressure gauges, and a dune-mounted lidar (Brodie et al. 2015). This work uses the CMTB to evaluate the performance of a phase-averaged numerical wave model, STWAVE (Smith 2007, Massey et al. 2011), over the course of a year at the FRF in Duck, NC. Additionally, from the BathyDuck Experiment in October 2015, the CMTB was used to determine the impact of applying the depth boundary condition for the model from monthly acoustic bathymetric surveys in comparison to hourly estimates using a video-based inversion method (e.g., cBathy, Holman et al. 2013). The modeled wave parameters using both bathymetric boundary conditions are evaluated using the FRF's cross-shore wave array and two additional cross-shore arrays of wave measurements in 2 to 4 m water depth from BathyDuck in fall 2015.
Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation
Delorenzi, Mauro
2014-01-01
Background With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences (“batch effects”) as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. Focus The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. Data We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., ‘control’) or group 2 (e.g., ‘treated’). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. Methods We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, are performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data. PMID:24967636
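The nested scheme described in that abstract (hyperparameter tuning inside an inner loop, performance scored only on the outer held-out fold) can be sketched as follows. This is a minimal illustration, not the study's pipeline: the data are synthetic, and a toy 1-D k-nearest-neighbour classifier stands in for the classifiers and feature selection the authors actually used.

```python
import random

def k_folds(n, k, rng):
    """Shuffle indices and split them into k roughly equal folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def knn_predict(train, query_x, k):
    """1-D k-nearest-neighbour majority vote over (x, label) pairs."""
    nearest = sorted(train, key=lambda p: abs(p[0] - query_x))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 >= k else 0

def accuracy(train, test, k):
    hits = sum(knn_predict(train, x, k) == y for x, y in test)
    return hits / len(test)

def nested_cv(data, outer_k=5, inner_k=3, k_grid=(1, 3, 5), seed=0):
    """Tune k on inner folds only, then score once on the outer test fold."""
    rng = random.Random(seed)
    outer = k_folds(len(data), outer_k, rng)
    outer_scores = []
    for test_idx in outer:
        held_out = set(test_idx)
        train = [data[i] for i in range(len(data)) if i not in held_out]
        test = [data[i] for i in test_idx]

        def inner_score(k):
            # Inner loop never touches the outer test fold.
            scores = []
            for val_idx in k_folds(len(train), inner_k, rng):
                val = set(val_idx)
                tr = [train[i] for i in range(len(train)) if i not in val]
                va = [train[i] for i in val_idx]
                scores.append(accuracy(tr, va, k))
            return sum(scores) / len(scores)

        best_k = max(k_grid, key=inner_score)
        outer_scores.append(accuracy(train, test, best_k))
    return sum(outer_scores) / len(outer_scores)

# Synthetic two-class data: class 1 tends to have larger x.
rng = random.Random(42)
data = [(rng.gauss(0, 1), 0) for _ in range(60)] + \
       [(rng.gauss(2, 1), 1) for _ in range(60)]
print(round(nested_cv(data), 3))
```

The key point the abstract makes is visible in the structure: nothing chosen in the inner loop ever sees the outer test fold, which is what keeps the outer estimate (approximately) unbiased.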
NASA Technical Reports Server (NTRS)
Bremner, P. G.; Blelloch, P. A.; Hutchings, A.; Shah, P.; Streett, C. L.; Larsen, C. E.
2011-01-01
This paper describes the measurement and analysis of surface fluctuating pressure level (FPL) data and vibration data from a plume impingement aero-acoustic and vibration (PIAAV) test to validate NASA s physics-based modeling methods for prediction of panel vibration in the near field of a hot supersonic rocket plume. For this test - reported more fully in a companion paper by Osterholt & Knox at 26th Aerospace Testing Seminar, 2011 - the flexible panel was located 2.4 nozzle diameters from the plume centerline and 4.3 nozzle diameters downstream from the nozzle exit. The FPL loading is analyzed in terms of its auto spectrum, its cross spectrum, its spatial correlation parameters and its statistical properties. The panel vibration data is used to estimate the in-situ damping under plume FPL loading conditions and to validate both finite element analysis (FEA) and statistical energy analysis (SEA) methods for prediction of panel response. An assessment is also made of the effects of non-linearity in the panel elasticity.
Optical sampling by laser cavity tuning.
Hochrein, Thomas; Wilk, Rafal; Mei, Michael; Holzwarth, Ronald; Krumbholz, Norman; Koch, Martin
2010-01-18
Most time-resolved optical experiments rely either on external mechanical delay lines or on two synchronized femtosecond lasers to achieve a defined temporal delay between two optical pulses. Here, we present a new method which does not require any external delay lines and uses only a single femtosecond laser. It is based on the cross-correlation of an optical pulse with a subsequent pulse from the same laser. Temporal delay between these two pulses is achieved by varying the repetition rate of the laser. We validate the new scheme by a comparison with a cross-correlation measurement carried out with a conventional mechanical delay line.
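The delay mechanism described above follows from the pulse separation T = 1/f_rep: detuning the repetition rate changes the spacing between successive pulses without any mechanical delay line. A short numerical sketch, with oscillator numbers that are assumed for illustration only (the abstract does not give them):

```python
# Delay scan by repetition-rate tuning: the separation between successive
# pulses is T = 1/f_rep, so detuning the cavity shifts the pulse-to-pulse
# delay. All numbers below are illustrative assumptions.
f_rep = 100e6    # repetition rate, Hz (assumed)
delta_f = 1e3    # cavity-tuning detuning, Hz (assumed)

T0 = 1.0 / f_rep              # pulse separation at the nominal rate
T1 = 1.0 / (f_rep + delta_f)  # separation after detuning
delta_T = T0 - T1             # extra delay accumulated per pulse pair

# First-order approximation valid for small detunings:
approx = delta_f / f_rep**2
print(delta_T, approx)        # both ~1e-13 s (100 fs) per pulse pair
```

With these assumed numbers a 1 kHz detuning of a 100 MHz oscillator scans the delay by about 100 fs per pulse pair, which is the scale on which a cross-correlation trace can be built up.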
A diagnostic technique used to obtain cross range radiation centers from antenna patterns
NASA Technical Reports Server (NTRS)
Lee, T. H.; Burnside, W. D.
1988-01-01
A diagnostic technique to obtain cross range radiation centers based on antenna radiation patterns is presented. This method is similar to the synthetic aperture processing of scattered fields in radar applications. Coherent processing of the radiated fields is used to determine the various radiation centers associated with the far-zone pattern of an antenna for a given radiation direction. This technique can be used to identify an unexpected radiation center that creates an undesired effect in a pattern; on the other hand, it can improve a numerical simulation of the pattern by identifying other significant mechanisms. Cross range results for two 8-ft reflector antennas are presented to illustrate and validate the technique.
NASA Technical Reports Server (NTRS)
Chlouber, Dean; O'Neill, Pat; Pollock, Jim
1990-01-01
A technique of predicting an upper bound on the rate at which single-event upsets due to ionizing radiation occur in semiconducting memory cells is described. The upper bound on the upset rate, which depends on the high-energy particle environment in earth orbit and accelerator cross-section data, is given by the product of an upper-bound linear energy-transfer spectrum and the mean cross section of the memory cell. Plots of the spectrum are given for low-inclination and polar orbits. An alternative expression for the exact upset rate is also presented. Both methods rely only on experimentally obtained cross-section data and are valid for sensitive bit regions having arbitrary shape.
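The bound described above is a simple product: an upper-bound integral flux from the linear energy-transfer spectrum multiplied by the mean cross section of the cell. A hedged numerical sketch with invented values (none of these numbers come from the paper):

```python
# Upper-bound single-event-upset rate as the product of an upper-bound
# integral LET flux and the mean per-bit cross section. All values are
# hypothetical, chosen only to show the arithmetic.
flux_upper = 1.2e-4   # particles / (cm^2 s) above the LET threshold (assumed)
sigma_mean = 4.0e-7   # mean per-bit cross section, cm^2 (assumed)
n_bits = 64 * 1024    # hypothetical memory size, bits

rate_per_bit = flux_upper * sigma_mean   # upsets / (bit s)
rate_device = rate_per_bit * n_bits      # upsets / s for the whole device
per_day = rate_device * 86400
print(per_day)
```

The point of the bound is that only experimentally measured cross-section data and the orbital particle environment enter; no assumption about the shape of the sensitive volume is needed.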
Tam, Teck Lip Dexter; Lin, Ting Ting; Chua, Ming Hui
2017-06-21
Here we utilized new diagnostic tools in time-dependent density functional theory to explain the trend of intersystem crossing in benzo(bis)-X-diazole based donor-acceptor-donor type molecules. These molecules display a wide range of fluorescence quantum yields and triplet yields, making them excellent candidates for testing the validity of these diagnostic tools. We believe that these tools are cost-effective and can be applied to structurally similar organic chromophores to predict/explain the trends of intersystem crossing, and thus fluorescence quantum yields and triplet yields, without the use of complex and expensive multireference configuration interaction or multireference perturbation theory methods.
A multi-frequency iterative imaging method for discontinuous inverse medium problem
NASA Astrophysics Data System (ADS)
Zhang, Lei; Feng, Lixin
2018-06-01
The inverse medium problem with discontinuous refractive index is a challenging class of inverse problem. We employ primal-dual theory and fast solution of integral equations, and propose a new iterative imaging method. The selection criterion for the regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented with respect to the frequency, from low to high. We also discuss the initial-guess selection strategy by semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.
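Generalized cross-validation, mentioned above as the selection criterion, chooses the regularization parameter lambda by minimizing GCV(lambda) = n * ||(I - H(lambda)) y||^2 / tr(I - H(lambda))^2, where H(lambda) is the influence ("hat") matrix of the regularized solve. A minimal sketch for a linear Tikhonov problem; the random matrix and synthetic data are illustrative assumptions, not the discretized integral operator of the paper:

```python
import numpy as np

def gcv_score(lam, A, y):
    """GCV score for Tikhonov regularization:
    x_lam = argmin ||A x - y||^2 + lam * ||x||^2."""
    n, p = A.shape
    # Influence matrix H = A (A^T A + lam I)^{-1} A^T
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)
    resid = (np.eye(n) - H) @ y
    return n * (resid @ resid) / np.trace(np.eye(n) - H) ** 2

# Synthetic ill-posed-ish problem (assumed setup, for illustration only).
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.0, 1.5]
y = A @ x_true + 0.5 * rng.normal(size=50)

grid = np.logspace(-4, 3, 30)
lam_best = min(grid, key=lambda lam: gcv_score(lam, A, y))
print(lam_best)
```

GCV is attractive here because it needs no estimate of the noise level, which is rarely known for measured scattered-field data.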
How can you capture cultural dynamics?
Kashima, Yoshihisa
2014-01-01
Cross-cultural comparison is a critical method by which we can examine the interaction between culture and psychological processes. However, comparative methods tend to overlook cultural dynamics – the formation, maintenance, and transformation of cultures over time. The present article gives a brief overview of four different types of research designs that have been used to examine cultural dynamics in the literature: (1) cross-temporal methods that trace medium- to long-term changes in a culture; (2) cross-generational methods that explore medium-term implications of cultural transmission; (3) experimental simulation methods that investigate micro-level mechanisms of cultural dynamics; and (4) formal models and computer simulation methods often used to investigate long-term and macro-level implications of micro-level mechanisms. These methods differ in terms of the level of analysis for which they are designed (micro- vs. macro-level), the scale of time for which they are typically used (short-, medium-, or long-term), and the direction of inference (deductive vs. empirical) that they imply. The paper describes examples of these methods, discusses their strengths and weaknesses, and points to their complementarity in inquiries about cultural change. Because cultural dynamics research is about meaning over time, issues deriving from the interpretation of meaning and the temporal distance between researchers and the objects of inquiry can pose threats to the validity of the research and its findings. The methodological question of the hermeneutic circle is recalled and further inquiries are encouraged. PMID:25309476
Measurement equivalence and differential item functioning in family psychology.
Bingenheimer, Jeffrey B; Raudenbush, Stephen W; Leventhal, Tama; Brooks-Gunn, Jeanne
2005-09-01
Several hypotheses in family psychology involve comparisons of sociocultural groups. Yet the potential for cross-cultural inequivalence in widely used psychological measurement instruments threatens the validity of inferences about group differences. Methods for dealing with these issues have been developed via the framework of item response theory. These methods deal with an important type of measurement inequivalence, called differential item functioning (DIF). The authors introduce DIF analytic methods, linking them to a well-established framework for conceptualizing cross-cultural measurement equivalence in psychology (C.H. Hui and H.C. Triandis, 1985). They illustrate the use of DIF methods using data from the Project on Human Development in Chicago Neighborhoods (PHDCN). Focusing on the Caregiver Warmth and Environmental Organization scales from the PHDCN's adaptation of the Home Observation for Measurement of the Environment Inventory, the authors obtain results that exemplify the range of outcomes that may result when these methods are applied to psychological measurement instruments. (c) 2005 APA, all rights reserved
Saffari, Mohsen; Naderi, Maryam K; Piper, Crystal N; Koenig, Harold G
There is no valid and well-established tool to measure fatigue in people with chronic hepatitis B. The aim of this study was to translate the Multidimensional Fatigue Inventory (MFI) into Persian and examine its reliability and validity in Iranian people with chronic hepatitis B. The demographic questionnaire and MFI, as well as the Chronic Liver Disease Questionnaire and EuroQol-5D (to assess criterion validity), were administered in face-to-face interviews with 297 participants. A forward-backward translation method was used to develop a culturally adapted Persian version of the questionnaire. Cronbach's α was used to assess the internal reliability of the scale. Pearson correlation was used to assess criterion validity, and the known-groups method was used along with factor analysis to establish construct validity. Cronbach's α for the total scale was 0.89. Convergent and discriminant validities were also established. Correlations between the MFI and the health-related quality of life scales were significant (p < .01). The scale differentiated between subgroups of persons with the hepatitis B infection in terms of age, gender, employment, education, disease duration, and stage of disease. Factor analysis indicated a four-factor solution for the scale that explained 60% of the variance. The MFI is a valid and reliable instrument to identify fatigue in Iranians with hepatitis B.
Processing methods for photoacoustic Doppler flowmetry with a clinical ultrasound scanner
NASA Astrophysics Data System (ADS)
Bücking, Thore M.; van den Berg, Pim J.; Balabani, Stavroula; Steenbergen, Wiendelt; Beard, Paul C.; Brunker, Joanna
2018-02-01
Photoacoustic flowmetry (PAF) based on time-domain cross-correlation of photoacoustic signals is a promising technique for deep-tissue measurement of blood flow velocity. Signal processing has previously been developed for single-element transducers. Here, the processing methods for acoustic-resolution PAF using a clinical ultrasound transducer array are developed and validated using a 64-element transducer array with a -6 dB detection band of 11 to 17 MHz. Measurements were performed on a flow phantom consisting of a tube (580 μm inner diameter) perfused with human blood flowing at physiological speeds ranging from 3 to 25 mm/s. The processing pipeline comprised image reconstruction, filtering, displacement detection, and masking. High-pass filtering and background subtraction were found to be key preprocessing steps to enable accurate flow velocity estimates, which were calculated using a cross-correlation based method. In addition, the regions of interest in the calculated velocity maps were defined using a masking approach based on the amplitude of the cross-correlation functions. These developments enabled blood flow measurements using a transducer array, bringing PAF one step closer to clinical applicability.
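The displacement-detection step in pipelines like the one above rests on a standard idea: the lag at which the cross-correlation of two successive signal frames peaks gives the displacement between them, and dividing by the frame interval gives a velocity. A minimal 1-D sketch with synthetic data (the spatial sampling, frame interval, and signal profile below are assumptions, not the paper's parameters):

```python
import numpy as np

def estimate_shift(sig_a, sig_b):
    """Lag (in samples) that best aligns sig_b with sig_a,
    taken from the peak of the full cross-correlation."""
    xc = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode="full")
    return int(np.argmax(xc)) - (len(sig_a) - 1)

# Synthetic example: a Gaussian amplitude profile that moves 12 samples
# between two acquisitions, plus a little noise.
rng = np.random.default_rng(1)
n = 256
profile = np.exp(-0.5 * ((np.arange(n) - 80) / 6.0) ** 2)
frame1 = profile + 0.02 * rng.normal(size=n)
frame2 = np.roll(profile, 12) + 0.02 * rng.normal(size=n)

shift = estimate_shift(frame1, frame2)
dx = 25e-6   # assumed spatial sample spacing, m
dt = 0.1     # assumed time between frames, s
velocity = shift * dx / dt
print(shift, velocity)
```

With these assumed numbers a 12-sample shift corresponds to 3 mm/s, at the low end of the physiological range quoted in the abstract. The abstract's note on high-pass filtering and background subtraction maps onto the mean removal here: static background reduces correlation contrast and biases the peak toward zero lag.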
Fluorescence imaging of tryptophan and collagen cross-links to evaluate wound closure ex vivo
NASA Astrophysics Data System (ADS)
Wang, Ying; Ortega-Martinez, Antonio; Farinelli, Bill; Anderson, R. R.; Franco, Walfre
2016-02-01
Wound size is a key parameter in monitoring healing. Current methods to measure wound size are often subjective, time-consuming and marginally invasive. Recently, we developed a non-invasive, non-contact, fast and simple but robust fluorescence imaging (u-FEI) method to monitor the healing of skin wounds. This method exploits the fluorescence of molecules native to tissue as functional and structural markers. The objective of the present study is to demonstrate the feasibility of using variations in the fluorescence intensity of tryptophan and cross-links of collagen to evaluate proliferation of keratinocyte cells and to quantitate wound size during healing, respectively. Circular dermal wounds were created in ex vivo human skin and cultured in different media. Two serial fluorescence images of tryptophan and collagen cross-links were acquired every two days. Histology and immunohistology were used to validate the correlation between fluorescence and epithelialization. Images of collagen cross-links show fluorescence of the exposed dermis and, hence, are a measure of wound area. Images of tryptophan show higher fluorescence intensity of proliferating keratinocytes forming new epithelium, as compared to surrounding keratinocytes not involved in epithelialization. These images are complementary, since collagen cross-links report on structure while tryptophan reports on function. H&E staining and immunohistology show that tryptophan fluorescence correlates with newly formed epidermis. We have established a fluorescence imaging method for studying epithelialization processes during wound healing in a skin organ culture model; our approach has the potential to provide a non-invasive, non-contact, quick, objective and direct method for quantitative measurements of wound healing in vivo.
Białek, Michał; Markiewicz, Łukasz; Sawicki, Przemysław
2015-01-01
Delayed lotteries are much more common in everyday life than pure lotteries. Usually, we need to wait to find out the outcome of a risky decision (e.g., investing in a stock market, engaging in a relationship). However, most research has studied time discounting and probability discounting in isolation, using methodologies designed specifically to track changes in one parameter. The most commonly used method is adjusting, but its reported validity and time stability in research on discounting are suboptimal. The goal of this study was to introduce a novel method for analyzing delayed lotteries—conjoint analysis—which hypothetically is more suitable for analyzing individual preferences in this area. A set of two studies compared conjoint analysis with adjusting. The results suggest that individual parameters of discounting strength estimated with conjoint analysis have higher predictive value (Studies 1 and 2) and are more stable over time (Study 2) compared to adjusting. Despite the exploratory character of the reported studies, we suggest that future research on delayed lotteries should be cross-validated using both methods. PMID:25674069
Uncertainty Quantification Techniques of SCALE/TSUNAMI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Mueller, Don
2011-01-01
The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k{sub eff}, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel.
In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for the gap in the validation data, or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
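The propagation step described above (sensitivities of a response to nuclear data, combined with a covariance matrix of that data) is commonly written as the quadratic form var(R) = S C S^T, with S the vector of sensitivity coefficients and C the relative covariance matrix. A toy numerical sketch; the sensitivities and covariances below are invented for illustration and are not SCALE/TSUNAMI output:

```python
import numpy as np

# var(R) = S C S^T: propagate cross-section-data uncertainty to a response.
# Illustrative numbers only (assumed, not from any evaluated library).
S = np.array([0.45, -0.20, 0.10])   # sensitivities of k-eff to 3 cross sections

# Relative covariance: 3% / 5% / 4% standard deviations, some correlation.
sd = np.array([0.03, 0.05, 0.04])
corr = np.array([[1.0, 0.3, 0.0],
                 [0.3, 1.0, 0.1],
                 [0.0, 0.1, 1.0]])
C = np.outer(sd, sd) * corr

var_R = S @ C @ S
print(np.sqrt(var_R))   # relative standard uncertainty in the response
```

Note how the negative sensitivity partially cancels against the positively correlated terms; this cancellation is exactly why correlated covariance data, not just per-nuclide variances, matter for the propagated uncertainty.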
Harun, Norlida; Anderson, Robert A; Miller, Eleanor I
2009-01-01
An ELISA and a liquid chromatography-tandem mass spectrometry (LC-MS-MS) confirmation method were developed and validated for the identification and quantitation of ketamine and its major metabolite norketamine in urine samples. The Neogen ketamine microplate ELISA was optimized with respect to sample and enzyme conjugate volumes and the sample preincubation time before addition of the enzyme conjugate. The ELISA kit was validated to include an assessment of the dose-response curve, intra- and interday precision, limit of detection (LOD), and cross-reactivity. The sensitivity and specificity were calculated by comparison to the results from the validated LC-MS-MS confirmation method. An LC-MS-MS method was developed and validated with respect to LOD, lower limit of quantitation (LLOQ), linearity, recovery, intra- and interday precision, and matrix effects. The ELISA dose-response curve was a typical S-shaped binding curve, with a linear portion of the graph observed between 25 and 500 ng/mL for ketamine. The cross-reactivity of 200 ng/mL norketamine to ketamine was 2.1%, and no cross-reactivity was detected with 13 common drugs tested at 10,000 ng/mL. The ELISA LOD was calculated to be 5 ng/mL. Both intra- (n = 10) and interday (n = 50) precisions were below 5.0% at 25 ng/mL. The LOD for ketamine and norketamine was calculated statistically to be 0.6 ng/mL. The LLOQ values were also calculated statistically and were 1.9 ng/mL and 2.1 ng/mL for ketamine and norketamine, respectively. The test linearity was 0-1200 ng/mL with correlation coefficients (R(2)) > 0.99 for both analytes. Recoveries at 50, 500, and 1000 ng/mL ranged from 97.9% to 113.3%. Intra- (n = 5) and interday (n = 25) precisions between extracts for ketamine and norketamine were excellent (< 10%). Matrix effects analysis showed an average ion suppression of 5.7% for ketamine and an average ion enhancement of 13.0% for norketamine for urine samples collected from six individuals.
A comparison of ELISA and LC-MS-MS results demonstrated a sensitivity, specificity, and efficiency of 100%. These results indicated that a cutoff value of 25 ng/mL ketamine in the ELISA screen is particularly suitable and reliable for urine testing in a forensic toxicology setting. Furthermore, both ketamine and norketamine were detected in all 34 urine samples collected from individuals socializing in pubs by the Royal Malaysian Police. Ketamine concentrations detected by LC-MS-MS ranged from 22 to 31,670 ng/mL, and norketamine concentrations ranged from 25 to 10,990 ng/mL. The concentrations of ketamine and norketamine detected in the samples are most likely indicative of ketamine abuse.
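The abstract says the LC-MS-MS LOD and LLOQ were "calculated statistically" without giving the procedure. One common calibration-based convention (the ICH-style LOD = 3.3 s/m and LLOQ = 10 s/m, where m is the calibration slope and s the standard error of the regression residuals) can be sketched as follows; the calibration points are invented, not the paper's data, and this may not be the exact statistic the authors used:

```python
import numpy as np

# ICH-style detection limits from a linear calibration curve.
# Illustrative calibration data (assumed), not the paper's measurements.
conc = np.array([0.0, 5.0, 10.0, 50.0, 100.0, 500.0])     # ng/mL
resp = np.array([0.8, 12.1, 23.5, 112.0, 224.5, 1118.0])  # detector response

m, b = np.polyfit(conc, resp, 1)                 # slope and intercept
resid = resp - (m * conc + b)
s = np.sqrt(np.sum(resid**2) / (len(conc) - 2))  # residual standard error

lod = 3.3 * s / m
lloq = 10.0 * s / m
print(round(lod, 2), round(lloq, 2))
```

By construction the LLOQ is 10/3.3 of the LOD under this convention, which is consistent in spirit with the sub-ng/mL LOD and low-ng/mL LLOQ figures the abstract reports.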
Assessing cross-cultural validity of scales: a methodological review and illustrative example.
Beckstead, Jason W; Yang, Chiu-Yueh; Lengacher, Cecile A
2008-01-01
In this article, we assessed the cross-cultural validity of the Women's Role Strain Inventory (WRSI), a multi-item instrument that assesses the degree of strain experienced by women who juggle the roles of working professional, student, wife and mother. Cross-cultural validity is evinced by demonstrating the measurement invariance of the WRSI. Measurement invariance is the extent to which items of multi-item scales function in the same way across different samples of respondents. We assessed measurement invariance by comparing a sample of working women in Taiwan with a similar sample from the United States. Structural equation models (SEMs) were employed to determine the invariance of the WRSI and to estimate the unique validity variance of its items. This article also provides nurse-researchers with the necessary underlying measurement theory and illustrates how SEMs may be applied to assess cross-cultural validity of instruments used in nursing research. Overall performance of the WRSI was acceptable but our analysis showed that some items did not display invariance properties across samples. Item analysis is presented and recommendations for improving the instrument are discussed.
Sattler, Tine; Sekulic, Damir; Spasic, Miodrag; Osmankac, Nedzad; Vicente João, Paulo; Dervisevic, Edvin; Hadzic, Vedran
2016-01-01
Previous investigations noted the potential importance of isokinetic strength in rapid muscular performances, such as jumping. This study aimed to identify the influence of isokinetic knee strength on specific jumping performance in volleyball. The secondary aim of the study was to evaluate the reliability and validity of two volleyball-specific jumping tests. The sample comprised 67 female (21.96±3.79 years; 68.26±8.52 kg; 174.43±6.85 cm) and 99 male (23.62±5.27 years; 84.83±10.37 kg; 189.01±7.21 cm) high-level volleyball players who competed in the 1st and 2nd National Division. Subjects were randomly divided into validation (N.=55 and 33 for males and females, respectively) and cross-validation subsamples (N.=54 and 34 for males and females, respectively). The set of predictors included isokinetic tests to evaluate the eccentric and concentric strength capacities of the knee extensors and flexors for the dominant and non-dominant leg. The main outcome measure for the isokinetic testing was peak torque (PT), which was later normalized for body mass and expressed as PT/kg. Block-jump and spike-jump performances were measured over three trials and observed as criteria. Forward stepwise multiple regressions were calculated for the validation subsamples and then cross-validated. Cross-validation included correlations and t-test differences between observed and predicted scores, and Bland-Altman plots. The jumping tests were found to be reliable (spike jump: ICC of 0.79 and 0.86; block jump: ICC of 0.86 and 0.90, for males and females, respectively), and their validity was confirmed by significant t-test differences between 1st and 2nd division players. Isokinetic variables were found to be significant predictors of jumping performance in females, but not among males.
In females, the isokinetic knee measures were stronger and more valid predictors of the block-jump (42% and 64% of the explained variance for the validation and cross-validation subsamples, respectively) than of the spike-jump (39% and 34% of the explained variance for the validation and cross-validation subsamples, respectively). Differences between the prediction models calculated for males and females are mostly explained by gender-specific biomechanics of jumping. The study established the importance of isokinetic knee strength for volleyball jumping performance in female athletes. Further studies should evaluate the association between isokinetic ankle strength and volleyball-specific jumping performances. The results reinforce the need for cross-validation of prediction models in sport and exercise sciences.
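The validation / cross-validation design used in that study (fit a regression on one subsample, then check how much variance it still explains in the held-out subsample) can be sketched as follows. The data are synthetic and the variable names are illustrative stand-ins; this is only the shape of the procedure, not a reanalysis:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 120, 4
X = rng.normal(size=(n, p))                    # e.g. isokinetic predictors
beta = np.array([0.8, 0.4, 0.0, 0.0])          # assumed true coefficients
y = X @ beta + rng.normal(scale=1.0, size=n)   # e.g. jump performance

half = n // 2
Xv, yv = X[:half], y[:half]                    # validation subsample
Xc, yc = X[half:], y[half:]                    # cross-validation subsample

# Ordinary least squares fitted on the validation subsample only.
coef, *_ = np.linalg.lstsq(np.c_[np.ones(half), Xv], yv, rcond=None)

def r2(Xs, ys):
    """Variance explained by the validation-sample model on (Xs, ys)."""
    pred = np.c_[np.ones(len(ys)), Xs] @ coef
    return 1 - np.sum((ys - pred) ** 2) / np.sum((ys - ys.mean()) ** 2)

print(round(r2(Xv, yv), 2), round(r2(Xc, yc), 2))  # in-sample vs held-out fit
```

Comparing the two R-squared values is exactly the shrinkage check the abstract reports (e.g., 42% vs 64% for the block-jump): the held-out figure is the honest estimate of predictive value.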
The Arthroscopic Surgical Skill Evaluation Tool (ASSET)
Koehler, Ryan J.; Amsdell, Simon; Arendt, Elizabeth A; Bisson, Leslie J; Braman, Jonathan P; Butler, Aaron; Cosgarea, Andrew J; Harner, Christopher D; Garrett, William E; Olson, Tyson; Warme, Winston J.; Nicandri, Gregg T.
2014-01-01
Background Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. Hypothesis The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability when used to assess the technical ability of surgeons performing diagnostic knee arthroscopy on cadaveric specimens. Study Design Cross-sectional study; Level of evidence, 3. Methods Content validity was determined by a group of seven experts using a Delphi process. Intra-articular performance of a right and left diagnostic knee arthroscopy was recorded for twenty-eight residents and two sports medicine fellowship-trained attending surgeons. Subject performance was assessed by two blinded raters using the ASSET. Concurrent criterion-oriented validity, inter-rater reliability, and test-retest reliability were evaluated. Results Content validity: the content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: significant differences in total ASSET score (p<0.05) between novice, intermediate, and advanced experience groups were identified. Inter-rater reliability: the ASSET scores assigned by each rater were strongly correlated (r=0.91, p<0.01), and the intra-class correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: there was a significant correlation between ASSET scores for both procedures attempted by each individual (r=0.79, p<0.01). Conclusion The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopy in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live operating room and other simulated environments. PMID:23548808
Tuğay, Baki Umut; Tuğay, Nazan; Güney, Hande; Kınıklı, Gizem İrem; Yüksel, İnci; Atilla, Bülent
2016-01-01
The Oxford Knee Score (OKS) is a valid, short, self-administered, and site-specific outcome measure specifically developed for patients with knee arthroplasty. This study aimed to cross-culturally adapt and validate the OKS to be used in Turkish-speaking patients with osteoarthritis of the knee. The OKS was translated and culturally adapted according to the guidelines in the literature. Ninety-one patients (mean age: 55.89±7.85 years) with knee osteoarthritis participated in the study. Patients completed the Turkish version of the Oxford Knee Score (OKS-TR), Short-Form 36 Health Survey (SF-36), and Western Ontario and McMaster Universities Index (WOMAC) questionnaires. Internal consistency was tested using Cronbach's α coefficient. Patients completed the OKS-TR questionnaire twice in 7 days to determine the reproducibility. Correlation between the total results of both tests was determined by Spearman's correlation coefficient and intraclass correlation coefficients (ICC). Validity was assessed by calculating Spearman's correlation coefficient between the OKS, WOMAC, and SF-36 scores. Floor and ceiling effects were analyzed. Internal consistency was high (Cronbach's α: 0.90). The reproducibility tested by 2 different methods showed no significant difference (p>0.05). The construct validity analyses showed a significant correlation between the OKS and the other scores (p<0.05). There was no floor or ceiling effect in total OKS score. The OKS-TR is a reliable and valid measure for the self-assessment of pain and function in Turkish-speaking patients with osteoarthritis of the knee.
Campos, G S; Reimann, F A; Cardoso, L L; Ferreira, C E R; Junqueira, V S; Schmidt, P I; Braccini Neto, J; Yokoo, M J I; Sollero, B P; Boligon, A A; Cardoso, F F
2018-05-07
The objective of the present study was to evaluate the accuracy and bias of direct and blended genomic predictions using different methods and cross-validation techniques for growth traits (weight and weight gains) and visual scores (conformation, precocity, muscling and size) obtained at weaning and at yearling in Hereford and Braford breeds. Phenotypic data contained 126,290 animals belonging to the Delta G Connection genetic improvement program, and a set of 3,545 animals genotyped with the 50K chip and 131 sires with the 777K. After quality control, 41,045 markers remained for all animals. An animal model was used to estimate (co)variance components and to predict breeding values, which were later used to calculate the deregressed estimated breeding values (DEBV). Animals with genotype and phenotype for the traits studied were divided into four or five groups by random and k-means clustering cross-validation strategies. The accuracies of the direct genomic values (DGV) were of moderate to high magnitude for traits measured at weaning and at yearling, ranging from 0.19 to 0.45 for k-means and 0.23 to 0.78 for random clustering among all traits. The greatest gain in relation to the pedigree BLUP (PBLUP) was 9.5% with the BayesB method under both the k-means and the random clustering. Blended genomic value accuracies ranged from 0.19 to 0.56 for k-means and from 0.21 to 0.82 for random clustering. The analyses using the historical pedigree and phenotypes contributed additional information to calculate the GEBV, and in general the largest gains were for the single-step (ssGBLUP) method in bivariate analyses, with a mean increase of 43.00% among all traits measured at weaning and of 46.27% for those evaluated at yearling. The accuracy values for the marker-effect estimation methods were lower for k-means clustering, indicating that the relationship of the training set to the selection candidates is a major factor affecting the accuracy of genomic predictions.
The gains in accuracy obtained with genomic blending methods, mainly ssGBLUP in bivariate analyses, indicate that genomic predictions should be used as a tool to improve genetic gains in relation to the traditional PBLUP selection.
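The contrast between random and k-means clustering cross-validation described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the tiny k-means, the fold helpers, and the toy genotype matrix are hypothetical stand-ins for the real 41,045-marker data.

```python
import numpy as np

def random_folds(n, k, rng):
    """Assign animals to k folds uniformly at random."""
    idx = rng.permutation(n)
    return [idx[f::k] for f in range(k)]

def kmeans_folds(X, k, rng, iters=50):
    """Cluster genotype rows with a tiny k-means and use each cluster as a
    fold, so close relatives tend to fall in the same fold and cannot
    inflate accuracy by appearing in both training and validation sets."""
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return [np.where(labels == c)[0] for c in range(k)]

rng = np.random.default_rng(0)
# toy "genotypes": two family blocks of 10 animals x 6 markers
X = np.vstack([np.zeros((10, 6)), np.ones((10, 6))]) + rng.normal(0, 0.05, (20, 6))
for name, folds in [("random", random_folds(20, 4, rng)),
                    ("k-means", kmeans_folds(X, 4, rng))]:
    print(name, sorted(len(f) for f in folds))
```

Random folds mix relatives across training and validation, which tends to make accuracies look better than selection-candidate prediction really is; cluster-based folds remove that leakage, matching the lower k-means accuracies reported above.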
An approach to define semantics for BPM systems interoperability
NASA Astrophysics Data System (ADS)
Rico, Mariela; Caliusco, María Laura; Chiotti, Omar; Rosa Galli, María
2015-04-01
This article proposes defining semantics for Business Process Management systems interoperability through the ontology of Electronic Business Documents (EBD) used to interchange the information required to perform cross-organizational processes. The semantic model generated allows aligning enterprises' business processes to support cross-organizational processes by matching the business ontology of each business partner with the EBD ontology. The result is a flexible software architecture that allows dynamically defining cross-organizational business processes by reusing the EBD ontology. For developing the semantic model, a method is presented, which is based on a strategy for discovering entity features whose interpretation depends on the context, and on representing them to enrich the ontology. The proposed method complements ontology learning techniques that cannot infer semantic features not represented in data sources. In order to improve the representation of these entity features, the method proposes using widely accepted ontologies for representing time entities and relations, physical quantities, measurement units, official country names, and currencies and funds, among others. When ontology reuse is not possible, the method proposes identifying whether the feature is simple or complex, and defines a strategy to be followed. An empirical validation of the approach has been performed through a case study.
ERIC Educational Resources Information Center
Edge, Daniel; Oyefeso, Adenekan; Evans, Carys; Evans, Amber
2016-01-01
Objective: To determine the psychometric properties of the Montreal Cognitive Assessment (MoCA) in patients with a learning disability and examine its utility for conducting mental capacity assessment. Method: This study was a cross-sectional, instrument validation study in an inpatient hospital setting, located in the East of England. The sample…
ERIC Educational Resources Information Center
Micceri, Theodore; Brigman, Leellen; Spatig, Robert
2009-01-01
An extensive, internally cross-validated analytical study using nested (within academic disciplines) Multilevel Modeling (MLM) on 4,560 students identified functional criteria for defining high school curriculum rigor and further determined which measures could best be used to help guide decision making for marginal applicants. The key outcome…
ERIC Educational Resources Information Center
Breuer, Christoph; Wicker, Pamela
2009-01-01
According to cross-sectional studies in sport science literature, decreasing sports activity with increasing age is generally assumed. In this paper, the validity of this assumption is checked by applying more effective methods of analysis, such as longitudinal and cohort sequence analyses. With the help of 20 years' worth of data records from the…
Three-Jet Production in Electron-Positron Collisions at Next-to-Next-to-Leading Order Accuracy
NASA Astrophysics Data System (ADS)
Del Duca, Vittorio; Duhr, Claude; Kardos, Adam; Somogyi, Gábor; Trócsányi, Zoltán
2016-10-01
We introduce a completely local subtraction method for fully differential predictions at next-to-next-to-leading order (NNLO) accuracy for jet cross sections and use it to compute event shapes in three-jet production in electron-positron collisions. We validate our method on two event shapes, thrust and C parameter, which are already known in the literature at NNLO accuracy and compute for the first time oblateness and the energy-energy correlation at the same accuracy.
Model selection for the North American Breeding Bird Survey: A comparison of methods
Link, William; Sauer, John; Niven, Daniel
2017-01-01
The North American Breeding Bird Survey (BBS) provides data for >420 bird species at multiple geographic scales over 5 decades. Modern computational methods have facilitated the fitting of complex hierarchical models to these data. It is easy to propose and fit new models, but little attention has been given to model selection. Here, we discuss and illustrate model selection using leave-one-out cross validation, and the Bayesian Predictive Information Criterion (BPIC). Cross-validation is enormously computationally intensive; we thus evaluate the performance of the Watanabe-Akaike Information Criterion (WAIC) as a computationally efficient approximation to the BPIC. Our evaluation is based on analyses of 4 models as applied to 20 species covered by the BBS. Model selection based on BPIC provided no strong evidence of one model being consistently superior to the others; for 14/20 species, none of the models emerged as superior. For the remaining 6 species, a first-difference model of population trajectory was always among the best fitting. Our results show that WAIC is not reliable as a surrogate for BPIC. Development of appropriate model sets and their evaluation using BPIC is an important innovation for the analysis of BBS data.
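The WAIC evaluated above as a surrogate for cross-validation can be sketched directly from a matrix of pointwise posterior log-likelihoods. This is a generic, formula-level illustration of the standard WAIC definition, not the BBS analysis code, and the toy likelihood matrix is hypothetical.

```python
import numpy as np

def waic(log_lik):
    """WAIC on the deviance scale from an (S posterior draws x n observations)
    matrix of pointwise log-likelihoods: -2 * (lppd - p_waic)."""
    # lppd: log of the posterior-mean likelihood, summed over observations
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
    # p_waic: posterior variance of the log-likelihood, summed over observations
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

# toy example: 3 observations with constant likelihood 0.5 across 4 draws,
# so the penalty vanishes and WAIC reduces to -2 * lppd = 6 * log(2)
log_lik = np.log(np.full((4, 3), 0.5))
print(round(waic(log_lik), 4))  # prints 4.1589
```

Like any approximation to leave-one-out prediction, WAIC is only as good as its penalty term; the study above found it an unreliable surrogate for BPIC on these hierarchical models.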
Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-12-01
We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.
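The distance-to-training-data idea for judging the domain of applicability can be sketched as below. This is a minimal illustration assuming plain mean/covariance estimates on a hypothetical descriptor matrix, not the descriptors or models of the study.

```python
import numpy as np

def mahalanobis_to_train(X_train, x, ridge=1e-6):
    """Mahalanobis distance from a query descriptor vector to the training
    cloud; large values suggest the model is extrapolating, so its
    prediction deserves a wider error bar (a crude DOA check)."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False) + ridge * np.eye(X_train.shape[1])
    d = x - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))          # toy descriptor matrix
near = mahalanobis_to_train(X_train, np.zeros(5))
far = mahalanobis_to_train(X_train, np.full(5, 8.0))
print(near < far)  # prints True: far-away queries score higher
```

Ensemble spread (e.g. the variance across Random Forest trees) and GP predictive variance are alternative error-bar sources mentioned above; the Mahalanobis variant is the cheapest, since it needs only the training descriptors.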
Modeling spanwise nonuniformity in the cross-sectional analysis of composite beams
NASA Astrophysics Data System (ADS)
Ho, Jimmy Cheng-Chung
Spanwise nonuniformity effects are modeled in the cross-sectional analysis of beam theory. This modeling adheres to an established numerical framework for the cross-sectional analysis of uniform beams with arbitrary cross-sections. This framework is based on two concepts: decomposition of the rotation tensor and the variational-asymptotic method. Arbitrary materials and geometries in the cross-section are accommodated by discretizing the warping field with finite elements. By this approach, dimensional reduction from three-dimensional elasticity is performed rigorously and the sectional strain energy is derived to be asymptotically correct. Elastic stiffness matrices are derived for input into the global beam analysis. Recovery relations for the displacement, stress, and strain fields are also derived with care to be consistent with the energy. Spanwise nonuniformity effects appear in the form of pointwise and sectionwise derivatives, which are approximated by finite differences. The formulation also accounts for the effects of spanwise variations in initial twist and/or curvature. A linearly tapered isotropic strip is analyzed to demonstrate spanwise nonuniformity effects on the cross-sectional analysis. The analysis is performed analytically by the variational-asymptotic method. Results from beam theory are validated against solutions from plane stress elasticity. These results demonstrate that spanwise nonuniformity effects become significant as the rate at which the cross-sections vary increases. The modeling of transverse shear modes of deformation is accomplished by transforming the strain energy into generalized Timoshenko form. Approximations in this transformation procedure from previous works, when applied to uniform beams, are identified. The approximations are not used in the present work so as to retain more accuracy.
Comparison of present results with those previously published shows that these approximations sometimes change the results measurably and thus are inappropriate. Static and dynamic results, from the global beam analysis, are calculated to show the differences between using stiffness constants from previous works and the present work. As a form of validation of the transformation procedure, calculations from the global beam analysis of initially twisted isotropic beams from using curvilinear coordinate axes featuring twist are shown to be equivalent to calculations using Cartesian coordinates.
ERIC Educational Resources Information Center
Fromm, Germán; Hallinger, Philip; Volante, Paulo; Wang, Wen Chung
2017-01-01
The purposes of this study were to report on a systematic approach to validating a Spanish version of the Principal Instructional Management Rating Scale and then to apply the scale in a cross-national comparison of principal instructional leadership. The study yielded a validated Spanish language version of the PIMRS Teacher Form and offers a…
Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire
Akmaz, Hazel Ekin; Uyar, Meltem; Kuzeyli Yıldırım, Yasemin; Akın Korhan, Esra
2018-01-01
Background: Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, making a diagnosis and tracking the effectiveness of treatment is done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. Aims: To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. Study Design: Methodological and cross-sectional study. Methods: A simple randomized sampling method was used in selecting the study sample. The sample was composed of 201 patients, more than 10 times the number of items examined for validity and reliability in the study, which totaled 20. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In the validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, construct validity, and expert views. In reliability testing of the scale, Cronbach’s α coefficient was calculated, and item analysis and split-half reliability methods were used. Principal component analysis and varimax rotation were used in factor analysis and to examine factor structure for construct concept validity. Results: The item analysis established that the scale, all items, and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. Conclusion: The Chronic Pain Acceptance Questionnaire, which is intended to be used in Turkey upon confirmation of its validity and reliability, is an evaluation instrument with sufficient validity and reliability, and it can be reliably used to examine patients’ acceptance of chronic pain.
PMID:29843496
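The internal-consistency statistics reported above follow standard textbook formulas; here is a generic sketch with a hypothetical score matrix, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n respondents x k items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_vars / total_var)

def split_half(items):
    """Split-half reliability: Pearson correlation between the summed
    odd-numbered and even-numbered item scores."""
    a = items[:, 0::2].sum(axis=1)
    b = items[:, 1::2].sum(axis=1)
    return float(np.corrcoef(a, b)[0, 1])

# hypothetical 4 respondents x 6 items, perfectly consistent responses
scores = np.outer(np.array([1.0, 2.0, 3.0, 4.0]), np.ones(6))
print(cronbach_alpha(scores), split_half(scores))  # both ≈ 1.0 here
```

Perfectly consistent responses give α and split-half correlations of 1; real instruments such as the one above land lower (0.94 and 0.89, respectively).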
Baker, William L; Williams, Mark A
2018-03-01
An understanding of how historical fire and structure in dry forests (ponderosa pine, dry mixed conifer) varied across the western United States remains incomplete. Yet, fire strongly affects ecosystem services, and forest restoration programs are underway. We used General Land Office survey reconstructions from the late 1800s across 11 landscapes covering ~1.9 million ha in four states to analyze spatial variation in fire regimes and forest structure. We first synthesized the state of validation of our methods using 20 modern validations, 53 historical cross-validations, and corroborating evidence. These show our method creates accurate reconstructions with low errors. One independent modern test reported high error, but did not replicate our method and made many calculation errors. Using reconstructed parameters of historical fire regimes and forest structure from our validated methods, forests were found to be non-uniform across the 11 landscapes, but grouped together in three geographical areas. Each had a mixture of fire severities but was dominated by low-severity fire and low median tree density in Arizona, mixed-severity fire and intermediate to high median tree density in Oregon-California, and high-severity fire and intermediate median tree density in Colorado. Programs to restore fire and forest structure could benefit from regional frameworks, rather than a one-size-fits-all approach. © 2018 by the Ecological Society of America.
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge-loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
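For contrast with CV-SES, the traditional two-parameter grid-search baseline it improves on can be sketched as below. The toy cost-weighted nearest-centroid classifier stands in for CS-SVM, and all names, grids, and data here are hypothetical.

```python
import numpy as np
from itertools import product

def cv_error(X, y, fit_predict, k=5, seed=0):
    """Plain k-fold cross-validation error for a fit/predict closure."""
    folds = np.array_split(np.random.default_rng(seed).permutation(len(y)), k)
    errs = []
    for f in range(k):
        tr = np.concatenate([folds[j] for j in range(k) if j != f])
        yhat = fit_predict(X[tr], y[tr], X[folds[f]])
        errs.append(np.mean(yhat != y[folds[f]]))
    return float(np.mean(errs))

def make_cost_centroid(c_pos, c_neg):
    """Toy cost-sensitive classifier: distances to class centroids, with the
    two cost parameters biasing the decision toward the costlier class."""
    def fit_predict(Xtr, ytr, Xte):
        mu1, mu0 = Xtr[ytr == 1].mean(0), Xtr[ytr == 0].mean(0)
        d1 = ((Xte - mu1) ** 2).sum(1)
        d0 = ((Xte - mu0) ** 2).sum(1)
        return (c_neg * d1 < c_pos * d0).astype(int)
    return fit_predict

def grid_search_2d(X, y, grid_pos, grid_neg):
    """Evaluate CV error at every (C+, C-) grid point and keep the best;
    CV-SES instead fits an error surface over the whole parameter plane,
    so it cannot miss a minimum that falls between grid points."""
    return min((cv_error(X, y, make_cost_centroid(cp, cn)), cp, cn)
               for cp, cn in product(grid_pos, grid_neg))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
y = np.repeat([0, 1], 40)
err, cp, cn = grid_search_2d(X, y, [0.5, 1, 2], [0.5, 1, 2])
print(err, cp, cn)
```

The grid search costs one full CV run per grid point, which is exactly the expense the solution-surface approach avoids.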
Electron Impact Inner-shell Ionization including relativistic corrections.
NASA Astrophysics Data System (ADS)
Saha, Bidhan C.; Alfaz Uddin, M.; Basak, Arun K.
2007-04-01
We report a simple method for evaluating electron-impact inner-shell ionization cross sections in the ultra-high-energy regime, where reliable methods are lacking and cross-section data remain sparse. To extend the validity domains of the siBED model [1] in terms of targets and incident energies, in this work we modified the RQIBED model [2], denoting the result MUIBED. It is examined against the experimental EIICS data of various target atoms up to E = 250 MeV. Details will be presented at the meeting. [1] W. M. Huo, Phys. Rev. A 64, 042719 (2001). [2] M. A. Uddin, A. K. F. Haque, M. S. Mahbub, K. R. Karim, A. K. Basak and B. C. Saha, Phys. Rev. A 71, 032715 (2005).
Gannotti, Mary E; Handwerker, W Penn
2002-12-01
Validating the cultural context of health is important for obtaining accurate and useful information from standardized measures of child health adapted for cross-cultural applications. This paper describes the application of ethnographic triangulation for cultural validation of a measure of childhood disability, the Pediatric Evaluation of Disability Inventory (PEDI) for use with children living in Puerto Rico. The key concepts include macro-level forces such as geography, demography, and economics, specific activities children performed and their key social interactions, beliefs, attitudes, emotions, and patterns of behavior surrounding independence in children and childhood disability, as well as the definition of childhood disability. Methods utilize principal components analysis to establish the validity of cultural concepts and multiple regression analysis to identify intracultural variation. Findings suggest culturally specific modifications to the PEDI, provide contextual information for informed interpretation of test scores, and point to the need to re-standardize normative values for use with Puerto Rican children. Without this type of information, Puerto Rican children may appear more disabled than expected for their level of impairment or not to be making improvements in functional status. The methods also allow for cultural boundaries to be quantitatively established, rather than presupposed. Copyright 2002 Elsevier Science Ltd.
Kadamne, Jeta V; Jain, Vishal P; Saleh, Mohammed; Proctor, Andrew
2009-11-25
Conjugated linoleic acid (CLA) isomers in oils are currently measured as fatty acid methyl esters by a gas chromatography-flame ionization detector (GC-FID) technique, which requires approximately 2 h to complete the analysis. Hence, we aim to develop a method to rapidly determine CLA isomers in CLA-rich soy oil. Soy oil with 0.38-25.11% total CLA was obtained by photo-isomerization of 96 soy oil samples for 24 h. A sample was withdrawn at 30 min intervals, with repeated processing using a second batch of oil. Six replicates of GC-FID fatty acid analysis were conducted for each oil sample. The oil samples were scanned using attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR), and the spectra were collected. Calibration models were developed using partial least-squares (PLS-1) regression in the Unscrambler software. Models were validated using a full cross-validation technique and tested using samples that were not included in the calibration sample set. Measured and predicted total CLA, trans,trans CLA isomers, total mono trans CLA isomers, trans-10,cis-12 CLA, trans-9,cis-11 CLA and cis-10,trans-12 CLA, and cis-9,trans-11 CLA had cross-validated coefficients of determination (R²v) of 0.97, 0.98, 0.97, 0.98, 0.97, and 0.99, with corresponding root-mean-square errors of validation (RMSEV) of 1.14, 0.69, 0.27, 0.07, 0.14, and 0.07% CLA, respectively. The ATR-FTIR technique is a more rapid and less expensive method than GC-FID for determining CLA isomers in photo-isomerized soy oil.
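The "full cross-validation" used to validate the calibration models above is leave-one-out: each sample is predicted by a model fitted to all the others. Below is a minimal sketch with ordinary least squares standing in for PLS-1 and a synthetic predictor matrix standing in for ATR-FTIR spectra.

```python
import numpy as np

def rmse_full_cv(X, y):
    """Leave-one-out ("full") cross-validation: refit the calibration with
    each sample held out, predict it, and report the validation RMSE."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        A = np.column_stack([X[keep], np.ones(keep.sum())])  # add intercept
        coef, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
        preds[i] = np.append(X[i], 1.0) @ coef
    return float(np.sqrt(np.mean((preds - y) ** 2)))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))                   # stand-in predictor matrix
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + 3.0  # exact linear response
print(rmse_full_cv(X, y) < 1e-8)  # prints True: noise-free data cross-validates perfectly
```

On real spectra the held-out errors are nonzero, and their root mean square is the RMSEV figure reported above.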
Bresick, Graham; Sayed, Abdul-Rauf; le Grange, Cynthia; Bhagwan, Susheela; Manga, Nayna
2015-06-19
Measuring primary care is important for health sector reform. The Primary Care Assessment Tool (PCAT) measures performance of elements essential for cost-effective care. Following minor adaptations prior to use in Cape Town in 2011, a few findings indicated a need to improve the content and cross-cultural validity for wider use in South Africa (SA). This study aimed to validate the PCAT, developed in the United States of America, before it was used in a baseline measure of primary care performance prior to major reform. The setting comprised public sector primary care clinics, users, practitioners and managers in urban and rural districts in the Western Cape Province. Face-value evaluation of item phrasing and a combination of Delphi and Nominal Group Technique (NGT) methods with an expert panel and user focus group were used to obtain consensus on content relevant to SA. Original and new domains and items with >=70% agreement were included in the South African version, the ZA PCAT. All original PCAT domains achieved consensus on inclusion. One new domain, the primary healthcare (PHC) team, was added. Three of 95 original items achieved <70% agreement, that is, consensus to exclude them as not relevant to SA; 19 new items were added. A few items needed minor rephrasing with local healthcare jargon. The demographic section was adapted to local socio-economic conditions. The adult PCAT was translated into isiXhosa and Afrikaans. The PCAT is a valid measure of primary care performance in SA. The PHC team domain is an important addition, given its emphasis in PHC re-engineering. A combination of Delphi and NGT methods succeeded in obtaining consensus on a multi-domain, multi-item instrument in a resource-constrained environment.
Validity of Hansen-Roach cross sections in low-enriched uranium systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busch, R.D.; O'Dell, R.D.
Within the nuclear criticality safety community, the Hansen-Roach 16-group cross section set has been the "standard" for use in k_eff calculations over the past 30 years. Yet even with its widespread acceptance, there are still questions about its validity and adequacy, about the proper procedure for calculating the potential scattering cross section, σ_p, for uranium and plutonium, and about the concept of resonance self-shielding and its impact on cross sections. This paper attempts to address these questions. It provides a brief background on the Hansen-Roach cross sections. Next is presented a review of resonances in cross sections, self-shielding of these resonances, and the use of σ_p to characterize resonance self-shielding. Three prescriptions for calculating σ_p are given. Finally, results of several calculations of k_eff on low-enriched uranium systems are provided to confirm the validity of the Hansen-Roach cross sections when applied to such systems.
Mendis, M Dilani; Wilson, Stephen J; Stanton, Warren; Hides, Julie A
2010-09-01
Clinical measurement, criterion standard. To investigate the validity of real-time ultrasound imaging (USI) to measure individual anterior hip muscle cross-sectional area. The hip flexor muscles are important for hip joint function and could be affected by joint pathology or injury. Objectively documenting individual anterior hip muscle size can be useful in identifying muscle size asymmetry and monitoring treatment efficacy for patients with hip problems. USI offers a novel method of measuring individual muscle size in the clinic, but its validity in measuring the anterior hip muscles has not been investigated. Nine healthy participants (5 males, 4 females) underwent imaging of their iliopsoas, sartorius, and rectus femoris muscles with USI and magnetic resonance imaging. Bilateral muscle cross-sectional areas were measured on images from both modalities. There was no significant difference (P>.05) in mean cross-sectional area measurements from USI and magnetic resonance imaging for each muscle. Agreement between measurements was high for the iliopsoas (left: intraclass correlation coefficient [ICC3,1] = 0.86; 95% confidence interval [CI]: 0.51, 0.97; right: ICC3,1 = 0.88; 95% CI: 0.57, 0.97), sartorius (left: ICC3,1 = 0.82; 95% CI: 0.41, 0.96; right: ICC3,1 = 0.81; 95% CI: 0.39, 0.95), and rectus femoris (left: ICC3,1 = 0.85; 95% CI: 0.49, 0.96; right: ICC3,1 = 0.89; 95% CI: 0.61, 0.97). Reliability of measuring each muscle with USI was high between 2 trials (ICCs3,1 = 0.84 to 0.94). USI is a valid measure of iliopsoas, sartorius, and rectus femoris muscle size in healthy people, as long as a strict measurement protocol is followed.
Cross-cultural adaptation and validation of the Behcet’s Disease Current Activity Form in Korea
Choi, Hyo Jin; Seo, Mi Ryoung; Ryu, Hee Jung; Baek, Han Joo
2015-01-01
Background/Aims: This study was undertaken to perform a cross-cultural adaptation of the Behcet’s Disease Current Activity Form (BDCAF, version 2006) questionnaire to the Korean language and to evaluate its reliability and validity in a population of Korean patients with Behcet’s disease (BD). Methods: A cross-cultural study was conducted among patients with BD who attended our rheumatology clinic between November 2012 and March 2013. There were 11 males and 35 females in the group. The mean age of the participants was 48.5 years and the mean disease duration was 6.4 years. The first BDCAF questionnaire was completed on arrival and the second assessment was performed 20 minutes later by a different physician. The test-retest reliability was analyzed by computing κ statistics. Kappa scores of > 0.6 indicated good agreement. To assess the validity, we compared the total BDCAF score with the patient’s/clinician’s perception of disease activity and the Korean version of the Behcet’s Disease Quality of Life (BDQOL). Results: For the test-retest reliability, good agreements were achieved on items such as headache, oral/genital ulceration, erythema, skin pustules, arthralgia, nausea/vomiting/abdominal pain, and diarrhea with altered/frank blood per rectum. Moderate agreement was observed for eye and nervous system involvement. We achieved fair agreement for arthritis and major vessel involvement. Significant correlations were obtained between the total BDCAF score and both the BDQOL and the patient’s/clinician’s perception of disease activity (p < 0.05). Conclusions: The Korean version of the BDCAF is a reliable and valid instrument for measuring current disease activity in Korean BD patients. PMID:26354066
Estimation of Particulate Mass and Manganese Exposure Levels among Welders
Hobson, Angela; Seixas, Noah; Sterling, David; Racette, Brad A.
2011-01-01
Background: Welders are frequently exposed to Manganese (Mn), which may increase the risk of neurological impairment. Historical exposure estimates for welding-exposed workers are needed for epidemiological studies evaluating the relationship between welding and neurological or other health outcomes. The objective of this study was to develop and validate a multivariate model to estimate quantitative levels of welding fume exposures based on welding particulate mass and Mn concentrations reported in the published literature. Methods: Articles that described welding particulate and Mn exposures during field welding activities were identified through a comprehensive literature search. Summary measures of exposure and related determinants such as year of sampling, welding process performed, type of ventilation used, degree of enclosure, base metal, and location of sampling filter were extracted from each article. The natural log of the reported arithmetic mean exposure level was used as the dependent variable in model building, while the independent variables included the exposure determinants. Cross-validation was performed to aid in model selection and to evaluate the generalizability of the models. Results: A total of 33 particulate and 27 Mn means were included in the regression analysis. The final model explained 76% of the variability in the mean exposures and included welding process and degree of enclosure as predictors. There was very little change in the explained variability and root mean squared error between the final model and its cross-validation model indicating the final model is robust given the available data. 
Conclusions: This model may be improved with more detailed exposure determinants; however, the relatively large amount of variance explained by the final model, along with the positive generalizability results of the cross-validation, increases confidence that the estimates derived from this model can be used to estimate welder exposures in the absence of individual measurement data. PMID:20870928
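The model-building-plus-cross-validation workflow the abstract describes can be sketched on synthetic data; the determinants, coefficients and noise level below are invented, not the study's:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
# Invented determinants standing in for coded welding process and degree of
# enclosure; y is the natural log of the arithmetic mean exposure level,
# mirroring the paper's model form.
X = rng.integers(0, 3, size=(33, 2)).astype(float)
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.2, 33)

model = LinearRegression().fit(X, y)
r2_in = model.score(X, y)                           # in-sample explained variability
y_cv = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
rmse_cv = float(np.sqrt(np.mean((y - y_cv) ** 2)))  # cross-validated RMSE
```

Comparing `r2_in` and `rmse_cv` against their in-sample counterparts is the kind of robustness check the abstract reports: little change between them suggests the model generalizes.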
Hravnak, Marilyn; Chen, Lujie; Dubrawski, Artur; Bose, Eliezer; Clermont, Gilles; Pinsky, Michael R.
2015-01-01
PURPOSE Huge hospital information system databases can be mined for knowledge discovery and decision support, but artifact in stored non-invasive vital sign (VS) high-frequency data streams limits their use. We used machine-learning (ML) algorithms trained on expert-labeled VS data streams to automatically classify VS alerts as real or artifact, thereby “cleaning” such data for future modeling. METHODS 634 admissions to a step-down unit had recorded continuous noninvasive VS monitoring data (heart rate [HR], respiratory rate [RR], peripheral arterial oxygen saturation [SpO2] at 1/20 Hz, and noninvasive oscillometric blood pressure [BP]). Excursions of the time-series data across stability thresholds defined VS event epochs. Data were divided into Block 1, the ML training/cross-validation set, and Block 2, the test set. Expert clinicians annotated Block 1 events as perceived real or artifact. After feature extraction, ML algorithms were trained to create and validate models automatically classifying events as real or artifact. The models were then tested on Block 2. RESULTS Block 1 yielded 812 VS events, with 214 (26%) judged by experts as artifact (RR 43%, SpO2 40%, BP 15%, HR 2%). ML algorithms applied to the Block 1 training/cross-validation set (10-fold cross-validation) gave area under the curve (AUC) scores of 0.97 for RR, 0.91 for BP and 0.76 for SpO2. Performance when applied to Block 2 test data was AUC 0.94 for RR, 0.84 for BP and 0.72 for SpO2. CONCLUSIONS ML-defined algorithms applied to archived multi-signal continuous VS monitoring data allowed accurate automated classification of VS alerts as real or artifact, and could support data mining for future model building. PMID:26438655
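The 10-fold cross-validated AUC evaluation described above can be sketched like this; the features, labels and model choice are all illustrative (the abstract does not specify which ML algorithms were used):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Invented features standing in for per-epoch summaries of a vital-sign
# stream (e.g. variance, range, rate of change); label 1 = artifact.
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 400) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
# Mean AUC over 10 folds, analogous to the per-signal AUCs reported above
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
```

The held-out Block 2 evaluation in the study would correspond to scoring a model fit on all of Block 1 against an entirely separate dataset.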
Kievit, Rogier F; Hoes, Arno W; Bots, Michiel L; van Riet, Evelien ES; van Mourik, Yvonne; Bertens, Loes CM; Boonman-de Winter, Leandra JM; den Ruijter, Hester M; Rutten, Frans H
2018-01-01
Background Prevalence of undetected heart failure in older individuals is high in the community, with patients being at increased risk of morbidity and mortality due to the chronic and progressive nature of this complex syndrome. An essential, yet currently unavailable, strategy to pre-select candidates eligible for echocardiography to confirm or exclude heart failure would identify patients earlier, enable targeted interventions and prevent disease progression. The aim of this study was therefore to develop and validate such a model that can be implemented clinically. Methods and results Individual patient data from four primary care screening studies were analysed. From 1941 participants >60 years old, 462 were diagnosed with heart failure, according to criteria of the European Society of Cardiology heart failure guidelines. Prediction models were developed in each cohort followed by cross-validation, omitting each of the four cohorts in turn. The model consisted of five independent predictors: age, history of ischaemic heart disease, exercise-related shortness of breath, body mass index and a laterally displaced/broadened apex beat, with no significant interaction with sex. The c-statistic ranged from 0.70 (95% confidence interval (CI) 0.64–0.76) to 0.82 (95% CI 0.78–0.87) at cross-validation and the calibration was reasonable with Observed/Expected ratios ranging from 0.86 to 1.15. The clinical model improved with the addition of N-terminal pro B-type natriuretic peptide with the c-statistic increasing from 0.76 (95% CI 0.70–0.81) to 0.89 (95% CI 0.86–0.92) at cross-validation. Conclusion Easily obtainable patient characteristics can select older men and women from the community who are candidates for echocardiography to confirm or refute heart failure. PMID:29327942
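"Omitting each of the four cohorts in turn" is leave-one-group-out cross-validation; a sketch on synthetic data follows (predictors, effect sizes and cohort assignments are invented, and logistic regression stands in for whatever model class the authors used):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(2)
X = rng.normal(size=(800, 5))       # stand-ins for age, BMI, dyspnoea, etc.
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1, 800) > 0).astype(int)
cohort = rng.integers(0, 4, 800)    # which screening study each patient came from

c_stats = []                        # one c-statistic per omitted cohort
for train, test in LeaveOneGroupOut().split(X, y, groups=cohort):
    fit = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    c_stats.append(roc_auc_score(y[test], fit.predict_proba(X[test])[:, 1]))
```

The spread of `c_stats` across omitted cohorts corresponds to the reported c-statistic range of 0.70 to 0.82.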
NASA Astrophysics Data System (ADS)
Lu, Qianbo; Bai, Jian; Wang, Kaiwei; Lou, Shuqi; Jiao, Xufen; Han, Dandan
2016-10-01
Cross-sensitivity is a crucial parameter since it detrimentally affects the performance of an accelerometer, especially a high-resolution accelerometer. In this paper, a suite of analytical and finite-element-method (FEM) models for characterizing the mechanism and features of the cross-sensitivity of a single-axis MOEMS accelerometer, composed of a diffraction grating and a micromachined mechanical sensing chip, is presented; these features have not yet been systematically investigated. The mechanism and phenomena of the cross-sensitivity of this type of MOEMS accelerometer based on a diffraction grating differ considerably from those of traditional accelerometers owing to its distinct sensing principle. By analyzing the models, some ameliorations and a modified design are put forward to suppress the cross-sensitivity. The modified design, achieved by double-sided etching of a specific double-substrate-layer silicon-on-insulator (SOI) wafer, is validated to have a far smaller cross-sensitivity than the design previously reported in the literature. Moreover, this design can suppress the cross-sensitivity dramatically without compromising the acceleration sensitivity and resolution.
Riaz, Qaiser; Vögele, Anna; Krüger, Björn; Weber, Andreas
2015-01-01
A number of previous works have shown that information about a subject is encoded in sparse kinematic information, such as that revealed by so-called point-light walkers. With the work at hand, we extend these results to classifications of soft biometrics from inertial sensor recordings at a single body location from a single step. We recorded accelerations and angular velocities of 26 subjects using inertial measurement units (IMUs) attached at four locations (chest, lower back, right wrist and left ankle) when performing standardized gait tasks. The collected data were segmented into individual walking steps. We trained random forest classifiers in order to estimate soft biometrics (gender, age and height). We applied two different validation methods to the process, 10-fold cross-validation and subject-wise cross-validation. For all three classification tasks, we achieve high accuracy values for all four sensor locations. From these results, we can conclude that the data of a single walking step (6D: accelerations and angular velocities) allow for a robust estimation of the gender, height and age of a person. PMID:26703601
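The contrast between record-wise 10-fold and subject-wise cross-validation can be sketched as follows; the data, labels and classifier settings are invented (in real gait data, record-wise folds can leak subject identity into training and look over-optimistic, which is why the paper reports both):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(3)
subjects = np.repeat(np.arange(26), 20)            # 20 steps per subject
subj_effect = rng.normal(size=26)                  # per-subject latent trait
X = rng.normal(size=(520, 6)) + 2.0 * subj_effect[subjects, None]
y = (subj_effect[subjects] > 0).astype(int)        # e.g. a binary soft biometric

clf = RandomForestClassifier(n_estimators=50, random_state=0)
# Record-wise folds: steps from one subject can appear in train and test
acc_record = cross_val_score(clf, X, y,
                             cv=KFold(10, shuffle=True, random_state=0)).mean()
# Subject-wise folds: each subject's steps are held out together
acc_subject = cross_val_score(clf, X, y, cv=GroupKFold(5),
                              groups=subjects).mean()
```

Subject-wise accuracy is generally the more honest estimate of how the model performs on an unseen person.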
Wu, Mingwei; Li, Yan; Fu, Xinmei; Wang, Jinghui; Zhang, Shuwei; Yang, Ling
2014-09-01
Melanin concentrating hormone receptor 1 (MCHR1), a crucial regulator of energy homeostasis involved in the control of feeding and energy metabolism, is a promising target for the treatment of obesity. In the present work, the up-to-date largest set of 181 quinoline/quinazoline derivatives as MCHR1 antagonists was subjected to both ligand- and receptor-based three-dimensional quantitative structure-activity relationship (3D-QSAR) analysis applying comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA). The optimal predictive CoMSIA model exhibited significant validity, with a cross-validated correlation coefficient (Q²) = 0.509, non-cross-validated correlation coefficient (R²(ncv)) = 0.841 and predicted correlation coefficient (R²(pred)) = 0.745. In addition, docking studies and molecular dynamics (MD) simulations were carried out for further elucidation of the binding modes of MCHR1 antagonists. MD simulations in both water and lipid bilayer systems were performed. We hope that the obtained models and information may help to provide an insight into the interaction mechanism of MCHR1 antagonists and facilitate the design and optimization of novel antagonists as anti-obesity agents.
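The Q² quoted for the CoMSIA model is a leave-one-out cross-validated analogue of R²; a generic sketch follows, where the descriptors, response and ridge regressor are placeholders rather than the CoMSIA machinery:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 10))                          # stand-in field descriptors
y = X @ rng.normal(size=10) + rng.normal(0, 0.5, 60)   # pIC50-like activities

# Leave-one-out predictions: each compound predicted by a model fit without it
y_loo = cross_val_predict(Ridge(alpha=1.0), X, y, cv=LeaveOneOut())
press = np.sum((y - y_loo) ** 2)                 # predictive residual sum of squares
q2 = 1 - press / np.sum((y - y.mean()) ** 2)     # cross-validated Q^2
r2_ncv = Ridge(alpha=1.0).fit(X, y).score(X, y)  # non-cross-validated R^2
```

As in the abstract, Q² is expected to sit below the non-cross-validated R² because it penalizes overfitting.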
Qiu, Chen; Zhu, Hongbin; Ruzicka, Connie; Keire, David; Ye, Hongping
2018-05-15
Penicillins and some non-penicillin β-lactams may cause potentially life-threatening allergic reactions. Thus, possible cross contamination of β-lactams in food or drugs can put people at risk. Therefore, when there is a reasonable possibility that a non-penicillin product could be contaminated by penicillin, the drug products are tested for penicillin contamination. Here, a sensitive and rapid method for simultaneous determination of multiple β-lactam antibiotics using high performance liquid chromatography-tandem mass spectrometry (LC-MS/MS) was developed and validated. Mass spectral acquisition was performed on a Q-Exactive HF mass spectrometer in positive ion mode with parallel reaction monitoring (PRM). The method was validated for seven β-lactam antibiotics, including one or two from each class, and a synthetic intermediate. The quantification precision and accuracy at 200 ppb were in the range of ±1.84 to ±4.56% and -5.20 to 3.44%, respectively. The limit of detection (LOD) was 0.2 ppb, and the limit of quantitation (LOQ) was 2 ppb with a linear dynamic range (LDR) of 2-2000 ppb for all eight β-lactams. From various drug products, the recoveries of the eight β-lactams at 200 and 2 ppb ranged from 93.8 ± 3.2 to 112.1 ± 4.2% and 89.7 ± 4.6 to 110.6 ± 1.9%, respectively. The application of the method for detecting cross contamination of trace β-lactams (0.2 ppb) and for monitoring facility surface cleaning was also investigated. This sensitive and fast method was fit-for-purpose for detecting and quantifying trace amounts of β-lactam contamination, monitoring cross contamination in manufacturing processes, and determining potency for regulatory purposes and for quality control.
Cross-cultural adaptation and validation of Persian Achilles tendon Total Rupture Score.
Ansari, Noureddin Nakhostin; Naghdi, Soofia; Hasanvand, Sahar; Fakhari, Zahra; Kordi, Ramin; Nilsson-Helander, Katarina
2016-04-01
To cross-culturally adapt the Achilles tendon Total Rupture Score (ATRS) to the Persian language and to preliminarily evaluate the reliability and validity of the Persian ATRS. A cross-sectional and prospective cohort study was conducted to translate and cross-culturally adapt the ATRS to the Persian language (ATRS-Persian), following the steps described in guidelines. Thirty patients with total Achilles tendon rupture and 30 healthy subjects participated in this study. Psychometric properties of floor/ceiling effects (responsiveness), internal consistency reliability, test-retest reliability, standard error of measurement (SEM), smallest detectable change (SDC), construct validity, and discriminant validity were tested. Factor analysis was performed to determine the ATRS-Persian structure. There were no floor or ceiling effects, indicating adequate content coverage and responsiveness of the ATRS-Persian. Internal consistency was high (Cronbach's α 0.95). Item-total correlations exceeded the acceptable standard of 0.3 for all items (0.58-0.95). The test-retest reliability was excellent [ICC(agreement) 0.98]. SEM and SDC were 3.57 and 9.9, respectively. Construct validity was supported by a significant correlation between the ATRS-Persian total score and the Persian Foot and Ankle Outcome Score (PFAOS) total score and PFAOS subscales (r = 0.55-0.83). The ATRS-Persian significantly discriminated between patients and healthy subjects. Exploratory factor analysis revealed 1 component. The ATRS was cross-culturally adapted to Persian and demonstrated to be a reliable and valid instrument to measure functional outcomes in Persian patients with Achilles tendon rupture. Level of evidence: II.
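SEM and SDC follow from the test-retest statistics by standard formulas. In the sketch below the ICC is the reported 0.98, while the score standard deviation is an assumed value back-calculated so the arithmetic lands near the reported SEM ≈ 3.57 and SDC ≈ 9.9; it is not a figure from the paper:

```python
import math

sd_scores = 25.2   # assumed SD of total ATRS scores (back-calculated, hypothetical)
icc = 0.98         # reported test-retest ICC(agreement)

sem = sd_scores * math.sqrt(1 - icc)   # standard error of measurement
sdc = 1.96 * math.sqrt(2) * sem        # smallest detectable change (95% level)
```

The factor sqrt(2) appears because the SDC concerns the difference between two measurements, each carrying measurement error.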
Empirical evaluation of data normalization methods for molecular classification
Huang, Huei-Chung
2018-01-01
Background Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers—an increasingly important application of microarrays in the era of personalized medicine. Methods In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages on GitHub. Results In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated on independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Conclusion Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy. PMID:29666754
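A practical guard related to the paper's warning is to re-fit any data-dependent normalization inside each cross-validation training fold rather than once on the full data. A sketch with a simple scaler as a stand-in normalizer follows (the paper's normalization methods and data are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# Invented expression-like matrix with a per-sample handling shift;
# labels are random noise, so accuracy should hover near chance.
X = rng.normal(size=(100, 50)) + rng.normal(0, 1, (100, 1))
y = rng.integers(0, 2, 100)

# Inside a Pipeline, the scaler is fitted on each training fold only, so no
# information from the held-out samples leaks into the preprocessing step.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = cross_val_score(pipe, X, y, cv=10).mean()
```

Note this guard addresses preprocessing leakage within cross-validation; as the paper stresses, it does not remove the over-optimism of cross-validation relative to truly independent test data.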
Sheets, H David; Covino, Kristen M; Panasiewicz, Joanna M; Morris, Sara R
2006-01-01
Background Geometric morphometric methods of capturing information about curves or outlines of organismal structures may be used in conjunction with canonical variates analysis (CVA) to assign specimens to groups or populations based on their shapes. This methodological paper examines approaches to optimizing the classification of specimens based on their outlines. This study examines the performance of four approaches to the mathematical representation of outlines and two different approaches to curve measurement as applied to a collection of feather outlines. A new approach to the dimension reduction necessary to carry out a CVA on this type of outline data with modest sample sizes is also presented, and its performance is compared to two other approaches to dimension reduction. Results Two semi-landmark-based methods, bending energy alignment and perpendicular projection, are shown to produce roughly equal rates of classification, as do elliptical Fourier methods and the extended eigenshape method of outline measurement. Rates of classification were not highly dependent on the number of points used to represent a curve or the manner in which those points were acquired. The new approach to dimensionality reduction, which utilizes a variable number of principal component (PC) axes, produced higher cross-validation assignment rates than either the standard approach of using a fixed number of PC axes or a partial least squares method. Conclusion Classification of specimens based on feather shape was not highly dependent on the details of the method used to capture shape information. The choice of dimensionality reduction approach was more of a factor, and the cross-validation rate of assignment may be optimized using the variable number of PC axes method presented herein. PMID:16978414
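The "variable number of PC axes" idea can be sketched as a search over the PC count that maximizes the cross-validated assignment rate; the outline data, group effect and use of LDA as the discriminant step are all illustrative assumptions, not the paper's dataset or exact procedure:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
# Invented flattened semi-landmark coordinates for two groups of outlines
X = rng.normal(size=(80, 40))
X[:40, :3] += 1.5                      # group difference on a few shape axes
y = np.repeat([0, 1], 40)

# Try a range of PC counts and keep the one with the best cross-validated
# assignment rate, in the spirit of the paper's variable-axes approach.
scores = {k: cross_val_score(make_pipeline(PCA(n_components=k),
                                           LinearDiscriminantAnalysis()),
                             X, y, cv=5).mean()
          for k in range(2, 12)}
best_k = max(scores, key=scores.get)
```

Selecting the axis count by cross-validation avoids committing in advance to a fixed number of PCs that may either discard discriminative shape variation or overfit noise.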
Gao, X X; Zhu, L; Yu, S J; Xu, T
2018-02-25
Objective: To develop a Chinese version of the modified body image scale (MBIS) questionnaire and to validate it in a Chinese population. Methods: The original English MBIS questionnaire was translated into Chinese, following the WHO cross-cultural adaptation of health-related quality of life measures. The reliability and validity of the Chinese version of the MBIS questionnaire were evaluated in a Chinese population of patients with MRKH syndrome. Results: In total, 50 patients with MRKH syndrome completed the MBIS and short-form 12-item health survey (SF-12) questionnaires. The Cronbach's alpha of the MBIS was 0.741, and intraclass correlation coefficients were 0.472-0.815 (P < 0.01). MBIS scores were significantly correlated with SF-12 scores (Spearman correlation coefficient -0.409, P < 0.01). Factor analysis showed that the MBIS had one common factor. Conclusion: The Chinese version of the MBIS has high reliability and validity in a Chinese population, and is therefore suitable for clinical and research use.
Alzyoud, Sukaina; Veeranki, Sreenivas P.; Kheirallah, Khalid A.; Shotar, Ali M.; Pbert, Lori
2016-01-01
Introduction: Waterpipe use among adolescents has been increasing progressively, yet no studies have assessed the validity and reliability of a nicotine dependence scale in this group. The current study aims to assess the validity and reliability of an Arabic version of the modified Waterpipe Tolerance Questionnaire (WTQ) among school-going adolescent waterpipe users. Methods: In a cross-sectional study conducted in Jordan, information on waterpipe use among 333 school-going adolescents aged 11-18 years was obtained using the Arabic version of the WTQ. An exploratory factor analysis and correlation matrices were used to assess the validity and reliability of the WTQ. Results: The WTQ had an internal-consistency alpha of 0.73, indicating a moderate level of reliability. The scale showed multidimensionality, with items loading on two factors, namely waterpipe consumption and morning smoking. Conclusion: This study reports the nicotine dependence level among school-going adolescents who identify themselves as waterpipe users, using the WTQ. PMID:26383198
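The internal-consistency alpha reported above is Cronbach's alpha; a minimal sketch of its computation on simulated scale responses follows (the six-item structure and latent-trait model are illustrative, not the WTQ data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; items is an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(7)
# Simulated responses: a shared latent trait plus item-specific noise
trait = rng.normal(size=(200, 1))
responses = trait + rng.normal(0, 1, (200, 6))
alpha = cronbach_alpha(responses)
```

With equal trait and noise variances, the inter-item correlation is 0.5 and alpha comes out near 0.86 for six items, by the Spearman-Brown logic underlying the statistic.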
Cane, James; O'Connor, Denise; Michie, Susan
2012-04-24
An integrative theoretical framework, developed for cross-disciplinary implementation and other behaviour change research, has been applied across a wide range of clinical situations. This study tests the validity of this framework. Validity was investigated by behavioural experts sorting 112 unique theoretical constructs using closed and open sort tasks. The extent of replication was tested by Discriminant Content Validation and Fuzzy Cluster Analysis. There was good support for a refinement of the framework comprising 14 domains of theoretical constructs (average silhouette value 0.29): 'Knowledge', 'Skills', 'Social/Professional Role and Identity', 'Beliefs about Capabilities', 'Optimism', 'Beliefs about Consequences', 'Reinforcement', 'Intentions', 'Goals', 'Memory, Attention and Decision Processes', 'Environmental Context and Resources', 'Social Influences', 'Emotions', and 'Behavioural Regulation'. The refined Theoretical Domains Framework has a strengthened empirical base and provides a method for theoretically assessing implementation problems, as well as professional and other health-related behaviours as a basis for intervention development.
Diagnostic Crossover in Anorexia Nervosa and Bulimia Nervosa: Implications for DSM-V
Eddy, Kamryn T.; Dorer, David J.; Franko, Debra L.; Tahilani, Kavita; Thompson-Brenner, Heather; Herzog, David B.
2011-01-01
Objective The Diagnostic and Statistical Manual of Mental Disorders (DSM) is designed primarily as a clinical tool. Yet high rates of diagnostic “crossover” among the anorexia nervosa subtypes and bulimia nervosa may reflect problems with the validity of the current diagnostic schema, thereby limiting its clinical utility. This study was designed to examine diagnostic crossover longitudinally in anorexia nervosa and bulimia nervosa to inform the validity of the DSM-IV-TR eating disorders classification system. Method A total of 216 women with a diagnosis of anorexia nervosa or bulimia nervosa were followed for 7 years; weekly eating disorder symptom data collected using the Eating Disorder Longitudinal Interval Follow-Up Examination allowed for diagnoses to be made throughout the follow-up period. Results Over 7 years, the majority of women with anorexia nervosa experienced diagnostic crossover: more than half crossed between the restricting and binge eating/purging anorexia nervosa subtypes over time; one-third crossed over to bulimia nervosa but were likely to relapse into anorexia nervosa. Women with bulimia nervosa were unlikely to cross over to anorexia nervosa. Conclusions These findings support the longitudinal distinction of anorexia nervosa and bulimia nervosa but do not support the anorexia nervosa subtyping schema. PMID:18198267
NASA Technical Reports Server (NTRS)
Viswanathan, A. V.; Tamekuni, M.
1973-01-01
Analytical methods based on linear theory are presented for predicting the thermal stresses in and the buckling of heated structures with arbitrary uniform cross section. The structure is idealized as an assemblage of laminated plate-strip elements, curved and planar, and beam elements. Uniaxially stiffened plates and shells of arbitrary cross section are typical examples. For the buckling analysis, the structure or selected elements may be subjected to mechanical loads, in addition to thermal loads, in any desired combination of in-plane transverse load and axial compression load. The analysis is also applicable to stiffened structures under in-plane loads varying through the cross section, as in stiffened shells under bending. The buckling analysis is general and covers all modes of instability. The analysis has been applied to a limited number of problems and the results are presented. These results, while showing the validity and applicability of the method, do not reflect its full capability.
Sisic, Nedim; Jelicic, Mario; Pehar, Miran; Spasic, Miodrag; Sekulic, Damir
2016-01-01
In basketball, anthropometric status is an important factor when identifying and selecting talents, while agility is one of the most vital motor performances. The aim of this investigation was to evaluate the influence of anthropometric variables and power capacities on different preplanned agility performances. The participants were 92 high-level, junior-age basketball players (16-17 years of age; 187.6±8.72 cm in body height, 78.40±12.26 kg in body mass), randomly divided into a validation and a cross-validation subsample. The predictor set consisted of 16 anthropometric variables and three tests of power capacities (Sargent jump, broad jump and medicine-ball throw). The criteria were three tests of agility: a T-Shape Test, a Zig-Zag Test, and a test of running with a 180-degree turn (T180). Forward stepwise multiple regressions were calculated for the validation subsample and then cross-validated. Cross-validation included correlations between observed and predicted scores, a dependent-samples t-test between predicted and observed scores, and Bland-Altman graphics. Analysis of variance identified centres as being advanced in most of the anthropometric indices and the medicine-ball throw (all at P<0.05), with no significant between-position differences for the other studied motor performances. Multiple regression models originally calculated for the validation subsample were then cross-validated, and confirmed for the Zig-Zag Test (R of 0.71 and 0.72 for the validation and cross-validation subsamples, respectively). Anthropometrics were not strongly related to agility performance, but leg length was found to be negatively associated with performance in basketball-specific agility. Power capacities were confirmed to be an important factor in agility. The results highlighted the importance of sport-specific tests when studying pre-planned agility performance in basketball.
The improvement in power capacities will probably result in an improvement in agility in basketball athletes, while anthropometric indices should be used in order to identify those athletes who can achieve superior agility performance.
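The validation/cross-validation subsample design above, fitting on one half and checking predicted-observed correlation on the other, can be sketched on synthetic data (predictors, coefficients and split are invented; ordinary least squares stands in for the forward stepwise procedure):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
# Invented anthropometric/power predictors and an agility-test criterion
X = rng.normal(size=(92, 4))
y = 0.7 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0, 0.5, 92)
X_val, X_cv = X[:46], X[46:]                 # validation / cross-validation halves
y_val, y_cv = y[:46], y[46:]

model = LinearRegression().fit(X_val, y_val)          # built on the validation half
r_cv = np.corrcoef(model.predict(X_cv), y_cv)[0, 1]   # checked on the other half
```

A cross-validation correlation close to the validation-sample R, as reported for the Zig-Zag Test (0.71 vs 0.72), indicates the regression model generalizes beyond the subsample it was built on.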
Hevesi, Joseph A.; Istok, Jonathan D.; Flint, Alan L.
1992-01-01
Values of average annual precipitation (AAP) are desired for hydrologic studies within a watershed containing Yucca Mountain, Nevada, a potential site for a high-level nuclear-waste repository. Reliable values of AAP are not yet available for most areas within this watershed because of a sparsity of precipitation measurements and the need to obtain measurements over a sufficient length of time. To estimate AAP over the entire watershed, historical precipitation data and station elevations were obtained from a network of 62 stations in southern Nevada and southeastern California. Multivariate geostatistics (cokriging) was selected as an estimation method because of a significant (p = 0.05) correlation of r = 0.75 between the natural log of AAP and station elevation. A sample direct variogram for the transformed variable, TAAP = ln[(AAP) × 1000], was fitted with an isotropic, spherical model defined by a small nugget value of 5000, a range of 190 000 ft, and a sill value equal to the sample variance of 163 151. Elevations for 1531 additional locations were obtained from topographic maps to improve the accuracy of cokriged estimates. A sample direct variogram for elevation was fitted with an isotropic model consisting of a nugget value of 5500 and three nested transition structures: a Gaussian structure with a range of 61 000 ft, a spherical structure with a range of 70 000 ft, and a quasi-stationary, linear structure. The use of an isotropic, stationary model for elevation was considered valid within a sliding-neighborhood radius of 120 000 ft. The problem of fitting a positive-definite, nonlinear model of coregionalization to an inconsistent sample cross variogram for TAAP and elevation was solved by a modified use of the Cauchy-Schwarz inequality. A selected cross-variogram model consisted of two nested structures: a Gaussian structure with a range of 61 000 ft and a spherical structure with a range of 190 000 ft. 
Cross validation was used for model selection and for comparing the geostatistical model with six alternate estimation methods. Multivariate geostatistics provided the best cross-validation results.
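The cross-validation used for model comparison above is the leave-one-out kind standard in spatial estimation: each station is re-estimated from the remaining stations and the errors are summarized. A generic illustration follows, with inverse-distance weighting as a simple stand-in estimator (the study itself used cokriging with elevation as a covariate) and invented station data:

```python
import numpy as np

rng = np.random.default_rng(8)
# Invented station coordinates and ln-precipitation-like values with a trend
pts = rng.uniform(0, 100, size=(62, 2))
vals = 5 + 0.02 * pts[:, 0] + rng.normal(0, 0.1, 62)

def idw(p, pts, vals, power=2):
    """Inverse-distance-weighted estimate at point p from known stations."""
    d = np.linalg.norm(pts - p, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * vals) / np.sum(w)

# Leave-one-out cross-validation: re-estimate each station from the others
errors = []
for i in range(len(pts)):
    mask = np.arange(len(pts)) != i
    errors.append(idw(pts[i], pts[mask], vals[mask]) - vals[i])
rmse = float(np.sqrt(np.mean(np.square(errors))))
```

Repeating this loop for each candidate estimator and comparing the resulting error summaries is the model-selection use of cross-validation the abstract describes.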
A new automated method for the determination of cross-section limits in ephemeral gullies
NASA Astrophysics Data System (ADS)
Castillo, Carlos; Ángel Campo-Bescós, Miguel; Casalí, Javier; Giménez, Rafael
2017-04-01
The assessment of gully erosion relies on the estimation of the soil volume enclosed by cross-section limits. Both 3D and 2D methods require a methodology for determining the cross-section limits, which has traditionally been carried out in two ways: a) by visual inspection of the cross-section by an expert operator; b) by automated identification of thresholds for different geometrical variables, such as elevation, slope or plan curvature, obtained from the cross-section profile. However, for the latter methods the thresholds are typically not of general application, because they depend on absolute values valid only for the local gully conditions from which they were derived. In this communication we evaluate an automated method for cross-section delimitation of ephemeral gullies and compare its performance with the visual assessment provided by five scientists experienced in gully erosion assessment, defining gully width, depth and area for a total of 60 ephemeral gully cross-sections obtained from field surveys conducted on agricultural plots in Navarra (Spain). The automated method depends only on the calculation of a simple geometrical measurement, the bank trapezoid area, for every point of each gully bank. This right-angled trapezoid is defined by the elevation of a given point, the minimum elevation and the extremes of the cross-section. The gully limit for each bank is determined by the point in the bank with the maximum trapezoid area. The comparison of the estimates among the different expert operators showed large variation coefficients (up to 70%) in a number of cross-sections, larger for cross-section width and area and smaller for cross-section depth. The automated method produced results comparable to those obtained by the experts and was the procedure with the highest average correlation with the rest of the methods for the three parameters. 
The errors of the automated method relative to the average estimate of the experts were occasionally high (up to 40%), in line with the variability found among the experts. The automated method showed no apparent systematic error: its errors approximately followed a normal distribution, although slightly biased towards overestimation of the depth and area parameters. In conclusion, this study shows that there is no single definition of gully limits, even among gully experts, where large variability was found. The bank trapezoid method was found to be an automated, easy-to-use (readily implementable in a basic spreadsheet or programming script), threshold-independent procedure for determining gully limits consistently, yielding estimates similar to those derived by experts. Gully width and area calculations were more prone to errors than gully depth, which was the least sensitive parameter.
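The bank trapezoid criterion can be sketched as follows. This is one plausible reading of the description above, not the authors' code: for each candidate bank point, the right-angled trapezoid's parallel sides are taken as the heights above the channel bottom at that point and at the bank extreme, spanning their horizontal distance, and the bank's limit is the point maximizing that area:

```python
def bank_trapezoid_limits(x, z):
    """Return (left_index, right_index) of the gully limits for a
    cross-section given distances x and elevations z (equal-length lists).
    Sketch of the bank-trapezoid criterion under the assumptions above."""
    n = len(x)
    i_min = min(range(n), key=lambda i: z[i])   # channel bottom
    z_min = z[i_min]

    def area(i, i_ext):
        # right-angled trapezoid: parallel sides are heights above the
        # bottom at the candidate point and at the bank extreme
        return 0.5 * ((z[i] - z_min) + (z[i_ext] - z_min)) * abs(x[i] - x[i_ext])

    left = max(range(i_min), key=lambda i: area(i, 0), default=0)
    right = max(range(i_min + 1, n), key=lambda i: area(i, n - 1), default=n - 1)
    return left, right
```

On a symmetric V-shaped section with flat shoulders, the maximum-area points fall at the breaks in slope at the top of each bank, which is where an operator would typically place the limits.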
Radioactive Quality Evaluation and Cross Validation of Data from the HJ-1A/B Satellites' CCD Sensors
Zhang, Xin; Zhao, Xiang; Liu, Guodong; Kang, Qian; Wu, Donghai
2013-01-01
Data from multiple sensors are frequently used in Earth science to gain a more complete understanding of spatial information changes. Higher quality and mutual consistency are prerequisites when multiple sensors are used jointly. The HJ-1A/B satellites were successfully launched on 6 September 2008. There are four charge-coupled device (CCD) sensors with uniform spatial resolutions and spectral ranges onboard the HJ-1A/B satellites. Whether these data remain mutually consistent is a major issue to settle before they are used. This research aims to evaluate the data consistency and radioactive quality of the four CCDs. First, images of urban, desert, lake and ocean scenes are chosen as the objects of evaluation. Second, objective evaluation variables, such as the mean, variance and angular second moment, are used to characterize image performance. Finally, a cross-validation method is used to assess the correlation between the data from the four HJ-1A/B CCDs and data gathered from the moderate resolution imaging spectro-radiometer (MODIS). The results show that the image quality of the HJ-1A/B CCDs is stable, and the digital number distribution of the CCD data is relatively low. In cross-validation with MODIS, the root mean square errors of bands 1, 2 and 3 range from 0.055 to 0.065, and for band 4 it is 0.101. The data from the HJ-1A/B CCDs show good consistency. PMID:23881127
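The cross-validation against MODIS reduces to computing the root mean square error between co-located measurements from the two sensors; a sketch on invented data (the magnitudes are arbitrary, chosen only to land near the band 1-3 RMSE scale reported above):

```python
import numpy as np

rng = np.random.default_rng(9)
# Invented co-located reflectance-like samples over the same scenes:
# stand-ins for one HJ-1 CCD band and the matching MODIS band.
ccd = rng.uniform(0.05, 0.4, 500)
modis = ccd + rng.normal(0, 0.06, 500)              # inter-sensor discrepancy

rmse = float(np.sqrt(np.mean((ccd - modis) ** 2)))  # cross-validation metric
```

Computing this per band, as in the abstract, localizes which spectral bands agree well between the sensors and which (here, band 4) diverge.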
A scaling law for accretion zone sizes
NASA Technical Reports Server (NTRS)
Greenzweig, Yuval; Lissauer, Jack J.
1987-01-01
Current theories of runaway planetary accretion require small random velocities of the accreted particles. Two-body gravitational accretion cross sections which ignore tidal perturbations of the Sun are not valid for the slow encounters that occur at low relative velocities. Wetherill and Cox have studied accretion cross sections for rocky protoplanets orbiting at 1 AU. Using analytic methods based on Hill's lunar theory, one can scale these results for protoplanets that occupy the same fraction of their Hill sphere as does a rocky body at 1 AU. Generalization to bodies of different sizes is achieved here by numerical integrations of the three-body problem. Starting at initial positions far from the accreting body, test particles are allowed to encounter the body once, and the cross section is computed. A power law is found relating the cross section to the radius of the accreting body (of fixed mass).
Fast Estimation of Strains for Cross-Beams Six-Axis Force/Torque Sensors by Mechanical Modeling
Ma, Junqing; Song, Aiguo
2013-01-01
Strain distributions are crucial criteria of cross-beams six-axis force/torque sensors. The conventional method for calculating these criteria is to use Finite Element Analysis (FEA) to obtain numerical solutions. This paper aims to obtain analytical solutions for the strains under the effect of an external force/torque in each dimension. Generic mechanical models for cross-beams six-axis force/torque sensors are proposed, in which the deformable cross elastic beams and compliant beams are modeled as quasi-static Timoshenko beams. A detailed description of the model assumptions, model idealizations, application scope and model establishment is presented. The results are validated by both numerical FEA simulations and calibration experiments, and the test results are found to be compatible with each other for a wide range of geometric properties. The proposed analytical solutions are demonstrated to be an accurate estimation algorithm with higher efficiency. PMID:23686144
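The flavor of such closed-form strain estimates can be illustrated with the simpler Euler-Bernoulli bending relation eps = M*c/(E*I). Note this is only an illustrative simplification with hypothetical dimensions: the paper models the beams as Timoshenko beams, which additionally account for shear deformation.

```python
def bending_strain(force, length_pos, half_height, e_mod, inertia):
    """Surface bending strain of an elastic beam at distance `length_pos`
    from the applied load: eps = M*c/(E*I), with bending moment M = F*L."""
    moment = force * length_pos
    return moment * half_height / (e_mod * inertia)

# Hypothetical aluminium beam: F = 10 N applied 30 mm from the strain gauge,
# beam half-height 0.5 mm, E = 70 GPa, second moment of area 2e-12 m^4
eps = bending_strain(force=10.0, length_pos=0.03, half_height=0.5e-3,
                     e_mod=70e9, inertia=2.0e-12)
```

The result is on the order of 1000 microstrain, a typical working range for foil strain gauges on force-sensor beams.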
Temporal cross-correlation asymmetry and departure from equilibrium in a bistable chemical system.
Bianca, C; Lemarchand, A
2014-06-14
This paper aims at determining sustained reaction fluxes in a nonlinear chemical system driven in a nonequilibrium steady state. The method relies on the computation of cross-correlation functions for the internal fluctuations of chemical species concentrations. By employing Langevin-type equations, we derive approximate analytical formulas for the cross-correlation functions associated with nonlinear dynamics. Kinetic Monte Carlo simulations of the chemical master equation are performed in order to check the validity of the Langevin equations for a bistable chemical system. The two approaches are found to be in excellent agreement, except for critical parameter values where the bifurcation between monostability and bistability occurs. From the theoretical point of view, the results imply that the behavior of cross-correlation functions cannot be exploited to measure sustained reaction fluxes in a specific nonlinear system without prior knowledge of the associated chemical mechanism and the rate constants.
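A sample cross-correlation function of the kind used above can be sketched as follows; the asymmetry between C(tau) and C(-tau) is the signature of departure from equilibrium discussed in the abstract. The data here are synthetic, not from the chemical system:

```python
import numpy as np

def cross_correlation(a, b, max_lag):
    """Sample cross-correlation C_ab(tau) of two mean-removed series.
    Asymmetry C_ab(tau) != C_ab(-tau) signals broken time-reversal symmetry."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    n = len(a)
    out = {}
    for tau in range(-max_lag, max_lag + 1):
        if tau >= 0:
            out[tau] = float(np.mean(a[:n - tau or None] * b[tau:]))
        else:
            out[tau] = float(np.mean(a[-tau:] * b[:n + tau]))
    return out

rng = np.random.default_rng(4)
x = rng.normal(size=1000)
y = np.roll(x, 3) + 0.1 * rng.normal(size=1000)   # y lags x by 3 steps
cc = cross_correlation(x, y, max_lag=5)
```

Here cc[3] is large while cc[-3] is near zero, a strongly asymmetric (hence manifestly irreversible) pair of series.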
Denova-Gutiérrez, Edgar; Ramírez-Silva, Ivonne; Rodríguez-Ramírez, Sonia; Jiménez-Aguilar, Alejandra; Shamah-Levy, Teresa; Rivera-Dommarco, Juan A
2016-01-01
To assess the validity of a 140-item semiquantitative food frequency questionnaire (SFFQ) in Mexican adolescents and adults. Dietary intakes were measured using the SFFQ and two 24-hour dietary recalls (24DRs), administered on nonconsecutive days during the same week, from 178 adolescents and 230 adults participating in the Mexican National Health and Nutrition Survey-2012. Validity was evaluated using correlation coefficients (CC), deattenuated CC, linear regression models, cross-classification analysis, and the Bland-Altman method. In adults, deattenuated correlation coefficients between the SFFQ and the 24DRs ranged from 0.30 for folate to 0.61 for saturated fat. In addition, 63% of adults and 62% of adolescents were classified in the same or adjacent quartile of nutrient intake when comparing data from the SFFQ and 24DRs. The SFFQ had moderate validity for energy, macronutrients and micronutrients. It also had good validity for ranking individuals according to their dietary intake of different nutrients.
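The deattenuated correlation coefficients reported above correct the observed SFFQ-vs-recall correlation for day-to-day variability in the replicate 24DRs. A sketch of one common deattenuation formula (Rosner-Willett style; all numbers hypothetical, and not necessarily the exact variant the authors used):

```python
import math

def deattenuated_r(r_obs, within_var, between_var, n_reps):
    """Correct an observed FFQ-vs-recall correlation for within-person
    variability in replicate 24-hour recalls: r_c = r_obs*sqrt(1 + lam/n),
    with lam the within/between variance ratio from the replicates."""
    lam = within_var / between_var
    return r_obs * math.sqrt(1.0 + lam / n_reps)

# Hypothetical: observed r = 0.40, variance components from two recall days
r_c = deattenuated_r(r_obs=0.40, within_var=1.2, between_var=0.8, n_reps=2)
```

The correction always increases the correlation, since random day-to-day error in the reference method attenuates the observed association.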
Multivariate Adaptive Regression Splines (Preprint)
1990-08-01
fold cross-validation would take about ten times as long, and MARS is not all that fast to begin with. Friedman has a number of examples showing…standardized mean squared error of prediction (MSEP), the generalized cross-validation (GCV), and the number of selected terms (TERMS). In accordance with…and mi = 10 case were almost exclusively spurious cross-product terms and terms involving the nuisance variables x6 through x10. This large number of
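The GCV criterion mentioned in this excerpt penalizes the average squared residual by an effective-parameter count, which is why MARS can avoid the tenfold cost of actual cross-validation. A minimal sketch (values hypothetical):

```python
def gcv(rss, n, eff_params):
    """Generalized cross-validation score as used by MARS:
    GCV = (RSS/n) / (1 - C(M)/n)^2, with C(M) the effective
    number of parameters of the fitted model."""
    if eff_params >= n:
        raise ValueError("effective parameters must be < n")
    return (rss / n) / (1.0 - eff_params / n) ** 2

# Hypothetical fit: residual sum of squares 50 on 100 points,
# 10 effective parameters
score = gcv(rss=50.0, n=100, eff_params=10)
```

As the model grows, RSS falls but the denominator shrinks, so GCV trades fit against complexity in a single pass over the data.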
A novel multi-target regression framework for time-series prediction of drug efficacy.
Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin
2017-01-18
Excavating from small samples is a challenging pharmacokinetic problem, where statistical methods can be applied. Pharmacokinetic data is special due to the small samples of high dimensionality, which makes it difficult to adopt conventional methods to predict the efficacy of traditional Chinese medicine (TCM) prescription. The main purpose of our study is to obtain some knowledge of the correlation in TCM prescription. Here, a novel method named Multi-target Regression Framework to deal with the problem of efficacy prediction is proposed. We employ the correlation between the values of different time sequences and add predictive targets of previous time as features to predict the value of current time. Several experiments are conducted to test the validity of our method and the results of leave-one-out cross-validation clearly manifest the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework demonstrates the best performance, and appears to be more suitable for this task.
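The framework's central idea above, appending the targets from the previous time point as extra features and scoring by leave-one-out cross-validation, can be sketched as follows. Plain least squares stands in for the SVR base learner, and all data are synthetic:

```python
import numpy as np

def loocv_mse_with_lagged_targets(X, Y):
    """Leave-one-out CV for predicting multi-target values at time t from
    the features at t plus the targets at t-1 (a minimal stand-in for the
    multi-target regression framework)."""
    Z = np.hstack([X[1:], Y[:-1]])          # features at t + targets at t-1
    T = Y[1:]                               # targets at t
    n = len(T)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        A = np.hstack([Z[mask], np.ones((n - 1, 1))])   # add intercept
        coef, *_ = np.linalg.lstsq(A, T[mask], rcond=None)
        pred = np.hstack([Z[i], 1.0]) @ coef
        errs.append(np.mean((pred - T[i]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 2))                     # 12 time points, 2 features
Y = np.cumsum(rng.normal(size=(12, 2)), axis=0)  # 2 efficacy targets
mse = loocv_mse_with_lagged_targets(X, Y)
```

Leave-one-out is the natural choice here because pharmacokinetic samples are few; every observation serves as a test case exactly once.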
Predicting cancerlectins by the optimal g-gap dipeptides
NASA Astrophysics Data System (ADS)
Lin, Hao; Liu, Wei-Xin; He, Jiao; Liu, Xin-Hui; Ding, Hui; Chen, Wei
2015-12-01
The cancerlectin plays a key role in the process of tumor cell differentiation. Thus, fully understanding the function of cancerlectins is significant because it sheds light on future directions for cancer therapy. However, traditional wet-experimental methods are money- and time-consuming. It is highly desirable to develop an effective and efficient computational tool to identify cancerlectins. In this study, we developed a sequence-based method to discriminate between cancerlectins and non-cancerlectins. The analysis of variance (ANOVA) was used to choose the optimal feature set derived from the g-gap dipeptide composition. The jackknife cross-validated results showed that the proposed method achieved an accuracy of 75.19%, which is superior to other published methods. For the convenience of other researchers, an online web-server, CaLecPred, was established and can be freely accessed at http://lin.uestc.edu.cn/server/CalecPred. We believe that CaLecPred is a powerful tool to study cancerlectins and to guide related experimental validations.
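The g-gap dipeptide composition used above counts residue pairs separated by g intervening positions and normalizes them into a 400-dimensional frequency vector. A minimal sketch:

```python
from collections import Counter
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def g_gap_dipeptide_composition(seq, g):
    """Frequency of each ordered residue pair separated by g residues.
    For g = 0 this reduces to the ordinary dipeptide composition."""
    pairs = [seq[i] + seq[i + g + 1] for i in range(len(seq) - g - 1)]
    counts = Counter(pairs)
    total = len(pairs)
    return {a + b: counts.get(a + b, 0) / total
            for a, b in product(AMINO_ACIDS, repeat=2)}

comp = g_gap_dipeptide_composition("ACDAC", g=1)   # toy sequence
```

Feature selection (ANOVA in the paper) then picks the subset of these 400 frequencies that best separates the two classes.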
Dutch population specific sex estimation formulae using the proximal femur.
Colman, K L; Janssen, M C L; Stull, K E; van Rijn, R R; Oostra, R J; de Boer, H H; van der Merwe, A E
2018-05-01
Sex estimation techniques are frequently applied in forensic anthropological analyses of unidentified human skeletal remains. While morphological sex estimation methods are relatively robust to population differences, the classification accuracy of metric sex estimation methods is population-specific. No metric sex estimation method currently exists for the Dutch population. The purpose of this study is to create Dutch population-specific sex estimation formulae by means of osteometric analyses of the proximal femur. Since the Netherlands lacks a representative contemporary skeletal reference population, 2D plane reconstructions derived from clinical computed tomography (CT) data were used as an alternative source for a representative reference sample. The first part of this study assesses the intra- and inter-observer error, or reliability, of twelve measurements of the proximal femur. The technical error of measurement (TEM) and relative TEM (%TEM) were calculated using 26 dry adult femora. In addition, the agreement, or accuracy, between the dry bone and CT-based measurements was determined by percent agreement. Only reliable and accurate measurements were retained for the logistic regression sex estimation formulae; a training set (n=86) was used to create the models, while an independent testing set (n=28) was used to validate them. Due to high levels of multicollinearity, only single-variable models were created. Cross-validated classification accuracies ranged from 86% to 92%. The high cross-validated classification accuracies indicate that the developed formulae can contribute to the biological profile, and specifically to sex estimation, of unidentified human skeletal remains in the Netherlands. Furthermore, the results indicate that clinical CT data can be a valuable alternative source of data when representative skeletal collections are unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
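A single-variable logistic sex-estimation model of the kind described can be sketched as follows; the measurement name, values and fitting details are hypothetical illustrations, not the paper's data or coefficients:

```python
import numpy as np

def fit_logistic_1d(x, y, lr=0.1, n_iter=2000):
    """Single-predictor logistic regression fitted by gradient descent,
    mirroring the one-measurement sex-estimation models."""
    x = (x - x.mean()) / x.std()            # standardize for stable steps
    w, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b, x

# Hypothetical femoral head diameters (mm): females ~42, males ~48
rng = np.random.default_rng(1)
fhd = np.concatenate([rng.normal(42, 2, 40), rng.normal(48, 2, 40)])
sex = np.concatenate([np.zeros(40), np.ones(40)])    # 0 = F, 1 = M
w, b, xs = fit_logistic_1d(fhd, sex)
acc = np.mean(((1 / (1 + np.exp(-(w * xs + b)))) > 0.5) == sex)
```

In practice the accuracy would be reported on a cross-validated or held-out sample, as in the paper, rather than on the training data as done here for brevity.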
Exploring the cross-level impact of market orientation on nursing innovation in hospitals.
Weng, Rhay-Hung; Huang, Ching-Yuan; Lin, Tzu-En
2013-01-01
Recently, many hospitals have been enthusiastically encouraging nurses to pursue nursing innovation to improve health care quality and increase nursing productivity by proposing innovative training methods, products, services, care skills, and care methods. This study explored the cross-level impact of market orientation on nursing innovation. In our study, 3 to 7 nurses and 1 manager were selected from each nursing team to act as respondents. The questionnaire survey was conducted in Taiwan in 2009-2010, after the managers of each nursing team and the nurses had been anonymously coded and paired up. A total of 808 valid questionnaires were collected, covering 172 valid teams. Hierarchical linear modeling was used for the analysis. Nursing innovation is the sum of knowledge creation, innovation behavior, and innovation diffusion displayed by the nurses during nursing care. The level of knowledge creation, as perceived by the nurses, was the highest, whereas the level of innovation diffusion was the lowest. Results of hierarchical linear modeling showed that only competitor orientation yielded a significant positive influence on knowledge creation, innovation behavior, or innovation diffusion. The r values were 0.53, 0.49, and 0.61, respectively. Customer orientation and interfunctional coordination did not have significant effects on nursing innovation. Hospital nurses exhibited better performance in knowledge creation than in innovation behavior and diffusion. Only competitor orientation had a significantly positive and cross-level influence on nursing innovation. However, competitor orientation was observed to be the lowest dimension of market orientation, which indicates that this factor should be the focus when improving nursing innovations in the future. Therefore, managers should continually seek to understand the strategies, advantages, and methods of their competitors.
de Sousa, Carla Suellen Pires; Castro, Régia Christina Moura Barbosa; Pinheiro, Ana Karina Bezerra; Moura, Escolástica Rejane Ferreira; Almeida, Paulo César; Aquino, Priscila de Souza
2018-01-01
ABSTRACT Objective: translate and adapt the Condom Self-Efficacy Scale to Portuguese in the Brazilian context. The scale originated in the United States and measures self-efficacy in condom use. Method: methodological study in two phases: translation, cross-cultural adaptation and verification of psychometric properties. The translation and adaptation process involved four translators, one mediator of the synthesis and five health professionals. The content validity was verified using the Content Validation Index, based on 22 experts’ judgments. Forty subjects participated in the pretest, who contributed to the understanding of the scale items. The scale was applied to 209 students between 13 and 26 years of age from a school affiliated with the state-owned educational network. The reliability was analyzed by means of Cronbach’s alpha. Results: the Portuguese version of the scale obtained a Cronbach’s alpha coefficient of 0.85 and the total mean score was 68.1 points. A statistically significant relation was found between the total scale and the variables not having children (p= 0.038), condom use (p= 0.008) and condom use with fixed partner (p=0.036). Conclusion: the Brazilian version of the Condom Self-Efficacy Scale is a valid and reliable tool to verify the self-efficacy in condom use among adolescents and young adults. PMID:29319748
Lindahl, Marianne; Andersen, Signe; Joergensen, Annette; Frandsen, Christian; Jensen, Liselotte; Benedikz, Eirikur
2018-01-01
The aim of this study was to translate and culturally adapt the Short Musculoskeletal Function Assessment (SMFA) into Danish (SMFA-DK) and assess its psychometric properties. The SMFA was translated and cross-culturally adapted according to a standardized procedure. Minor changes in the wording of three items were made to adapt to Danish conditions. Acute patients (n = 201) and rehabilitation patients (n = 231) with musculoskeletal problems, aged 18-87 years, were included. The following analyses were made to evaluate the psychometric quality of SMFA-DK: reliability with Cronbach's alpha, content validity by coding according to the International Classification of Functioning, Disability and Health (ICF), floor/ceiling effects, construct validity as factor analysis, correlations between SMFA-DK and Short Form 36, and the known-group method. Responsiveness and effect size were calculated. Cronbach's alpha values were between 0.79 and 0.94. SMFA-DK captured all components of the ICF, and there were no floor/ceiling effects. Factor analysis demonstrated four subscales. SMFA-DK correlated well with the SF-36 subscales for the rehabilitation patients and less strongly for the newly injured patients. Effect sizes were excellent and better for SMFA-DK than for SF-36. The study indicates that SMFA-DK can be a valid and responsive measure of outcome in rehabilitation settings.
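Cronbach's alpha, the internal-consistency index reported above, is computed from an items-by-subjects score matrix. A minimal sketch with made-up scores:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical scale: 4 subjects, 3 items
scores = [[3, 4, 3], [2, 2, 3], [4, 5, 4], [1, 2, 2]]
alpha = cronbach_alpha(scores)
```

Values of roughly 0.7-0.95, like the 0.79-0.94 range above, are conventionally read as acceptable-to-excellent internal consistency.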
HA, Mei; QIAN, Xiaoling; YANG, Hong; HUANG, Jichun; LIU, Changjiang
2016-01-01
Background: The public’s cognition of stroke and responses to stroke symptoms are important to prevent complications and decrease mortality when stroke occurs. The aim of this study was to develop and validate the Chinese version of the Stroke Action Test (C-STAT) in a Chinese population. Methods: This study was rigorously implemented following the published guideline for the translation, adaptation and validation of instruments for cross-cultural use in health care research. A cross-sectional study was performed among 328 stroke patients and family members in the Department of Neurology of the Second Hospital of Lanzhou University, Gansu province, China in 2014. Results: The Chinese version of the instrument showed favorable content equivalence with the source version. Values of Cronbach’s alpha and test-retest reliability of the C-STAT were 0.88 and 0.86, respectively. Principal component analysis supported a four-factor solution of the C-STAT. Criterion-related validity showed that the C-STAT was a significant predictor of the 7-item stroke symptom scores (R = 0.77; t = 21.74, P < 0.001). Conclusion: The C-STAT is an intelligible and brief psychometric tool to assess individuals’ knowledge of the appropriate responses to stroke symptoms in Chinese populations. It could also be used by health care providers to assess educational programs on stroke prevention. PMID:28053925
Gould, Gillian Sandra; Watt, Kerrianne; Cadet-James, Yvonne; Clough, Alan R.
2014-01-01
Objective To validate, for the first time, the Risk Behaviour Diagnosis (RBD) Scale for Aboriginal Australian tobacco smokers, based on the Extended Parallel Process Model (EPPM). Despite high smoking prevalence, little is known about how Indigenous peoples assess their smoking risks. Methods In a cross-sectional study of 121 aboriginal smokers aged 18–45 in regional New South Wales, in 2014, RBD subscales were assessed for internal consistency. Scales included measures of perceived threat (susceptibility to and severity of smoking risks) and perceived efficacy (response efficacy and self-efficacy for quitting). An Aboriginal community panel appraised face and content validity. EPPM constructs of danger control (protective motivation) and fear control (defensive motivation) were assessed for cogency. Results Scales had acceptable to good internal consistency (Cronbach's alpha = 0.65–1.0). Most participants demonstrated high-perceived threat (77%, n = 93); and half had high-perceived efficacy (52%, n = 63). High-perceived efficacy with high-threat appeared consistent with danger control dominance; low-perceived efficacy with high-threat was consistent with fear control dominance. Conclusions In these Aboriginal smokers of reproductive age, the RBD Scale appeared valid and reliable. Further research is required to assess whether the RBD Scale and EPPM can predict quit attempts and assist with tailored approaches to counselling and targeted health promotion campaigns. PMID:26844043
Onojima, Takayuki; Goto, Takahiro; Mizuhara, Hiroaki; Aoyagi, Toshio
2018-01-01
Synchronization of neural oscillations as a mechanism of brain function is attracting increasing attention. A neural oscillation is rhythmic neural activity that can be easily observed by noninvasive electroencephalography (EEG). Neural oscillations show same-frequency and cross-frequency synchronization during various cognitive and perceptual functions. However, it is unclear how this neural synchronization is achieved by a dynamical system. If neural oscillations are weakly coupled oscillators, the dynamics of neural synchronization can be described theoretically using a phase oscillator model. We propose an estimation method to identify the phase oscillator model from real data of cross-frequency synchronized activities. The proposed method can estimate the coupling function governing the properties of synchronization. Furthermore, we examine the reliability of the proposed method using time-series data obtained from numerical simulation and an electronic circuit experiment, and show that our method can estimate the coupling function correctly. Finally, we estimate the coupling function between EEG oscillation and the speech sound envelope, and discuss the validity of these results.
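The estimation idea, regressing the observed phase velocity on Fourier terms of the phase relation to recover the coupling function, can be sketched in the simplest same-frequency case. The data here are synthetic; the paper's method extends this to cross-frequency synchronization:

```python
import numpy as np

# Synthetic phases from dphi1/dt = w1 + a*sin(phi2 - phi1), Euler-integrated
dt, n = 0.01, 20000
w1, w2, a_true = 1.0, 1.3, 0.4
phi1, phi2 = np.zeros(n), np.zeros(n)
for t in range(n - 1):
    phi1[t + 1] = phi1[t] + dt * (w1 + a_true * np.sin(phi2[t] - phi1[t]))
    phi2[t + 1] = phi2[t] + dt * w2

# Regress the finite-difference phase velocity on Fourier terms of the
# phase difference to recover the coupling-function coefficients
dphi = np.diff(phi1) / dt
psi = (phi2 - phi1)[:-1]
A = np.column_stack([np.ones_like(psi), np.sin(psi), np.cos(psi)])
coef, *_ = np.linalg.lstsq(A, dphi, rcond=None)
w_est, a_est = coef[0], coef[1]
```

With noiseless synthetic phases the natural frequency and coupling amplitude are recovered essentially exactly; with measured EEG phases the same regression yields a smoothed estimate of the coupling function.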
An Improved Time-Frequency Analysis Method in Interference Detection for GNSS Receivers
Sun, Kewen; Jin, Tian; Yang, Dongkai
2015-01-01
In this paper, an improved joint time-frequency (TF) analysis method based on a reassigned smoothed pseudo Wigner–Ville distribution (RSPWVD) is proposed for interference detection in Global Navigation Satellite System (GNSS) receivers. In the RSPWVD, a two-dimensional low-pass smoothing function is introduced to eliminate the cross-terms present in the quadratic TF distribution, and at the same time, the reassignment method is adopted to improve the TF concentration properties of the auto-terms of the signal components. The proposed interference detection method is evaluated by experiments on GPS L1 signals in disturbing scenarios, in comparison with state-of-the-art interference detection approaches. The analysis results show that the proposed technique effectively overcomes the cross-terms problem while preserving good TF localization properties, and that it enhances the interference detection performance of GNSS receivers, particularly in jamming environments. PMID:25905704
From cutting-edge pointwise cross-section to groupwise reaction rate: A primer
NASA Astrophysics Data System (ADS)
Sublet, Jean-Christophe; Fleming, Michael; Gilbert, Mark R.
2017-09-01
The nuclear research and development community has a history of using both integral and differential experiments to support accurate lattice-reactor, nuclear reactor criticality and shielding simulations, as well as verification and validation efforts of cross sections and emitted particle spectra. An important aspect to this type of analysis is the proper consideration of the contribution of the neutron spectrum in its entirety, with correct propagation of uncertainties and standard deviations derived from Monte Carlo simulations, to the local and total uncertainty in the simulated reaction rates (RRs), which usually only apply to one application at a time. This paper identifies deficiencies in the traditional treatment, and discusses correct handling of the RR uncertainty quantification and propagation, including details of the cross section components in the RR uncertainty estimates, which are verified for relevant applications. The methodology that rigorously captures the spectral shift and cross section contributions to the uncertainty in the RR are discussed with quantified examples that demonstrate the importance of the proper treatment of the spectrum profile and cross section contributions to the uncertainty in the RR and subsequent response functions. The recently developed inventory code FISPACT-II, when connected to the processed nuclear data libraries TENDL-2015, ENDF/B-VII.1, JENDL-4.0u or JEFF-3.2, forms an enhanced multi-physics platform providing a wide variety of advanced simulation methods for modelling activation, transmutation, burnup protocols and simulating radiation damage source terms. The system has extended cutting-edge nuclear data forms, uncertainty quantification and propagation methods, which have been the subject of recent integral and differential, fission, fusion and accelerator validation efforts.
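The groupwise collapse at the heart of the discussion, RR = sum over groups of flux times cross section, can be sketched together with a first-order uncertainty propagation. The uncorrelated-groups assumption below is a deliberate simplification (the paper stresses that covariance and spectral-shift effects matter), and all numbers are hypothetical:

```python
import numpy as np

def groupwise_rate(flux, xs):
    """Collapse a groupwise neutron flux and cross section to a
    reaction rate: RR = sum_g phi_g * sigma_g."""
    flux = np.asarray(flux, dtype=float)
    xs = np.asarray(xs, dtype=float)
    return float(np.sum(flux * xs))

def rate_rel_uncertainty(flux, xs, xs_rel_unc):
    """First-order propagation of groupwise cross-section uncertainties to
    the reaction rate, assuming the groups are uncorrelated (a strong
    simplification relative to full covariance treatment)."""
    contrib = np.asarray(flux, float) * np.asarray(xs, float)
    abs_unc = contrib * np.asarray(xs_rel_unc, float)
    return float(np.sqrt(np.sum(abs_unc ** 2)) / np.sum(contrib))

phi = [1.0e14, 5.0e13, 2.0e13]        # hypothetical 3-group flux (n/cm^2/s)
sigma = [1.0e-24, 5.0e-24, 2.0e-23]   # hypothetical 3-group xs (cm^2)
rr = groupwise_rate(phi, sigma)
u = rate_rel_uncertainty(phi, sigma, [0.05, 0.10, 0.03])
```

Note how the group with the largest flux-weighted contribution, not the largest bare uncertainty, dominates the propagated RR uncertainty.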
The simulation system is used to accurately and predictively probe, understand and underpin a modern and sustainable understanding of the nuclear physics that is so important for many areas of science and technology; advanced fission and fuel systems, magnetic and inertial confinement fusion, high energy, accelerator physics, medical application, isotope production, earth exploration, astrophysics and homeland security.
Gaudin, Valérie; Hedou, Celine; Soumet, Christophe; Verdon, Eric
2016-01-01
The Evidence Investigator™ system (Randox, UK) is a biochip-based, semi-automated system. The microarray kit II (AM II) is capable of detecting several compounds belonging to different families of antibiotics: quinolones, ceftiofur, thiamphenicol, streptomycin, tylosin and tetracyclines. The performance of this innovative system was evaluated for the detection of antibiotic residues in new matrices: in muscle of different animal species and in aquaculture products. The method was validated according to European Decision No. EC/2002/657 and the European guideline for the validation of screening methods, which represents a complete initial validation. The false-positive rate was equal to 0% in muscle and in aquaculture products. The detection capabilities (CCβ) for the 12 validated antibiotics (enrofloxacin, difloxacin, ceftiofur, desfuroyl ceftiofur cysteine disulfide, thiamphenicol, florfenicol, tylosin, tilmicosin, streptomycin, dihydrostreptomycin, tetracycline, doxycycline) were all lower than the respective maximum residue limits (MRLs) in muscle of different animal origins (bovine, ovine, porcine, poultry). No cross-reactions were observed with other antibiotics, either within the six detected families or with other families of antibiotics. The AM II kit could be applied to aquaculture products, but with higher detection capabilities than in muscle. The detection capabilities (CCβ) in aquaculture products were 0.25, 0.10 and 0.5 times the respective MRLs for enrofloxacin, tylosin and oxytetracycline, respectively. The performance of the AM II kit was compared with other screening methods and with the performance characteristics previously determined in honey.
NASA Astrophysics Data System (ADS)
Ramesh, Adepu; Ashritha, Kilari; Kumar, Molugaram
2018-04-01
Walking has always been a prime mode of human mobility for short-distance travel. Traffic congestion has become a major problem for safe pedestrian crossing in most metropolitan cities. This has emphasized the need for sufficient pedestrian gaps for safe crossing on urban roads. The present work aims at understanding the factors that influence pedestrian crossing behaviour. Four locations in Hyderabad city were chosen for identification of pedestrian crossing behaviour, gap characteristics, waiting time, etc. From the study it was observed that pedestrian behaviour and crossing patterns differ across locations and are influenced by the land use pattern. A gap acceptance model was developed from the data for improving pedestrian safety at a mid-block location; the model was validated using the existing data. Pedestrian delay at the intersection was estimated using the Highway Capacity Manual (HCM). Field delays were observed to be smaller than the delays obtained from the HCM method.
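For pedestrian delay at an uncontrolled crossing, an HCM-style gap-acceptance expression is commonly used; the sketch below assumes random (Poisson) vehicle arrivals, and both the flow and the critical gap are hypothetical values, not those from the study:

```python
import math

def pedestrian_delay(v_flow, t_crit):
    """Average pedestrian waiting delay (s) at an uncontrolled crossing,
    using the HCM-style gap-acceptance expression
    d = (1/v) * (exp(v*t_c) - v*t_c - 1),
    with v the vehicular flow rate (veh/s) and t_c the critical gap (s)."""
    return (math.exp(v_flow * t_crit) - v_flow * t_crit - 1.0) / v_flow

# Hypothetical: 600 veh/h one-way flow, 6 s critical gap
d = pedestrian_delay(v_flow=600 / 3600.0, t_crit=6.0)
```

The exponential term makes delay grow sharply with flow, which is one reason field-measured delays (where pedestrians accept shorter gaps or cross in stages) often come out below the HCM estimate, as the study observed.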
Reiss, Philip T
2015-08-01
The "ten ironic rules for statistical reviewers" presented by Friston (2012) prompted a rebuttal by Lindquist et al. (2013), which was followed by a rejoinder by Friston (2013). A key issue left unresolved in this discussion is the use of cross-validation to test the significance of predictive analyses. This note discusses the role that cross-validation-based and related hypothesis tests have come to play in modern data analyses, in neuroimaging and other fields. It is shown that such tests need not be suboptimal and can fill otherwise-unmet inferential needs. Copyright © 2015 Elsevier Inc. All rights reserved.
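One way to attach a significance level to a cross-validated accuracy, in the spirit of the tests discussed above, is a label-permutation test. A minimal sketch with a nearest-centroid classifier on synthetic data (classifier and data are illustrative choices, not from the note):

```python
import numpy as np

def cv_accuracy(x, y, n_folds=5):
    """Accuracy of a nearest-centroid classifier under simple k-fold CV."""
    n = len(y)
    idx = np.arange(n)
    hits = 0
    for f in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, f)
        m0 = x[train][y[train] == 0].mean()
        m1 = x[train][y[train] == 1].mean()
        pred = (np.abs(x[f] - m1) < np.abs(x[f] - m0)).astype(int)
        hits += int((pred == y[f]).sum())
    return hits / n

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 30), rng.normal(1.5, 1, 30)])
y = np.concatenate([np.zeros(30, int), np.ones(30, int)])

obs = cv_accuracy(x, y)
# Permutation null: reshuffle labels and recompute the CV accuracy
null = [cv_accuracy(x, rng.permutation(y)) for _ in range(200)]
p_val = (1 + sum(a >= obs for a in null)) / (1 + len(null))
```

The permutation distribution gives a valid null for "accuracy at chance" without assuming independence of the fold-wise scores, which is exactly the assumption that makes naive parametric tests on CV output suspect.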
Accelerating cross-validation with total variation and its application to super-resolution imaging
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Ikeda, Shiro; Akiyama, Kazunori; Kabashima, Yoshiyuki
2017-12-01
We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ_1-norm and total variation terms, which is based on a perturbative expansion utilizing the largeness of both the data dimensionality and the model. The developed formula allows us to reduce the necessary computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.
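The "literally conducted" cross-validation that the approximation formula is benchmarked against can be sketched as follows; ridge-penalized least squares stands in here for the ℓ1/total-variation solver purely for brevity, and all data are synthetic:

```python
import numpy as np

def kfold_cve(X, y, lam, k=10):
    """Literal k-fold cross-validation error for ridge-penalized least
    squares: refit on each training split, score on the held-out fold."""
    n, p = X.shape
    errs = []
    for f in np.array_split(np.arange(n), k):
        tr = np.setdiff1d(np.arange(n), f)
        A = X[tr].T @ X[tr] + lam * np.eye(p)
        w = np.linalg.solve(A, X[tr].T @ y[tr])
        errs.append(np.mean((X[f] @ w - y[f]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)
cve = kfold_cve(X, y, lam=1.0)
```

The cost of this literal procedure scales with the number of folds times the solver cost, which is precisely what a perturbative CVE approximation avoids for large imaging problems.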
The cross-cultural validity of posttraumatic stress disorder: implications for DSM-5.
Hinton, Devon E; Lewis-Fernández, Roberto
2011-09-01
There is considerable debate about the cross-cultural applicability of the posttraumatic stress disorder (PTSD) category as currently specified. Concerns include the possible status of PTSD as a Western culture-bound disorder and the validity of individual items and criteria thresholds. This review examines various types of cross-cultural validity of the PTSD criteria as defined in DSM-IV-TR, and presents options and preliminary recommendations to be considered for DSM-5. Searches were conducted of the mental health literature, particularly since 1994, regarding cultural-, race-, or ethnicity-related factors that might limit the universal applicability of the diagnostic criteria of PTSD in DSM-IV-TR and the possible criteria for DSM-5. Substantial evidence of the cross-cultural validity of PTSD was found. However, evidence of cross-cultural variability in certain areas suggests the need for further research: the relative salience of avoidance/numbing symptoms, the role of the interpretation of trauma-caused symptoms in shaping symptomatology, and the prevalence of somatic symptoms. This review also indicates the need to modify certain criteria, such as the items on distressing dreams and on foreshortened future, to increase their cross-cultural applicability. Text additions are suggested to increase the applicability of the manual across cultural contexts: specifying that cultural syndromes-such as those indicated in the DSM-IV-TR Glossary-may be a prominent part of the trauma response in certain cultures, and that those syndromes may influence PTSD symptom salience and comorbidity. The DSM-IV-TR PTSD category demonstrates various types of validity. Criteria modification and textual clarifications are suggested to further improve its cross-cultural applicability. © 2010 Wiley-Liss, Inc.
Revisiting the Quantitative-Qualitative Debate: Implications for Mixed-Methods Research
SALE, JOANNA E. M.; LOHFELD, LYNNE H.; BRAZIL, KEVIN
2015-01-01
Health care research includes many studies that combine quantitative and qualitative methods. In this paper, we revisit the quantitative-qualitative debate and review the arguments for and against using mixed methods. In addition, we discuss the implications stemming from our view that the paradigms upon which the methods are based hold different views of reality and therefore different views of the phenomenon under study. Because the two paradigms do not study the same phenomena, quantitative and qualitative methods cannot be combined for cross-validation or triangulation purposes. However, they can be combined for complementary purposes. Future standards for mixed-methods research should clearly reflect this recommendation. PMID:26523073
A diffusion tensor imaging tractography algorithm based on Navier-Stokes fluid mechanics.
Hageman, Nathan S; Toga, Arthur W; Narr, Katherine L; Shattuck, David W
2009-03-01
We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color images of the DTI dataset.
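The paper above compares its fluid-mechanics method against streamline tractography. As context, here is a minimal sketch of that streamline baseline: integrate a path along the tensor field's principal eigenvector with fixed-step Euler updates. The synthetic tensor field and all numbers are illustrative assumptions, not the authors' Navier-Stokes scheme or data.

```python
import numpy as np

def streamline(tensor_at, seed, step=0.1, n_steps=50):
    """Trace a path by repeatedly stepping along the principal diffusion direction."""
    path = [np.asarray(seed, dtype=float)]
    prev_dir = None
    for _ in range(n_steps):
        D = tensor_at(path[-1])
        vals, vecs = np.linalg.eigh(D)
        d = vecs[:, np.argmax(vals)]      # principal eigenvector
        if prev_dir is not None and d @ prev_dir < 0:
            d = -d                        # eigenvector sign is arbitrary; keep orientation consistent
        path.append(path[-1] + step * d)
        prev_dir = d
    return np.array(path)

def tensor_field(p):
    # synthetic field: diffusion everywhere strongest along the x-axis
    return np.diag([3.0, 1.0, 1.0])

track = streamline(tensor_field, seed=[0.0, 0.0, 0.0])
```

In this toy field the track runs straight along the x-axis; the paper's point is that such local, direction-following schemes distort near crossing or divergent fiber geometries, which the fluid simulation handles better.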
The prediction of palmitoylation site locations using a multiple feature extraction method.
Shi, Shao-Ping; Sun, Xing-Yu; Qiu, Jian-Ding; Suo, Sheng-Bao; Chen, Xiang; Huang, Shu-Yun; Liang, Ru-Ping
2013-03-01
As an extremely important and ubiquitous post-translational lipid modification, palmitoylation plays a significant role in a variety of biological and physiological processes. Unlike other lipid modifications, protein palmitoylation and depalmitoylation are highly dynamic and can regulate both protein function and localization. The dynamic nature of palmitoylation is poorly understood because of the limitations in current assay methods. The in vivo or in vitro experimental identification of palmitoylation sites is both time consuming and expensive. Due to the large volume of protein sequences generated in the post-genomic era, it is extraordinarily important in both basic research and drug discovery to rapidly identify the attributes of a new protein's palmitoylation sites. In this work, a new computational method, WAP-Palm, combining multiple feature extraction, has been developed to predict the palmitoylation sites of proteins. The performance of the WAP-Palm model is measured herein and was found to have a sensitivity of 81.53%, a specificity of 90.45%, an accuracy of 85.99% and a Matthews correlation coefficient of 72.26% in a 10-fold cross-validation test. The results obtained from both the cross-validation and independent tests suggest that the WAP-Palm model might facilitate the identification and annotation of protein palmitoylation locations. The online service is available at http://bioinfo.ncu.edu.cn/WAP-Palm.aspx. Copyright © 2013 Elsevier Inc. All rights reserved.
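The four metrics reported for WAP-Palm all derive from confusion-matrix counts pooled over the cross-validation folds. A short sketch of those formulas, with illustrative counts chosen only to land near the reported values (the true fold counts are not given in the abstract):

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, accuracy and Matthews correlation coefficient
    from confusion-matrix counts."""
    sens = tp / (tp + fn)                      # true-positive rate
    spec = tn / (tn + fp)                      # true-negative rate
    acc = (tp + tn) / (tp + tn + fp + fn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sens, spec, acc, mcc

# hypothetical pooled counts, for illustration only
sens, spec, acc, mcc = classification_metrics(tp=163, tn=181, fp=19, fn=37)
```

MCC is the headline number worth watching here: unlike accuracy, it stays informative when the positive and negative site counts are unbalanced.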
ERIC Educational Resources Information Center
Van den Branden, Sigrid; Van den Broucke, Stephan; Leroy, Roos; Declerck, Dominique; Hoppenbrouwers, Karel
2015-01-01
Objective: This study aimed to test the predictive validity of the Theory of Planned Behaviour (TPB) when applied to the oral health-related behaviours of parents towards their preschool children in a cross-sectional and prospective design over a 5-year interval. Methods: Data for this study were obtained from parents of 1,057 children born…
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events simulated by regional climate models (RCMs). The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied on hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
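For orientation, here is a minimal sketch of the standard empirical quantile mapping (the QMα baseline) that the abstract extends: each model value is mapped through the model's empirical CDF onto the corresponding quantile of the observations. The gamma-distributed "observations" and "model output" are synthetic stand-ins; the QMβ hybrid itself is not reproduced here.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_values):
    """Correct model values by matching empirical quantiles of the calibration data."""
    quantiles = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, quantiles)   # model correction nodes
    obs_q = np.quantile(obs_hist, quantiles)       # observed target nodes
    # piecewise-linear transfer function between the two empirical CDFs
    return np.interp(model_values, model_q, obs_q)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, size=2000)   # "observed" precipitation intensities
mod = rng.gamma(2.0, 4.0, size=2000)   # dry-biased model output
corrected = quantile_map(mod, obs, mod)
```

Note that `np.interp` clamps values outside the calibration range to the endpoint quantiles, which is exactly the regime (new extremes beyond the calibration period) where the abstract reports QMα becoming unstable and where the smoother QMβ corrections are meant to help.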
Adaptive local linear regression with application to printer color management.
Gupta, Maya R; Garcia, Eric K; Chin, Erika
2008-06-01
Local learning methods, such as local linear regression and nearest neighbor classifiers, base estimates on nearby training samples, or neighbors. Usually, the number of neighbors used in estimation is fixed to a global "optimal" value, chosen by cross-validation. This paper proposes adapting the number of neighbors used for estimation to the local geometry of the data, without the need for cross-validation. The term enclosing neighborhood is introduced to describe a set of neighbors whose convex hull contains the test point when possible. It is proven that enclosing neighborhoods yield bounded estimation variance under some assumptions. Three such enclosing neighborhood definitions are presented: natural neighbors, natural neighbors inclusive, and enclosing k-NN. The effectiveness of these neighborhood definitions with local linear regression is tested for estimating lookup tables for color management. Significant improvements in error metrics are shown, indicating that enclosing neighborhoods may be a promising adaptive neighborhood definition for other local learning tasks as well, depending on the density of training samples.
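A minimal sketch of the fixed-k local linear regression the paper takes as its starting point: fit a least-squares affine function to the k nearest training samples of each test point. The data are synthetic; in the paper's adaptive variants, the fixed-k neighbor selection step would be replaced by an enclosing neighborhood (natural neighbors, enclosing k-NN).

```python
import numpy as np

def local_linear_predict(X_train, y_train, x_test, k=5):
    """Predict at x_test from an affine least-squares fit to its k nearest neighbors."""
    dists = np.linalg.norm(X_train - x_test, axis=1)
    idx = np.argsort(dists)[:k]                      # fixed-size neighborhood
    A = np.hstack([X_train[idx], np.ones((k, 1))])   # affine design matrix
    coef, *_ = np.linalg.lstsq(A, y_train[idx], rcond=None)
    return np.append(x_test, 1.0) @ coef

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5               # noiseless affine target
pred = local_linear_predict(X, y, np.array([0.4, 0.6]), k=10)
```

On this noiseless affine target the local fit recovers the target exactly; the interesting behavior, and the motivation for enclosing neighborhoods, appears when the test point falls outside the convex hull of its k neighbors and the fit extrapolates.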
Bilek, Edda; Ruf, Matthias; Schäfer, Axel; Akdeniz, Ceren; Calhoun, Vince D.; Schmahl, Christian; Demanuele, Charmaine; Tost, Heike; Kirsch, Peter; Meyer-Lindenberg, Andreas
2015-01-01
Social interactions are fundamental for human behavior, but the quantification of their neural underpinnings remains challenging. Here, we used hyperscanning functional MRI (fMRI) to study information flow between brains of human dyads during real-time social interaction in a joint attention paradigm. In a hardware setup enabling immersive audiovisual interaction of subjects in linked fMRI scanners, we characterize cross-brain connectivity components that are unique to interacting individuals, identifying information flow between the sender’s and receiver’s temporoparietal junction. We replicate these findings in an independent sample and validate our methods by demonstrating that cross-brain connectivity relates to a key real-world measure of social behavior. Together, our findings support a central role of human-specific cortical areas in the brain dynamics of dyadic interactions and provide an approach for the noninvasive examination of the neural basis of healthy and disturbed human social behavior with minimal a priori assumptions. PMID:25848050
Moscati, Arden; Verhulst, Brad; McKee, Kevin; Silberg, Judy; Eaves, Lindon
2018-01-01
Understanding the factors that contribute to behavioral traits is a complex task, and partitioning variance into latent genetic and environmental components is a useful beginning, but it should not also be the end. Many constructs are influenced by their contextual milieu, and accounting for background effects (such as gene-environment correlation) is necessary to avoid bias. This study introduces a method for examining the interplay between traits, in a longitudinal design using differential items in sibling pairs. The model is validated via simulation and power analysis, and we conclude with an application to paternal praise and ADHD symptoms in a twin sample. The model can help identify what type of genetic and environmental interplay may contribute to the dynamic relationship between traits using a cross-lagged panel framework. Overall, it presents a way to estimate and explicate the developmental interplay between a set of traits, free from many common sources of bias.
Li, Zhenghua; Cheng, Fansheng; Xia, Zhining
2011-01-01
The chemical structures of 114 polycyclic aromatic sulfur heterocycles (PASHs) have been studied by molecular electronegativity-distance vector (MEDV). The linear relationships between gas chromatographic retention index and the MEDV have been established by a multiple linear regression (MLR) model. The results of variable selection by stepwise multiple regression (SMR), together with the predictive ability of the optimized model appraised by leave-one-out cross-validation, showed that the optimized model, with a correlation coefficient (R) of 0.9947 and a cross-validated correlation coefficient (Rcv) of 0.9940, possessed the best statistical quality. Furthermore, when the 114 PASH compounds were divided into calibration and test sets in the ratio of 2:1, the statistical analysis showed that the models possess almost equal statistical quality, very similar regression coefficients, and good robustness. The quantitative structure-retention relationship (QSRR) model established may provide a convenient and powerful method for predicting the gas chromatographic retention of PASHs.
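The cross-validated correlation coefficient Rcv reported above comes from leave-one-out cross-validation: refit the regression n times, each time predicting the single held-out compound, then correlate predictions with observations. A sketch on synthetic descriptors (the MEDV data themselves are not given in the abstract):

```python
import numpy as np

def loo_predictions(X, y):
    """Refit ordinary least squares n times, each time predicting the held-out point."""
    n = len(y)
    A = np.hstack([X, np.ones((n, 1))])      # add intercept column
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i             # leave observation i out
        coef, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        preds[i] = A[i] @ coef
    return preds

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))                            # synthetic descriptors
y = X @ np.array([1.5, -0.7, 0.3, 2.0]) + rng.normal(scale=0.1, size=60)
r_cv = np.corrcoef(loo_predictions(X, y), y)[0, 1]      # cross-validated R
```

Because every prediction is made on a compound the fit never saw, Rcv is a more honest measure of predictive quality than the calibration R, which is why the abstract reports both.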
Ko, Jupil; Rosen, Adam B; Brown, Cathleen N
2015-12-01
The Cumberland Ankle Instability Tool (CAIT) is a valid and reliable patient-reported outcome used to assess the presence and severity of chronic ankle instability (CAI). The CAIT has been cross-culturally adapted into other languages for use in non-English speaking populations. However, there are no valid questionnaires to assess CAI in individuals who speak Korean. The purpose of this study was to translate, cross-culturally adapt, and validate the CAIT for use in a Korean-speaking population with CAI. Cross-cultural reliability study. The CAIT was cross-culturally adapted into Korean according to accepted guidelines and renamed the Cumberland Ankle Instability Tool-Korean (CAIT-K). Twenty-three participants (12 males, 11 females) who were bilingual in English and Korean were recruited and completed the original and adapted versions to assess agreement between versions. An additional 168 national level Korean athletes (106 males, 62 females; age = 20.3 ± 1.1 yrs), who participated in ≥ 90 minutes of physical activity per week, completed the final version of the CAIT-K twice within 14 days. Their completed questionnaires were assessed for internal consistency, test-retest reliability, criterion validity, and construct validity. For bilingual participants, intra-class correlation coefficients (ICC2,1) between the CAIT and the CAIT-K for test-retest reliability were 0.95 (SEM=1.83) and 0.96 (SEM=1.50) in right and left limbs, respectively. The Cronbach's alpha coefficients were 0.92 and 0.90 for the CAIT-K in right and left limbs, respectively. For native Korean speakers, the CAIT-K had high internal consistency (Cronbach's α=0.89) and intra-class correlation coefficient (ICC2,1 = 0.94, SEM=1.72), correlation with the physical component score (rho=0.70, p = 0.001) of the Short-Form Health Survey (SF-36), and a Kaiser-Meyer-Olkin score of 0.87. The original CAIT was translated, cross-culturally adapted, and validated from English to Korean. The CAIT-K appears to be valid and reliable and could be useful in assessing Korean-speaking populations with CAI.
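The internal-consistency figures quoted above are Cronbach's alpha: the ratio of shared to total variance across questionnaire items, scaled by the item count. A sketch of the computation on a synthetic response matrix (not CAIT-K data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=200)                              # one common factor
items = latent[:, None] + rng.normal(scale=0.5, size=(200, 9))
alpha = cronbach_alpha(items)   # high when items share the common factor
```

Alpha approaches 1 as the items increasingly measure the same underlying construct, which is why values around 0.89-0.92, as reported for the CAIT-K, are read as high internal consistency.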