Sample records for cross validation approach

  1. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable

    PubMed Central

    Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “Cross-validation and cross-testing,” which improves this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
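
    As a point of reference for the trade-off described in this abstract, the standard scheme of cross-validation for parameter tuning followed by a single held-out test set can be sketched as below. This is a generic illustration only, not the authors' cross-validation-and-cross-testing procedure; the classifier, parameter grid, and split sizes are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Hold out a test set; its size directly trades statistical power in the final
# performance estimate against the amount of data left for tuning and fitting.
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Cross-validation on the development data to choose the regularization parameter C.
search = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1, 10]}, cv=5)
search.fit(X_dev, y_dev)

# The untouched test set gives an unbiased estimate of generalization performance.
print("chosen C:", search.best_params_["C"])
print("test accuracy:", search.score(X_test, y_test))
```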

  2. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable.

    PubMed

    Korjus, Kristjan; Hebart, Martin N; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier's generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term "Cross-validation and cross-testing," which improves this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do.

  3. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
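
    A rough sketch of the idea: pool out-of-fold predictions into a single cross-validated AUC, then obtain a variance estimate from per-observation components instead of bootstrapping. The component formula below follows the classical DeLong-type decomposition as a stand-in for the authors' influence-curve estimator; the data set and model are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=500, n_features=20, random_state=1)

# Pool out-of-fold predicted scores to form a single cross-validated AUC.
scores = np.zeros(len(y))
for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=1).split(X, y):
    model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    scores[test] = model.predict_proba(X[test])[:, 1]
cv_auc = roc_auc_score(y, scores)

# Per-observation components: for each positive, the fraction of negatives it outranks;
# for each negative, the fraction of positives ranking above it (DeLong-style).
pos, neg = scores[y == 1], scores[y == 0]
v_pos = np.array([np.mean((p > neg) + 0.5 * (p == neg)) for p in pos])
v_neg = np.array([np.mean((pos > q) + 0.5 * (pos == q)) for q in neg])
variance = v_pos.var(ddof=1) / len(pos) + v_neg.var(ddof=1) / len(neg)

half_width = 1.96 * np.sqrt(variance)
print(f"cvAUC = {cv_auc:.3f}, 95% CI = ({cv_auc - half_width:.3f}, {cv_auc + half_width:.3f})")
```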

  4. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737

  5. Cross-validation to select Bayesian hierarchical models in phylogenetics.

    PubMed

    Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C

    2016-05-26

    Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.

  6. Comprehensive Assessment of Emotional Disturbance: A Cross-Validation Approach

    ERIC Educational Resources Information Center

    Fisher, Emily S.; Doyon, Katie E.; Saldana, Enrique; Allen, Megan Redding

    2007-01-01

    Assessing a student for emotional disturbance is a serious and complex task given the stigma of the label and the ambiguities of the federal definition. One way that school psychologists can be more confident in their assessment results is to cross validate data from different sources using the RIOT approach (Review, Interview, Observe, Test).…

  7. Cross-validation pitfalls when selecting and assessing regression and classification models.

    PubMed

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
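
    A compact sketch of repeated nested cross-validation in the spirit of the algorithms described above: an inner grid-search loop for parameter tuning, an outer loop for assessment, and the whole procedure repeated over different random splits so the split-to-split variation becomes visible. The estimator, grid, and repeat counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_regression(n_samples=150, n_features=30, noise=10.0, random_state=0)
grid = {"alpha": [0.01, 0.1, 1.0, 10.0, 100.0]}

outer_scores = []
for repeat in range(10):                      # repeat over different random splits
    inner = KFold(n_splits=5, shuffle=True, random_state=repeat)
    outer = KFold(n_splits=5, shuffle=True, random_state=100 + repeat)
    # Inner loop: grid-search cross-validation for parameter tuning.
    tuned = GridSearchCV(Ridge(), grid, cv=inner)
    # Outer loop: assess the whole tuning procedure on held-out folds.
    outer_scores.append(cross_val_score(tuned, X, y, cv=outer))

outer_scores = np.concatenate(outer_scores)
# The spread across repeats shows the split-dependent variation the paper warns about.
print(f"R^2 = {outer_scores.mean():.3f} +/- {outer_scores.std():.3f}")
```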

  8. Cross-validated detection of crack initiation in aerospace materials

    NASA Astrophysics Data System (ADS)

    Vanniamparambil, Prashanth A.; Cuadra, Jefferson; Guclu, Utku; Bartoli, Ivan; Kontsos, Antonios

    2014-03-01

    A cross-validated nondestructive evaluation approach was employed to detect, in situ, the onset of damage in an aluminum alloy compact tension specimen. The approach consisted of the coordinated use of primarily the acoustic emission method, combined with infrared thermography and digital image correlation. Tensile loads were applied and the specimen was continuously monitored using the nondestructive approach. Crack initiation was witnessed visually and was confirmed by the characteristic load drop accompanying the ductile fracture process. The full field deformation map provided by the nondestructive approach validated the formation of a pronounced plasticity zone near the crack tip. At the time of crack initiation, a burst in the temperature field ahead of the crack tip as well as a sudden increase of the acoustic recordings were observed. Although such experiments have been attempted and reported before in the literature, the presented approach provides for the first time a cross-validated nondestructive dataset that can be used for quantitative analyses of the crack initiation information content. It further allows future development of automated procedures for real-time identification of damage precursors including the rarely explored crack incubation stage in fatigue conditions.

  9. Cross-cultural adaptation of instruments assessing breastfeeding determinants: a multi-step approach

    PubMed Central

    2014-01-01

    Background Cross-cultural adaptation is a necessary process to effectively use existing instruments in other cultural and language settings. The process of cross-culturally adapting existing instruments, including translation, is considered a critical step in establishing a meaningful instrument for use in another setting. Using a multi-step approach is considered best practice in achieving cultural and semantic equivalence of the adapted version. We aimed to ensure the content validity of our instruments in the cultural context of KwaZulu-Natal, South Africa. Methods The Iowa Infant Feeding Attitudes Scale, Breastfeeding Self-Efficacy Scale-Short Form and additional items comprise our consolidated instrument, which was cross-culturally adapted utilizing a multi-step approach during August 2012. Cross-cultural adaptation was achieved through steps to maintain content validity and attain semantic equivalence in the target version. Specifically, Lynn’s recommendation to apply an item-level content validity index score was followed. The revised instrument was translated and back-translated. To ensure semantic equivalence, Brislin’s back-translation approach was utilized followed by the committee review to address any discrepancies that emerged from translation. Results Our consolidated instrument was adapted to be culturally relevant and translated to yield more reliable and valid results for use in our larger research study to measure infant feeding determinants effectively in our target cultural context. Conclusions Undertaking rigorous steps to effectively ensure cross-cultural adaptation increases our confidence that the conclusions we make based on our self-report instrument(s) will be stronger. In this way, our aim to achieve strong cross-cultural adaptation of our consolidated instruments was achieved while also providing a clear framework for other researchers choosing to utilize existing instruments for work in other cultural, geographic and population settings. PMID:25285151

  10. Cross-Cultural Validation of the Five-Factor Structure of Social Goals: A Filipino Investigation

    ERIC Educational Resources Information Center

    King, Ronnel B.; Watkins, David A.

    2012-01-01

    The aim of the present study was to test the cross-cultural validity of the five-factor structure of social goals that Dowson and McInerney proposed. Using both between-network and within-network approaches to construct validation, 1,147 Filipino high school students participated in the study. Confirmatory factor analysis indicated that the…

  11. Efficient strategies for leave-one-out cross validation for genomic best linear unbiased prediction.

    PubMed

    Cheng, Hao; Garrick, Dorian J; Fernando, Rohan L

    2017-01-01

    A random multiple-regression model that simultaneously fits all allele substitution effects for additive markers or haplotypes as uncorrelated random effects was proposed for Best Linear Unbiased Prediction, using whole-genome data. Leave-one-out cross validation can be used to quantify the predictive ability of a statistical model. Naive application of Leave-one-out cross validation is computationally intensive because the training and validation analyses need to be repeated n times, once for each observation. Efficient Leave-one-out cross validation strategies are presented here, requiring little more effort than a single analysis. The efficient Leave-one-out cross validation strategy is 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model will increase with increases in the number of observations.
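
    The speed-up rests on avoiding n separate refits. A minimal sketch of the same idea for a ridge-type linear model (used here as a stand-in for GBLUP with a fixed variance ratio): with fitted values y_hat = H y, the leave-one-out residual for observation i equals e_i / (1 - h_ii), so a single fit yields all n validation errors. Data sizes and the shrinkage parameter are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 200                      # observations and markers (illustrative sizes)
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) * 0.1 + rng.standard_normal(n)
lam = 10.0                            # shrinkage parameter (variance-ratio stand-in)

# Single fit: the hat matrix H maps y to fitted values for ridge / BLUP-type models.
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
residuals = y - H @ y

# Leave-one-out residuals without any refitting: e_i / (1 - h_ii).
loo_residuals = residuals / (1.0 - np.diag(H))
print("LOO mean squared error:", np.mean(loo_residuals**2))

# Naive check on a few observations (actually refit with i left out): values should match.
for i in range(3):
    keep = np.arange(n) != i
    beta = np.linalg.solve(X[keep].T @ X[keep] + lam * np.eye(p), X[keep].T @ y[keep])
    print(i, y[i] - X[i] @ beta, loo_residuals[i])
```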

  12. Validating a Spanish Version of the PIMRS: Application in National and Cross-National Research on Instructional Leadership

    ERIC Educational Resources Information Center

    Fromm, Germán; Hallinger, Philip; Volante, Paulo; Wang, Wen Chung

    2017-01-01

    The purposes of this study were to report on a systematic approach to validating a Spanish version of the Principal Instructional Management Rating Scale and then to apply the scale in a cross-national comparison of principal instructional leadership. The study yielded a validated Spanish language version of the PIMRS Teacher Form and offers a…

  13. Statistical validation of normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
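
    A small sketch of the permutation-testing idea used here to attach statistical significance to cross-validated model performance: rerun the whole cross-validation on data with permuted outcomes to build a null distribution. The penalized logistic model and synthetic endpoint are illustrative stand-ins for the paper's NTCP models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=120, n_features=40, n_informative=5, random_state=2)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)  # LASSO-type penalty

# Observed cross-validated performance (area under the ROC curve).
observed = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

# Null distribution: repeat the identical procedure with permuted endpoints.
rng = np.random.default_rng(0)
null = np.array([
    cross_val_score(model, X, rng.permutation(y), cv=5, scoring="roc_auc").mean()
    for _ in range(200)
])
p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"cross-validated AUC = {observed:.3f}, permutation p-value = {p_value:.3f}")
```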

  14. Illustrating a Mixed-Method Approach for Validating Culturally Specific Constructs

    ERIC Educational Resources Information Center

    Hitchcock, J.H.; Nastasi, B.K.; Dai, D.Y.; Newman, J.; Jayasena, A.; Bernstein-Moore, R.; Sarkar, S.; Varjas, K.

    2005-01-01

    The purpose of this article is to illustrate a mixed-method approach (i.e., combining qualitative and quantitative methods) for advancing the study of construct validation in cross-cultural research. The article offers a detailed illustration of the approach using the responses 612 Sri Lankan adolescents provided to an ethnographic survey. Such…

  15. The Learning Transfer System Inventory (LTSI) in Ukraine: The Cross-Cultural Validation of the Instrument

    ERIC Educational Resources Information Center

    Yamkovenko, Bogdan V.; Holton, Elwood, III; Bates, R. A.

    2007-01-01

    Purpose: The purpose of this research is to expand cross-cultural research and validate the Learning Transfer System Inventory in Ukraine. The researchers seek to translate the LTSI into Ukrainian and investigate the internal structure of this translated version of the questionnaire. Design/methodology/approach: The LTSI is translated into…

  16. Computer simulation of Cerebral Arteriovenous Malformation-validation analysis of hemodynamics parameters.

    PubMed

    Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath

    2017-01-01

    The purpose of this work is to provide some validation methods for evaluating the hemodynamic assessment of Cerebral Arteriovenous Malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models for complex vessel structure. The validation of the hemodynamics assessment is based on invasive clinical measurements and cross-validation techniques with the Philips proprietary validated software packages Qflow and 2D Perfusion. The modeling results are validated for 30 CAVM patients for 150 vessel locations. Mean flow, diameter, and pressure were compared between the modeling results and the clinical/cross-validation measurements, using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analyses assessing the relationships among vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow were performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations of cerebral regions. Also, the model is cross validated with the Philips proprietary validated software packages Qflow and 2D Perfusion. Our results show that the modeling results closely match the clinical results, with only a small deviation. In this article, we have validated our modeling results with clinical measurements. The new approach for cross-validation is proposed by demonstrating the accuracy of our results with a validated product in a clinical environment.

  17. Methodology and issues of integral experiments selection for nuclear data validation

    NASA Astrophysics Data System (ADS)

    Ivanova, Tatiana; Ivanov, Evgeny; Hill, Ian

    2017-09-01

    Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications. [1] Often benchmarks are taken from international Handbooks. [2, 3] Depending on the application, IEs have different degrees of usefulness in validation, and usually the use of a single benchmark is not advised; indeed, it may lead to erroneous interpretation and results. [1] This work aims at quantifying the importance of benchmarks used in application dependent cross section validation. The approach is based on well-known General Linear Least Squared Method (GLLSM) extended to establish biases and uncertainties for given cross sections (within a given energy interval). The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark for nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).

  18. Features of Cross-Correlation Analysis in a Data-Driven Approach for Structural Damage Assessment

    PubMed Central

    Camacho Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis

    2018-01-01

    This work discusses the advantage of using cross-correlation analysis in a data-driven approach based on principal component analysis (PCA) and piezodiagnostics to obtain successful diagnosis of events in structural health monitoring (SHM). In this sense, the identification of noisy data and outliers, as well as the management of data cleansing stages can be facilitated through the implementation of a preprocessing stage based on cross-correlation functions. Additionally, this work evidences an improvement in damage detection when the cross-correlation is included as part of the whole damage assessment approach. The proposed methodology is validated by processing data measurements from piezoelectric devices (PZT), which are used in a piezodiagnostics approach based on PCA and baseline modeling. Thus, the influence of cross-correlation analysis used in the preprocessing stage is evaluated for damage detection by means of statistical plots and self-organizing maps. Three laboratory specimens were used as test structures in order to demonstrate the validity of the methodology: (i) a carbon steel pipe section with leak and mass damage types, (ii) an aircraft wing specimen, and (iii) a blade of a commercial aircraft turbine, where damages are specified as mass-added. As the main concluding remark, the suitability of cross-correlation features combined with a PCA-based piezodiagnostic approach in order to achieve a more robust damage assessment algorithm is verified for SHM tasks. PMID:29762505

  19. Features of Cross-Correlation Analysis in a Data-Driven Approach for Structural Damage Assessment.

    PubMed

    Camacho Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Quiroga, Jabid

    2018-05-15

    This work discusses the advantage of using cross-correlation analysis in a data-driven approach based on principal component analysis (PCA) and piezodiagnostics to obtain successful diagnosis of events in structural health monitoring (SHM). In this sense, the identification of noisy data and outliers, as well as the management of data cleansing stages can be facilitated through the implementation of a preprocessing stage based on cross-correlation functions. Additionally, this work evidences an improvement in damage detection when the cross-correlation is included as part of the whole damage assessment approach. The proposed methodology is validated by processing data measurements from piezoelectric devices (PZT), which are used in a piezodiagnostics approach based on PCA and baseline modeling. Thus, the influence of cross-correlation analysis used in the preprocessing stage is evaluated for damage detection by means of statistical plots and self-organizing maps. Three laboratory specimens were used as test structures in order to demonstrate the validity of the methodology: (i) a carbon steel pipe section with leak and mass damage types, (ii) an aircraft wing specimen, and (iii) a blade of a commercial aircraft turbine, where damages are specified as mass-added. As the main concluding remark, the suitability of cross-correlation features combined with a PCA-based piezodiagnostic approach in order to achieve a more robust damage assessment algorithm is verified for SHM tasks.

  20. A nearest neighbor approach for automated transporter prediction and categorization from protein sequences.

    PubMed

    Li, Haiquan; Dai, Xinbin; Zhao, Xuechun

    2008-05-01

    Membrane transport proteins play a crucial role in the import and export of ions, small molecules or macromolecules across biological membranes. Currently, there are a limited number of published computational tools which enable the systematic discovery and categorization of transporters prior to costly experimental validation. To approach this problem, we utilized a nearest neighbor method which seamlessly integrates homologous search and topological analysis into a machine-learning framework. Our approach satisfactorily distinguished 484 transporter families in the Transporter Classification Database, a curated and representative database for transporters. A five-fold cross-validation on the database achieved a positive classification rate of 72.3% on average. Furthermore, this method successfully detected transporters in seven model and four non-model organisms, ranging from archaean to mammalian species. A preliminary literature-based validation has cross-validated 65.8% of our predictions on the 11 organisms, including 55.9% of our predictions overlapping with 83.6% of the predicted transporters in TransportDB.
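
    The five-fold cross-validation quoted above can be sketched generically as follows. The plain numeric features are an illustrative assumption; the actual method scores nearest neighbors using integrated homology and topology information rather than raw feature vectors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Stand-in features; the published method derives scores from homology and topology.
X, y = make_classification(n_samples=600, n_features=25, n_informative=10,
                           n_classes=3, random_state=3)

knn = KNeighborsClassifier(n_neighbors=1)      # nearest-neighbor classification
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=3)
accuracy = cross_val_score(knn, X, y, cv=cv, scoring="accuracy")
print("per-fold accuracy:", np.round(accuracy, 3))
print("mean five-fold accuracy:", accuracy.mean())
```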

  1. Future Performance Trend Indicators: A Current Value Approach to Human Resources Accounting. Report III. Multivariate Predictions of Organizational Performance Across Time.

    ERIC Educational Resources Information Center

    Pecorella, Patricia A.; Bowers, David G.

    Multiple regression in a double cross-validated design was used to predict two performance measures (total variable expense and absence rate) by multi-month period in five industrial firms. The regressions do cross-validate, and produce multiple coefficients which display both concurrent and predictive effects, peaking 18 months to two years…

  2. Validation of the Technology Acceptance Measure for Pre-Service Teachers (TAMPST) on a Malaysian Sample: A Cross-Cultural Study

    ERIC Educational Resources Information Center

    Teo, Timothy

    2010-01-01

    Purpose: The purpose of this paper is to assess the cross-cultural validity of the technology acceptance measure for pre-service teachers (TAMPST) on a Malaysian sample. Design/methodology/approach: A total of 193 pre-service teachers from a Malaysian university completed a survey questionnaire measuring their responses to five constructs in the…

  3. Cross-National Prevalence of Traditional Bullying, Traditional Victimization, Cyberbullying and Cyber-Victimization: Comparing Single-Item and Multiple-Item Approaches of Measurement

    ERIC Educational Resources Information Center

    Yanagida, Takuya; Gradinger, Petra; Strohmeier, Dagmar; Solomontos-Kountouri, Olga; Trip, Simona; Bora, Carmen

    2016-01-01

    Many large-scale cross-national studies rely on a single-item measurement when comparing prevalence rates of traditional bullying, traditional victimization, cyberbullying, and cyber-victimization between countries. However, the reliability and validity of single-item measurement approaches are highly problematic and might be biased. Data from…

  4. Role of Imaging Spectrometer Data for Model-based Cross-calibration of Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis John

    2014-01-01

    Site characterization benefits from imaging spectrometry to determine the spectral bi-directional reflectance of a well-understood surface. Topics covered include cross-calibration approaches, uncertainties, the role of imaging spectrometry, model-based site characterization, and application to product validation.

  5. Pitfalls in Prediction Modeling for Normal Tissue Toxicity in Radiation Therapy: An Illustration With the Individual Radiation Sensitivity and Mammary Carcinoma Risk Factor Investigation Cohorts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbah, Chamberlain; Thierens, Hubert

    Purpose: To identify the main causes underlying the failure of prediction models for radiation therapy toxicity to replicate. Methods and Materials: Data were used from two German cohorts, Individual Radiation Sensitivity (ISE) (n=418) and Mammary Carcinoma Risk Factor Investigation (MARIE) (n=409), of breast cancer patients with similar characteristics and radiation therapy treatments. The toxicity endpoint chosen was telangiectasia. The LASSO (least absolute shrinkage and selection operator) logistic regression method was used to build a predictive model for a dichotomized endpoint (Radiation Therapy Oncology Group/European Organization for the Research and Treatment of Cancer score 0, 1, or ≥2). Internal areas under the receiver operating characteristic curve (inAUCs) were calculated by a naïve approach whereby the training data (ISE) were also used for calculating the AUC. Cross-validation was also applied to calculate the AUC within the same cohort, a second type of inAUC. Internal AUCs from cross-validation were calculated within ISE and MARIE separately. Models trained on one dataset (ISE) were applied to a test dataset (MARIE) and AUCs calculated (exAUCs). Results: Internal AUCs from the naïve approach were generally larger than inAUCs from cross-validation owing to overfitting the training data. Internal AUCs from cross-validation were also generally larger than the exAUCs, reflecting heterogeneity in the predictors between cohorts. The best models with largest inAUCs from cross-validation within both cohorts had a number of common predictors: hypertension, normalized total boost, and presence of estrogen receptors. Surprisingly, the effect (coefficient in the prediction model) of hypertension on telangiectasia incidence was positive in ISE and negative in MARIE. Other predictors were also not common between the 2 cohorts, illustrating that overcoming overfitting does not solve the problem of replication failure of prediction models completely. Conclusions: Overfitting and cohort heterogeneity are the 2 main causes of replication failure of prediction models across cohorts. Cross-validation and similar techniques (eg, bootstrapping) cope with overfitting, but the development of validated predictive models for radiation therapy toxicity requires strategies that deal with cohort heterogeneity.

  6. Cross-Cultural Analyses of Determinants of Quality of Life and Mental Health: Results from the Eurohis Study

    ERIC Educational Resources Information Center

    Schmidt, Silke; Power, Mick

    2006-01-01

    Recent projects on international instrument development have produced a wide array of health indicators that may be used for cross-cultural field-testing, however more information on their cross-cultural performance in relation to health determinants is necessary. The current study approaches one step for international conceptual validation by…

  7. Cross-Cultural Career Psychology: Comment on Fouad, Harmon, and Borgen (1997) and Tracey, Watanabe, and Schneider (1997).

    ERIC Educational Resources Information Center

    Leong, Frederick T. L.

    1997-01-01

    Uses the theoretical framework of cultural validity and cultural specificity in career psychology to comment on theoretical and methodological issues raised by two articles on cross-cultural career psychology. Discusses the distinction between etic and emic approaches to cross-cultural research and the role of cultural context in understanding…

  8. Cross Validation of Two Partitioning-Based Sampling Approaches in Mesocosms Containing PCB Contaminated Field Sediment, Biota, and Activated Carbon Amendment.

    PubMed

    Schmidt, Stine N; Wang, Alice P; Gidley, Philip T; Wooley, Allyson H; Lotufo, Guilherme R; Burgess, Robert M; Ghosh, Upal; Fernandez, Loretta A; Mayer, Philipp

    2017-09-05

    The Gold Standard for determining freely dissolved concentrations (Cfree) of hydrophobic organic compounds in sediment interstitial water would be in situ deployment combined with equilibrium sampling, which is generally difficult to achieve. In the present study, ex situ equilibrium sampling with multiple thicknesses of silicone and in situ pre-equilibrium sampling with low density polyethylene (LDPE) loaded with performance reference compounds were applied independently to measure polychlorinated biphenyls (PCBs) in mesocosms with (1) New Bedford Harbor sediment (MA, U.S.A.), (2) sediment and biota, and (3) activated carbon amended sediment and biota. The aim was to cross validate the two different sampling approaches. Around 100 PCB congeners were quantified in the two sampling polymers, and the results confirmed the good precision of both methods and were in overall good agreement with recently published LDPE to silicone partition ratios. Further, the methods yielded Cfree in good agreement for all three experiments. The average ratio between Cfree determined by the two methods was a factor of 1.4 ± 0.3 (range: 0.6-2.0), and the results thus cross-validated the two sampling approaches. For future investigations, specific aims and requirements in terms of application, data treatment, and data quality requirements should dictate the selection of the most appropriate partitioning-based sampling approach.

  9. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses: Criticality (keff) Predictions

    DOE PAGES

    Scaglione, John M.; Mueller, Don E.; Wagner, John C.

    2014-12-01

    One of the most important remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation—in particular, the availability and use of applicable measured data to support validation, especially for fission products (FPs). Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of clear technical basis or approach for use of the data. This paper describes a validation approach for commercial spent nuclear fuel (SNF) criticality safety (keff) evaluations based on best-available data and methods and applies the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The criticality validation approach utilizes not only available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion program to support validation of the principal actinides but also calculated sensitivities, nuclear data uncertainties, and limited available FP LCE data to predict and verify individual biases for relevant minor actinides and FPs. The results demonstrate that (a) sufficient critical experiment data exist to adequately validate keff calculations via conventional validation approaches for the primary actinides, (b) sensitivity-based critical experiment selection is more appropriate for generating accurate application model bias and uncertainty, and (c) calculated sensitivities and nuclear data uncertainties can be used for generating conservative estimates of bias for minor actinides and FPs. Results based on the SCALE 6.1 and the ENDF/B-VII.0 cross-section libraries indicate that a conservative estimate of the bias for the minor actinides and FPs is 1.5% of their worth within the application model. Finally, this paper provides a detailed description of the approach and its technical bases, describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models, and provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data.

  10. Improving machine learning reproducibility in genetic association studies with proportional instance cross validation (PICV).

    PubMed

    Piette, Elizabeth R; Moore, Jason H

    2018-01-01

    Machine learning methods and conventions are increasingly employed for the analysis of large, complex biomedical data sets, including genome-wide association studies (GWAS). Reproducibility of machine learning analyses of GWAS can be hampered by biological and statistical factors, particularly so for the investigation of non-additive genetic interactions. Application of traditional cross validation to a GWAS data set may result in poor consistency between the training and testing data set splits due to an imbalance of the interaction genotypes relative to the data as a whole. We propose a new cross validation method, proportional instance cross validation (PICV), that preserves the original distribution of an independent variable when splitting the data set into training and testing partitions. We apply PICV to simulated GWAS data with epistatic interactions of varying minor allele frequencies and prevalences and compare performance to that of a traditional cross validation procedure in which individuals are randomly allocated to training and testing partitions. Sensitivity and positive predictive value are significantly improved across all tested scenarios for PICV compared to traditional cross validation. We also apply PICV to GWAS data from a study of primary open-angle glaucoma to investigate a previously-reported interaction, which fails to significantly replicate; PICV however improves the consistency of testing and training results. Application of traditional machine learning procedures to biomedical data may require modifications to better suit intrinsic characteristics of the data, such as the potential for highly imbalanced genotype distributions in the case of epistasis detection. The reproducibility of genetic interaction findings can be improved by considering this variable imbalance in cross validation implementation, such as with PICV. This approach may be extended to problems in other domains in which imbalanced variable distributions are a concern.
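
    The core idea of PICV, preserving the distribution of a key variable (here the genotype at a locus with a rare minor allele) when forming training and testing partitions, can be approximated with stratified splitting, as in the sketch below. This is an analogy built on scikit-learn's stratified folds, not the authors' exact algorithm; the simulated genotypes and frequencies are assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold

rng = np.random.default_rng(4)
n = 500
genotype = rng.choice([0, 1, 2], size=n, p=[0.70, 0.25, 0.05])  # rare homozygous genotype

def worst_fold_fraction(splitter):
    """Smallest per-fold fraction of the rare genotype (2) across test folds."""
    fractions = [np.mean(genotype[test] == 2)
                 for _, test in splitter.split(np.zeros(n), genotype)]
    return min(fractions)

# Plain k-fold may leave some folds nearly empty of rare-genotype carriers;
# stratifying on the genotype keeps its proportion stable across partitions.
print("k-fold     :", worst_fold_fraction(KFold(5, shuffle=True, random_state=4)))
print("stratified :", worst_fold_fraction(StratifiedKFold(5, shuffle=True, random_state=4)))
```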

  11. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; Turk, Dušan

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree or may leave it out completely.

  12. Semi-Empirical Validation of the Cross-Band Relative Absorption Technique for the Measurement of Molecular Mixing Ratios

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; Prasad, Narasimha S

    2013-01-01

    Studies were performed to carry out semi-empirical validation of a new measurement approach we propose for molecular mixing ratios determination. The approach is based on relative measurements in bands of O2 and other molecules and as such may be best described as cross band relative absorption (CoBRA). The current validation studies rely upon well verified and established theoretical and experimental databases, satellite data assimilations and modeling codes such as HITRAN, line-by-line radiative transfer model (LBLRTM), and the modern-era retrospective analysis for research and applications (MERRA). The approach holds promise for atmospheric mixing ratio measurements of CO2 and a variety of other molecules currently under investigation for several future satellite lidar missions. One of the advantages of the method is a significant reduction of the temperature sensitivity uncertainties which is illustrated with application to the ASCENDS mission for the measurement of CO2 mixing ratios (XCO2). Additional advantages of the method include the possibility to closely match cross-band weighting function combinations which is harder to achieve using conventional differential absorption techniques and the potential for additional corrections for water vapor and other interferences without using the data from numerical weather prediction (NWP) models.

  13. Numerical and experimental analysis of an in-scale masonry cross-vault prototype up to failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossi, Michela; Calderini, Chiara; Lagomarsino, Sergio

    2015-12-31

    A heterogeneous full 3D non-linear FE approach is validated against experimental results obtained on an in-scale masonry cross vault assembled with dry joints, and subjected to various loading conditions consisting of imposed displacement combinations at the abutments. The FE model relies on a discretization of the blocks by means of a few rigid, infinitely resistant parallelepiped elements interacting by means of planar four-noded interfaces, where all the deformation (elastic and inelastic) occurs. The investigated response mechanisms of the vault are the shear in-plane distortion and the longitudinal opening and closing mechanism at the abutments. After the validation of the approach on the experimentally tested cross-vault, a sensitivity analysis is conducted on the same geometry, but in real scale, varying the mechanical properties of the mortar joints, in order to furnish useful hints for safety assessment, especially in the presence of seismic action.

  14. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face is a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies were implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal images) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with three cross-matched face scores from the aforementioned three algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logical regression [BLR]) is trained then tested with the score vectors by using 10-fold cross validations. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces by using three face scores and the BLR classifier.

  15. Exploring Mouse Protein Function via Multiple Approaches.

    PubMed

    Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong

    2016-01-01

    Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable even when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality.

  16. Exploring Mouse Protein Function via Multiple Approaches

    PubMed Central

    Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning

    2016-01-01

    Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable even when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality. PMID:27846315

  17. Solving protein structures using short-distance cross-linking constraints as a guide for discrete molecular dynamics simulations

    PubMed Central

    Brodie, Nicholas I.; Popov, Konstantin I.; Petrotchenko, Evgeniy V.; Dokholyan, Nikolay V.; Borchers, Christoph H.

    2017-01-01

    We present an integrated experimental and computational approach for de novo protein structure determination in which short-distance cross-linking data are incorporated into rapid discrete molecular dynamics (DMD) simulations as constraints, reducing the conformational space and achieving the correct protein folding on practical time scales. We tested our approach on myoglobin and FK506 binding protein—models for α helix–rich and β sheet–rich proteins, respectively—and found that the lowest-energy structures obtained were in agreement with the crystal structure, hydrogen-deuterium exchange, surface modification, and long-distance cross-linking validation data. Our approach is readily applicable to other proteins with unknown structures. PMID:28695211

  18. Solving protein structures using short-distance cross-linking constraints as a guide for discrete molecular dynamics simulations.

    PubMed

    Brodie, Nicholas I; Popov, Konstantin I; Petrotchenko, Evgeniy V; Dokholyan, Nikolay V; Borchers, Christoph H

    2017-07-01

    We present an integrated experimental and computational approach for de novo protein structure determination in which short-distance cross-linking data are incorporated into rapid discrete molecular dynamics (DMD) simulations as constraints, reducing the conformational space and achieving the correct protein folding on practical time scales. We tested our approach on myoglobin and FK506 binding protein-models for α helix-rich and β sheet-rich proteins, respectively-and found that the lowest-energy structures obtained were in agreement with the crystal structure, hydrogen-deuterium exchange, surface modification, and long-distance cross-linking validation data. Our approach is readily applicable to other proteins with unknown structures.

  19. Alternative methods to evaluate trial level surrogacy.

    PubMed

    Abrahantes, Josè Cortiñas; Shkedy, Ziv; Molenberghs, Geert

    2008-01-01

    The evaluation and validation of surrogate endpoints have been extensively studied in the last decade. Prentice [1] and Freedman, Graubard and Schatzkin [2] laid the foundations for the evaluation of surrogate endpoints in randomized clinical trials. Later, Buyse et al. [5] proposed a meta-analytic methodology, producing different methods for different settings, which was further studied by Alonso and Molenberghs [9] in their unifying approach based on information theory. In this article, we focus our attention on trial-level surrogacy and propose alternative procedures to evaluate this surrogacy measure that do not pre-specify the type of association. A promising correction based on cross-validation is investigated, as well as the construction of confidence intervals for this measure. To avoid making assumptions about the type of relationship between the treatment effects and its distribution, a collection of alternative methods based on regression trees, bagging, random forests, and support vector machines, combined with bootstrap-based confidence intervals and, should one wish, a cross-validation-based correction, is proposed and applied. We apply the various strategies to data from three clinical studies: in ophthalmology, in advanced colorectal cancer, and in schizophrenia. The results obtained for the three case studies are compared; they indicate that using random forest or bagging models produces larger estimated values for the surrogacy measure, which are in general more stable and have narrower confidence intervals than those from linear regression and support vector regression. For the advanced colorectal cancer studies, we even found that the trial-level surrogacy is considerably different from what has been reported. In general, the alternative methods are more computationally demanding, and especially the calculation of the confidence intervals requires more computational time than the delta-method counterpart. First, more flexible modeling techniques can be used, allowing for other types of association. Second, when no cross-validation-based correction is applied, overly optimistic trial-level surrogacy estimates will be found; thus cross-validation is highly recommendable. Third, the use of the delta method to calculate confidence intervals is not recommendable since it makes assumptions valid only in very large samples. It may also produce range-violating limits. We therefore recommend alternatives: bootstrap methods in general. Also, the information-theoretic approach produces results comparable with the bagging and random forest approaches when cross-validation correction is applied. It is also important to observe that, even in cases where the linear model might be a good option, bagging methods perform well too, and their confidence intervals were narrower.
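
    A rough sketch of the flavor of this proposal: regress trial-level treatment effects on the true endpoint against those on the surrogate with a flexible learner, score the association by cross-validated R^2, and attach a confidence interval by bootstrapping trials. The simulated effects, the random forest learner, and all settings are illustrative assumptions, not the article's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials = 60
# Simulated trial-level treatment effects on the surrogate and on the true endpoint.
alpha = rng.normal(0.0, 1.0, n_trials)                 # effects on the surrogate
beta = 0.8 * alpha + rng.normal(0.0, 0.4, n_trials)    # effects on the true endpoint

def trial_level_r2(a, b):
    """Cross-validated R^2 for predicting true-endpoint effects from surrogate effects."""
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    return cross_val_score(rf, a.reshape(-1, 1), b, cv=5, scoring="r2").mean()

estimate = trial_level_r2(alpha, beta)

# Bootstrap the trials to obtain a confidence interval for the surrogacy measure.
boot = []
for _ in range(100):
    idx = rng.integers(0, n_trials, n_trials)
    boot.append(trial_level_r2(alpha[idx], beta[idx]))
low, high = np.percentile(boot, [2.5, 97.5])
print(f"trial-level surrogacy R^2 = {estimate:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```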

  20. Raman fiber-optical method for colon cancer detection: Cross-validation and outlier identification approach

    NASA Astrophysics Data System (ADS)

    Petersen, D.; Naveed, P.; Ragheb, A.; Niedieker, D.; El-Mashtoly, S. F.; Brechmann, T.; Kötting, C.; Schmiegel, W. H.; Freier, E.; Pox, C.; Gerwert, K.

    2017-06-01

    Endoscopy plays a major role in the early recognition of cancers that are not externally accessible and thereby in increasing the survival rate. Raman spectroscopic fiber-optical approaches can help to decrease the impact on the patient, increase objectivity in tissue characterization, reduce expenses and provide a significant time advantage in endoscopy. In gastroenterology, early recognition of malignant and precursor lesions is relevant. Instantaneous and precise differentiation between adenomas as precursor lesions for cancer and hyperplastic polyps on the one hand and between high and low-risk alterations on the other hand is important. Raman fiber-optical measurements of colon biopsy samples taken during colonoscopy were carried out during a clinical study, and samples of adenocarcinoma (22), tubular adenomas (141), hyperplastic polyps (79) and normal tissue (101) from 151 patients were analyzed. This allows us to focus on the bioinformatic analysis and to set the stage for Raman endoscopic measurements. Since spectral differences between normal and cancerous biopsy samples are small, special care has to be taken in data analysis. Using a leave-one-patient-out cross-validation scheme, three different outlier identification methods were investigated to decrease the influence of systematic errors, such as a residual risk of misplacement of the sample and spectral dilution of marker bands (especially in cancerous tissue), and thereby optimize the experimental design. Furthermore, other validation methods such as leave-one-sample-out and leave-one-spectrum-out cross-validation schemes were compared with leave-one-patient-out cross-validation. High-risk lesions were differentiated from low-risk lesions with a sensitivity of 79%, specificity of 74% and an accuracy of 77%, cancer and normal tissue with a sensitivity of 79%, specificity of 83% and an accuracy of 81%. Additionally, the applied outlier identification enabled us to improve the recognition of neoplastic biopsy samples.
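
    Leave-one-patient-out cross-validation, in which all spectra from one patient are held out together so that no patient contributes to both training and testing, maps directly onto a grouped splitter, as in this sketch. The classifier and the synthetic spectra are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_patients, spectra_per_patient, n_wavenumbers = 30, 8, 100
patient_id = np.repeat(np.arange(n_patients), spectra_per_patient)
label = np.repeat(rng.integers(0, 2, n_patients), spectra_per_patient)  # per-patient class
X = rng.standard_normal((len(label), n_wavenumbers)) + 0.3 * label[:, None]

# All spectra of one patient form the held-out set, so no patient leaks into training.
accuracies = []
for train, test in LeaveOneGroupOut().split(X, label, groups=patient_id):
    clf = SVC(kernel="linear").fit(X[train], label[train])
    accuracies.append(clf.score(X[test], label[test]))
print("leave-one-patient-out accuracy:", np.mean(accuracies))
```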

  1. Correcting for Optimistic Prediction in Small Data Sets

    PubMed Central

    Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.

    2014-01-01

    The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
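
    A minimal sketch of one of the correction strategies discussed here, bootstrap optimism correction of the C statistic: the model is refit on each bootstrap resample, the gap between its apparent AUC on the resample and its AUC on the original data estimates the optimism, and the average optimism is subtracted from the apparent AUC. The logistic model and simulated data are placeholders, not the Down syndrome screening models used in the paper.

    ```python
    # Hedged sketch of bootstrap optimism correction for the C statistic (AUC).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n, p = 200, 5
    X = rng.normal(size=(n, p))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

    def fit_auc(X_fit, y_fit, X_eval, y_eval):
        model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
        return roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])

    apparent = fit_auc(X, y, X, y)  # optimistic: trained and evaluated on the same data

    optimism = []
    for _ in range(200):
        idx = rng.integers(0, n, n)                          # bootstrap resample
        auc_boot = fit_auc(X[idx], y[idx], X[idx], y[idx])   # apparent AUC in the bootstrap sample
        auc_orig = fit_auc(X[idx], y[idx], X, y)             # same model evaluated on the original data
        optimism.append(auc_boot - auc_orig)

    corrected = apparent - np.mean(optimism)
    print(f"apparent C = {apparent:.3f}, optimism-corrected C = {corrected:.3f}")
    ```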

  2. Different Diagnosis, Shared Vulnerabilities: The Value of Cross Disorder Validation of Capacity to Consent.

    PubMed

    Rosen, Allyson; Weitlauf, Julie C

    2015-01-01

    A screening measure of capacity to consent can provide an efficient method of determining the appropriateness of including individuals from vulnerable patient populations in research, particularly in circumstances in which no caregiver is available to provide surrogate consent. Seaman et al. (2015) cross-validate a measure of capacity to consent to research developed by Jeste et al. (2007). They provide data on controls, caregivers, and patients with mild cognitive impairment and dementia. The study demonstrates the importance of validating measures across disorders with different domains of incapacity, as well as the need for timely and appropriate follow-up with potential participants who yield positive screens. Ultimately, clinical measures need to adapt to the dimensional diagnostic approaches put forward in DSM-5. Integrative models of constructs, such as capacity to consent, will make this process more efficient by avoiding the need to test measures in each disorder. Until then, cross-validation studies, such as the work by Seaman et al. (2015), are critical.

  3. College and Career Readiness Assessment: Validation of the Key Cognitive Strategies Framework

    ERIC Educational Resources Information Center

    Lombardi, Allison R.; Conley, David T.; Seburn, Mary A.; Downs, Andrew M.

    2013-01-01

    In this study, the authors examined the psychometric properties of the key cognitive strategies (KCS) within the CollegeCareerReady[TM] School Diagnostic, a self-report measure of critical thinking skills intended for high school students. Using a cross-validation approach, an exploratory factor analysis was conducted with a randomly selected…

  4. A Machine Learning Framework for Plan Payment Risk Adjustment.

    PubMed

    Rose, Sherri

    2016-12-01

    To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R². Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
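
    A minimal sketch of the core comparison, assuming simulated expenditure data rather than MarketScan: several candidate learners are scored by cross-validated R², the criterion named above. It is a simplified stand-in for the ensemble super learner framework, not the authors' implementation.

    ```python
    # Hedged sketch: comparing candidate risk-adjustment learners by cross-validated R^2.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression, LassoCV
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n, p = 500, 20
    X = rng.normal(size=(n, p))                       # stand-ins for age, sex, diagnoses, HCC flags
    y = X[:, :5] @ rng.uniform(0.5, 2.0, 5) + rng.normal(scale=2.0, size=n)  # annual spending

    candidates = {
        "ols": LinearRegression(),
        "lasso": LassoCV(cv=5),
        "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    }
    for name, est in candidates.items():
        r2 = cross_val_score(est, X, y, cv=10, scoring="r2")
        print(f"{name:>14s}: cross-validated R^2 = {r2.mean():.3f}")
    ```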

  5. A unified approach to validation, reliability, and education study design for surgical technical skills training.

    PubMed

    Sweet, Robert M; Hananel, David; Lawrenz, Frances

    2010-02-01

    To present modern educational psychology theory and apply these concepts to the validity and reliability of surgical skills training and assessment. In a series of cross-disciplinary meetings, we applied a unified approach of behavioral science principles and theory to medical technical skills education, given recent advances in theory in the fields of behavioral psychology and statistics. While validation of the individual simulation tools is important, it is only one piece of a multimodal curriculum that in and of itself deserves examination and study. We propose concurrent validation throughout the design of a simulation-based curriculum rather than once it is complete. We embrace the concept that validity and curriculum development are interdependent, ongoing processes that are never truly complete. Individual predictive, construct, content, and face validity aspects should not be considered separately but as interdependent and complementary toward an end application. Such an approach could help guide our acceptance and appropriate application of these exciting new training and assessment tools for technical skills training in medicine.

  6. A comparison of Rasch item-fit and Cronbach's alpha item reduction analysis for the development of a Quality of Life scale for children and adolescents.

    PubMed

    Erhart, M; Hagquist, C; Auquier, P; Rajmil, L; Power, M; Ravens-Sieberer, U

    2010-07-01

    This study compares item reduction analysis based on classical test theory (maximizing Cronbach's alpha - approach A), with analysis based on the Rasch Partial Credit Model item-fit (approach B), as applied to children and adolescents' health-related quality of life (HRQoL) items. The reliability and structural, cross-cultural and known-group validity of the measures were examined. Within the European KIDSCREEN project, 3019 children and adolescents (8-18 years) from seven European countries answered 19 HRQoL items of the Physical Well-being dimension of a preliminary KIDSCREEN instrument. The Cronbach's alpha and corrected item total correlation (approach A) were compared with infit mean squares and the Q-index item-fit derived according to a partial credit model (approach B). Cross-cultural differential item functioning (DIF ordinal logistic regression approach), structural validity (confirmatory factor analysis and residual correlation) and relative validity (RV) for socio-demographic and health-related factors were calculated for approaches (A) and (B). Approach (A) led to the retention of 13 items, compared with 11 items with approach (B). The item overlap was 69% for (A) and 78% for (B). The correlation coefficient of the summated ratings was 0.93. The Cronbach's alpha was similar for both versions [0.86 (A); 0.85 (B)]. Both approaches selected some items that are not strictly unidimensional and items displaying DIF. RV ratios favoured (A) with regard to socio-demographic aspects. Approach (B) was superior in RV with regard to health-related aspects. Both types of item reduction analysis should be accompanied by additional analyses. Neither of the two approaches was universally superior with regard to cultural, structural and known-group validity. However, the results support the usability of the Rasch method for developing new HRQoL measures for children and adolescents.
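
    A minimal sketch of the classical-test-theory side of this comparison only (approach A): Cronbach's alpha for an item set and the change in alpha when each item is dropped, the basis of "maximize alpha" item reduction. The Rasch item-fit side is not shown, and the simulated responses are not KIDSCREEN data.

    ```python
    # Hedged sketch: Cronbach's alpha and alpha-if-item-deleted for item reduction.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(3)
    latent = rng.normal(size=(300, 1))
    items = latent + rng.normal(scale=1.0, size=(300, 8))   # 8 items loading on one factor

    print(f"alpha (all items) = {cronbach_alpha(items):.3f}")
    for j in range(items.shape[1]):
        reduced = np.delete(items, j, axis=1)
        print(f"alpha without item {j}: {cronbach_alpha(reduced):.3f}")
    ```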

  7. Student-Centered Reliability, Concurrent Validity and Instructional Sensitivity in Scoring of Students' Concept Maps in a University Science Laboratory

    ERIC Educational Resources Information Center

    Kaya, Osman Nafiz; Kilic, Ziya

    2004-01-01

    The student-centered approach to scoring the concept maps consisted of three elements, namely a symbol system, an individual portfolio and a scoring scheme. We scored student-constructed concept maps based on 5 concept map criteria: validity of concepts, adequacy of propositions, significance of cross-links, relevancy of examples, and interconnectedness. With…

  8. Validating the Chinese Version of the Inventory of School Motivation

    ERIC Educational Resources Information Center

    King, Ronnel B.; Watkins, David A.

    2013-01-01

    The aim of this study is to assess the cross-cultural applicability of the Chinese version of the Inventory of School Motivation (ISM; McInerney & Sinclair, 1991) in the Hong Kong context using both within-network and between-network approaches to construct validation. The ISM measures four types of achievement goals: mastery, performance,…

  9. A Cross-Grade Study Validating the Evolutionary Pathway of Student Mental Models in Electric Circuits

    ERIC Educational Resources Information Center

    Lin, Jing-Wen

    2017-01-01

    Cross-grade studies are valuable for the development of sequential curricula. However, such studies are time and resource intensive and fail to provide a clear representation that integrates different levels of representational complexity. Lin (Lin, 2006; Lin & Chiu, 2006; Lin, Chiu, & Hsu, 2006) proposed a cladistics approach in conceptual…

  10. Genome-based prediction of test cross performance in two subsequent breeding cycles.

    PubMed

    Hofheinz, Nina; Borchardt, Dietrich; Weissleder, Knuth; Frisch, Matthias

    2012-12-01

    Genome-based prediction of genetic values is expected to overcome shortcomings that limit the application of QTL mapping and marker-assisted selection in plant breeding. Our goal was to study the genome-based prediction of test cross performance with genetic effects that were estimated using genotypes from the preceding breeding cycle. In particular, our objectives were to employ a ridge regression approach that approximates best linear unbiased prediction of genetic effects, to compare cross-validation with validation using genetic material of the subsequent breeding cycle, and to investigate the prospects of genome-based prediction in sugar beet breeding. We focused on the traits sugar content and standard molasses loss (ML) and used a set of 310 sugar beet lines to estimate genetic effects at 384 SNP markers. In cross-validation, correlations >0.8 between observed and predicted test cross performance were observed for both traits. However, in validation with 56 lines from the next breeding cycle, a correlation of 0.8 could only be observed for sugar content; for standard ML the correlation dropped to 0.4. We found that ridge regression based on preliminary estimates of the heritability provided a very good approximation of best linear unbiased prediction and was not accompanied by a loss in prediction accuracy. We conclude that prediction accuracy assessed with cross-validation within one cycle of a breeding program cannot be used as an indicator for the accuracy of predicting lines of the next cycle. Prediction of lines of the next cycle seems promising for traits with high heritabilities.
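
    A minimal sketch of the within-cycle part of this evaluation, assuming simulated SNP genotypes rather than the sugar beet panel: ridge regression (an approximation to genomic BLUP) is fit and accuracy is reported as the correlation between observed and cross-validation-predicted test cross performance. The shrinkage value is an arbitrary stand-in, not a heritability-derived estimate.

    ```python
    # Hedged sketch: ridge regression genomic prediction with cross-validated correlation.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(4)
    n_lines, n_snps = 310, 384
    X = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)   # SNP genotypes coded 0/1/2
    true_effects = rng.normal(scale=0.1, size=n_snps)
    y = X @ true_effects + rng.normal(scale=1.0, size=n_lines)     # test cross performance

    model = Ridge(alpha=200.0)   # placeholder shrinkage; in practice tied to heritability
    pred = cross_val_predict(model, X, y, cv=5)
    print(f"cross-validated correlation r = {np.corrcoef(y, pred)[0, 1]:.2f}")
    ```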

  11. Sensitivity Study for Sensor Optical and Electric Crosstalk Based on Spectral Measurements: An Application to Developmental Sensors Using Heritage Sensors Such As MODIS

    NASA Technical Reports Server (NTRS)

    Butler, James J.; Oudrari, Hassan; Xiong, Sanxiong; Che, Nianzeng; Xiong, Xiaoxiong

    2007-01-01

    The process of developing new sensors for space flight frequently builds upon the designs and experience of existing heritage space flight sensors. Frequently in the development and testing of new sensors, problems are encountered that pose the risk of serious impact on successful retrieval of geophysical products. This paper describes an approach to assess the importance of optical and electronic cross-talk on retrieval of geophysical products using new MODIS-like sensors through the use of MODIS data sets. These approaches may be extended to any sensor characteristic and any sensor where that characteristic may impact the Level 1 products so long as validated geophysical products are being developed from the heritage sensor. In this study, a set of electronic and/or optical cross-talk coefficients are postulated. These coefficients are sender-receiver influence coefficients and represent a sensor signal contamination on any detector on a focal plane when another band's detectors on that focal plane are stimulated with a monochromatic light. The approach involves using the postulated cross-talk coefficients on an actual set of MODIS data granules. The original MODIS data granules and the cross-talk impacted granules are used with validated geophysical algorithms to create the derived products. Comparison of the products produced with the original and cross-talk impacted granules indicates potential problems, if any, with the characteristics of the developmental sensor that are being studied.

  12. Validation of the Chinese Version of the Life Orientation Test with a Robust Weighted Least Squares Approach

    ERIC Educational Resources Information Center

    Li, Cheng-Hsien

    2012-01-01

    Of the several measures of optimism presently available in the literature, the Life Orientation Test (LOT; Scheier & Carver, 1985) has been the most widely used in empirical research. This article explores, confirms, and cross-validates the factor structure of the Chinese version of the LOT with ordinal data by using robust weighted least…

  13. Real-time sensor data validation

    NASA Technical Reports Server (NTRS)

    Bickmore, Timothy W.

    1994-01-01

    This report describes the status of an on-going effort to develop software capable of detecting sensor failures on rocket engines in real time. This software could be used in a rocket engine controller to prevent the erroneous shutdown of an engine due to sensor failures which would otherwise be interpreted as engine failures by the control software. The approach taken combines analytical redundancy with Bayesian belief networks to provide a solution which has well defined real-time characteristics and well-defined error rates. Analytical redundancy is a technique in which a sensor's value is predicted by using values from other sensors and known or empirically derived mathematical relations. A set of sensors and a set of relations among them form a network of cross-checks which can be used to periodically validate all of the sensors in the network. Bayesian belief networks provide a method of determining if each of the sensors in the network is valid, given the results of the cross-checks. This approach has been successfully demonstrated on the Technology Test Bed Engine at the NASA Marshall Space Flight Center. Current efforts are focused on extending the system to provide a validation capability for 100 sensors on the Space Shuttle Main Engine.

  14. A Machine Learning and Cross-Validation Approach for the Discrimination of Vegetation Physiognomic Types Using Satellite Based Multispectral and Multitemporal Data.

    PubMed

    Sharma, Ram C; Hara, Keitarou; Hirayama, Hidetake

    2017-01-01

    This paper presents the evaluation of a number of machine learning classifiers for discriminating between vegetation physiognomic classes using satellite-based time series of surface reflectance data. Six vegetation physiognomic classes were considered: Evergreen Coniferous Forest, Evergreen Broadleaf Forest, Deciduous Coniferous Forest, Deciduous Broadleaf Forest, Shrubs, and Herbs. Rich-feature data were prepared from the satellite time series for the discrimination and cross-validation of the vegetation physiognomic types using a machine learning approach. A set of machine learning experiments, comprising a number of supervised classifiers with different model parameters, was conducted to assess how the discrimination of vegetation physiognomic classes varies with the classifier, input features, and ground truth data size. The performance of each experiment was evaluated using 10-fold cross-validation. The experiment using the Random Forests classifier provided the highest overall accuracy (0.81) and kappa coefficient (0.78). However, the accuracy metrics did not vary much across experiments. The accuracy metrics were found to be very sensitive to the input features and the size of the ground truth data. The results are expected to be useful for improving vegetation physiognomic mapping in Japan.
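
    A minimal sketch of the evaluation scheme described above, assuming simulated multitemporal reflectance features and six classes rather than the actual ground truth data: a Random Forests classifier is scored by 10-fold cross-validation with overall accuracy and the kappa coefficient.

    ```python
    # Hedged sketch: 10-fold cross-validation of a Random Forests classifier
    # with overall accuracy and Cohen's kappa on simulated data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, cohen_kappa_score
    from sklearn.model_selection import StratifiedKFold, cross_val_predict

    X, y = make_classification(n_samples=600, n_features=46, n_informative=12,
                               n_classes=6, n_clusters_per_class=1, random_state=0)

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    pred = cross_val_predict(clf, X, y, cv=cv)

    print(f"overall accuracy = {accuracy_score(y, pred):.2f}")
    print(f"kappa coefficient = {cohen_kappa_score(y, pred):.2f}")
    ```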

  15. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, model selection is usually based on some measure of goodness-of-fit between the model prediction and the observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has been widely adopted in chemometrics and econometrics) for RFFA using Generalised Least Squares Regression (GLSR) and compares it with the more commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability than the LOO.
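
    A minimal sketch of the two validation schemes being contrasted, assuming simulated catchment descriptors and ordinary least squares standing in for the GLSR models used in the paper: leave-one-out validation versus Monte Carlo cross-validation (repeated random splits).

    ```python
    # Hedged sketch: leave-one-out (LOO) vs Monte Carlo cross-validation (MCCV).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

    rng = np.random.default_rng(5)
    n_sites = 60
    X = rng.normal(size=(n_sites, 4))       # catchment descriptors (area, rainfall, ...)
    y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(scale=1.0, size=n_sites)

    model = LinearRegression()
    loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                               scoring="neg_mean_squared_error")
    mccv = ShuffleSplit(n_splits=200, test_size=0.2, random_state=0)   # Monte Carlo CV
    mccv_mse = -cross_val_score(model, X, y, cv=mccv,
                                scoring="neg_mean_squared_error")

    print(f"LOO  mean squared error: {loo_mse.mean():.2f}")
    print(f"MCCV mean squared error: {mccv_mse.mean():.2f}")
    ```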

  16. Finding Risk Groups by Optimizing Artificial Neural Networks on the Area under the Survival Curve Using Genetic Algorithms.

    PubMed

    Kalderstam, Jonas; Edén, Patrik; Ohlsson, Mattias

    2015-01-01

    We investigate a new method to place patients into risk groups in censored survival data. Properties such as median survival time and end survival rate are implicitly improved by optimizing the area under the survival curve. Artificial neural networks (ANN) are trained to either maximize or minimize this area using a genetic algorithm, and combined into an ensemble to predict one of low, intermediate, or high risk groups. Estimated patient risk can influence treatment choices and is important for study stratification. A common approach is to sort the patients according to a prognostic index and then group them along the quartile limits. The Cox proportional hazards model (Cox) is one example of this approach. Another method of risk grouping is recursive partitioning (Rpart), which constructs a decision tree in which each branch point maximizes the statistical separation between the groups. ANN, Cox, and Rpart are compared on five publicly available data sets with varying properties. Cross-validation, as well as separate test sets, is used to validate the models. Results on the test sets show comparable performance, except for the smallest data set, where Rpart's predicted risk groups turn out to be inverted, an example of crossing survival curves. Cross-validation shows that all three models exhibit crossing of some survival curves on this small data set, but that the ANN model manages the best separation of groups in terms of median survival time before such crossings. The conclusion is that optimizing the area under the survival curve is a viable approach to identify risk groups. Training ANNs to optimize this area combines two key strengths of prognostic indices and of Rpart. First, a desired minimum group size can be specified, as for a prognostic index. Second, non-linear effects among the covariates can be exploited, as Rpart is also able to do.

  17. Single-station monitoring of volcanoes using seismic ambient noise

    NASA Astrophysics Data System (ADS)

    De Plaen, Raphael S. M.; Lecocq, Thomas; Caudron, Corentin; Ferrazzini, Valérie; Francis, Olivier

    2016-08-01

    Seismic ambient noise cross correlation is increasingly used to monitor volcanic activity. However, this method is usually limited to volcanoes equipped with large and dense networks of broadband stations. The single-station approach may provide a powerful and reliable alternative to the classical "cross-station" approach when measuring variations of seismic velocities. We implemented it on Piton de la Fournaise on Reunion Island, a very active volcano with remarkable multidisciplinary continuous monitoring. Over the past decade, this volcano has been increasingly studied using the traditional cross-correlation technique and therefore represents a unique laboratory for validating our approach. Our approach, tested on stations located up to 3.5 km from the eruptive site, performed as well as the classical approach in detecting the volcanic eruption in the 1-2 Hz frequency band. This opens new perspectives for successfully forecasting volcanic activity at volcanoes equipped with a single three-component seismometer.

  18. Validation of Cross Sections for Monte Carlo Simulation of the Photoelectric Effect

    NASA Astrophysics Data System (ADS)

    Han, Min Cheol; Kim, Han Sung; Pia, Maria Grazia; Basaglia, Tullio; Batič, Matej; Hoff, Gabriela; Kim, Chan Hyeong; Saracco, Paolo

    2016-04-01

    Several total and partial photoionization cross section calculations, based on both theoretical and empirical approaches, are quantitatively evaluated with statistical analyses using a large collection of experimental data retrieved from the literature to identify the state of the art for modeling the photoelectric effect in Monte Carlo particle transport. Some of the examined cross section models are available in general purpose Monte Carlo systems, while others have been implemented and subjected to validation tests for the first time to estimate whether they could improve the accuracy of particle transport codes. The validation process identifies Scofield's 1973 non-relativistic calculations, tabulated in the Evaluated Photon Data Library (EPDL), as the one best reproducing experimental measurements of total cross sections. Specialized total cross section models, some of which derive from more recent calculations, do not provide significant improvements. Scofield's non-relativistic calculations are not surpassed regarding the compatibility with experiment of K and L shell photoionization cross sections either, although in a few test cases Ebel's parameterization produces more accurate results close to absorption edges. Modifications to Biggs and Lighthill's parameterization implemented in Geant4 significantly reduce the accuracy of total cross sections at low energies with respect to its original formulation. The scarcity of suitable experimental data hinders a similar extensive analysis for the simulation of the photoelectron angular distribution, which is limited to a qualitative appraisal.

  19. High-grading bias: subtle problems with assessing power of selected subsets of loci for population assignment.

    PubMed

    Waples, Robin S

    2010-07-01

    Recognition of the importance of cross-validation ('any technique or instance of assessing how the results of a statistical analysis will generalize to an independent dataset'; Wiktionary, en.wiktionary.org) is one reason that the U.S. Securities and Exchange Commission requires all investment products to carry some variation of the disclaimer, 'Past performance is no guarantee of future results.' Even a cursory examination of financial behaviour, however, demonstrates that this warning is regularly ignored, even by those who understand what an independent dataset is. In the natural sciences, an analogue to predicting future returns for an investment strategy is predicting power of a particular algorithm to perform with new data. Once again, the key to developing an unbiased assessment of future performance is through testing with independent data--that is, data that were in no way involved in developing the method in the first place. A 'gold-standard' approach to cross-validation is to divide the data into two parts, one used to develop the algorithm, the other used to test its performance. Because this approach substantially reduces the sample size that can be used in constructing the algorithm, researchers often try other variations of cross-validation to accomplish the same ends. As illustrated by Anderson in this issue of Molecular Ecology Resources, however, not all attempts at cross-validation produce the desired result. Anderson used simulated data to evaluate performance of several software programs designed to identify subsets of loci that can be effective for assigning individuals to population of origin based on multilocus genetic data. Such programs are likely to become increasingly popular as researchers seek ways to streamline routine analyses by focusing on small sets of loci that contain most of the desired signal. Anderson found that although some of the programs made an attempt at cross-validation, all failed to meet the 'gold standard' of using truly independent data and therefore produced overly optimistic assessments of power of the selected set of loci--a phenomenon known as 'high grading bias.'
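
    A minimal sketch of the bias described here, assuming pure-noise "loci" and random population labels so the truthful accuracy is about chance: selecting the most discriminating loci on the full data and only then cross-validating the classifier (high-grading) looks far better than honest cross-validation in which the selection step is repeated inside each training fold.

    ```python
    # Hedged sketch of high-grading bias: feature selection outside vs inside cross-validation.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(6)
    X = rng.normal(size=(80, 500))          # 500 uninformative "loci"
    y = rng.integers(0, 2, 80)              # random population labels

    # Biased: the selection step has already seen every sample, including future test folds
    X_selected = SelectKBest(f_classif, k=10).fit_transform(X, y)
    biased = cross_val_score(LogisticRegression(max_iter=1000), X_selected, y, cv=5)

    # Honest: selection is refit inside each training fold only
    pipe = make_pipeline(SelectKBest(f_classif, k=10), LogisticRegression(max_iter=1000))
    honest = cross_val_score(pipe, X, y, cv=5)

    print(f"biased accuracy = {biased.mean():.2f}")   # typically well above 0.5
    print(f"honest accuracy = {honest.mean():.2f}")   # close to chance
    ```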

  20. The R package "sperrorest" : Parallelized spatial error estimation and variable importance assessment for geospatial machine learning

    NASA Astrophysics Data System (ADS)

    Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander

    2017-04-01

    Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their more widely used non-spatial equivalents. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation): The first one is the parallelized version of sperrorest(), parsperrorest(). This function features two parallel modes to greatly speed up cross-validation runs. Both parallel modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and, depending on the platform, calls parallel::mclapply() or parallel::parApply() in the background. While forking is used on Unix systems, Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package to perform parallelization. This method uses a different way of cluster parallelization than the parallel package does. In summary, the robustness of parsperrorest() is increased with the implementation of two independent parallel modes. A new way of partitioning the data in sperrorest is provided by partition.factor.cv(). This function gives the user the possibility of performing cross-validation at the level of some grouping structure. As an example, in remote sensing of agricultural land uses, pixels from the same field contain nearly identical information and will thus be jointly placed in either the test set or the training set. Other spatial sampling and resampling strategies are already available and can be extended by the user.
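
    A minimal Python sketch of the grouping idea behind partition.factor.cv() (not the R package itself): all pixels from the same field are placed jointly in either the training or the test partition, and the resulting grouped cross-validation is contrasted with a naive pixel-level split. Field labels and features are simulated.

    ```python
    # Hedged sketch: pixel-level vs field-level (grouped) cross-validation.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GroupKFold, KFold, cross_val_score

    rng = np.random.default_rng(7)
    n_fields, pixels_per_field = 40, 25
    field_id = np.repeat(np.arange(n_fields), pixels_per_field)
    field_signal = rng.normal(size=(n_fields, 5))[field_id]          # near-identical pixels per field
    X = field_signal + rng.normal(scale=0.05, size=field_signal.shape)
    y = (rng.uniform(size=n_fields) > 0.5).astype(int)[field_id]     # one label per field

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    naive = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
    grouped = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=field_id)

    print(f"random (pixel-level) CV accuracy: {naive.mean():.2f}")   # optimistic
    print(f"field-level (grouped) CV accuracy: {grouped.mean():.2f}")
    ```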

  1. Isokinetic knee strength qualities as predictors of jumping performance in high-level volleyball athletes: multiple regression approach.

    PubMed

    Sattler, Tine; Sekulic, Damir; Spasic, Miodrag; Osmankac, Nedzad; Vicente João, Paulo; Dervisevic, Edvin; Hadzic, Vedran

    2016-01-01

    Previous investigations noted the potential importance of isokinetic strength in rapid muscular performances, such as jumping. This study aimed to identify the influence of isokinetic knee strength on specific jumping performance in volleyball. The secondary aim of the study was to evaluate the reliability and validity of two volleyball-specific jumping tests. The sample comprised 67 female (21.96±3.79 years; 68.26±8.52 kg; 174.43±6.85 cm) and 99 male (23.62±5.27 years; 84.83±10.37 kg; 189.01±7.21 cm) high-level volleyball players who competed in the 1st and 2nd National Division. Subjects were randomly divided into validation (N.=55 and 33 for males and females, respectively) and cross-validation subsamples (N.=54 and 34 for males and females, respectively). The set of predictors included isokinetic tests to evaluate the eccentric and concentric strength capacities of the knee extensors and flexors for the dominant and non-dominant leg. The main outcome measure for the isokinetic testing was peak torque (PT), which was later normalized for body mass and expressed as PT/kg. Block-jump and spike-jump performances were measured over three trials and observed as criteria. Forward stepwise multiple regressions were calculated for the validation subsamples and then cross-validated. Cross-validation included correlations and t-test differences between observed and predicted scores, and Bland-Altman plots. The jumping tests were found to be reliable (spike jump: ICC of 0.79 and 0.86; block jump: ICC of 0.86 and 0.90; for males and females, respectively), and their validity was confirmed by significant t-test differences between 1st and 2nd division players. Isokinetic variables were found to be significant predictors of jumping performance in females, but not in males. In females, the isokinetic knee measures were stronger and more valid predictors of the block jump (42% and 64% of the explained variance for the validation and cross-validation subsamples, respectively) than of the spike jump (39% and 34% of the explained variance for the validation and cross-validation subsamples, respectively). Differences between the prediction models calculated for males and females are mostly explained by gender-specific biomechanics of jumping. The study established the importance of isokinetic knee strength for volleyball jumping performance in female athletes. Further studies should evaluate the association between isokinetic ankle strength and volleyball-specific jumping performances. The results reinforce the need for cross-validation of prediction models in sport and exercise sciences.

  2. Quantitative determination of additive Chlorantraniliprole in Abamectin preparation: Investigation of bootstrapping soft shrinkage approach by mid-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng

    2018-02-01

    A novel method based on mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS) and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross-validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Q²cv) (0.9998) and coefficient of determination of the test set (Q²test) (0.9989), which demonstrates that mid-infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential for conducting a component spectral analysis.
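
    A minimal sketch of the two error metrics reported above, RMSECV and RMSEP, for a PLS calibration model on spectra. The simulated spectra stand in for the MIR data, and no wavelength selection (BOSS, MCUVE, GA-PLS, CARS) is performed; this only illustrates how the metrics are computed.

    ```python
    # Hedged sketch: RMSECV (cross-validation) and RMSEP (independent test set) for PLS.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict, train_test_split

    rng = np.random.default_rng(8)
    n_samples, n_wavelengths = 120, 300
    X = rng.normal(size=(n_samples, n_wavelengths))                   # absorbance spectra
    y = X[:, 40:60].mean(axis=1) * 2.0 + rng.normal(scale=0.02, size=n_samples)  # analyte content

    X_cal, X_test, y_cal, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    pls = PLSRegression(n_components=5)
    y_cv = cross_val_predict(pls, X_cal, y_cal, cv=10).ravel()
    rmsecv = np.sqrt(np.mean((y_cal - y_cv) ** 2))

    pls.fit(X_cal, y_cal)
    y_pred = pls.predict(X_test).ravel()
    rmsep = np.sqrt(np.mean((y_test - y_pred) ** 2))
    print(f"RMSECV = {rmsecv:.4f}, RMSEP = {rmsep:.4f}")
    ```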

  3. Development of the Brazilian Portuguese version of the Achilles Tendon Total Rupture Score (ATRS BrP): a cross-cultural adaptation with reliability and construct validity evaluation.

    PubMed

    Zambelli, Roberto; Pinto, Rafael Z; Magalhães, João Murilo Brandão; Lopes, Fernando Araujo Silva; Castilho, Rodrigo Simões; Baumfeld, Daniel; Dos Santos, Thiago Ribeiro Teles; Maffulli, Nicola

    2016-01-01

    There is a need for a patient-relevant instrument to evaluate outcome after treatment in patients with a total Achilles tendon rupture. The purpose of this study was to undertake a cross-cultural adaptation of the Achilles Tendon Total Rupture Score (ATRS) into Brazilian Portuguese, determining the test-retest reliability and construct validity of the instrument. A five-step approach was used in the cross-cultural adaptation process: initial translation (two bilingual Brazilian translators), synthesis of translation, back-translation (two native English language translators), consensus version and evaluation (expert committee), and testing phase. A total of 46 patients were recruited to evaluate the test-retest reproducibility and construct validity of the Brazilian Portuguese version of the ATRS. Test-retest reproducibility was assessed by evaluating each participant on two separate occasions. The construct validity was determined by the correlation between the ATRS and the American Orthopaedic Foot and Ankle Society (AOFAS) questionnaire. The final version of the Brazilian Portuguese ATRS had the same number of questions as the original ATRS. For the reliability analysis, an ICC(2,1) of 0.93 (95 % CI: 0.88 to 0.96) with SEM of 1.56 points and MDC of 4.32 was observed, indicating excellent reliability. The construct validity showed excellent correlation with R = 0.76 (95 % CI: 0.52 to 0.89, P < 0.001). The ATRS was successfully cross-culturally adapted into Brazilian Portuguese. This version was a reliable and valid measure of function in patients who suffered complete rupture of the Achilles tendon.

  4. Cognitive-Motor Interference in an Ecologically Valid Street Crossing Scenario.

    PubMed

    Janouch, Christin; Drescher, Uwe; Wechsler, Konstantin; Haeger, Mathias; Bock, Otmar; Voelcker-Rehage, Claudia

    2018-01-01

    Laboratory-based research revealed that gait involves higher cognitive processes, leading to performance impairments when executed with a concurrent loading task. Deficits are especially pronounced in older adults. Theoretical approaches like the multiple resource model highlight the role of task similarity and associated attention distribution problems. It has been shown that in cases where these distribution problems are perceived as relevant to the participant's risk of falls, older adults prioritize gait and posture over the concurrent loading task. Here we investigate whether findings on task similarity and task prioritization can be transferred to an ecologically valid scenario. Sixty-three younger adults (20-30 years of age) and 61 older adults (65-75 years of age) participated in a virtual street crossing simulation. The participants' task was to identify suitable gaps that would allow them to cross a simulated two-way street safely. To this end, participants walked on a manual treadmill that transferred their forward motion to forward displacements in a virtual city. The task was presented as a single task (crossing only) and as a multitask. In the multitask condition, participants were asked, among other tasks, to type in three-digit numbers that were presented either visually or auditorily. We found that for both age groups, street crossing as well as typing performance suffered under multitasking conditions. Impairments were especially pronounced for older adults (e.g., longer crossing initiation phase, more missed opportunities). However, younger and older adults did not differ in the speed and success rate of crossing. Further, deficits were stronger in the visual compared to the auditory task modality for most parameters. Our findings are consistent with earlier studies that found an age-related decline in multitasking performance in less realistic scenarios. However, task similarity effects were inconsistent and question the validity of the multiple resource model within ecologically valid scenarios.

  5. International Harmonization and Cooperation in the Validation of Alternative Methods.

    PubMed

    Barroso, João; Ahn, Il Young; Caldeira, Cristiane; Carmichael, Paul L; Casey, Warren; Coecke, Sandra; Curren, Rodger; Desprez, Bertrand; Eskes, Chantra; Griesinger, Claudius; Guo, Jiabin; Hill, Erin; Roi, Annett Janusch; Kojima, Hajime; Li, Jin; Lim, Chae Hyung; Moura, Wlamir; Nishikawa, Akiyoshi; Park, HyeKyung; Peng, Shuangqing; Presgrave, Octavio; Singer, Tim; Sohn, Soo Jung; Westmoreland, Carl; Whelan, Maurice; Yang, Xingfen; Yang, Ying; Zuang, Valérie

    The development and validation of scientific alternatives to animal testing is important not only from an ethical perspective (implementation of the 3Rs), but also to improve safety assessment decision making with the use of mechanistic information of higher relevance to humans. To be effective in these efforts, it is, however, imperative that validation centres, industry, regulatory bodies, academia and other interested parties ensure strong international cooperation, cross-sector collaboration and intense communication in the design, execution, and peer review of validation studies. Such an approach is critical to achieving harmonized and more transparent approaches to method validation, peer review and recommendation, which will ultimately expedite the international acceptance of valid alternative methods or strategies by regulatory authorities and their implementation and use by stakeholders. It also allows greater efficiency and effectiveness to be achieved by avoiding duplication of effort and leveraging limited resources. With a view to achieving these goals, the International Cooperation on Alternative Test Methods (ICATM) was established in 2009 by validation centres from Europe, USA, Canada and Japan. ICATM was later joined by Korea in 2011 and currently also counts Brazil and China as observers. This chapter describes the existing differences across world regions and major efforts carried out for achieving consistent international cooperation and harmonization in the validation and adoption of alternative approaches to animal testing.

  6. Influence of thermodynamic parameter in Lanosterol 14alpha-demethylase inhibitory activity as antifungal agents: a QSAR approach.

    PubMed

    Vasanthanathan, Poongavanam; Lakshmi, Manickavasagam; Arockia Babu, Marianesan; Kaskhedikar, Sathish Gopalrao

    2006-06-01

    A quantitative structure-activity relationship (Hansch approach) was applied to twenty chromene derivatives with Lanosterol 14alpha-demethylase inhibitory activity against eight fungal organisms. Various physicochemical descriptors and the reported minimum inhibitory concentration values against the different fungal organisms were used as independent and dependent variables, respectively. The best models for the eight fungal organisms were first validated by a leave-one-out cross-validation procedure. Thermodynamic parameters were found to have an overall significant correlation with antifungal activity, and these studies provide insight for the design of new molecules.

  7. Cross-cultural equivalence of the patient- and parent-reported quality of life in short stature youth (QoLISSY) questionnaire.

    PubMed

    Bullinger, Monika; Quitmann, Julia; Silva, Neuza; Rohenkohl, Anja; Chaplin, John E; DeBusk, Kendra; Mimoun, Emmanuelle; Feigerlova, Eva; Herdman, Michael; Sanz, Dolores; Wollmann, Hartmut; Pleil, Andreas; Power, Michael

    2014-01-01

    Testing the cross-cultural equivalence of patient-reported outcomes requires sufficiently large samples per country, which is difficult to achieve in rare endocrine paediatric conditions. We describe a novel approach to cross-cultural testing of the Quality of Life in Short Stature Youth (QoLISSY) questionnaire in five countries by sequentially taking one country out (TOCO) from the total sample and iteratively comparing the resulting psychometric performance. Development of the QoLISSY proceeded from focus group discussions through pilot testing to field testing in 268 short-statured patients and their parents. To explore cross-cultural equivalence, the iterative TOCO technique was used to examine and compare the validity, reliability, and convergence of patient and parent responses on the QoLISSY in the field test dataset, and to predict QoLISSY scores from clinical, socio-demographic and psychosocial variables. Validity and reliability indicators were satisfactory for each sample after iteratively omitting one country. Comparisons with the total sample revealed cross-cultural equivalence in internal consistency and construct validity for patients and parents, high inter-rater agreement and a substantial proportion of QoLISSY variance explained by predictors. The TOCO technique is a powerful method to overcome problems of country-specific testing of patient-reported outcome instruments. It provides empirical support for QoLISSY's cross-cultural equivalence and is recommended for future research.

  8. Measuring socioeconomic status in multicountry studies: results from the eight-country MAL-ED study

    PubMed Central

    2014-01-01

    Background There is no standardized approach to comparing socioeconomic status (SES) across multiple sites in epidemiological studies. This is particularly problematic when cross-country comparisons are of interest. We sought to develop a simple measure of SES that would perform well across diverse, resource-limited settings. Methods A cross-sectional study was conducted with 800 children aged 24 to 60 months across eight resource-limited settings. Parents were asked to respond to a household SES questionnaire, and the height of each child was measured. A statistical analysis was done in two phases. First, the best approach for selecting and weighting household assets as a proxy for wealth was identified. We compared four approaches to measuring wealth: maternal education, principal components analysis, Multidimensional Poverty Index, and a novel variable selection approach based on the use of random forests. Second, the selected wealth measure was combined with other relevant variables to form a more complete measure of household SES. We used child height-for-age Z-score (HAZ) as the outcome of interest. Results Mean age of study children was 41 months, 52% were boys, and 42% were stunted. Using cross-validation, we found that random forests yielded the lowest prediction error when selecting assets as a measure of household wealth. The final SES index included access to improved water and sanitation, eight selected assets, maternal education, and household income (the WAMI index). A 25% difference in the WAMI index was positively associated with a difference of 0.38 standard deviations in HAZ (95% CI 0.22 to 0.55). Conclusions Statistical learning methods such as random forests provide an alternative to principal components analysis in the development of SES scores. Results from this multicountry study demonstrate the validity of a simplified SES index. With further validation, this simplified index may provide a standard approach for SES adjustment across resource-limited settings. PMID:24656134

  9. A hybrid approach to estimating national scale spatiotemporal variability of PM2.5 in the contiguous United States.

    PubMed

    Beckerman, Bernardo S; Jerrett, Michael; Serre, Marc; Martin, Randall V; Lee, Seung-Jae; van Donkelaar, Aaron; Ross, Zev; Su, Jason; Burnett, Richard T

    2013-07-02

    Airborne fine particulate matter exhibits spatiotemporal variability at multiple scales, which presents challenges to estimating exposures for health effects assessment. Here we created a model to predict ambient particulate matter less than 2.5 μm in aerodynamic diameter (PM2.5) across the contiguous United States to be applied to health effects modeling. We developed a hybrid approach combining a land use regression model (LUR) selected with a machine learning method, and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals. The PM2.5 data set included 104,172 monthly observations at 1464 monitoring locations with approximately 10% of locations reserved for cross-validation. LUR models were based on remote sensing estimates of PM2.5, land use and traffic indicators. Normalized cross-validated R² values for the LUR were 0.63 and 0.11 with and without remote sensing, respectively, suggesting remote sensing is a strong predictor of ground-level concentrations. In the models including the BME interpolation of the residuals, the cross-validated R² was 0.79 for both configurations; the model without remotely sensed data described more fine-scale variation than the model including remote sensing. Our results suggest that our modeling framework can predict ground-level concentrations of PM2.5 at multiple scales over the contiguous U.S.

  10. Prediction of protein subcellular locations by GO-FunD-PseAA predictor.

    PubMed

    Chou, Kuo-Chen; Cai, Yu-Dong

    2004-08-06

    The localization of a protein in a cell is closely correlated with its biological function. With the explosion of protein sequences entering data banks, it is highly desirable to develop an automated method that can rapidly identify their subcellular location. This will expedite the annotation process, providing timely and useful information for both basic research and industrial applications. In view of this, a powerful predictor has been developed by hybridizing the gene ontology approach [Nat. Genet. 25 (2000) 25], the functional domain composition approach [J. Biol. Chem. 277 (2002) 45765], and the pseudo-amino acid composition approach [Proteins Struct. Funct. Genet. 43 (2001) 246; Erratum: ibid. 44 (2001) 60]. As a showcase, the recently constructed dataset [Bioinformatics 19 (2003) 1656] was used for demonstration. The dataset contains 7589 proteins classified into 12 subcellular locations: chloroplast, cytoplasmic, cytoskeleton, endoplasmic reticulum, extracellular, Golgi apparatus, lysosomal, mitochondrial, nuclear, peroxisomal, plasma membrane, and vacuolar. The overall success rate of prediction obtained by the jackknife cross-validation was 92%. This is so far the highest success rate achieved on this dataset following an objective and rigorous cross-validation procedure.

  11. Existence and instability of steady states for a triangular cross-diffusion system: A computer-assisted proof

    NASA Astrophysics Data System (ADS)

    Breden, Maxime; Castelli, Roberto

    2018-05-01

    In this paper, we present and apply a computer-assisted method to study steady states of a triangular cross-diffusion system. Our approach consists of an a posteriori validation procedure based on a fixed-point argument around a numerically computed solution, in the spirit of the Newton-Kantorovich theorem. It allows us to prove the existence of various non-homogeneous steady states for different parameter values. In some situations, we obtain as many as 13 coexisting steady states. We also apply the a posteriori validation procedure to study the linear stability of the obtained steady states, proving that many of them are in fact unstable.

  12. Model selection and assessment for multi­-species occupancy models

    USGS Publications Warehouse

    Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.

    2016-01-01

    While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.

  13. Targeted exploration and analysis of large cross-platform human transcriptomic compendia

    PubMed Central

    Zhu, Qian; Wong, Aaron K; Krishnan, Arjun; Aure, Miriam R; Tadych, Alicja; Zhang, Ran; Corney, David C; Greene, Casey S; Bongo, Lars A; Kristensen, Vessela N; Charikar, Moses; Li, Kai; Troyanskaya, Olga G.

    2016-01-01

    We present SEEK (http://seek.princeton.edu), a query-based search engine across very large transcriptomic data collections, including thousands of human data sets from almost 50 microarray and next-generation sequencing platforms. SEEK uses a novel query-level cross-validation-based algorithm to automatically prioritize data sets relevant to the query and a robust search approach to identify query-coregulated genes, pathways, and processes. SEEK provides cross-platform handling, multi-gene query search, iterative metadata-based search refinement, and extensive visualization-based analysis options. PMID:25581801

  14. Resampling procedures to identify important SNPs using a consensus approach.

    PubMed

    Pardy, Christopher; Motyer, Allan; Wilson, Susan

    2011-11-29

    Our goal is to identify common single-nucleotide polymorphisms (SNPs) (minor allele frequency > 1%) that add predictive accuracy above that gained by knowledge of easily measured clinical variables. We take an algorithmic approach to predict each phenotypic variable using a combination of phenotypic and genotypic predictors. We perform our procedure on the first simulated replicate and then validate against the others. Our procedure performs well when predicting Q1 but is less successful for the other outcomes. We use resampling procedures where possible to guard against false positives and to improve generalizability. The approach is based on finding a consensus regarding important SNPs by applying random forests and the least absolute shrinkage and selection operator (LASSO) on multiple subsamples. Random forests are used first to discard unimportant predictors, narrowing our focus to roughly 100 important SNPs. A cross-validation LASSO is then used to further select variables. We combine these procedures to guarantee that cross-validation can be used to choose a shrinkage parameter for the LASSO. If the clinical variables were unavailable, this prefiltering step would be essential. We perform the SNP-based analyses simultaneously rather than one at a time to estimate SNP effects in the presence of other causal variants. We analyzed the first simulated replicate of Genetic Analysis Workshop 17 without knowledge of the true model. Post-conference knowledge of the simulation parameters allowed us to investigate the limitations of our approach. We found that many of the false positives we identified were substantially correlated with genuine causal SNPs.
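
    A minimal sketch of the two-stage consensus idea described above, without the repeated subsampling loop: random forest importances first screen the SNPs down to roughly 100 candidates, and a cross-validated LASSO then selects among them. Simulated genotypes and a quantitative trait (in the spirit of Q1) are used for illustration only.

    ```python
    # Hedged sketch: random forest screening followed by a cross-validated LASSO.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(9)
    n, p = 400, 2000
    X = rng.integers(0, 3, size=(n, p)).astype(float)    # common SNPs coded 0/1/2
    causal = rng.choice(p, size=10, replace=False)
    y = X[:, causal] @ rng.normal(1.0, 0.2, 10) + rng.normal(scale=2.0, size=n)

    # Stage 1: random forest importances discard clearly unimportant SNPs
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    keep = np.argsort(rf.feature_importances_)[-100:]    # roughly 100 candidate SNPs

    # Stage 2: LASSO with a cross-validated shrinkage parameter on the screened set
    lasso = LassoCV(cv=5, random_state=0).fit(X[:, keep], y)
    selected = keep[np.flatnonzero(lasso.coef_)]
    print(f"SNPs retained by the LASSO: {len(selected)}")
    print(f"true causal SNPs recovered: {len(set(selected) & set(causal))}")
    ```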

  15. Modification of the random forest algorithm to avoid statistical dependence problems when classifying remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Cánovas-García, Fulgencio; Alonso-Sarría, Francisco; Gomariz-Castillo, Francisco; Oñate-Valdivieso, Fernando

    2017-06-01

    Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimate of classification accuracy based on the so-called out-of-bag cross-validation method. It is usually assumed that this estimate is unbiased and may be used instead of validation based on an external dataset or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, in a training patch, pixels or objects are not independent (from a statistical point of view) of each other; however, they are split by bootstrapping into in-bag and out-of-bag as if they were really independent. We believe that putting whole patches, rather than individual pixels/objects, in one set or the other would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm to split training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy and has no lower predictive capability than the original. When its results are validated with an external dataset, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel and object based); in the three cases reported, the modification we propose produces a less biased accuracy estimate.
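
    A minimal sketch of the bias being described, assuming simulated patch-correlated pixels rather than the authors' imagery or their modified algorithm: the out-of-bag accuracy of a standard random forest is optimistic compared with validation in which entire training patches are held out together.

    ```python
    # Hedged sketch: out-of-bag accuracy vs patch-level hold-out on correlated pixels.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GroupShuffleSplit

    rng = np.random.default_rng(10)
    n_patches, pixels_per_patch = 30, 40
    patch = np.repeat(np.arange(n_patches), pixels_per_patch)
    X = rng.normal(size=(n_patches, 8))[patch] + rng.normal(scale=0.05, size=(n_patches * pixels_per_patch, 8))
    y = rng.integers(0, 2, n_patches)[patch]             # one class label per training patch

    rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0).fit(X, y)
    print(f"out-of-bag accuracy (pixels split individually): {rf.oob_score_:.2f}")  # optimistic

    train_idx, test_idx = next(GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
                               .split(X, y, groups=patch))
    rf_holdout = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[train_idx], y[train_idx])
    print(f"accuracy with whole patches held out: {rf_holdout.score(X[test_idx], y[test_idx]):.2f}")
    ```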

  16. Online Cross-Validation-Based Ensemble Learning

    PubMed Central

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2017-01-01

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and as special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. PMID:28474419
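
    A minimal sketch of the online cross-validation idea as I read it, not the authors' estimator: each incoming batch first scores the current candidate online learners (so the evaluation uses data the learners have not yet trained on) and is only then used to update them; the candidate with the best running score is the selected learner. The streaming data and the two SGD candidates are illustrative stand-ins.

    ```python
    # Hedged sketch: online cross-validation for selecting among candidate online learners.
    import numpy as np
    from sklearn.linear_model import SGDRegressor

    rng = np.random.default_rng(11)
    candidates = {
        "small_step": SGDRegressor(learning_rate="constant", eta0=0.0001, random_state=0),
        "large_step": SGDRegressor(learning_rate="constant", eta0=0.01, random_state=0),
    }
    loss = {name: 0.0 for name in candidates}
    initialized = False

    for t in range(200):                                    # stream of batches
        X_batch = rng.normal(size=(20, 3))
        y_batch = X_batch @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=20)
        if initialized:
            for name, model in candidates.items():          # score before updating (online CV)
                loss[name] += np.mean((model.predict(X_batch) - y_batch) ** 2)
        for model in candidates.values():                   # then update with the same batch
            model.partial_fit(X_batch, y_batch)
        initialized = True

    best = min(loss, key=loss.get)
    print({name: round(v, 2) for name, v in loss.items()})
    print(f"online-CV-selected learner: {best}")
    ```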

  17. Using 171,173Yb(d,p) to benchmark a surrogate reaction for neutron capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatarik, R; Bersntein, L; Burke, J

    2008-08-08

    Neutron capture cross sections on unstable nuclei are important for many applications in nuclear structure and astrophysics. Measuring these cross sections directly is a major challenge and often impossible. An indirect approach for measuring these cross sections is the surrogate reaction method, which makes it possible to relate the desired cross section to a cross section of an alternate reaction that proceeds through the same compound nucleus. To benchmark the validity of using the (d,pγ) reaction as a surrogate for (n,γ), the 171,173Yb(d,pγ) reactions were measured with the goal of reproducing the known [1] neutron capture cross section ratios of these nuclei.

  18. Cross-Cultural Detection of Depression from Nonverbal Behaviour.

    PubMed

    Alghowinem, Sharifa; Goecke, Roland; Cohn, Jeffrey F; Wagner, Michael; Parker, Gordon; Breakspear, Michael

    2015-05-01

    Millions of people worldwide suffer from depression. Do commonalities exist in their nonverbal behavior that would enable cross-culturally viable screening and assessment of severity? We investigated the generalisability of an approach to detect depression severity cross-culturally using video-recorded clinical interviews from Australia, the USA and Germany. The material varied in type of interview, subtypes of depression, inclusion of healthy control subjects, cultural background, and recording environment. The analysis focussed on temporal features of participants' eye gaze and head pose. Several approaches to training and testing within and between datasets were evaluated. The strongest results were found for training across all datasets and testing across datasets using leave-one-subject-out cross-validation. In contrast, generalisability was attenuated when training on only one or two of the three datasets and testing on subjects from the dataset(s) not used in training. These findings highlight the importance of using training data exhibiting the expected range of variability.

  19. Assessment of biological half life using in silico QSPkR approach: a self organizing molecular field analysis (SOMFA) on a series of antimicrobial quinolone drugs.

    PubMed

    Goel, Honey; Sinha, V R; Thareja, Suresh; Aggarwal, Saurabh; Kumar, Manoj

    2011-08-30

    The quinolones are a family of potent synthetic broad-spectrum antibiotics, particularly active against gram-negative organisms, especially Pseudomonas aeruginosa. A 3D-QSPkR approach has been used to obtain a quantitative structure-pharmacokinetic relationship for a series of quinolone drugs using SOMFA. The series, consisting of 28 molecules, was investigated for pharmacokinetic performance using biological half life (t(1/2)). A statistically validated, robust model for a diverse group of quinolone drugs with flexibility in structure and pharmacokinetic profile (t(1/2)) was obtained using SOMFA, with a good cross-validated correlation coefficient r(cv)(2) (0.6847), a non-cross-validated correlation coefficient r(2) (0.7310), and a high F-test value (33.9663). Analysis of the 3D-QSPkR models through electrostatic and shape grids provides useful information about the shape and electrostatic potential contributions to t(1/2). The analysis of the SOMFA results provides insight for the generation of novel molecular architectures of quinolones with optimal half life and improved biological profile. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Mind your crossings: Mining GIS imagery for crosswalk localization.

    PubMed

    Ahmetovic, Dragan; Manduchi, Roberto; Coughlan, James M; Mascetti, Sergio

    2017-04-01

    For blind travelers, finding crosswalks and remaining within their borders while traversing them is a crucial part of any trip involving street crossings. While standard Orientation & Mobility (O&M) techniques allow blind travelers to safely negotiate street crossings, additional information about crosswalks and other important features at intersections would be helpful in many situations, resulting in greater safety and/or comfort during independent travel. For instance, in planning a trip a blind pedestrian may wish to be informed of the presence of all marked crossings near a desired route. We have conducted a survey of several O&M experts from the United States and Italy to determine the role that crosswalks play in travel by blind pedestrians. The results show stark differences between survey respondents from the U.S. compared with Italy: the former group emphasized the importance of following standard O&M techniques at all legal crossings (marked or unmarked), while the latter group strongly recommended crossing at marked crossings whenever possible. These contrasting opinions reflect differences in the traffic regulations of the two countries and highlight the diversity of needs that travelers in different regions may have. To address the challenges faced by blind pedestrians in negotiating street crossings, we devised a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm can be improved by a final crowdsourcing validation. To this end, we developed a Pedestrian Crossing Human Validation (PCHV) web service, which supports crowdsourcing to rule out false positives and identify false negatives.

  1. Mind your crossings: Mining GIS imagery for crosswalk localization

    PubMed Central

    Ahmetovic, Dragan; Manduchi, Roberto; Coughlan, James M.; Mascetti, Sergio

    2017-01-01

    For blind travelers, finding crosswalks and remaining within their borders while traversing them is a crucial part of any trip involving street crossings. While standard Orientation & Mobility (O&M) techniques allow blind travelers to safely negotiate street crossings, additional information about crosswalks and other important features at intersections would be helpful in many situations, resulting in greater safety and/or comfort during independent travel. For instance, in planning a trip a blind pedestrian may wish to be informed of the presence of all marked crossings near a desired route. We have conducted a survey of several O&M experts from the United States and Italy to determine the role that crosswalks play in travel by blind pedestrians. The results show stark differences between survey respondents from the U.S. compared with Italy: the former group emphasized the importance of following standard O&M techniques at all legal crossings (marked or unmarked), while the latter group strongly recommended crossing at marked crossings whenever possible. These contrasting opinions reflect differences in the traffic regulations of the two countries and highlight the diversity of needs that travelers in different regions may have. To address the challenges faced by blind pedestrians in negotiating street crossings, we devised a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm can be improved by a final crowdsourcing validation. To this end, we developed a Pedestrian Crossing Human Validation (PCHV) web service, which supports crowdsourcing to rule out false positives and identify false negatives. PMID:28757907

  2. IT Security Standards and Legal Metrology - Transfer and Validation

    NASA Astrophysics Data System (ADS)

    Thiel, F.; Hartmann, V.; Grottker, U.; Richter, D.

    2014-08-01

    Legal Metrology's requirements can be transferred into the IT security domain by applying a generic set of standardized rules provided by the Common Criteria (ISO/IEC 15408). We outline the transfer and cross-validation of such an approach. An example is the integration of Legal Metrology's requirements into a recently developed Common Criteria-based Protection Profile for a Smart Meter Gateway, designed under the leadership of Germany's Federal Office for Information Security. The requirements on utility meters laid down in the Measuring Instruments Directive (MID) are incorporated. A verification approach that checks whether Legal Metrology's requirements are met through their interpretation as Common Criteria's generic requirements is also presented.

  3. Measuring cognition in teams: a cross-domain review.

    PubMed

    Wildman, Jessica L; Salas, Eduardo; Scott, Charles P R

    2014-08-01

    The purpose of this article is twofold: to provide a critical cross-domain evaluation of team cognition measurement options and to provide novice researchers with practical guidance when selecting a measurement method. A vast selection of measurement approaches exist for measuring team cognition constructs including team mental models, transactive memory systems, team situation awareness, strategic consensus, and cognitive processes. Empirical studies and theoretical articles were reviewed to identify all of the existing approaches for measuring team cognition. These approaches were evaluated based on theoretical perspective assumed, constructs studied, resources required, level of obtrusiveness, internal consistency reliability, and predictive validity. The evaluations suggest that all existing methods are viable options from the point of view of reliability and validity, and that there are potential opportunities for cross-domain use. For example, methods traditionally used only to measure mental models may be useful for examining transactive memory and situation awareness. The selection of team cognition measures requires researchers to answer several key questions regarding the theoretical nature of team cognition and the practical feasibility of each method. We provide novice researchers with guidance regarding how to begin the search for a team cognition measure and suggest several new ideas regarding future measurement research. We provide (1) a broad overview and evaluation of existing team cognition measurement methods, (2) suggestions for new uses of those methods across research domains, and (3) critical guidance for novice researchers looking to measure team cognition.

  4. Predicting pathway cross-talks in ankylosing spondylitis through investigating the interactions among pathways.

    PubMed

    Gu, Xiang; Liu, Cong-Jian; Wei, Jian-Jie

    2017-11-13

    Given that the pathogenesis of ankylosing spondylitis (AS) remains unclear, the aim of this study was to detect potentially functional pathway cross-talk in AS to further reveal the pathogenesis of this disease. Using the microarray profile of AS and biological pathways as study objects, a Monte Carlo cross-validation method was used to identify the significant pathway cross-talks. In the process of Monte Carlo cross-validation, all steps were iterated 50 times. For each run, detection of differentially expressed genes (DEGs) between two groups was conducted. The extraction of the potentially disrupted pathways enriched with DEGs was then implemented. Subsequently, we established a discriminating score (DS) for each pathway pair according to the distribution of gene expression levels. After that, we utilized a random forest (RF) classification model to screen out the top 10 paired pathways with the highest area under the curve (AUC) values, computed using a 10-fold cross-validation approach. After 50 bootstraps, the best pairs of pathways were identified. According to their AUC values, the pair of pathways, antigen presentation pathway and fMLP signaling in neutrophils, achieved the best AUC value of 1.000, which indicated that this pathway cross-talk could distinguish AS patients from normal subjects. Moreover, the paired pathways of SAPK/JNK signaling and mitochondrial dysfunction were involved in 5 bootstraps. Two paired pathways (antigen presentation pathway and fMLP signaling in neutrophils, as well as SAPK/JNK signaling and mitochondrial dysfunction) can accurately distinguish AS and control samples. These paired pathways may help identify patients with AS for early intervention.
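
    The AUC screening step can be sketched as follows: a random forest is scored by 10-fold cross-validated AUC on the discriminating scores of one candidate pathway pair. The synthetic scores and all names below are placeholders for illustration only, not the study's data.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import StratifiedKFold, cross_val_score

      rng = np.random.default_rng(0)
      # hypothetical discriminating scores (DS) of one pathway pair, controls vs. AS patients
      ds = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(1.5, 1.0, (30, 2))])
      labels = np.repeat([0, 1], 30)

      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      auc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                            ds, labels, cv=cv, scoring="roc_auc")
      print("10-fold CV AUC for this pathway pair: %.3f" % auc.mean())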

  5. Predicting Classifier Performance with Limited Training Data: Applications to Computer-Aided Diagnosis in Breast and Prostate Cancer

    PubMed Central

    Basavanhally, Ajay; Viswanath, Satish; Madabhushi, Anant

    2015-01-01

    Clinical trials increasingly employ medical imaging data in conjunction with supervised classifiers, where the latter require large amounts of training data to accurately model the system. Yet, a classifier selected at the start of the trial based on smaller and more accessible datasets may yield inaccurate and unstable classification performance. In this paper, we aim to address two common concerns in classifier selection for clinical trials: (1) predicting expected classifier performance for large datasets based on error rates calculated from smaller datasets and (2) the selection of appropriate classifiers based on expected performance for larger datasets. We present a framework for comparative evaluation of classifiers using only limited amounts of training data by using random repeated sampling (RRS) in conjunction with a cross-validation sampling strategy. Extrapolated error rates are subsequently validated via comparison with leave-one-out cross-validation performed on a larger dataset. The ability to predict error rates as dataset size increases is demonstrated on both synthetic data as well as three different computational imaging tasks: detecting cancerous image regions in prostate histopathology, differentiating high and low grade cancer in breast histopathology, and detecting cancerous metavoxels in prostate magnetic resonance spectroscopy. For each task, the relationships between 3 distinct classifiers (k-nearest neighbor, naive Bayes, Support Vector Machine) are explored. Further quantitative evaluation in terms of interquartile range (IQR) suggests that our approach consistently yields error rates with lower variability (mean IQRs of 0.0070, 0.0127, and 0.0140) than a traditional RRS approach (mean IQRs of 0.0297, 0.0779, and 0.305) that does not employ cross-validation sampling for all three datasets. PMID:25993029

  6. Full-vectorial finite element method in a cylindrical coordinate system for loss analysis of photonic wire bends

    NASA Astrophysics Data System (ADS)

    Kakihara, Kuniaki; Kono, Naoya; Saitoh, Kunimasa; Koshiba, Masanori

    2006-11-01

    This paper presents a new full-vectorial finite-element method in a local cylindrical coordinate system, to effectively analyze bending losses in photonic wires. The discretization is performed in the cross section of a three-dimensional curved waveguide, using hybrid edge/nodal elements. The solution region is truncated by anisotropic, perfectly matched layers in the cylindrical coordinate system, to deal properly with leaky modes of the waveguide. This approach is used to evaluate bending losses in silicon wire waveguides. The numerical results of the present approach are compared with results calculated with an equivalent straight waveguide approach and with reported experimental data. These comparisons together demonstrate the validity of the present approach based on the cylindrical coordinate system and also clarify the limited validity of the equivalent straight waveguide approximation.

  7. [Maslach Burnout Inventory - Student Survey: Portugal-Brazil cross-cultural adaptation].

    PubMed

    Campos, Juliana Alvares Duarte Bonini; Maroco, João

    2012-10-01

    To perform a cross-cultural adaptation of the Portuguese version of the Maslach Burnout Inventory for students (MBI-SS), and investigate its reliability, validity and cross-cultural invariance. The face validity involved the participation of a multidisciplinary team. Content validity was performed. The Portuguese version was completed in 2009, on the internet, by 958 Brazilian and 556 Portuguese university students from the urban area. Confirmatory factor analysis was carried out using as fit indices: the χ²/df, the Comparative Fit Index (CFI), the Goodness of Fit Index (GFI) and the Root Mean Square Error of Approximation (RMSEA). To verify the stability of the factor solution according to the original English version, cross-validation was performed in 2/3 of the total sample and replicated in the remaining 1/3. Convergent validity was estimated by the average variance extracted and composite reliability. The discriminant validity was assessed, and the internal consistency was estimated by the Cronbach's alpha coefficient. Concurrent validity was estimated by the correlational analysis of the mean scores of the Portuguese version and the Copenhagen Burnout Inventory, and the divergent validity was compared to the Beck Depression Inventory. The invariance of the model between the Brazilian and the Portuguese samples was assessed. The three-factor model of Exhaustion, Disengagement and Efficacy showed good fit (χ²/df = 8.498, CFI = 0.916, GFI = 0.902, RMSEA = 0.086). The factor structure was stable (λ:χ²dif = 11.383, p = 0.50; Cov: χ²dif = 6.479, p = 0.372; Residues: χ²dif = 21.514, p = 0.121). Adequate convergent validity (VEM = 0.45;0.64, CC = 0.82;0.88), discriminant validity (ρ² = 0.06;0.33) and internal consistency (α = 0.83;0.88) were observed. The concurrent validity of the Portuguese version with the Copenhagen Inventory was adequate (r = 0.21, 0.74). The assessment of the divergent validity was impaired by the closeness of the theoretical concepts of the Exhaustion and Disengagement dimensions of the Portuguese version to the Beck Depression Inventory. Invariance of the instrument between the Brazilian and Portuguese samples was not observed (λ:χ²dif = 84.768, p<0.001; Cov: χ²dif = 129.206, p < 0.001; Residues: χ²dif = 518.760, p < 0.001). The Portuguese version of the Maslach Burnout Inventory for students showed adequate reliability and validity, but its factor structure was not invariant between the countries, indicating the absence of cross-cultural stability.

  8. Developing Enhanced Blood–Brain Barrier Permeability Models: Integrating External Bio-Assay Data in QSAR Modeling

    PubMed Central

    Wang, Wenyi; Kim, Marlene T.; Sedykh, Alexander

    2015-01-01

    Purpose Experimental Blood–Brain Barrier (BBB) permeability models for drug molecules are expensive and time-consuming. As alternative methods, several traditional Quantitative Structure-Activity Relationship (QSAR) models have been developed previously. In this study, we aimed to improve the predictivity of traditional QSAR BBB permeability models by employing relevant public bio-assay data in the modeling process. Methods We compiled a BBB permeability database consisting of 439 unique compounds from various resources. The database was split into a modeling set of 341 compounds and a validation set of 98 compounds. Consensus QSAR modeling workflow was employed on the modeling set to develop various QSAR models. A five-fold cross-validation approach was used to validate the developed models, and the resulting models were used to predict the external validation set compounds. Furthermore, we used previously published membrane transporter models to generate relevant transporter profiles for target compounds. The transporter profiles were used as additional biological descriptors to develop hybrid QSAR BBB models. Results The consensus QSAR models have R2=0.638 for fivefold cross-validation and R2=0.504 for external validation. The consensus model developed by pooling chemical and transporter descriptors showed better predictivity (R2=0.646 for five-fold cross-validation and R2=0.526 for external validation). Moreover, several external bio-assays that correlate with BBB permeability were identified using our automatic profiling tool. Conclusions The BBB permeability models developed in this study can be useful for early evaluation of new compounds (e.g., new drug candidates). The combination of chemical and biological descriptors shows a promising direction to improve the current traditional QSAR models. PMID:25862462
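
    The gain from pooling chemical and transporter descriptors can be checked with a five-fold cross-validated R², as sketched below. A single random forest regressor stands in here for the consensus QSAR workflow, and the synthetic descriptor blocks and names are placeholders, not the compiled BBB database.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.metrics import r2_score
      from sklearn.model_selection import KFold, cross_val_predict

      rng = np.random.default_rng(1)
      n = 341
      chem = rng.normal(size=(n, 50))        # chemical descriptors (placeholder)
      transp = rng.normal(size=(n, 6))       # transporter-profile descriptors (placeholder)
      logbb = chem[:, 0] - 0.5 * transp[:, 0] + 0.3 * rng.normal(size=n)

      cv = KFold(n_splits=5, shuffle=True, random_state=0)
      for name, X in [("chemical only", chem),
                      ("chemical + transporter", np.hstack([chem, transp]))]:
          pred = cross_val_predict(RandomForestRegressor(n_estimators=200, random_state=0),
                                   X, logbb, cv=cv)
          print("%s: five-fold CV R2 = %.3f" % (name, r2_score(logbb, pred)))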

  9. Desiderata: Towards Indigenous Models of Vocational Psychology

    ERIC Educational Resources Information Center

    Leong, Frederick T. L.; Pearce, Marina

    2011-01-01

    As a result of a relative lack of cross-cultural validity in most current (Western) psychological models, indigenous models of psychology have recently become a popular approach for understanding behaviour in specific cultures. Such models would be valuable to vocational psychology research with culturally diverse populations. Problems facing…

  10. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    Functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge-regression shrinkage-type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for a fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using a mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning a different penalty λ to each coefficient. We demonstrate the proposed approach by both simulation examples and a real data application.
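
    Generalized cross-validation for the smoothing parameter λ can be sketched on a simple univariate smoother as below; a truncated-power basis with a second-order difference penalty stands in for the paper's P-spline functional coefficient setting, and every name is illustrative.

      import numpy as np

      def pspline_gcv(x, y, n_knots=20, lambdas=np.logspace(-4, 4, 50)):
          # penalized least squares with GCV(lambda) = n * RSS / (n - tr(H))^2
          knots = np.linspace(x.min(), x.max(), n_knots + 2)[1:-1]
          B = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                              [np.clip(x - k, 0, None)**3 for k in knots])
          D = np.diff(np.eye(B.shape[1]), n=2, axis=0)         # second-order differences
          P = D.T @ D
          n = len(y)
          best = (np.inf, None, None)
          for lam in lambdas:
              H = B @ np.linalg.solve(B.T @ B + lam * P, B.T)  # hat matrix for this lambda
              resid = y - H @ y
              gcv = n * np.sum(resid**2) / (n - np.trace(H))**2
              if gcv < best[0]:
                  best = (gcv, lam, H @ y)
          return best                                           # (GCV score, lambda, fitted values)

      rng = np.random.default_rng(0)
      x = np.sort(rng.uniform(0, 1, 200))
      y = np.sin(4 * np.pi * x) + 0.3 * rng.normal(size=200)
      gcv_score, lam, fit = pspline_gcv(x, y)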

  11. Systematic bias of correlation coefficient may explain negative accuracy of genomic prediction.

    PubMed

    Zhou, Yao; Vales, M Isabel; Wang, Aoxue; Zhang, Zhiwu

    2017-09-01

    Accuracy of genomic prediction is commonly calculated as the Pearson correlation coefficient between the predicted and observed phenotypes in the inference population by using cross-validation analysis. More frequently than expected, significant negative accuracies of genomic prediction have been reported in genomic selection studies. These negative values are surprising, given that the minimum value for prediction accuracy should hover around zero when randomly permuted data sets are analyzed. We reviewed the two common approaches for calculating the Pearson correlation and hypothesized that these negative accuracy values reflect potential bias owing to artifacts caused by the mathematical formulas used to calculate prediction accuracy. The first approach, Instant accuracy, calculates correlations for each fold and reports prediction accuracy as the mean of correlations across folds. The other approach, Hold accuracy, predicts all phenotypes in all folds and calculates the correlation between the observed and predicted phenotypes at the end of the cross-validation process. Using simulated and real data, we demonstrated that our hypothesis is true. Both approaches are biased downward under certain conditions. The biases become larger when more folds are employed and when the expected accuracy is low. The bias of Instant accuracy can be corrected using a modified formula. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
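
    The two accuracy definitions discussed above can be written down directly, as in the sketch below; ridge regression is used only as a stand-in for a genomic prediction model, and the null data simply illustrate how the estimates behave when the expected accuracy is zero.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import KFold

      def instant_and_hold_accuracy(X, y, n_folds=10, seed=0):
          # Instant accuracy: mean of per-fold correlations between observed and
          # predicted phenotypes. Hold accuracy: one correlation computed after
          # all folds have been predicted.
          kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
          fold_r, pred = [], np.empty(len(y))
          for train, test in kf.split(X):
              model = Ridge(alpha=1.0).fit(X[train], y[train])
              pred[test] = model.predict(X[test])
              fold_r.append(np.corrcoef(y[test], pred[test])[0, 1])
          return np.mean(fold_r), np.corrcoef(y, pred)[0, 1]

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 500))      # markers with no true effect
      y = rng.normal(size=100)             # null phenotype
      instant, hold = instant_and_hold_accuracy(X, y)
      print("Instant accuracy %.3f, Hold accuracy %.3f" % (instant, hold))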

  12. Cognitive—Motor Interference in an Ecologically Valid Street Crossing Scenario

    PubMed Central

    Janouch, Christin; Drescher, Uwe; Wechsler, Konstantin; Haeger, Mathias; Bock, Otmar; Voelcker-Rehage, Claudia

    2018-01-01

    Laboratory-based research revealed that gait involves higher cognitive processes, leading to performance impairments when executed with a concurrent loading task. Deficits are especially pronounced in older adults. Theoretical approaches like the multiple resource model highlight the role of task similarity and associated attention distribution problems. It has been shown that in cases where these distribution problems are perceived relevant to participant's risk of falls, older adults prioritize gait and posture over the concurrent loading task. Here we investigate whether findings on task similarity and task prioritization can be transferred to an ecologically valid scenario. Sixty-three younger adults (20–30 years of age) and 61 older adults (65–75 years of age) participated in a virtual street crossing simulation. The participants' task was to identify suitable gaps that would allow them to cross a simulated two way street safely. Therefore, participants walked on a manual treadmill that transferred their forward motion to forward displacements in a virtual city. The task was presented as a single task (crossing only) and as a multitask. In the multitask condition participants were asked, among others, to type in three digit numbers that were presented either visually or auditorily. We found that for both age groups, street crossing as well as typing performance suffered under multitasking conditions. Impairments were especially pronounced for older adults (e.g., longer crossing initiation phase, more missed opportunities). However, younger and older adults did not differ in the speed and success rate of crossing. Further, deficits were stronger in the visual compared to the auditory task modality for most parameters. Our findings conform to earlier studies that found an age-related decline in multitasking performance in less realistic scenarios. However, task similarity effects were inconsistent and question the validity of the multiple resource model within ecologically valid scenarios. PMID:29774001

  13. Classification of malignant and benign liver tumors using a radiomics approach

    NASA Astrophysics Data System (ADS)

    Starmans, Martijn P. A.; Miclea, Razvan L.; van der Voort, Sebastian R.; Niessen, Wiro J.; Thomeer, Maarten G.; Klein, Stefan

    2018-03-01

    Correct diagnosis of the liver tumor phenotype is crucial for treatment planning, especially the distinction between malignant and benign lesions. Clinical practice includes manual scoring of the tumors on Magnetic Resonance (MR) images by a radiologist. As this is challenging and subjective, it is often followed by a biopsy. In this study, we propose a radiomics approach as an objective and non-invasive alternative for distinguishing between malignant and benign phenotypes. T2-weighted (T2w) MR sequences of 119 patients from multiple centers were collected. We developed an efficient semi-automatic segmentation method, which was used by a radiologist to delineate the tumors. Within these regions, features quantifying tumor shape, intensity, texture, heterogeneity and orientation were extracted. Patient characteristics and semantic features were added for a total of 424 features. Classification was performed using Support Vector Machines (SVMs). The performance was evaluated using internal random-split cross-validation. On the training set within each iteration, feature selection and hyperparameter optimization were performed. To this end, another cross-validation was performed by splitting the training sets into training and validation parts. The optimal settings were evaluated on the independent test sets. Manual scoring by a radiologist was also performed. The radiomics approach resulted in 95% confidence intervals of the AUC of [0.75, 0.92], specificity [0.76, 0.96] and sensitivity [0.52, 0.82]. These approach the performance of the radiologist, who achieved an AUC of 0.93, a specificity of 0.70, and a sensitivity of 0.93. Hence, radiomics has the potential to predict liver tumor benignity in an objective and non-invasive manner.
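
    The evaluation loop described above corresponds to a nested cross-validation; the sketch below shows one way to set it up, with random splits outside and a grid search inside. The feature matrix, parameter grid, and use of AUC percentiles as a confidence interval are assumptions made for illustration, not the authors' exact protocol.

      import numpy as np
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.model_selection import GridSearchCV, StratifiedShuffleSplit, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(119, 424))                  # placeholder radiomics features
      y = rng.integers(0, 2, size=119)                 # placeholder benign/malignant labels

      pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif), SVC(kernel="rbf"))
      grid = {"selectkbest__k": [10, 50, 100],
              "svc__C": [0.1, 1, 10],
              "svc__gamma": ["scale", 0.01]}
      inner = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=0)   # tuning splits
      outer = StratifiedShuffleSplit(n_splits=25, test_size=0.2, random_state=1)  # evaluation splits
      search = GridSearchCV(pipe, grid, cv=inner, scoring="roc_auc")
      auc = cross_val_score(search, X, y, cv=outer, scoring="roc_auc")
      low, high = np.percentile(auc, [2.5, 97.5])
      print("AUC over random splits: %.2f [%.2f, %.2f]" % (auc.mean(), low, high))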

  14. Predictive Validity of an Empirical Approach for Selecting Promising Message Topics: A Randomized-Controlled Study

    PubMed Central

    Lee, Stella Juhyun; Brennan, Emily; Gibson, Laura Anne; Tan, Andy S. L.; Kybert-Momjian, Ani; Liu, Jiaying; Hornik, Robert

    2016-01-01

    Several message topic selection approaches propose that messages based on beliefs pretested and found to be more strongly associated with intentions will be more effective in changing population intentions and behaviors when used in a campaign. This study aimed to validate the underlying causal assumption of these approaches which rely on cross-sectional belief–intention associations. We experimentally tested whether messages addressing promising themes as identified by the above criterion were more persuasive than messages addressing less promising themes. Contrary to expectations, all messages increased intentions. Interestingly, mediation analyses showed that while messages deemed promising affected intentions through changes in targeted promising beliefs, messages deemed less promising also achieved persuasion by influencing nontargeted promising beliefs. Implications for message topic selection are discussed. PMID:27867218

  15. COMPUTING THERAPY FOR PRECISION MEDICINE: COLLABORATIVE FILTERING INTEGRATES AND PREDICTS MULTI-ENTITY INTERACTIONS.

    PubMed

    Regenbogen, Sam; Wilkins, Angela D; Lichtarge, Olivier

    2016-01-01

    Biomedicine produces copious information it cannot fully exploit. Specifically, there is considerable need to integrate knowledge from disparate studies to discover connections across domains. Here, we used a Collaborative Filtering approach, inspired by online recommendation algorithms, in which non-negative matrix factorization (NMF) predicts interactions among chemicals, genes, and diseases only from pairwise information about their interactions. Our approach, applied to matrices derived from the Comparative Toxicogenomics Database, successfully recovered Chemical-Disease, Chemical-Gene, and Disease-Gene networks in 10-fold cross-validation experiments. Additionally, we could predict each of these interaction matrices from the other two. Integrating all three CTD interaction matrices with NMF led to good predictions of STRING, an independent, external network of protein-protein interactions. Finally, this approach could integrate the CTD and STRING interaction data to improve Chemical-Gene cross-validation performance significantly, and, in a time-stamped study, it predicted information added to CTD after a given date, using only data prior to that date. We conclude that collaborative filtering can integrate information across multiple types of biological entities, and that as a first step towards precision medicine it can compute drug repurposing hypotheses.
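
    The hold-out evaluation of the factorization can be sketched with a masked NMF fitted by the standard multiplicative updates restricted to observed entries; the interaction matrix, rank, and names below are placeholders, not the CTD-derived matrices used in the study.

      import numpy as np

      def masked_nmf(A, mask, rank=10, n_iter=500, seed=0, eps=1e-9):
          # non-negative factorization fitted only where mask == 1
          rng = np.random.default_rng(seed)
          W = rng.random((A.shape[0], rank))
          H = rng.random((rank, A.shape[1]))
          for _ in range(n_iter):
              W *= ((mask * A) @ H.T) / ((mask * (W @ H)) @ H.T + eps)
              H *= (W.T @ (mask * A)) / (W.T @ (mask * (W @ H)) + eps)
          return W, H

      rng = np.random.default_rng(1)
      A = (rng.random((200, 300)) < 0.05).astype(float)   # sparse interaction matrix
      known = np.argwhere(A > 0)
      held = known[rng.choice(len(known), size=len(known) // 10, replace=False)]
      mask = np.ones_like(A)
      mask[held[:, 0], held[:, 1]] = 0                    # hide one "fold" of interactions
      W, H = masked_nmf(A * mask, mask)
      recovered = (W @ H)[held[:, 0], held[:, 1]]         # predicted scores for the hidden pairs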

  16. COMPUTING THERAPY FOR PRECISION MEDICINE: COLLABORATIVE FILTERING INTEGRATES AND PREDICTS MULTI-ENTITY INTERACTIONS

    PubMed Central

    REGENBOGEN, SAM; WILKINS, ANGELA D.; LICHTARGE, OLIVIER

    2015-01-01

    Biomedicine produces copious information it cannot fully exploit. Specifically, there is considerable need to integrate knowledge from disparate studies to discover connections across domains. Here, we used a Collaborative Filtering approach, inspired by online recommendation algorithms, in which non-negative matrix factorization (NMF) predicts interactions among chemicals, genes, and diseases only from pairwise information about their interactions. Our approach, applied to matrices derived from the Comparative Toxicogenomics Database, successfully recovered Chemical-Disease, Chemical-Gene, and Disease-Gene networks in 10-fold cross-validation experiments. Additionally, we could predict each of these interaction matrices from the other two. Integrating all three CTD interaction matrices with NMF led to good predictions of STRING, an independent, external network of protein-protein interactions. Finally, this approach could integrate the CTD and STRING interaction data to improve Chemical-Gene cross-validation performance significantly, and, in a time-stamped study, it predicted information added to CTD after a given date, using only data prior to that date. We conclude that collaborative filtering can integrate information across multiple types of biological entities, and that as a first step towards precision medicine it can compute drug repurposing hypotheses. PMID:26776170

  17. On-Demand Associative Cross-Language Information Retrieval

    NASA Astrophysics Data System (ADS)

    Geraldo, André Pinto; Moreira, Viviane P.; Gonçalves, Marcos A.

    This paper proposes the use of algorithms for mining association rules as an approach for Cross-Language Information Retrieval. These algorithms have been widely used to analyse market basket data. The idea is to map the problem of finding associations between sales items to the problem of finding term translations over a parallel corpus. The proposal was validated by means of experiments using queries in two distinct languages: Portuguese and Finnish to retrieve documents in English. The results show that the performance of our proposed approach is comparable to the performance of the monolingual baseline and to query translation via machine translation, even though these systems employ more complex Natural Language Processing techniques. The combination between machine translation and our approach yielded the best results, even outperforming the monolingual baseline.
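
    The mapping from market-basket mining to term translation can be sketched by treating each aligned document pair as a transaction and scoring rules of the form source term -> target term by support and confidence; the toy corpus and names are purely illustrative, and in practice stop words would be filtered before counting.

      from collections import Counter

      def translation_candidates(parallel_corpus, source_term, min_support=2):
          # support of a rule = number of aligned pairs containing both terms;
          # confidence = support / number of source documents containing the term
          support_s, joint = 0, Counter()
          for src_doc, tgt_doc in parallel_corpus:
              if source_term in set(src_doc.split()):
                  support_s += 1
                  joint.update(set(tgt_doc.split()))
          if support_s == 0:
              return []
          return sorted(((t, n / support_s) for t, n in joint.items() if n >= min_support),
                        key=lambda p: -p[1])

      corpus = [("economia do pais", "economy of the country"),
                ("crise na economia", "crisis in the economy"),
                ("pais em crise", "country in crisis")]
      print(translation_candidates(corpus, "economia", min_support=1))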

  18. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors (aligned with the EURO-CORDEX experiment), and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including both data (downscaled values) and metadata (characterizing different aspects of the downscaling methods). This constitutes the largest and most comprehensive intercomparison of statistical downscaling methods to date. Here, we present an overall validation, analyzing marginal and temporal aspects to assess the intrinsic performance and added value of statistical downscaling methods at both annual and seasonal levels. This validation takes into account the different properties/limitations of different approaches and techniques (as reported in the provided metadata) in order to perform a fair comparison. It is pointed out that this experiment alone is not sufficient to evaluate the limitations of (MOS) bias correction techniques. Moreover, it also does not fully validate PP since we don't learn whether we have the right predictors and whether the PP assumption is valid. These problems will be analyzed in the subsequent community-open VALUE experiments 2) and 3), which will be open for participation throughout the present year.

  19. An elevated plus-maze in mixed reality for studying human anxiety-related behavior.

    PubMed

    Biedermann, Sarah V; Biedermann, Daniel G; Wenzlaff, Frederike; Kurjak, Tim; Nouri, Sawis; Auer, Matthias K; Wiedemann, Klaus; Briken, Peer; Haaker, Jan; Lonsdorf, Tina B; Fuss, Johannes

    2017-12-21

    A dearth of laboratory tests to study actual human approach-avoidance behavior has complicated translational research on anxiety. The elevated plus-maze (EPM) is the gold standard to assess approach-avoidance behavior in rodents. Here, we translated the EPM to humans using mixed reality through a combination of virtual and real-world elements. In two validation studies, we observed participants' anxiety on a behavioral, physiological, and subjective level. Participants reported higher anxiety on open arms, avoided open arms, and showed an activation of endogenous stress systems. Participants with high anxiety exhibited higher avoidance. Moreover, open arm avoidance was moderately predicted by participants' acrophobia and sensation seeking, with opposing influences. In a randomized, double-blind, placebo-controlled experiment, GABAergic stimulation decreased avoidance of open arms while alpha-2-adrenergic antagonism increased avoidance. These findings demonstrate cross-species validity of open arm avoidance as a translational measure of anxiety. We thus introduce the first ecologically valid assay to track actual human approach-avoidance behavior under laboratory conditions.

  20. Cross Validation of Two Partitioning-Based Sampling Approaches in Mesocosms Containing PCB Contaminated Field Sediment, Biota, and Activated Carbon Amendment

    EPA Science Inventory

    The Gold Standard for determining freely dissolved concentrations (Cfree) of hydrophobic organic compounds in sediment interstitial water would be in situ deployment combined with equilibrium sampling, which is generally difficult to achieve. In the present study, ex situ equilib...

  1. Precision Efficacy Analysis for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.

    When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross-validity approach to select sample sizes…

  2. Assessing the Approaches to Learning of Nigerian Students.

    ERIC Educational Resources Information Center

    Watkins, David; Akande, Adebowale

    1992-01-01

    A study investigated the reliability and validity of the Study Process Questionnaire (Biggs) for 352 undergraduates in a Nigerian university. Although it was found that the concepts involved were relevant to this population and scales and subscales had adequate internal consistency, cross-cultural comparison of scores was problematic. (Author/MSE)

  3. Approaches to Validate and Manipulate RNA Targets with Small Molecules in Cells.

    PubMed

    Childs-Disney, Jessica L; Disney, Matthew D

    2016-01-01

    RNA has become an increasingly important target for therapeutic interventions and for chemical probes that dissect and manipulate its cellular function. Emerging targets include human RNAs that have been shown to directly cause cancer, metabolic disorders, and genetic disease. In this review, we describe various routes to obtain bioactive compounds that target RNA, with a particular emphasis on the development of small molecules. We use these cases to describe approaches that are being developed for target validation, which include target-directed cleavage, classic pull-down experiments, and covalent cross-linking. Thus, tools are available to design small molecules to target RNA and to identify the cellular RNAs that are their targets.

  4. NPOESS Preparatory Project Validation Program for the Cross-track Infrared Sounder

    NASA Astrophysics Data System (ADS)

    Barnet, C.; Gu, D.; Nalli, N. R.

    2009-12-01

    The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Program, in partnership with the National Aeronautics and Space Administration (NASA), will launch the NPOESS Preparatory Project (NPP), a risk reduction and data continuity mission, prior to the first operational NPOESS launch. The NPOESS Program, in partnership with Northrop Grumman Aerospace Systems, will execute the NPP Calibration and Validation (Cal/Val) program to ensure the data products comply with the requirements of the sponsoring agencies. The Cross-track Infrared Sounder (CrIS) and the Advanced Technology Microwave Sounder (ATMS) are two of the instruments that make up the suite of sensors on NPP. Together, CrIS and ATMS will produce three Environmental Data Records (EDRs) including the Atmospheric Vertical Temperature Profile (AVTP), Atmospheric Vertical Moisture Profile (AVMP), and the Atmospheric Vertical Pressure Profile (AVPP). The AVTP and the AVMP are both NPOESS Key Performance Parameters (KPPs). The validation plans establish science and user community leadership and participation, and demonstrated, cost-effective Cal/Val approaches. This presentation will provide an overview of the collaborative data, techniques, and schedule for the validation of the NPP CrIS and ATMS environmental data products.

  5. Monte Carlo-based validation of neutronic methodology for EBR-II analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liaw, J.R.; Finck, P.J.

    1993-01-01

    The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes. A high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory, and it is used in calculations for day-to-day reactor operations, whereas the VARIANT code is based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC²-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.

  6. Post-partum depression in Kinshasa, Democratic Republic of Congo: validation of a concept using a mixed-methods cross-cultural approach.

    PubMed

    Bass, Judith K; Ryder, Robert W; Lammers, Marie-Christine; Mukaba, Thibaut N; Bolton, Paul A

    2008-12-01

    To determine if a post-partum depression syndrome exists among mothers in Kinshasa, Democratic Republic of Congo, by adapting and validating standard screening instruments. Using qualitative interviewing techniques, we interviewed a convenience sample of 80 women living in a large peri-urban community to better understand local conceptions of mental illness. We used this information to adapt two standard depression screeners, the Edinburgh Post-partum Depression Scale and the Hopkins Symptom Checklist. In a subsequent quantitative study, we identified another 133 women with and without the local depression syndrome and used this information to validate the adapted screening instruments. Based on the qualitative data, we found a local syndrome that closely approximates the Western model of major depressive disorder. The women we interviewed, representative of the local populace, considered this an important syndrome among new mothers because it negatively affects women and their young children. Women (n = 41) identified as suffering from this syndrome had statistically significantly higher depression severity scores on both adapted screeners than women identified as not having this syndrome (n = 20; P < 0.0001). When it is unclear or unknown if Western models of psychopathology are appropriate for use in the local context, these models must be validated to ensure cross-cultural applicability. Using a mixed-methods approach we found a local syndrome similar to depression and validated instruments to screen for this disorder. As the importance of compromised mental health in developing world populations becomes recognized, the methods described in this report will be useful more widely.

  7. Online cross-validation-based ensemble learning.

    PubMed

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Cross-cultural adaptation and validation of the teamwork climate scale

    PubMed Central

    Silva, Mariana Charantola; Peduzzi, Marina; Sangaleti, Carine Teles; da Silva, Dirceu; Agreli, Heloise Fernandes; West, Michael A; Anderson, Neil R

    2016-01-01

    ABSTRACT OBJECTIVE To adapt and validate the Team Climate Inventory scale, of teamwork climate measurement, for the Portuguese language, in the context of primary health care in Brazil. METHODS Methodological study with quantitative approach of cross-cultural adaptation (translation, back-translation, synthesis, expert committee, and pretest) and validation with 497 employees from 72 teams of the Family Health Strategy in the city of Campinas, SP, Southeastern Brazil. We verified reliability by the Cronbach’s alpha, construct validity by the confirmatory factor analysis with SmartPLS software, and correlation by the job satisfaction scale. RESULTS We problematized the overlap of items 9, 11, and 12 of the “participation in the team” factor and the “team goals” factor regarding its definition. The validation showed no overlapping of items and the reliability ranged from 0.92 to 0.93. The confirmatory factor analysis indicated suitability of the proposed model with distribution of the 38 items in the four factors. The correlation between teamwork climate and job satisfaction was significant. CONCLUSIONS The version of the scale in Brazilian Portuguese was validated and can be used in the context of primary health care in the Country, constituting an adequate tool for the assessment and diagnosis of teamwork. PMID:27556966

  9. Optimal Combinations of Diagnostic Tests Based on AUC.

    PubMed

    Huang, Xin; Qin, Gengsheng; Fang, Yixin

    2011-06-01

    When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimation by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, these proposed methods can be applied for variable selection to select important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples. © 2010, The International Biometric Society.
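
    A minimal sketch of the adjustment discussed above: the combination coefficients are chosen to maximize the empirical (Mann-Whitney) AUC on training folds, and the resulting cross-validated AUC is compared with the optimistic re-substitution estimate. The grid search over a direction angle stands in for the nonparametric coefficient estimation, and all data below are synthetic.

      import numpy as np

      def auc(scores, labels):
          # empirical AUC via the Mann-Whitney statistic
          pos, neg = scores[labels == 1], scores[labels == 0]
          return (pos[:, None] > neg).mean() + 0.5 * (pos[:, None] == neg).mean()

      def best_combination(X, y, angles=np.linspace(0, np.pi, 181)):
          # grid search over directions (cos a, sin a) maximizing the apparent AUC
          a = max(angles, key=lambda a: auc(X @ np.array([np.cos(a), np.sin(a)]), y))
          return np.array([np.cos(a), np.sin(a)])

      rng = np.random.default_rng(0)
      n = 120
      y = np.repeat([0, 1], n // 2)
      X = rng.normal(size=(n, 2)) + 0.7 * y[:, None]       # two correlated diagnostic tests

      apparent = auc(X @ best_combination(X, y), y)        # re-substitution estimate
      folds, cv = np.array_split(rng.permutation(n), 5), []
      for test in folds:
          train = np.setdiff1d(np.arange(n), test)
          w = best_combination(X[train], y[train])
          cv.append(auc(X[test] @ w, y[test]))
      print("apparent AUC %.3f, cross-validated AUC %.3f" % (apparent, np.mean(cv)))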

  10. Solid-state NMR characterization of cross-linking in EPDM/PP blends from 1H-13C polarization transfer dynamics.

    PubMed

    Aluas, Mihaela; Filip, Claudiu

    2005-05-01

    A novel approach for solid-state NMR characterization of cross-linking in polymer blends from the analysis of (1)H-(13)C polarization transfer dynamics is introduced. It extends the model of residual dipolar couplings under permanent cross-linking, typically used to describe (1)H transverse relaxation techniques, by considering a more realistic distribution of the order parameter along a polymer chain in rubbers. Based on a systematic numerical analysis, the extended model was shown to accurately reproduce all the characteristic features of the cross-polarization curves measured on such materials. This is particularly important for investigating blends of great technological potential, like thermoplastic elastomers, where (13)C high-resolution techniques, such as CP-MAS, are indispensable to selectively investigate structural and dynamical properties of the desired component. The validity of the new approach was demonstrated using the example of the CP build-up curves measured on a well resolved EPDM resonance line in a series of EPDM/PP blends.

  11. Concurrent Validity of the International Family Quality of Life Survey.

    PubMed

    Samuel, Preethy S; Pociask, Fredrick D; DiZazzo-Miller, Rosanne; Carrellas, Ann; LeRoy, Barbara W

    2016-01-01

    The measurement of the social construct of Family Quality of Life (FQOL) is a parsimonious alternative to the current approach of measuring familial outcomes using a battery of tools related to individual-level outcomes. The purpose of this study was to examine the internal consistency and concurrent validity of the International FQOL Survey (FQOLS-2006), using cross-sectional data collected from 65 family caregivers of children with developmental disabilities. It shows a moderate correlation between the total FQOL scores of the FQOLS-2006 and the Beach Center's FQOL scale. The validity of five FQOLS-2006 domains was supported by the correlations between conceptually related domains.

  12. Crossing trend analysis methodology and application for Turkish rainfall records

    NASA Astrophysics Data System (ADS)

    Şen, Zekâi

    2018-01-01

    Trend analyses are the necessary tools for depicting possible general increase or decrease in a given time series. There are many versions of trend identification methodologies such as the Mann-Kendall trend test, Spearman's tau, Sen's slope, regression line, and Şen's innovative trend analysis. The literature has many papers about the use, cons and pros, and comparisons of these methodologies. In this paper, a completely new approach is proposed based on the crossing properties of a time series. It is suggested that the suitable trend from the centroid of the given time series should have the maximum number of crossings (total number of up-crossings or down-crossings). This approach is applicable whether the time series has dependent or independent structure and also without any dependence on the type of the probability distribution function. The validity of this method is presented through extensive Monte Carlo simulation technique and its comparison with other existing trend identification methodologies. The application of the methodology is presented for a set of annual daily extreme rainfall time series from different parts of Turkey and they have physically independent structure.
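
    The crossing criterion can be sketched directly: for candidate trend lines through the centroid of the series, count the up-crossings of each line by the data and keep the slope with the largest count. The slope grid and the synthetic series below are illustrative assumptions, not the Turkish rainfall records analysed in the paper.

      import numpy as np

      def crossing_trend_slope(y, slopes=np.linspace(-0.5, 0.5, 501)):
          t = np.arange(len(y), dtype=float)
          tc, yc = t.mean(), y.mean()                      # centroid of the series
          def up_crossings(slope):
              above = y > yc + slope * (t - tc)
              return int(np.sum(~above[:-1] & above[1:]))  # below-to-above transitions
          counts = np.array([up_crossings(s) for s in slopes])
          return slopes[np.argmax(counts)], counts.max()

      rng = np.random.default_rng(0)
      series = 50 + 0.08 * np.arange(80) + rng.normal(0, 5, 80)   # rainfall-like series
      slope, n_crossings = crossing_trend_slope(series)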

  13. Crossing Fibers Detection with an Analytical High Order Tensor Decomposition

    PubMed Central

    Megherbi, T.; Kachouane, M.; Oulebsir-Boumghar, F.; Deriche, R.

    2014-01-01

    Diffusion magnetic resonance imaging (dMRI) is the only technique to probe in vivo and noninvasively the fiber structure of human brain white matter. Detecting the crossing of neuronal fibers remains an exciting challenge with an important impact in tractography. In this work, we tackle this challenging problem and propose an original and efficient technique to extract all crossing fibers from diffusion signals. To this end, we start by estimating, from the dMRI signal, the so-called Cartesian tensor fiber orientation distribution (CT-FOD) function, whose maxima correspond exactly to the orientations of the fibers. The fourth order symmetric positive definite tensor that represents the CT-FOD is then analytically decomposed via the application of a new theoretical approach and this decomposition is used to accurately extract all the fiber orientations. Our proposed high order tensor decomposition based approach is minimal and allows recovering all the crossing fibers without any a priori information on the total number of fibers. Various experiments performed on noisy synthetic data, on phantom diffusion data, and on human brain data validate our approach and clearly demonstrate that it is efficient, robust to noise and performs favorably in terms of angular resolution and accuracy when compared to some classical and state-of-the-art approaches. PMID:25246940

  14. Scaling Cross Sections for Ion-atom Impact Ionization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Igor D. Kaganovich; Edward Startsev; Ronald C. Davidson

    2003-06-06

    The values of ion-atom ionization cross sections are frequently needed for many applications that utilize the propagation of fast ions through matter. When experimental data and theoretical calculations are not available, approximate formulas are frequently used. This paper briefly summarizes the most important theoretical results and approaches to cross section calculations in order to place the discussion in historical perspective and offer a concise introduction to the topic. Based on experimental data and theoretical predictions, a new fit for ionization cross sections is proposed. The range of validity and accuracy of several frequently used approximations (classical trajectory, the Born approximation, and so forth) are discussed using, as examples, the ionization cross sections of hydrogen and helium atoms by various fully stripped ions.

  15. Development and Validation of the Behavioral Tendencies Questionnaire

    PubMed Central

    Van Dam, Nicholas T.; Brown, Anna; Mole, Tom B.; Davis, Jake H.; Britton, Willoughby B.; Brewer, Judson A.

    2015-01-01

    At a fundamental level, taxonomy of behavior and behavioral tendencies can be described in terms of approach, avoid, or equivocate (i.e., neither approach nor avoid). While there are numerous theories of personality, temperament, and character, few seem to take advantage of parsimonious taxonomy. The present study sought to implement this taxonomy by creating a questionnaire based on a categorization of behavioral temperaments/tendencies first identified in Buddhist accounts over fifteen hundred years ago. Items were developed using historical and contemporary texts of the behavioral temperaments, described as “Greedy/Faithful”, “Aversive/Discerning”, and “Deluded/Speculative”. To both maintain this categorical typology and benefit from the advantageous properties of forced-choice response format (e.g., reduction of response biases), binary pairwise preferences for items were modeled using Latent Class Analysis (LCA). One sample (n1 = 394) was used to estimate the item parameters, and the second sample (n2 = 504) was used to classify the participants using the established parameters and cross-validate the classification against multiple other measures. The cross-validated measure exhibited good nomothetic span (construct-consistent relationships with related measures) that seemed to corroborate the ideas present in the original Buddhist source documents. The final 13-block questionnaire created from the best performing items (the Behavioral Tendencies Questionnaire or BTQ) is a psychometrically valid questionnaire that is historically consistent, based in behavioral tendencies, and promises practical and clinical utility particularly in settings that teach and study meditation practices such as Mindfulness Based Stress Reduction (MBSR). PMID:26535904

  16. Development and Validation of the Behavioral Tendencies Questionnaire.

    PubMed

    Van Dam, Nicholas T; Brown, Anna; Mole, Tom B; Davis, Jake H; Britton, Willoughby B; Brewer, Judson A

    2015-01-01

    At a fundamental level, taxonomy of behavior and behavioral tendencies can be described in terms of approach, avoid, or equivocate (i.e., neither approach nor avoid). While there are numerous theories of personality, temperament, and character, few seem to take advantage of parsimonious taxonomy. The present study sought to implement this taxonomy by creating a questionnaire based on a categorization of behavioral temperaments/tendencies first identified in Buddhist accounts over fifteen hundred years ago. Items were developed using historical and contemporary texts of the behavioral temperaments, described as "Greedy/Faithful", "Aversive/Discerning", and "Deluded/Speculative". To both maintain this categorical typology and benefit from the advantageous properties of forced-choice response format (e.g., reduction of response biases), binary pairwise preferences for items were modeled using Latent Class Analysis (LCA). One sample (n1 = 394) was used to estimate the item parameters, and the second sample (n2 = 504) was used to classify the participants using the established parameters and cross-validate the classification against multiple other measures. The cross-validated measure exhibited good nomothetic span (construct-consistent relationships with related measures) that seemed to corroborate the ideas present in the original Buddhist source documents. The final 13-block questionnaire created from the best performing items (the Behavioral Tendencies Questionnaire or BTQ) is a psychometrically valid questionnaire that is historically consistent, based in behavioral tendencies, and promises practical and clinical utility particularly in settings that teach and study meditation practices such as Mindfulness Based Stress Reduction (MBSR).

  17. Cross-cultural validation of the German and Turkish versions of the PHQ-9: an IRT approach.

    PubMed

    Reich, Hanna; Rief, Winfried; Brähler, Elmar; Mewes, Ricarda

    2018-06-05

    The Patient Health Questionnaire's depression module (PHQ-9) is a widely used screening tool to assess depressive disorders. However, cross-linguistic and cross-cultural validation of the PHQ-9 is mostly lacking. This study investigates whether scores on the German and Turkish versions of the PHQ-9 are comparable. Data from Germans without a migration background (German version, n = 1670) and Turkish immigrants in Germany (either German or Turkish version, n = 307) were used. Differential Item Functioning (DIF) was assessed using Item Response Theory (IRT) models. Several items of the PHQ-9 were found to exhibit DIF related to language or ethnicity, e.g. 'sleep problems', 'appetite changes' and 'anhedonia'. However, PHQ-9 sum scores were found to be unbiased, i.e., DIF had no notable impact on scale levels. PHQ-9 sum scores can be compared between Turkish immigrants and Germans without a migration background without any adjustments, regardless of whether they complete the German or the Turkish version.

  18. Analytical method development of nifedipine and its degradants binary mixture using high performance liquid chromatography through a quality by design approach

    NASA Astrophysics Data System (ADS)

    Choiri, S.; Ainurofiq, A.; Ratri, R.; Zulmi, M. U.

    2018-03-01

    Nifedipine (NIF) is a photo-labile drug that degrades easily when exposed to sunlight. This research aimed to develop an analytical method using high-performance liquid chromatography and to implement a quality by design approach to obtain an effective, efficient, and validated analytical method for NIF and its degradants. A 2² full factorial design with a curvature as a center point was applied to optimize the analytical conditions for NIF and its degradants. Mobile phase composition (MPC) and flow rate (FR) were the factors examined for their effects on the system suitability parameters. The selected condition was validated by cross-validation using a leave-one-out technique. Alteration of MPC significantly affected retention time. Furthermore, an increase of FR reduced the tailing factor. In addition, the interaction of both factors increased the theoretical plates and the resolution of NIF and its degradants. The selected analytical condition for NIF and its degradants was validated over the range 1 – 16 µg/mL and showed good linearity, precision, and accuracy, and it was efficient, with an analysis time within 10 min.
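
    As a minimal illustration of the leave-one-out cross-validation mentioned above, the sketch below checks the linearity of a simple calibration curve. The concentration levels echo the reported 1–16 µg/mL range, but the peak-area values and the use of scikit-learn are assumptions for illustration, not data or code from the study.

        # Minimal sketch: leave-one-out cross-validation of a linear calibration
        # curve. The peak-area values are hypothetical placeholders.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut

        conc = np.array([1, 2, 4, 8, 12, 16], dtype=float).reshape(-1, 1)  # µg/mL
        area = np.array([10.2, 19.8, 41.1, 79.5, 121.0, 160.3])            # hypothetical

        press = 0.0  # predicted residual error sum of squares
        for train, test in LeaveOneOut().split(conc):
            model = LinearRegression().fit(conc[train], area[train])
            press += (model.predict(conc[test])[0] - area[test][0]) ** 2

        q2 = 1.0 - press / np.sum((area - area.mean()) ** 2)
        print(f"cross-validated Q2 = {q2:.3f}")  # values near 1 indicate good linearity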

  19. An adaptive deep learning approach for PPG-based identification.

    PubMed

    Jindal, V; Birjandtalab, J; Pouyan, M Baran; Nourani, M

    2016-08-01

    Wearable biosensors have become increasingly popular in healthcare due to their capabilities for low-cost and long-term biosignal monitoring. This paper presents a novel two-stage technique to offer biometric identification using these biosensors through Deep Belief Networks and Restricted Boltzmann Machines. Our identification approach improves robustness in current monitoring procedures within clinical, e-health and fitness environments using Photoplethysmography (PPG) signals through deep learning classification models. The approach is tested on the TROIKA dataset using 10-fold cross-validation and achieves an accuracy of 96.1%.
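
    A minimal sketch of the 10-fold cross-validation protocol used to report identification accuracy follows. Synthetic features stand in for PPG-derived features, and a generic scikit-learn classifier stands in for the paper's deep belief network, so only the evaluation scheme is illustrated.

        # Minimal sketch of 10-fold cross-validated accuracy. The data and the
        # random-forest classifier are placeholders, not the study's pipeline.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                                   n_informative=8, random_state=0)
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv)
        print(f"mean 10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")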

  20. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses-Isotopic Composition Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radulescu, Georgeta; Gauld, Ian C; Ilas, Germina

    2011-01-01

    The expanded use of burnup credit in the United States (U.S.) for storage and transport casks, particularly in the acceptance of credit for fission products, has been constrained by the availability of experimental fission product data to support code validation. The U.S. Nuclear Regulatory Commission (NRC) staff has noted that the rationale for restricting the Interim Staff Guidance on burnup credit for storage and transportation casks (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issues of burnup credit criticality validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the isotopic composition (depletion) validation approach and resulting observations and recommendations. Validation of the criticality calculations is addressed in a companion paper at this conference. For isotopic composition validation, the approach is to determine burnup-dependent bias and uncertainty in the effective neutron multiplication factor (keff) due to bias and uncertainty in isotopic predictions, via comparisons of isotopic composition predictions (calculated) and measured isotopic compositions from destructive radiochemical assay utilizing as much assay data as is available, and a best-estimate Monte Carlo based method. This paper (1) provides a detailed description of the burnup credit isotopic validation approach and its technical bases, (2) describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models to demonstrate its usage and applicability, (3) provides reference bias and uncertainty results based on a quality-assurance-controlled prerelease version of the Scale 6.1 code package and the ENDF/B-VII nuclear cross section data.

  1. Model-Based Collaborative Filtering Analysis of Student Response Data: Machine-Learning Item Response Theory

    ERIC Educational Resources Information Center

    Bergner, Yoav; Droschler, Stefan; Kortemeyer, Gerd; Rayyan, Saif; Seaton, Daniel; Pritchard, David E.

    2012-01-01

    We apply collaborative filtering (CF) to dichotomously scored student response data (right, wrong, or no interaction), finding optimal parameters for each student and item based on cross-validated prediction accuracy. The approach is naturally suited to comparing different models, both unidimensional and multidimensional in ability, including a…

  2. Bifactor Approach to Modeling Multidimensionality of Physical Self-Perception Profile

    ERIC Educational Resources Information Center

    Chung, ChihMing; Liao, Xiaolan; Song, Hairong; Lee, Taehun

    2016-01-01

    The multi-dimensionality of Physical Self-Perception Profile (PSPP) has been acknowledged by the use of correlated-factor model and second-order model. In this study, the authors critically endorse the bifactor model, as a substitute to address the multi-dimensionality of PSPP. To cross-validate the models, analyses are conducted first in…

  3. Modeling and prediction of peptide drift times in ion mobility spectrometry using sequence-based and structure-based approaches.

    PubMed

    Zhang, Yiming; Jin, Quan; Wang, Shuting; Ren, Ren

    2011-05-01

    The mobile behavior of 1481 peptides in ion mobility spectrometry (IMS), which are generated by protease digestion of the Drosophila melanogaster proteome, is modeled and predicted based on two different types of characterization methods, i.e., a sequence-based approach and a structure-based approach. In this procedure, the sequence-based approach considers both the amino acid composition of a peptide and the local environment profile of each amino acid in the peptide; the structure-based approach is performed with the CODESSA protocol, which regards a peptide as a common organic compound and generates more than 200 statistically significant variables to characterize the whole structure profile of a peptide molecule. Subsequently, nonlinear support vector machine (SVM) and Gaussian process (GP) regression, as well as linear partial least squares (PLS) regression, are employed to correlate the structural parameters of the characterizations with the IMS drift times of these peptides. The obtained quantitative structure-spectrum relationship (QSSR) models are evaluated rigorously and investigated systematically via both one-deep and two-deep cross-validations as well as rigorous Monte Carlo cross-validation (MCCV). We also give a comprehensive comparison of the resulting statistics arising from the different combinations of variable types and modeling methods, and find that the sequence-based approach can give QSSR models with better fitting ability and predictive power but worse interpretability than the structure-based approach. In addition, since QSSR modeling with the sequence-based approach does not require preparing energy-minimized peptide structures beforehand, it is considerably more efficient than modeling with the structure-based approach. Copyright © 2011 Elsevier Ltd. All rights reserved.
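
    The "two-deep" cross-validation mentioned above is commonly understood as nested cross-validation, where an inner loop selects hyperparameters and an outer loop estimates predictive power. The sketch below illustrates that scheme under stated assumptions: synthetic descriptors stand in for the peptide characterizations, and ridge regression stands in for the SVM/GP/PLS models.

        # Minimal sketch of nested ("two-deep") cross-validation for regression.
        # The data and the ridge model are placeholders for illustration only.
        from sklearn.datasets import make_regression
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

        X, y = make_regression(n_samples=200, n_features=50, noise=5.0, random_state=1)
        inner = KFold(n_splits=5, shuffle=True, random_state=1)   # hyperparameter selection
        outer = KFold(n_splits=5, shuffle=True, random_state=2)   # performance estimation
        model = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0, 100.0]}, cv=inner)
        r2 = cross_val_score(model, X, y, cv=outer, scoring="r2")
        print(f"nested-CV R2: {r2.mean():.3f} +/- {r2.std():.3f}")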

  4. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses--Criticality (keff) Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scaglione, John M; Mueller, Don; Wagner, John C

    2011-01-01

    One of the most significant remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation - in particular, the availability and use of applicable measured data to support validation, especially for fission products. Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of clear technical basis or approach for use of the data. U.S. Nuclear Regulatory Commission (NRC) staff have noted that the rationale for restricting their Interim Staff Guidance on burnup credit (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issue of validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach (both depletion and criticality) for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the criticality (keff) validation approach, and resulting observations and recommendations. Validation of the isotopic composition (depletion) calculations is addressed in a companion paper at this conference. For criticality validation, the approach is to utilize (1) available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion (HTC) program to support validation of the principal actinides and (2) calculated sensitivities, nuclear data uncertainties, and the limited available fission product LCE data to predict and verify individual biases for relevant minor actinides and fission products. This paper (1) provides a detailed description of the approach and its technical bases, (2) describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models to demonstrate its usage and applicability, (3) provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data, and (4) provides recommendations for application of the results and methods to other code and data packages.

  5. An empirical assessment of validation practices for molecular classifiers

    PubMed Central

    Castaldi, Peter J.; Dahabreh, Issa J.

    2011-01-01

    Proposed molecular classifiers may be overfit to idiosyncrasies of noisy genomic and proteomic data. Cross-validation methods are often used to obtain estimates of classification accuracy, but both simulations and case studies suggest that, when inappropriate methods are used, bias may ensue. Bias can be bypassed and generalizability can be tested by external (independent) validation. We evaluated 35 studies that have reported on external validation of a molecular classifier. We extracted information on study design and methodological features, and compared the performance of molecular classifiers in internal cross-validation versus external validation for 28 studies where both had been performed. We demonstrate that the majority of studies pursued cross-validation practices that are likely to overestimate classifier performance. Most studies were markedly underpowered to detect a 20% decrease in sensitivity or specificity between internal cross-validation and external validation [median power was 36% (IQR, 21–61%) and 29% (IQR, 15–65%), respectively]. The median reported classification performance for sensitivity and specificity was 94% and 98%, respectively, in cross-validation and 88% and 81% for independent validation. The relative diagnostic odds ratio was 3.26 (95% CI 2.04–5.21) for cross-validation versus independent validation. Finally, we reviewed all studies (n = 758) which cited those in our study sample, and identified only one instance of additional subsequent independent validation of these classifiers. In conclusion, these results document that many cross-validation practices employed in the literature are potentially biased and genuine progress in this field will require adoption of routine external validation of molecular classifiers, preferably in much larger studies than in current practice. PMID:21300697

  6. QSPR models for half-wave reduction potential of steroids: a comparative study between feature selection and feature extraction from subsets of or entire set of descriptors.

    PubMed

    Hemmateenejad, Bahram; Yazdani, Mahdieh

    2009-02-16

    Steroids are widely distributed in nature and are found in plants, animals, and fungi in abundance. A data set consisting of a diverse set of steroids has been used to develop quantitative structure-electrochemistry relationship (QSER) models for their half-wave reduction potential. Modeling was established by means of multiple linear regression (MLR) and principal component regression (PCR) analyses. In MLR analysis, the QSPR models were constructed by first grouping descriptors and then stepwise selection of variables from each group (MLR1), and by stepwise selection of predictor variables from the pool of all calculated descriptors (MLR2). A similar procedure was used in PCR analysis, so that the principal components (or features) were extracted from different groups of descriptors (PCR1) and from the entire set of descriptors (PCR2). The resulting models were evaluated using cross-validation, chance correlation, application to the prediction of the reduction potential of some test samples, and assessment of the applicability domain. Both MLR approaches produced accurate results; however, the QSPR model found by MLR1 was statistically more significant. The PCR1 approach produced a model as accurate as the MLR approaches, whereas less accurate results were obtained with the PCR2 approach. Overall, the cross-validation and prediction correlation coefficients of the QSPR models resulting from the MLR1, MLR2 and PCR1 approaches were higher than 90%, which shows the high ability of the models to predict the reduction potential of the studied steroids.

  7. [Cross-cultural validated adaptation of dysfunctional voiding symptom score (DVSS) to Japanese language and cognitive linguistics in questionnaire for pediatric patients].

    PubMed

    Imamura, Masaaki; Usui, Tomoko; Johnin, Kazuyoshi; Yoshimura, Koji; Farhat, Walid; Kanematsu, Akihiro; Ogawa, Osamu

    2014-07-01

    A validated questionnaire for the evaluation of pediatric lower urinary tract symptoms (LUTS) is greatly needed. We performed a cross-cultural validated adaptation of the Dysfunctional Voiding Symptom Score (DVSS) into Japanese and assessed whether children understand and respond to the questionnaire correctly, using a cognitive linguistic approach. We translated the DVSS into two Japanese versions according to a standard validation methodology: translation, synthesis, back-translation, expert review, and pre-testing. One version was written in adult language for parents, and the other was written in child language for children. Pre-testing was done with 5- to 15-year-old patients of normal intelligence visiting our clinic. A specialist in cognitive linguistics, acting as an interviewer, observed the responses of children and parents to the DVSS. When a child could not understand a question unless the parents added to or paraphrased it, this was defined as 'misidentification'. We performed pre-testing with two trial versions of the DVSS before arriving at the final version. The pre-testing for the first trial version was done with 32 patients (male to female ratio 19 : 13). The pre-testing for the second trial version was done with 11 patients (male to female ratio 8 : 3). In the DVSS in child language, misidentification was consistently observed for representations of time or frequency. We completed the formal validated translation by amending the problems raised in the pre-testing. The cross-cultural validated adaptation of the DVSS to child and adult Japanese was completed. Since temporal perception is not fully developed in children, caution should be taken when using terms related to time or frequency in questionnaires for children.

  8. Validation of bioelectrical impedance analysis for total body water assessment against the deuterium dilution technique in Asian children.

    PubMed

    Liu, A; Byrne, N M; Ma, G; Nasreddine, L; Trinidad, T P; Kijboonchoo, K; Ismail, M N; Kagawa, M; Poh, B K; Hills, A P

    2011-12-01

    To develop and cross-validate bioelectrical impedance analysis (BIA) prediction equations of total body water (TBW) and fat-free mass (FFM) for Asian pre-pubertal children from China, Lebanon, Malaysia, the Philippines and Thailand. Height, weight, age, gender, resistance and reactance measured by BIA were collected from 948 Asian children (492 boys and 456 girls) aged 8-10 years from the five countries. The deuterium dilution technique was used as the criterion method for the estimation of TBW and FFM. The BIA equations were developed using stepwise multiple regression analysis and cross-validated using the Bland-Altman approach. The BIA prediction equation for the estimation of TBW was as follows: TBW = 0.231 × height²/resistance + 0.066 × height + 0.188 × weight + 0.128 × age + 0.500 × sex - 0.316 × Thais - 4.574 (R² = 88.0%, root mean square error (RMSE) = 1.3 kg), and for the estimation of FFM: FFM = 0.299 × height²/resistance + 0.086 × height + 0.245 × weight + 0.260 × age + 0.901 × sex - 0.415 × ethnicity (Thai ethnicity = 1, others = 0) - 6.952 (R² = 88.3%, RMSE = 1.7 kg). No significant difference between measured and predicted values for the whole cross-validation sample was found. However, the prediction equations tended to overestimate TBW/FFM at lower levels and underestimate it at higher levels. The accuracy of the general equations for TBW and FFM also held within each body mass index category. Ethnicity influences the relationship between BIA and body composition in Asian pre-pubertal children. The newly developed BIA prediction equations are valid for use in Asian pre-pubertal children.
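
    Since the TBW equation is given explicitly above, a short sketch can show how it would be applied and how agreement with the reference method can be summarized in the Bland-Altman manner used for cross-validation. The unit and coding conventions (height in cm, sex and Thai ethnicity coded 0/1) and all example values are assumptions, not subject data.

        # Minimal sketch: apply the published TBW equation and summarize
        # agreement with a reference method (Bland-Altman bias and limits).
        # Inputs, codings and reference values below are hypothetical.
        import numpy as np

        def tbw_bia(height, resistance, weight, age, sex, thai):
            return (0.231 * height**2 / resistance + 0.066 * height
                    + 0.188 * weight + 0.128 * age + 0.500 * sex
                    - 0.316 * thai - 4.574)

        predicted = np.array([tbw_bia(135, 620, 30, 9, 1, 0),
                              tbw_bia(128, 680, 26, 8, 0, 1),
                              tbw_bia(142, 590, 35, 10, 1, 0),
                              tbw_bia(131, 650, 28, 9, 0, 0)])
        reference = np.array([18.9, 15.7, 22.4, 17.1])   # hypothetical deuterium TBW (kg)

        diff = predicted - reference
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)                    # limits of agreement
        print(f"bias = {bias:.2f} kg, limits of agreement = +/- {loa:.2f} kg")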

  9. Validation of Biomarkers for the Early Detection of Colorectal Adenocarcinoma (GLNE 010) — EDRN Public Portal

    Cancer.gov

    We propose a Phase 2 (large cross-sectional) PRoBE-compliant validation trial of stool-based and serum-based tests for the detection of colorectal neoplasia (1). The trial is powered to detect early stage colorectal adenocarcinoma or high grade dysplasia. This is the most stringent, conservative approach to the early diagnosis of colonic neoplasia and addresses the most important endpoint of identifying individuals with curable, early stage cancer and those with very high risk non-invasive neoplasia (high grade dysplasia).

  10. Accuracy of genomic selection models in a large population of open-pollinated families in white spruce

    PubMed Central

    Beaulieu, J; Doerksen, T; Clément, S; MacKay, J; Bousquet, J

    2014-01-01

    Genomic selection (GS) is of interest in breeding because of its potential for predicting the genetic value of individuals and increasing genetic gains per unit of time. To date, very few studies have reported empirical results of GS potential in the context of large population sizes and long breeding cycles such as for boreal trees. In this study, we assessed the effectiveness of marker-aided selection in an undomesticated white spruce (Picea glauca (Moench) Voss) population of large effective size using a GS approach. A discovery population of 1694 trees representative of 214 open-pollinated families from 43 natural populations was phenotyped for 12 wood and growth traits and genotyped for 6385 single-nucleotide polymorphisms (SNPs) mined in 2660 gene sequences. GS models were built to predict estimated breeding values using all the available SNPs or SNP subsets of the largest absolute effects, and they were validated using various cross-validation schemes. The accuracy of genomic estimated breeding values (GEBVs) varied from 0.327 to 0.435 when the training and the validation data sets shared half-sibs, which was on average 90% of the accuracy achieved through traditionally estimated breeding values. The trend was also the same for validation across sites. As expected, the accuracy of GEBVs obtained after cross-validation with individuals of unknown relatedness was lower, at about half the accuracy achieved when half-sibs were present. We showed that with the marker densities used in the current study, predictions with low to moderate accuracy could be obtained within a large undomesticated population of related individuals, potentially resulting in larger gains per unit of time with GS than with the traditional approach. PMID:24781808
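
    A hedged sketch of the kind of cross-validation scheme described above: a ridge (RR-BLUP-like) model is trained on part of a population and its predictions are correlated with phenotypes of held-out individuals. The SNP genotypes and phenotypes here are simulated, not the study's spruce data.

        # Minimal sketch of genomic-selection cross-validation on simulated data.
        # Accuracy is the correlation between predictions and held-out phenotypes.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(0)
        n, p = 400, 1000
        X = rng.integers(0, 3, size=(n, p)).astype(float)          # SNP genotypes 0/1/2
        effects = rng.normal(0, 0.1, size=p) * (rng.random(p) < 0.05)
        y = X @ effects + rng.normal(0, 1.0, size=n)                # simulated phenotype

        accs = []
        for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            pred = Ridge(alpha=100.0).fit(X[train], y[train]).predict(X[test])
            accs.append(np.corrcoef(pred, y[test])[0, 1])
        print(f"mean predictive accuracy (correlation): {np.mean(accs):.3f}")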

  11. Introduction of Total Variation Regularization into Filtered Backprojection Algorithm

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    In this paper we extend the state-of-the-art filtered backprojection (FBP) method by applying the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, namely apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and real images of the radioactive tracer distribution, using a standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values with respect to the standard FBP method.
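
    The validation metric named above, the cross-correlation coefficient between a reconstructed image and the true tracer distribution, can be written in a few lines; the images below are random placeholders rather than Derenzo phantom reconstructions.

        # Minimal sketch of the cross-correlation coefficient between two images.
        import numpy as np

        def cross_correlation(a, b):
            a = a - a.mean()
            b = b - b.mean()
            return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

        rng = np.random.default_rng(0)
        truth = rng.random((128, 128))                            # placeholder "true" image
        recon = truth + 0.1 * rng.standard_normal((128, 128))     # noisy "reconstruction"
        print(f"cross-correlation coefficient: {cross_correlation(truth, recon):.3f}")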

  12. Comparison of geometric morphometric outline methods in the discrimination of age-related differences in feather shape

    PubMed Central

    Sheets, H David; Covino, Kristen M; Panasiewicz, Joanna M; Morris, Sara R

    2006-01-01

    Background: Geometric morphometric methods of capturing information about curves or outlines of organismal structures may be used in conjunction with canonical variates analysis (CVA) to assign specimens to groups or populations based on their shapes. This methodological paper examines approaches to optimizing the classification of specimens based on their outlines. This study examines the performance of four approaches to the mathematical representation of outlines and two different approaches to curve measurement as applied to a collection of feather outlines. A new approach to the dimension reduction necessary to carry out a CVA on this type of outline data with modest sample sizes is also presented, and its performance is compared to two other approaches to dimension reduction. Results: Two semi-landmark-based methods, bending energy alignment and perpendicular projection, are shown to produce roughly equal rates of classification, as do elliptical Fourier methods and the extended eigenshape method of outline measurement. Rates of classification were not highly dependent on the number of points used to represent a curve or the manner in which those points were acquired. The new approach to dimensionality reduction, which utilizes a variable number of principal component (PC) axes, produced higher cross-validation assignment rates than either the standard approach of using a fixed number of PC axes or a partial least squares method. Conclusion: Classification of specimens based on feather shape was not highly dependent on the details of the method used to capture shape information. The choice of dimensionality reduction approach was more of a factor, and the cross-validation rate of assignment may be optimized using the variable number of PC axes method presented herein. PMID:16978414
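
    A minimal sketch of the variable-number-of-PC-axes idea described above: project the shape variables onto increasing numbers of principal-component axes and keep the number that maximizes the cross-validated assignment rate of a discriminant analysis (used here as a stand-in for CVA). The data are random placeholders, not feather outlines.

        # Minimal sketch: choose the number of PC axes by cross-validated
        # assignment rate of a linear discriminant analysis.
        from sklearn.datasets import make_classification
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        X, y = make_classification(n_samples=120, n_features=40, n_informative=10,
                                   n_classes=2, random_state=0)
        best = None
        for k in range(2, 21):
            pipe = make_pipeline(PCA(n_components=k), LinearDiscriminantAnalysis())
            rate = cross_val_score(pipe, X, y, cv=5).mean()
            if best is None or rate > best[1]:
                best = (k, rate)
        print(f"best number of PC axes: {best[0]} (assignment rate {best[1]:.3f})")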

  13. Zebra Crossing Spotter: Automatic Population of Spatial Databases for Increased Safety of Blind Travelers

    PubMed Central

    Ahmetovic, Dragan; Manduchi, Roberto; Coughlan, James M.; Mascetti, Sergio

    2016-01-01

    In this paper we propose a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Knowing the location of crosswalks is critical for a blind person planning a trip that includes street crossing. By augmenting existing spatial databases (such as Google Maps or OpenStreetMap) with this information, a blind traveler may make more informed routing decisions, resulting in greater safety during independent travel. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm could also be complemented by a final crowdsourcing validation stage for increased accuracy. PMID:26824080

  14. Functional Inference of Complex Anatomical Tendinous Networks at a Macroscopic Scale via Sparse Experimentation

    PubMed Central

    Saxena, Anupam; Lipson, Hod; Valero-Cuevas, Francisco J.

    2012-01-01

    In systems and computational biology, much effort is devoted to functional identification of systems and networks at the molecular or cellular scale. However, similarly important networks exist at anatomical scales such as the tendon network of human fingers: the complex array of collagen fibers that transmits and distributes muscle forces to finger joints. This network is critical to the versatility of the human hand, and its function has been debated since at least the 16th century. Here, we experimentally infer the structure (both topology and parameter values) of this network through sparse interrogation with force inputs. A population of models representing this structure co-evolves in simulation with a population of informative future force inputs via the predator-prey estimation-exploration algorithm. Model fitness depends on their ability to explain experimental data, while the fitness of future force inputs depends on causing maximal functional discrepancy among current models. We validate our approach by inferring two known synthetic Latex networks, and one anatomical tendon network harvested from a cadaver's middle finger. We find that functionally similar but structurally diverse models can exist within a narrow range of the training set and cross-validation errors. For the Latex networks, models with low training set error [<4%] and resembling the known network have the smallest cross-validation errors [∼5%]. The low training set [<4%] and cross validation [<7.2%] errors for models for the cadaveric specimen demonstrate what, to our knowledge, is the first experimental inference of the functional structure of complex anatomical networks. This work expands current bioinformatics inference approaches by demonstrating that sparse, yet informative interrogation of biological specimens holds significant computational advantages in accurate and efficient inference over random testing, or assuming model topology and only inferring parameter values. These findings also hold clues to both our evolutionary history and the development of versatile machines. PMID:23144601

  15. Functional inference of complex anatomical tendinous networks at a macroscopic scale via sparse experimentation.

    PubMed

    Saxena, Anupam; Lipson, Hod; Valero-Cuevas, Francisco J

    2012-01-01

    In systems and computational biology, much effort is devoted to functional identification of systems and networks at the molecular or cellular scale. However, similarly important networks exist at anatomical scales such as the tendon network of human fingers: the complex array of collagen fibers that transmits and distributes muscle forces to finger joints. This network is critical to the versatility of the human hand, and its function has been debated since at least the 16th century. Here, we experimentally infer the structure (both topology and parameter values) of this network through sparse interrogation with force inputs. A population of models representing this structure co-evolves in simulation with a population of informative future force inputs via the predator-prey estimation-exploration algorithm. Model fitness depends on their ability to explain experimental data, while the fitness of future force inputs depends on causing maximal functional discrepancy among current models. We validate our approach by inferring two known synthetic Latex networks, and one anatomical tendon network harvested from a cadaver's middle finger. We find that functionally similar but structurally diverse models can exist within a narrow range of the training set and cross-validation errors. For the Latex networks, models with low training set error [<4%] and resembling the known network have the smallest cross-validation errors [∼5%]. The low training set [<4%] and cross validation [<7.2%] errors for models for the cadaveric specimen demonstrate what, to our knowledge, is the first experimental inference of the functional structure of complex anatomical networks. This work expands current bioinformatics inference approaches by demonstrating that sparse, yet informative interrogation of biological specimens holds significant computational advantages in accurate and efficient inference over random testing, or assuming model topology and only inferring parameter values. These findings also hold clues to both our evolutionary history and the development of versatile machines.

  16. The Self-esteem Stability Scale (SESS) for Cross-Sectional Direct Assessment of Self-esteem Stability

    PubMed Central

    Altmann, Tobias; Roth, Marcus

    2018-01-01

    Self-esteem stability describes fluctuations in the level of self-esteem experienced by individuals over a brief period of time. In recent decades, self-esteem stability has repeatedly been shown to be an important variable affecting psychological functioning. However, measures of self-esteem stability are few and lacking in validity. In this paper, we present the Self-Esteem Stability Scale (SESS), a unidimensional and very brief scale to directly assess self-esteem stability. In four studies (total N = 826), we describe the development of the SESS and present evidence for its validity with respect to individual outcomes (life satisfaction, neuroticism, and vulnerable narcissism) and dyadic outcomes (relationship satisfaction in self- and partner ratings) through direct comparisons with existing measures. The new SESS proved to be a stronger predictor than the existing scales and had incremental validity over and above self-esteem level. The results also showed that all cross-sectional measures of self-esteem stability were only moderately associated with variability in self-esteem levels assessed longitudinally with multiple administrations of the Rosenberg Self-Esteem Scale. We discuss this validity issue, arguing that direct and indirect assessment approaches measure relevant, yet different aspects of self-esteem stability. PMID:29487551

  17. The Self-esteem Stability Scale (SESS) for Cross-Sectional Direct Assessment of Self-esteem Stability.

    PubMed

    Altmann, Tobias; Roth, Marcus

    2018-01-01

    Self-esteem stability describes fluctuations in the level of self-esteem experienced by individuals over a brief period of time. In recent decades, self-esteem stability has repeatedly been shown to be an important variable affecting psychological functioning. However, measures of self-esteem stability are few and lacking in validity. In this paper, we present the Self-Esteem Stability Scale (SESS), a unidimensional and very brief scale to directly assess self-esteem stability. In four studies (total N = 826), we describe the development of the SESS and present evidence for its validity with respect to individual outcomes (life satisfaction, neuroticism, and vulnerable narcissism) and dyadic outcomes (relationship satisfaction in self- and partner ratings) through direct comparisons with existing measures. The new SESS proved to be a stronger predictor than the existing scales and had incremental validity over and above self-esteem level. The results also showed that all cross-sectional measures of self-esteem stability were only moderately associated with variability in self-esteem levels assessed longitudinally with multiple administrations of the Rosenberg Self-Esteem Scale. We discuss this validity issue, arguing that direct and indirect assessment approaches measure relevant, yet different aspects of self-esteem stability.

  18. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules.

    PubMed

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-12-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.
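
    One of the domain-of-applicability heuristics named above, distance to the training data in descriptor space, can be sketched with a Mahalanobis distance; the descriptor matrix below is random placeholder data, and the comparison of a typical versus an extreme query is purely illustrative.

        # Minimal sketch: Mahalanobis distance of a query compound's descriptors
        # to the training data; large distances flag predictions as extrapolations.
        import numpy as np

        rng = np.random.default_rng(0)
        X_train = rng.standard_normal((500, 10))             # placeholder training descriptors
        mean = X_train.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))

        def mahalanobis(x):
            d = x - mean
            return float(np.sqrt(d @ cov_inv @ d))

        inside = mahalanobis(rng.standard_normal(10))        # typical query
        outside = mahalanobis(10 * np.ones(10))              # far from training data
        print(f"in-domain distance {inside:.2f}, out-of-domain distance {outside:.2f}")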

  19. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules.

    PubMed

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.

  20. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-12-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.

  1. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.

  2. Prediction of resource volumes at untested locations using simple local prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
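
    A hedged sketch of how the jackknife and bootstrap steps described above can be combined: leave-one-out refits of a simple spatial regressor give replicate predictions of the total volume at target sites, and bootstrapping those replicates gives rough confidence bounds. The nearest-neighbour model and the synthetic site data are assumptions for illustration, not the paper's local prediction models.

        # Minimal sketch: jackknife replicates of a predicted regional total,
        # then bootstrap resampling of the replicates for confidence bounds.
        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(0)
        xy_train = rng.uniform(0, 10, size=(80, 2))                # drilled sites
        vol_train = 5 + xy_train[:, 0] + rng.normal(0, 1, 80)      # observed volumes
        xy_target = rng.uniform(0, 10, size=(40, 2))               # undrilled sites

        totals = []
        for i in range(len(vol_train)):                            # jackknife refits
            keep = np.arange(len(vol_train)) != i
            model = KNeighborsRegressor(n_neighbors=5).fit(xy_train[keep], vol_train[keep])
            totals.append(model.predict(xy_target).sum())
        totals = np.array(totals)

        boot = [rng.choice(totals, size=len(totals), replace=True).mean()
                for _ in range(2000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"total volume ~ {totals.mean():.1f} (95% bounds {lo:.1f}-{hi:.1f})")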

  3. Application of Monte Carlo cross-validation to identify pathway cross-talk in neonatal sepsis.

    PubMed

    Zhang, Yuxia; Liu, Cui; Wang, Jingna; Li, Xingxia

    2018-03-01

    To explore genetic pathway cross-talk in neonates with sepsis, an integrated approach was used in this paper. To explore the potential relationships between pathways and the genes differentially expressed between normal uninfected neonates and neonates with sepsis, genetic profiling and biological signaling pathways were first integrated. For the different pathways, a score was obtained from gene expression by quantitatively analyzing the pathway cross-talk. The paired pathways with high cross-talk were identified by random forest classification. The purpose of the work was to find the pairs of pathways best able to discriminate sepsis samples from normal samples. The analysis identified 10 pairs of pathways that were probably able to discriminate neonates with sepsis from normal uninfected neonates. Among them, the best two paired pathways were identified according to analysis of the extensive literature. Impact statement: To find the pairs of pathways best able to discriminate sepsis samples from normal samples, an RF classifier, the DS obtained from DEGs of significantly associated paired pathways, and Monte Carlo cross-validation were applied in this paper. Ten pairs of pathways were probably able to discriminate neonates with sepsis from normal uninfected neonates. Among them, the best two paired pathways ((7) IL-6 Signaling and Phospholipase C Signaling (PLC); (8) Glucocorticoid Receptor (GR) Signaling and Dendritic Cell Maturation) were identified according to analysis of the extensive literature.
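
    Monte Carlo cross-validation, repeated random splits into training and test portions, can be sketched as follows; synthetic features stand in for the paired-pathway discriminating scores used in the study, so only the resampling scheme is illustrated.

        # Minimal sketch of Monte Carlo cross-validation with a random forest.
        # The features are placeholders, not pathway scores from the study.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import ShuffleSplit, cross_val_score

        X, y = make_classification(n_samples=120, n_features=30, n_informative=6,
                                   random_state=0)
        mccv = ShuffleSplit(n_splits=100, test_size=0.3, random_state=0)
        scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=mccv)
        print(f"Monte Carlo CV accuracy over 100 splits: {scores.mean():.3f}")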

  4. Complex Problem Solving in Educational Contexts--Something beyond "g": Concept, Assessment, Measurement Invariance, and Construct Validity

    ERIC Educational Resources Information Center

    Greiff, Samuel; Wustenberg, Sascha; Molnar, Gyongyver; Fischer, Andreas; Funke, Joachim; Csapo, Beno

    2013-01-01

    Innovative assessments of cross-curricular competencies such as complex problem solving (CPS) have currently received considerable attention in large-scale educational studies. This study investigated the nature of CPS by applying a state-of-the-art approach to assess CPS in high school. We analyzed whether two processes derived from cognitive…

  5. A bayesian cross-validation approach to evaluate genetic baselines and forecast the necessary number of informative single nucleotide polymorphisms

    USDA-ARS?s Scientific Manuscript database

    Mixed stock analysis (MSA) is a powerful tool used in the management and conservation of numerous species. Its function is to estimate the sources of contributions in a mixture of populations of a species, as well as to estimate the probabilities that individuals originated at a source. Considerable...

  6. Validating Culture and Gender-Specific Constructs: A Mixed-Method Approach to Advance Assessment Procedures in Cross-Cultural Settings

    ERIC Educational Resources Information Center

    Hitchcock, John H.; Sarkar, Sreeroopa; Nastasi, Bonnie; Burkholder, Gary; Varjas, Kristen; Jayasena, Asoka

    2006-01-01

    Despite on-going calls for developing cultural competency among mental health practitioners, few assessment instruments consider cultural variation in psychological constructs. To meet the challenge of developing measures for minority and international students, it is necessary to account for the influence culture may have on the latent constructs…

  7. On the validation of seismic imaging methods: Finite frequency or ray theory?

    DOE PAGES

    Maceira, Monica; Larmat, Carene; Porritt, Robert W.; ...

    2015-01-23

    We investigate the merits of the more recently developed finite-frequency approach to tomography against the more traditional and approximate ray theoretical approach for state of the art seismic models developed for western North America. To this end, we employ the spectral element method to assess the agreement between observations on real data and measurements made on synthetic seismograms predicted by the models under consideration. We check for phase delay agreement as well as waveform cross-correlation values. Based on statistical analyses on S wave phase delay measurements, finite frequency shows an improvement over ray theory. Random sampling using cross-correlation values identifies regions where synthetic seismograms computed with ray theory and finite-frequency models differ the most. Our study suggests that finite-frequency approaches to seismic imaging exhibit measurable improvement for pronounced low-velocity anomalies such as mantle plumes.

  8. Assessing self-regulation strategies: development and validation of the tempest self-regulation questionnaire for eating (TESQ-E) in adolescents.

    PubMed

    De Vet, Emely; De Ridder, Denise; Stok, Marijn; Brunso, Karen; Baban, Adriana; Gaspar, Tania

    2014-09-02

    Applying self-regulation strategies has proven important in eating behaviors, but which strategies adolescents report using to ensure healthy eating remains under investigation, and adequate measures are lacking. Therefore, we developed and validated a self-regulation questionnaire applied to eating (TESQ-E) for adolescents. Study 1 reports a four-step approach to develop the TESQ-E questionnaire (n = 1097). Study 2 was a cross-sectional survey among adolescents from nine European countries (n = 11,392) that assessed the TESQ-E, eating-related behaviors, dietary intake and background characteristics. In study 3, the TESQ-E was administered twice within four weeks to evaluate test-retest reliability (n = 140). Study 4 was a cross-sectional survey (n = 93) that assessed the TESQ-E and related psychological constructs (e.g., motivation, autonomy, self-control). All participants were aged between 10 and 17 years. Study 1 resulted in a 24-item questionnaire assessing adolescent-reported use of six specific strategies for healthy eating that represent three general self-regulation approaches. Study 2 showed that the easy-to-administer theory-based TESQ-E has a clear factor structure and good subscale reliabilities. The questionnaire was related to eating-related behaviors and dietary intake, indicating predictive validity. Study 3 showed good test-retest reliabilities for the TESQ-E. Study 4 indicated that the TESQ-E was related to but also distinguishable from general self-regulation and motivation measures. The TESQ-E provides a reliable and valid measure to assess six theory-based self-regulation strategies that adolescents may use to ensure their healthy eating.

  9. [Diagnostic validity of attention deficit/hyperactivity disorder: from phenomenology to neurobiology (I)].

    PubMed

    Trujillo-Orrego, N; Pineda, D A; Uribe, L H

    2012-03-01

    The diagnostic criteria for attention deficit/hyperactivity disorder (ADHD) were defined by the American Psychiatric Association in the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV), and by the World Health Organization in the ICD-10. The American Psychiatric Association used an internal validity analysis to select specific behavioral symptoms associated with the disorder and to build five cross-cultural criteria for its use in the categorical diagnosis. The DSM has been utilized by clinicians and researchers as a valid and stable approach since 1968. We conducted a systematic review of the scientific literature in Spanish and English aimed at identifying the historical origins that support ADHD as a psychiatric construct. This comprehensive review explores the concepts of minimal brain dysfunction, hyperactivity, inattention, and impulsivity from 1932 to 2011. This paper summarizes all the DSM versions that include the definition of ADHD or its equivalent, and it points out the statistical and methodological approaches implemented for defining ADHD as a valid epidemiological and psychometric construct. Finally, the paper discusses some considerations and suggestions for new versions of the manual.

  10. Dynamic Time Warping compared to established methods for validation of musculoskeletal models.

    PubMed

    Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael

    2017-04-11

    By means of multi-body musculoskeletal simulation, important variables that cannot be measured directly, such as internal joint forces and moments, can be estimated. Validation can proceed by qualitative or by quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, which can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE and normalized RMSE. This study is based on two data sets, where one data set represents direct validation and the other represents indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a 6-component force-moment sensor. Measured forces and moments from the amputees' socket prostheses are compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken of seven muscles of the leg, which were compared to simulated muscle activations. Regarding direct validation, a positive linear relation can be seen between the results of RMSE and nRMSE and those of DTW. For indirect validation, a negative linear relation exists between Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation as it correlates well with methods that are most suitable for one of the tasks. However, in direct validation it should be used together with methods resulting in a dimensional error value, in order to make the results easier to interpret. Copyright © 2017 Elsevier Ltd. All rights reserved.
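
    A basic Dynamic Time Warping distance can be written directly from its dynamic-programming recurrence and contrasted with two of the established metrics discussed above. The two signals below are synthetic, one being a phase-shifted copy of the other, so the example is illustrative rather than a reproduction of the study's data.

        # Minimal sketch: DTW distance alongside RMSE and Pearson correlation
        # for two signals that differ mainly by a phase shift.
        import numpy as np

        def dtw_distance(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        t = np.linspace(0, 2 * np.pi, 200)
        measured = np.sin(t)
        simulated = np.sin(t - 0.4)                     # same shape, phase error

        rmse = np.sqrt(np.mean((measured - simulated) ** 2))
        pearson = np.corrcoef(measured, simulated)[0, 1]
        print(f"DTW={dtw_distance(measured, simulated):.2f}, "
              f"RMSE={rmse:.3f}, Pearson r={pearson:.3f}")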

  11. Coupled-Sturmian and perturbative treatments of electron transfer and ionization in high-energy p-He+ collisions

    NASA Astrophysics Data System (ADS)

    Winter, Thomas G.; Alston, Steven G.

    1992-02-01

    Cross sections have been determined for electron transfer and ionization in collisions between protons and He+ ions at proton energies from several hundred kilo-electron-volts to 2 MeV. A coupled-Sturmian approach is taken, extending the work of Winter [Phys. Rev. A 35, 3799 (1987)] and Stodden et al. [Phys. Rev. A 41, 1281 (1990)] to high energies where perturbative approaches are expected to be valid. An explicit connection is made with the first-order Born approximation for ionization and the impulse version of the distorted, strong-potential Born approximation for electron transfer. The capture cross section is shown to be affected by the presence of target basis functions of positive energy near v2/2, corresponding to the Thomas mechanism.

  12. Cross-validation analysis for genetic evaluation models for ranking in endurance horses.

    PubMed

    García-Ballesteros, S; Varona, L; Valera, M; Gutiérrez, J P; Cervantes, I

    2018-01-01

    Ranking trait was used as a selection criterion for competition horses to estimate racing performance. In the literature the most common approaches to estimate breeding values are the linear or threshold statistical models. However, recent studies have shown that a Thurstonian approach was able to fix the race effect (competitive level of the horses that participate in the same race), thus suggesting a better prediction accuracy of breeding values for ranking trait. The aim of this study was to compare the predictability of linear, threshold and Thurstonian approaches for genetic evaluation of ranking in endurance horses. For this purpose, eight genetic models were used for each approach with different combinations of random effects: rider, rider-horse interaction and environmental permanent effect. All genetic models included gender, age and race as systematic effects. The database that was used contained 4065 ranking records from 966 horses and that for the pedigree contained 8733 animals (47% Arabian horses), with an estimated heritability around 0.10 for the ranking trait. The prediction ability of the models for racing performance was evaluated using a cross-validation approach. The average correlation between real and predicted performances across genetic models was around 0.25 for threshold, 0.58 for linear and 0.60 for Thurstonian approaches. Although no significant differences were found between models within approaches, the best genetic model included: the rider and rider-horse random effects for threshold, only rider and environmental permanent effects for linear approach and all random effects for Thurstonian approach. The absolute correlations of predicted breeding values among models were higher between threshold and Thurstonian: 0.90, 0.91 and 0.88 for all animals, top 20% and top 5% best animals. For rank correlations these figures were 0.85, 0.84 and 0.86. The lower values were those between linear and threshold approaches (0.65, 0.62 and 0.51). In conclusion, the Thurstonian approach is recommended for the routine genetic evaluations for ranking in endurance horses.

  13. Development process of an assessment tool for disruptive behavior problems in cross-cultural settings: the Disruptive Behavior International Scale – Nepal version (DBIS-N)

    PubMed Central

    Burkey, Matthew D.; Ghimire, Lajina; Adhikari, Ramesh P.; Kohrt, Brandon A.; Jordans, Mark J. D.; Haroz, Emily; Wissow, Lawrence

    2017-01-01

    Systematic processes are needed to develop valid measurement instruments for disruptive behavior disorders (DBDs) in cross-cultural settings. We employed a four-step process in Nepal to identify and select items for a culturally valid assessment instrument: 1) We extracted items from validated scales and local free-list interviews. 2) Parents, teachers, and peers (n=30) rated the perceived relevance and importance of behavior problems. 3) Highly rated items were piloted with children (n=60) in Nepal. 4) We evaluated internal consistency of the final scale. We identified 49 symptoms from 11 scales, and 39 behavior problems from free-list interviews (n=72). After dropping items for low ratings of relevance and severity and for poor item-test correlation, low frequency, and/or poor acceptability in pilot testing, 16 items remained for the Disruptive Behavior International Scale—Nepali version (DBIS-N). The final scale had good internal consistency (α=0.86). A 4-step systematic approach to scale development including local participation yielded an internally consistent scale that included culturally relevant behavior problems. PMID:28093575

  14. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
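
    The core loop of the method, as described above, can be illustrated with a deliberately simplified toy sketch: starting from random class labels, single-label changes are proposed and kept whenever the cross-validated accuracy of a classifier does not decrease. This is not the KODAMA package itself; the data, the k-nearest-neighbor classifier and all parameter values below are invented for illustration.

      # Toy sketch of Monte Carlo maximization of cross-validated accuracy
      # (not the KODAMA implementation; data and parameters are invented).
      import numpy as np
      from sklearn.datasets import make_blobs
      from sklearn.model_selection import KFold, cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      X, _ = make_blobs(n_samples=60, centers=3, random_state=0)   # unlabeled data with structure

      labels = rng.integers(0, 3, size=len(X))                     # start from random labels
      clf = KNeighborsClassifier(n_neighbors=5)
      cv = KFold(n_splits=5, shuffle=True, random_state=0)

      def cv_accuracy(lab):
          return cross_val_score(clf, X, lab, cv=cv).mean()

      best = cv_accuracy(labels)
      for _ in range(500):                                         # Monte Carlo label updates
          candidate = labels.copy()
          candidate[rng.integers(len(X))] = rng.integers(0, 3)     # change one label at random
          acc = cv_accuracy(candidate)
          if acc >= best:                                          # keep the change if CV accuracy does not drop
              labels, best = candidate, acc

      print("final cross-validated accuracy:", round(best, 3))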

  15. Testing and Validating Machine Learning Classifiers by Metamorphic Testing☆

    PubMed Central

    Xie, Xiaoyuan; Ho, Joshua W. K.; Murphy, Christian; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh

    2011-01-01

    Machine Learning algorithms have provided core functionality to many application domains - such as bioinformatics, computational linguistics, etc. However, it is difficult to detect faults in such applications because often there is no “test oracle” to verify the correctness of the computed outputs. To help address software quality, in this paper we present a technique for testing the implementations of machine learning classification algorithms that support such applications. Our approach is based on the technique of “metamorphic testing”, which has been shown to be effective in alleviating the oracle problem. We also present a case study on a real-world machine learning application framework and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method is highly effective in killing mutants, and that observing the expected cross-validation result alone is not sufficient to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program. PMID:21532969

  16. Information flow between interacting human brains: Identification, validation, and relationship to social expertise.

    PubMed

    Bilek, Edda; Ruf, Matthias; Schäfer, Axel; Akdeniz, Ceren; Calhoun, Vince D; Schmahl, Christian; Demanuele, Charmaine; Tost, Heike; Kirsch, Peter; Meyer-Lindenberg, Andreas

    2015-04-21

    Social interactions are fundamental for human behavior, but the quantification of their neural underpinnings remains challenging. Here, we used hyperscanning functional MRI (fMRI) to study information flow between brains of human dyads during real-time social interaction in a joint attention paradigm. In a hardware setup enabling immersive audiovisual interaction of subjects in linked fMRI scanners, we characterize cross-brain connectivity components that are unique to interacting individuals, identifying information flow between the sender's and receiver's temporoparietal junction. We replicate these findings in an independent sample and validate our methods by demonstrating that cross-brain connectivity relates to a key real-world measure of social behavior. Together, our findings support a central role of human-specific cortical areas in the brain dynamics of dyadic interactions and provide an approach for the noninvasive examination of the neural basis of healthy and disturbed human social behavior with minimal a priori assumptions.

  17. Measuring Cross-Cultural Supernatural Beliefs with Self- and Peer-Reports.

    PubMed

    Bluemke, Matthias; Jong, Jonathan; Grevenstein, Dennis; Mikloušić, Igor; Halberstadt, Jamin

    2016-01-01

    Despite claims about the universality of religious belief, whether religiosity scales have the same meaning when administered inter-subjectively-or translated and applied cross-culturally-is currently unknown. Using the recent "Supernatural Belief Scale" (SBS), we present a primer on how to verify the strong assumptions of measurement invariance required in research on religion. A comparison of two independent samples, Croatians and New Zealanders, showed that, despite a sophisticated psychometric model, measurement invariance could be demonstrated for the SBS except for two noninvariant intercepts. We present a new approach for inspecting measurement invariance across self- and peer-reports as two dependent samples. Although supernatural beliefs may be hard to observe in others, the measurement model was fully invariant for Croatians and their nominated peers. The results not only establish, for the first time, a valid measure of religious supernatural belief across two groups of different language and culture, but also demonstrate a general invariance test for distinguishable dyad members nested within the same targets. More effort needs to be made to design and validate cross-culturally applicable measures of religiosity.

  18. Measuring Cross-Cultural Supernatural Beliefs with Self- and Peer-Reports

    PubMed Central

    Bluemke, Matthias; Jong, Jonathan; Grevenstein, Dennis; Mikloušić, Igor; Halberstadt, Jamin

    2016-01-01

    Despite claims about the universality of religious belief, whether religiosity scales have the same meaning when administered inter-subjectively–or translated and applied cross-culturally–is currently unknown. Using the recent “Supernatural Belief Scale” (SBS), we present a primer on how to verify the strong assumptions of measurement invariance required in research on religion. A comparison of two independent samples, Croatians and New Zealanders, showed that, despite a sophisticated psychometric model, measurement invariance could be demonstrated for the SBS except for two noninvariant intercepts. We present a new approach for inspecting measurement invariance across self- and peer-reports as two dependent samples. Although supernatural beliefs may be hard to observe in others, the measurement model was fully invariant for Croatians and their nominated peers. The results not only establish, for the first time, a valid measure of religious supernatural belief across two groups of different language and culture, but also demonstrate a general invariance test for distinguishable dyad members nested within the same targets. More effort needs to be made to design and validate cross-culturally applicable measures of religiosity. PMID:27760206

  19. Training Valence, Instrumentality, and Expectancy Scale (T-VIES-it): Factor Structure and Nomological Network in an Italian Sample

    ERIC Educational Resources Information Center

    Zaniboni, Sara; Fraccaroli, Franco; Truxillo, Donald M.; Bertolino, Marilena; Bauer, Talya N.

    2011-01-01

    Purpose: The purpose of this study is to validate, in an Italian sample, a multidimensional training motivation measure (T-VIES-it) based on expectancy (VIE) theory, and to examine the nomological network surrounding the construct. Design/methodology/approach: Using a cross-sectional design study, 258 public sector employees in Northeast Italy…

  20. Endangered Butterflies as a Model System for Managing Source Sink Dynamics on Department of Defense Lands

    DTIC Science & Technology

    patches to cycle from sink to source status and back. Objective: Through a combination of field studies and state-of-the-art quantitative models, we...landscapes with dynamic changes in habitat quality due to management. We also validated our general approach by comparing patterns in our focal species to general, cross-taxa patterns.

  1. Mapping the World - a New Approach for Volunteered Geographic Information in the Cloud

    NASA Astrophysics Data System (ADS)

    Moeller, M. S.; Furhmann, S.

    2015-05-01

    The OSM project provides a geodata basis for the entire world under the CC-SA licence agreement. But some parts of the world are mapped more densely than other regions, and many less developed countries show a lack of valid geo-information. Africa, for example, is a sparsely mapped continent. During a huge Ebola outbreak in 2014 this lack of data became apparent. Help organizations like the American Red Cross and the Humanitarian OpenStreetMap Team organized mapping campaigns to fill the gaps with valid OSM geodata. This paper gives a short introduction to this mapping activity.

  2. Modelling of hyperconcentrated flood and channel evolution in a braided reach using a dynamically coupled one-dimensional approach

    NASA Astrophysics Data System (ADS)

    Xia, Junqiang; Zhang, Xiaolei; Wang, Zenghui; Li, Jie; Zhou, Meirong

    2018-06-01

    Hyperconcentrated sediment-laden floods often occur in a braided reach of the Lower Yellow River, usually leading to significant channel evolution. A one-dimensional (1D) morphodynamic model using a dynamically coupled solution approach is developed to simulate hyperconcentrated floods and channel evolution in the braided reach with an extremely irregular cross-sectional geometry. In the model, the improved hydrodynamic equations account for the effects of sediment concentration and bed evolution, and they are coupled with the equations of non-equilibrium sediment transport and bed evolution. The model was validated using measurements from the 1977 and 2004 hyperconcentrated floods. Furthermore, the effects of different cross-sectional spacings and of the allocation mode of the channel deformation area on the model results were investigated. It was found that a cross-sectional spacing of less than 3 km should be adopted when simulating hyperconcentrated floods, and that the results using the uniform allocation mode agree better with measurements than those of the other two allocation modes.

  3. NNvPDB: Neural Network based Protein Secondary Structure Prediction with PDB Validation.

    PubMed

    Sakthivel, Seethalakshmi; S K M, Habeeb

    2015-01-01

    The predicted secondary structural states are not cross-validated by any of the existing servers; hence, information on the level of accuracy for every sequence is not reported by them. This is overcome by NNvPDB, which not only reports a greater Q3 but also validates every prediction against homologous PDB entries. NNvPDB is based on the concept of a Neural Network, with a new and different approach of training the network every time with five PDB structures that are similar to the query sequence. The average accuracy for helix is 76%, for beta sheet 71%, and overall (helix, sheet and coil) 66%. http://bit.srmuniv.ac.in/cgi-bin/bit/cfpdb/nnsecstruct.pl.

  4. Cross-cultural adaptation of the German version of the spinal stenosis measure.

    PubMed

    Wertli, Maria M; Steurer, Johann; Wildi, Lukas M; Held, Ulrike

    2014-06-01

    To validate the German version of the spinal stenosis measure (SSM), a disease-specific questionnaire assessing symptom severity, physical function, and satisfaction with treatment in patients with lumbar spinal stenosis. After translation, cross-cultural adaptation, and pilot testing, we assessed internal consistency, test-retest reliability, construct validity, and responsiveness of the SSM subscales. Data from a large Swiss multi-center prospective cohort study were used. Reference scales for the assessment of construct validity and responsiveness were the numeric rating scale, pain thermometer, and the Roland Morris Disability Questionnaire. One hundred and eight consecutive patients were included in this validation study, recruited from five different centers. Cronbach's alpha was above 0.8 for all three subscales of the SSM. The objectivity of the SSM was assessed using a partial credit approach. The model showed a good global fit to the data. Of the 108 patients, 78 participated in the test-retest procedure. The ICC values were above 0.8 for all three subscales of the SSM. Correlations with reference scales were above 0.7 for the symptom and function subscales; for the satisfaction subscale, the correlation was 0.66 or above. Clinically meaningful changes of the reference scales over time were associated with significantly more improvement in all three SSM subscales (p < 0.001). Conclusion: The proposed version of the SSM showed very good measurement properties and can be considered validated for use in the German language.

  5. Rational selection of training and test sets for the development of validated QSAR models

    NASA Astrophysics Data System (ADS)

    Golbraikh, Alexander; Shen, Min; Xiao, Zhiyan; Xiao, Yun-De; Lee, Kuo-Hsiung; Tropsha, Alexander

    2003-02-01

    Quantitative Structure-Activity Relationship (QSAR) models are used increasingly to screen chemical databases and/or virtual chemical libraries for potentially bioactive molecules. These developments emphasize the importance of rigorous model validation to ensure that the models have acceptable predictive power. Using the k nearest neighbors (kNN) variable selection QSAR method for the analysis of several datasets, we have demonstrated recently that the widely accepted leave-one-out (LOO) cross-validated R² (q²) is an inadequate characteristic to assess the predictive ability of the models [Golbraikh, A., Tropsha, A. Beware of q2! J. Mol. Graphics Mod. 20, 269-276, (2002)]. Herein, we provide additional evidence that there exists no correlation between the values of q² for the training set and the accuracy of prediction (R²) for the test set, and argue that this observation is a general property of any QSAR model developed with LOO cross-validation. We suggest that external validation using rationally selected training and test sets provides a means to establish a reliable QSAR model. We propose several approaches to the division of experimental datasets into training and test sets and apply them in QSAR studies of 48 functionalized amino acid anticonvulsants and a series of 157 epipodophyllotoxin derivatives with antitumor activity. We formulate a set of general criteria for the evaluation of predictive power of QSAR models.
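
    A minimal sketch of the contrast described above is given below, assuming synthetic data and a kNN regressor as a stand-in for the kNN QSAR models; the point is only that an internal LOO q² computed on the training set and the external R² on a held-out test set are computed from different data and need not agree.

      # Sketch (synthetic data, not the authors' QSAR datasets): contrast the
      # LOO cross-validated q2 on a training set with R2 on an external test set.
      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor
      from sklearn.model_selection import LeaveOneOut, cross_val_predict, train_test_split
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(120, 10))
      y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.8, size=120)   # weak signal plus noise

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

      model = KNeighborsRegressor(n_neighbors=3)
      y_loo = cross_val_predict(model, X_train, y_train, cv=LeaveOneOut())
      q2 = r2_score(y_train, y_loo)                       # internal, LOO-based q2

      model.fit(X_train, y_train)
      r2_ext = r2_score(y_test, model.predict(X_test))    # external predictivity

      print(f"q2 (LOO, training set) = {q2:.2f}")
      print(f"R2 (external test set) = {r2_ext:.2f}")     # the two need not agree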

  6. Maternal sensitivity and infant attachment security in Korea: cross-cultural validation of the Strange Situation.

    PubMed

    Jin, Mi Kyoung; Jacobvitz, Deborah; Hazen, Nancy; Jung, Sung Hoon

    2012-01-01

    The present study sought to analyze infant and maternal behavior both during the Strange Situation Procedure (SSP) and a free play session in a Korean sample (N = 87) to help understand whether mother-infant attachment relationships are universal or culture-specific. Distributions of attachment classifications in the Korean sample were compared with a cross-national sample. Behavior of mothers and infants following the two separation episodes in the SSP, including mothers' proximity to their infants and infants' approach to the caregiver, was also observed, as was the association between maternal sensitivity observed during the free play session and infant security. The percentage of Korean infants classified as secure versus insecure mirrored the global distribution; however, only one Korean baby was classified as avoidant. Following the separation episodes in the Strange Situation, Korean mothers were more likely than mothers in Ainsworth's Baltimore sample to approach their babies immediately and sit beside them throughout the reunion episodes, even when their babies were no longer distressed. Also, Korean babies less often approached their mothers during reunions than did infants in the Baltimore sample. Finally, the link between maternal sensitivity and infant security was significant. The findings support the idea that the basic secure base function of attachment is universal and the SSP is a valid measure of secure attachment, but cultural differences in caregiving may result in variations in how this function is manifested.

  7. Proof of Concept: A review on how network and systems biology approaches aid in the discovery of potent anticancer drug combinations

    PubMed Central

    Azmi, Asfar S.; Wang, Zhiwei; Philip, Philip A.; Mohammad, Ramzi M.; Sarkar, Fazlul H.

    2010-01-01

    Cancer therapies that target key molecules have not fulfilled expected promises for most common malignancies. Major challenges include the incomplete understanding and validation of these targets in patients, the multiplicity and complexity of genetic and epigenetic changes in the majority of cancers, and the redundancies and cross-talk found in key signaling pathways. Collectively, the uses of single-pathway targeted approaches are not effective therapies for human malignances. To overcome these barriers, it is important to understand the molecular cross-talk among key signaling pathways and how they may be altered by targeted agents. This requires innovative approaches such as understanding the global physiological environment of target proteins and the effects of modifying them without losing key molecular details. Such strategies will aid the design of novel therapeutics and their combinations against multifaceted diseases where efficacious combination therapies will focus on altering multiple pathways rather than single proteins. Integrated network modeling and systems biology has emerged as a powerful tool benefiting our understanding of drug mechanism of action in real time. This mini-review highlights the significance of the network and systems biology-based strategy and presents a “proof-of-concept” recently validated in our laboratory using the example of a combination treatment of oxaliplatin and the MDM2 inhibitor MI-219 in genetically complex and incurable pancreatic adenocarcinoma. PMID:21041384

  8. Validation of Cross Sections with Criticality Experiment and Reaction Rates: the Neptunium Case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Berthier, B.; Le Naour, C.; Stéphan, C.; Paradela, C.; Tarrío, D.; Duran, I.

    2014-04-01

    The 237Np neutron-induced fission cross section has been recently measured in a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we considered a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by uranium highly enriched in 235U so as to approach criticality with fast neutrons. The multiplication factor keff of the calculation is in better agreement with the experiment when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section by the n_TOF data. We also explored the hypothesis of deficiencies of the inelastic cross section in 235U which has been invoked by some authors to explain the deviation of 750 pcm. The large modification needed to reduce the deviation seems to be incompatible with existing inelastic cross section measurements. Also we show that the ν̄ of 237Np can hardly be incriminated because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.

  9. Development and Validation of Personality Disorder Spectra Scales for the MMPI-2-RF.

    PubMed

    Sellbom, Martin; Waugh, Mark H; Hopwood, Christopher J

    2018-01-01

    The purpose of this study was to develop and validate a set of MMPI-2-RF (Ben-Porath & Tellegen, 2008/2011) personality disorder (PD) spectra scales. These scales could serve the purpose of assisting with DSM-5 PD diagnosis and help link categorical and dimensional conceptions of personality pathology within the MMPI-2-RF. We developed and provided initial validity results for scales corresponding to the 10 PD constructs listed in the DSM-5 using data from student, community, clinical, and correctional samples. Initial validation efforts indicated good support for criterion validity with an external PD measure as well as with dimensional personality traits included in the DSM-5 alternative model for PDs. Construct validity results using psychosocial history and therapists' ratings in a large clinical sample were generally supportive as well. Overall, these brief scales provide clinicians using MMPI-2-RF data with estimates of DSM-5 PD constructs that can support cross-model connections between categorical and dimensional assessment approaches.

  10. Reimagining psychoses: an agnostic approach to diagnosis.

    PubMed

    Keshavan, Matcheri S; Clementz, Brett A; Pearlson, Godfrey D; Sweeney, John A; Tamminga, Carol A

    2013-05-01

    Current approaches to defining and classifying psychotic disorders are compromised by substantive heterogeneity within, blurred boundaries between, as well as overlaps across the various disorders in outcome, treatment response, emerging evidence regarding pathophysiology and presumed etiology. We herein review the evolution, current status and the constraints posed by classic symptom-based diagnostic approaches. We compare the continuing constructs that underlie the current classification of psychoses, and contrast those to evolving new thinking in other areas of medicine. An important limitation in current psychiatric nosology may stem from the fact that symptom-based diagnoses do not "carve nature at its joints"; while symptom-based classifications have improved our reliability, they may lack validity. Next steps in developing a more valid scientific nosology for psychoses include a) agnostic deconstruction of disease dimensions, identifying disease markers and endophenotypes; b) mapping such markers across translational domains from behaviors to molecules, c) reclustering cross-cutting bio-behavioral data using modern phenotypic and biometric approaches, and finally d) validating such entities using etio-pathology, outcome and treatment-response measures. The proposed steps of deconstruction and "bottom-up" disease definition, as elsewhere in medicine, may well provide a better foundation for developing a nosology for psychotic disorders that may have better utility in predicting outcome, treatment response and etiology, and identifying novel treatment approaches. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…

  12. Ionic transport in high-energy-density matter

    DOE PAGES

    Stanton, Liam G.; Murillo, Michael S.

    2016-04-08

    Ionic transport coefficients for dense plasmas have been numerically computed using an effective Boltzmann approach. Here, we developed a simplified effective potential approach that yields accurate fits for all of the relevant cross sections and collision integrals. These results have been validated with molecular-dynamics simulations for self-diffusion, interdiffusion, viscosity, and thermal conductivity. Molecular dynamics has also been used to examine the underlying assumptions of the Boltzmann approach through a categorization of behaviors of the velocity autocorrelation function in the Yukawa phase diagram. By using a velocity-dependent screening model, we examine the role of dynamical screening in transport. Implications of these results for Coulomb logarithm approaches are discussed.

  13. Validity Evidence in Scale Development: The Application of Cross Validation and Classification-Sequencing Validation

    ERIC Educational Resources Information Center

    Acar, Tülin

    2014-01-01

    In the literature, it has been observed that many enhanced criteria are limited by factor analysis techniques. Besides examinations of statistical and/or psychological structure, validity studies such as cross-validation and classification-sequencing studies should be performed frequently. The purpose of this study is to examine cross…

  14. Intersubjective Culture: The Role of Intersubjective Perceptions in Cross-Cultural Research.

    PubMed

    Chiu, Chi-Yue; Gelfand, Michele J; Yamagishi, Toshio; Shteynberg, Garriy; Wan, Ching

    2010-07-01

    Intersubjective perceptions refer to shared perceptions of the psychological characteristics that are widespread within a culture. In this article, we propose the intersubjective approach as a new approach to understanding the role that culture plays in human behavior. In this approach, intersubjective perceptions, which are distinct from personal values and beliefs, mediate the effect of the ecology on individuals' responses and adaptations. We review evidence that attests to the validity and utility of the intersubjective approach in explicating culture's influence on human behaviors and discuss the implications of this approach for understanding the interaction between the individual, ecology, and culture; the nature of cultural competence; management of multicultural identities; cultural change; and measurement of culture. © The Author(s) 2010.

  15. Cultural and linguistic transferability of the multi-dimensional OxCAP-MH capability instrument for outcome measurement in mental health: the German language version.

    PubMed

    Simon, Judit; Łaszewska, Agata; Leutner, Eva; Spiel, Georg; Churchman, David; Mayer, Susanne

    2018-06-05

    Mental health conditions affect aspects of people's lives that are often not captured in common health-related outcome measures. The OxCAP-MH self-reported quality-of-life questionnaire, based on Sen's capability approach, was developed in the UK to overcome these limitations. The aim of this study was to develop a linguistically and culturally valid German version of the questionnaire. Following forward and back translations, the wording underwent cultural and linguistic validation with input from a sample of 12 native German-speaking mental health patients in Austria in 2015. Qualitative feedback from patients and carers was obtained via interviews and focus group meetings. Feedback from mental health researchers from Germany was incorporated to account for cross-country differences. No significant item modifications were necessary. However, changes due to ambiguous wordings, possibilities for differential interpretations, politically unacceptable expressions, cross-country language differences and differences in political and social systems were needed. The study confirmed that all questions are relevant and understandable for people with mental health conditions in a German-speaking setting and that transferability of the questionnaire from English to German-speaking countries is feasible. Professional translation is necessary for the linguistic accuracy of different language versions of patient-reported outcome measures but does not guarantee linguistic and cultural validity and cross-country transferability. Additional context-specific piloting is essential. The time and resources needed to achieve valid multi-lingual versions should not be underestimated. Further research is ongoing to confirm the psychometric properties of the German version.

  16. Cascade Back-Propagation Learning in Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2003-01-01

    The cascade back-propagation (CBP) algorithm is the basis of a conceptual design for accelerating learning in artificial neural networks. The neural networks would be implemented as analog very-large-scale integrated (VLSI) circuits, and circuits to implement the CBP algorithm would be fabricated on the same VLSI circuit chips with the neural networks. Heretofore, artificial neural networks have learned slowly because it has been necessary to train them via software, for lack of a good on-chip learning technique. The CBP algorithm is an on-chip technique that provides for continuous learning in real time. Artificial neural networks are trained by example: A network is presented with training inputs for which the correct outputs are known, and the algorithm strives to adjust the weights of synaptic connections in the network to make the actual outputs approach the correct outputs. The input data are generally divided into three parts. Two of the parts, called the "training" and "cross-validation" sets, respectively, must be such that the corresponding input/output pairs are known. During training, the cross-validation set enables verification of the status of the input-to-output transformation learned by the network to avoid over-learning. The third part of the data, termed the "test" set, consists of the inputs that are required to be transformed into outputs; this set may or may not include the training set and/or the cross-validation set. Proposed neural-network circuitry for on-chip learning would be divided into two distinct networks; one for training and one for validation. Both networks would share the same synaptic weights.
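
    A minimal software sketch of the three-way data split described above is shown below (a scikit-learn multilayer perceptron stands in for the analog CBP hardware; the dataset, network size and stopping rule are invented): the cross-validation set is monitored during training and learning stops once its accuracy no longer improves, so over-learning is avoided before the test set is touched.

      # Sketch of the training / cross-validation / test split (a software MLP
      # stands in for the on-chip CBP circuitry; data and parameters are invented).
      import warnings
      from sklearn.datasets import make_classification
      from sklearn.exceptions import ConvergenceWarning
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      warnings.filterwarnings("ignore", category=ConvergenceWarning)

      X, y = make_classification(n_samples=600, n_features=20, random_state=0)
      X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
      X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

      net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1, warm_start=True, random_state=0)
      best_val, patience = 0.0, 0
      for _ in range(200):
          net.fit(X_train, y_train)            # one more pass over the training set
          val_acc = net.score(X_val, y_val)    # monitor the cross-validation set
          if val_acc > best_val:
              best_val, patience = val_acc, 0
          else:
              patience += 1
          if patience >= 10:                   # stop before over-learning sets in
              break

      print("cross-validation accuracy:", round(best_val, 3))
      print("test accuracy            :", round(net.score(X_test, y_test), 3))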

  17. Thermo-Oxidative Induced Damage in Polymer Composites: Microstructure Image-Based Multi-Scale Modeling and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Hussein, Rafid M.; Chandrashekhara, K.

    2017-11-01

    A multi-scale modeling approach is presented to simulate and validate thermo-oxidation shrinkage and cracking damage of a high-temperature polymer composite. The multi-scale approach investigates coupled transient diffusion-reaction and static structural analyses at macro- to micro-scales. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object-oriented finite element (OOF) software. The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model by the shrinkage profiles acquired using scanning electron images at the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed. The local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10⁻⁵, upon which the discrepancy is minimized. The microcrack damage of the composite is modeled using a static elastic analysis with extended finite element and cohesive surfaces by considering the modulus spatial evolution. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions of the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.

  18. Assessment of Functional Rhinoplasty with Spreader Grafting Using Acoustic Rhinomanometry and Validated Outcome Measurements

    PubMed Central

    Paul, Marek A.; Kamali, Parisa; Chen, Austin D.; Ibrahim, Ahmed M. S.; Wu, Winona; Becherer, Babette E.; Medin, Caroline

    2018-01-01

    Background: Rhinoplasty is one of the most common aesthetic and reconstructive plastic surgical procedures performed within the United States. Yet, data on functional reconstructive open and closed rhinoplasty procedures with or without spreader graft placement are not definitive, as only a few studies have examined both validated measurable objective and subjective outcomes of spreader grafting during rhinoplasty. The aim of this study was to utilize previously validated measures to assess objective, functional outcomes in patients who underwent open and closed rhinoplasty with spreader grafting. Methods: We performed a retrospective review of consecutive rhinoplasty patients. Patients with internal nasal valve insufficiency who underwent an open and closed approach rhinoplasty between 2007 and 2016 were studied. The Cottle test and Nasal Obstruction Symptom Evaluation survey were used to assess nasal obstruction. Patient-reported symptoms were recorded. Acoustic rhinometry was performed pre- and postoperatively. Average minimal cross-sectional area of the nose was measured. Results: One hundred seventy-eight patients were reviewed over a period of 8 years. Thirty-eight patients were included in this study. Of those, 30 patients underwent closed rhinoplasty and 8 open rhinoplasty. Mean age was 36.9 ± 18.4 years. The average cross-sectional area in closed and open rhinoplasty patients increased significantly (P = 0.019). There was a functional improvement in all presented cases using the Nasal Obstruction Symptom Evaluation scale. Conclusions: Closed rhinoplasty with spreader grafting may play a significant role in the treatment of nasal valve collapse. A closed approach rhinoplasty including spreader grafting is a viable option in select cases with objective and validated functional improvement. PMID:29707440

  19. Modeling temporal sequences of cognitive state changes based on a combination of EEG-engagement, EEG-workload, and heart rate metrics

    PubMed Central

    Stikic, Maja; Berka, Chris; Levendowski, Daniel J.; Rubio, Roberto F.; Tan, Veasna; Korszen, Stephanie; Barba, Douglas; Wurzer, David

    2014-01-01

    The objective of this study was to investigate the feasibility of physiological metrics such as ECG-derived heart rate and EEG-derived cognitive workload and engagement as potential predictors of performance on different training tasks. An unsupervised approach based on a self-organizing neural network (NN) was utilized to model cognitive state changes over time. The feature vector comprised EEG-engagement, EEG-workload, and heart rate metrics, all self-normalized to account for individual differences. During the competitive training process, a linear topology was developed where the feature vectors similar to each other activated the same NN nodes. The NN model was trained and auto-validated on combat marksmanship training data from 51 participants who were required to make “deadly force decisions” in challenging combat scenarios. The trained NN model was cross validated using 10-fold cross-validation. It was also validated on a golf study in which an additional 22 participants were asked to complete 10 sessions of 10 putts each. Temporal sequences of the activated nodes for both studies followed the same pattern of changes, demonstrating the generalization capabilities of the approach. Most node transition changes were local, but important events typically caused significant changes in the physiological metrics, as evidenced by larger state changes. This was investigated by calculating a transition score as the sum of subsequent state transitions between the activated NN nodes. Correlation analysis demonstrated statistically significant correlations between the transition scores and subjects' performances in both studies. This paper explored the hypothesis that temporal sequences of physiological changes comprise the discriminative patterns for performance prediction. These physiological markers could be utilized in future training improvement systems (e.g., through neurofeedback), and applied across a variety of training environments. PMID:25414629
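
    The transition score described above (the sum of subsequent state transitions between activated nodes on the linear topology) can be sketched in a few lines; the node sequences and performance values below are invented, and a Pearson correlation from scipy stands in for the correlation analysis.

      # Minimal sketch (invented data): transition score as the summed node-to-node
      # jumps along a linear self-organizing-map topology, correlated with performance.
      import numpy as np
      from scipy.stats import pearsonr

      # activated-node index per time window, one sequence per participant (hypothetical)
      sequences = [
          np.array([0, 0, 1, 1, 2, 2, 3, 3]),     # mostly local transitions
          np.array([0, 3, 1, 4, 0, 5, 2, 6]),     # many large state changes
          np.array([1, 1, 1, 2, 2, 3, 3, 3]),
      ]
      performance = np.array([0.85, 0.40, 0.90])   # hypothetical task scores

      transition_scores = np.array([np.sum(np.abs(np.diff(seq))) for seq in sequences])
      r, p = pearsonr(transition_scores, performance)
      print("transition scores:", transition_scores)
      print(f"correlation with performance: r = {r:.2f}, p = {p:.3f}")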

  20. Predicting length of children's psychiatric hospitalizations: an "ecologic" approach.

    PubMed

    Mossman, D; Songer, D A; Baker, D G

    1991-08-01

    This article describes the development and validation of a simple and modestly successful model for predicting inpatient length of stay (LOS) at a state-funded facility providing acute to long-term care for children and adolescents in Ohio. Six variables--diagnostic group, legal status at time of admission, attending physician, age, sex, and county of residence--explained 30% of the variation in log10(LOS) in the subgroup used to create the model, and 26% of log10(LOS) variation in the cross-validation subgroup. The model also identified LOS outliers with moderate accuracy (ROC area = 0.68-0.76). The authors attribute the model's success to inclusion of variables that are correlated with idiosyncratic "ecologic" factors as well as variables related to severity of illness. Future attempts to construct LOS models may adopt similar approaches.

  1. A spectral-Tchebychev solution for three-dimensional dynamics of curved beams under mixed boundary conditions

    NASA Astrophysics Data System (ADS)

    Bediz, Bekir; Aksoy, Serdar

    2018-01-01

    This paper presents the application of the spectral-Tchebychev (ST) technique for solution of three-dimensional dynamics of curved beams/structures having variable and arbitrary cross-section under mixed boundary conditions. To accurately capture the vibrational behavior of curved structures, a three-dimensional (3D) solution approach is required since these structures generally exhibit coupled motions. In this study, the integral boundary value problem (IBVP) governing the dynamics of the curved structures is found using extended Hamilton's principle, where the strain energy is expressed using the 3D linear elasticity equations. To solve the IBVP numerically, the 3D spectral Tchebychev (3D-ST) approach is used. To evaluate the integral and derivative operations defined by the IBVP and to render the complex geometry into an equivalent straight beam with rectangular cross-section, a series of coordinate transformations are applied. To validate and assess the performance of the presented solution approach, two case studies are performed: (i) a curved beam with a rectangular cross-section, and (ii) a curved and pretwisted beam with an airfoil cross-section. In both cases, the results (natural frequencies and mode shapes) are also found using a finite element (FE) solution approach. It is shown that the differences in predicted natural frequencies are less than 1%, and the mode shapes are in excellent agreement based on the modal assurance criteria (MAC) analyses; however, the presented spectral-Tchebychev solution approach significantly reduces the computational burden. Therefore, it can be concluded that the presented solution approach can capture the 3D vibrational behavior of curved beams as accurately as an FE solution, but for a fraction of the computational cost.

  2. Assessing a Top-Down Modeling Approach for Seasonal Scale Snow Sensitivity

    NASA Astrophysics Data System (ADS)

    Luce, C. H.; Lute, A.

    2017-12-01

    Mechanistic snow models are commonly applied to assess changes to snowpacks in a warming climate. Such assessments involve a number of assumptions about details of weather at daily to sub-seasonal time scales. Models of season-scale behavior can provide contrast for evaluating behavior at time scales more in concordance with climate warming projections. Such top-down models, however, involve a degree of empiricism, with attendant caveats about the potential of a changing climate to affect calibrated relationships. We estimated the sensitivity of snowpacks from 497 Snowpack Telemetry (SNOTEL) stations in the western U.S. based on differences in climate between stations (spatial analog). We examined the sensitivity of April 1 snow water equivalent (SWE) and mean snow residence time (SRT) to variations in Nov-Mar precipitation and average Nov-Mar temperature using multivariate local-fit regressions. We tested the modeling approach using a leave-one-out cross-validation as well as targeted two-fold non-random cross-validations contrasting, for example, warm vs. cold years, dry vs. wet years, and north vs. south stations. Nash-Sutcliffe Efficiency (NSE) values for the validations were strong for April 1 SWE, ranging from 0.71 to 0.90, and still reasonable, but weaker, for SRT, in the range of 0.64 to 0.81. From these ranges, we exclude validations where the training data do not represent the range of target data. A likely reason for differences in validation between the two metrics is that the SWE model reflects the influence of conservation of mass while using temperature as an indicator of the season-scale energy balance; in contrast, SRT depends more strongly on the energy balance aspects of the problem. Model forms with lower numbers of parameters generally validated better than more complex model forms, with the caveat that pseudoreplication could encourage selection of more complex models when validation contrasts were weak. Overall, the split sample validations confirm transferability of the relationships in space and time contingent upon full representation of validation conditions in the calibration data set. The ability of the top-down space-for-time models to predict in new time periods and locations lends confidence to their application for assessments and for improving finer time scale models.
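
    The Nash-Sutcliffe Efficiency used above is one minus the ratio of the squared prediction errors to the variance of the observations, NSE = 1 - Σ(obs - pred)² / Σ(obs - mean(obs))². The sketch below (synthetic station data, with an ordinary linear regression standing in for the multivariate local-fit regressions) computes it under a leave-one-out cross-validation.

      # Sketch (synthetic data; ordinary linear regression stands in for the
      # local-fit regressions): Nash-Sutcliffe Efficiency under leave-one-out CV.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      def nse(observed, predicted):
          """Nash-Sutcliffe Efficiency: 1 - SSE / variance of the observations."""
          observed, predicted = np.asarray(observed), np.asarray(predicted)
          return 1.0 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)

      rng = np.random.default_rng(42)
      precip = rng.uniform(200, 1500, size=100)          # Nov-Mar precipitation (mm), synthetic
      temp = rng.uniform(-8, 4, size=100)                # Nov-Mar mean temperature (deg C), synthetic
      swe = 0.6 * precip - 40.0 * temp + rng.normal(scale=80, size=100)  # April 1 SWE, synthetic

      X = np.column_stack([precip, temp])
      pred = cross_val_predict(LinearRegression(), X, swe, cv=LeaveOneOut())
      print("leave-one-out NSE for April 1 SWE:", round(nse(swe, pred), 2))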

  3. Cross-Validating Chinese Language Mental Health Recovery Measures in Hong Kong

    ERIC Educational Resources Information Center

    Bola, John; Chan, Tiffany Hill Ching; Chen, Eric HY; Ng, Roger

    2016-01-01

    Objectives: Promoting recovery in mental health services is hampered by a shortage of reliable and valid measures, particularly in Hong Kong. We seek to cross validate two Chinese language measures of recovery and one of recovery-promoting environments. Method: A cross-sectional survey of people recovering from early episode psychosis (n = 121)…

  4. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
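
    The phenomenon is easy to reproduce in a small simulation; the sketch below uses the plain Lasso (scikit-learn's LassoCV) as an accessible stand-in for SCAD and the Adaptive Lasso, generates sparse and weak synthetic signals, and simply counts how many variables are selected when only the random 10-fold split changes.

      # Sketch (plain Lasso as a stand-in for SCAD / Adaptive Lasso; synthetic data):
      # repeat 10-fold CV tuning on the same data and count the selected variables.
      import numpy as np
      from sklearn.linear_model import LassoCV
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(0)
      n, p = 150, 300
      X = rng.normal(size=(n, p))
      beta = np.zeros(p)
      beta[:10] = 0.2                                   # sparse, weak signals (SNP-like)
      y = X @ beta + rng.normal(size=n)

      counts = []
      for seed in range(10):                            # only the random CV split changes
          cv = KFold(n_splits=10, shuffle=True, random_state=seed)
          fit = LassoCV(n_alphas=50, cv=cv).fit(X, y)
          counts.append(int(np.sum(fit.coef_ != 0)))

      print("selected-variable counts over 10 CV splits:", counts)
      print("min/max:", min(counts), max(counts))       # often varies substantially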

  5. A cross-validation package driving Netica with python

    USGS Publications Warehouse

    Fienen, Michael N.; Plant, Nathaniel G.

    2014-01-01

    Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross-validation is a technique to avoid overfitting resulting from overly complex BNs. Overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented, partly due to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and read, rebuild, and learn BNs from data. Insights gained from cross-validation, and its implications for prediction versus description, are illustrated with two examples: a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, and that overfitting incurs computational costs as well as a reduction in prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).

  6. A Predictive Approach to Network Reverse-Engineering

    NASA Astrophysics Data System (ADS)

    Wiggins, Chris

    2005-03-01

    A central challenge of systems biology is the "reverse engineering" of transcriptional networks: inferring which genes exert regulatory control over which other genes. Attempting such inference at the genomic scale has only recently become feasible, via data-intensive biological innovations such as DNA microarrays ("DNA chips") and the sequencing of whole genomes. In this talk we present a predictive approach to network reverse-engineering, in which we integrate DNA chip data and sequence data to build a model of the transcriptional network of the yeast S. cerevisiae capable of predicting the response of genes in unseen experiments. The technique can also be used to extract "motifs," sequence elements which act as binding sites for regulatory proteins. We validate by a number of approaches and present comparisons of theoretical predictions vs. experimental data, along with biological interpretations of the resulting model. En route, we will illustrate some basic notions in statistical learning theory (fitting vs. over-fitting; cross-validation; assessing statistical significance), highlighting ways in which physicists can make a unique contribution to data-driven approaches to reverse engineering.

  7. Body Fluids as a Source of Diagnostic Biomarkers: Prostate — EDRN Public Portal

    Cancer.gov

    Recent advances in high-throughput protein expression profiling of bodily fluids have generated great enthusiasm and hope for this approach as a potent diagnostic tool. At the center of these efforts is the application of SELDI-TOF-MS and artificial intelligence algorithms by the EDRN BDL site at Eastern Virginia Medical School and the DMCC, respectively. When the expression profiling process was applied to sera from individuals with prostate cancer (N=197), BPH (N=92) or from otherwise healthy donors (N=97), we achieved a classification performance of 90% sensitivity. Since this represents a noticeable improvement over the current clinical approach, we are proposing to embark upon a validation process. The described studies are designed to address validation issues and include three phases. Phase 1: Synchronization of SELDI output within the EDRN-Prostate-SELDI Investigational Collaboration (EPSIC), addressing portability. (A) Synchronize SELDI instrumentation and robotic sample processing across the EPSIC using pooled serum (QC); (B) establish the portability and reproducibility of the SELDI protein profiling approach within the EPSIC using serum from normal donors and prostate cancer patients from a single site; (C) establish robustness of the approach toward geographic, sample collection and processing differences within the EPSIC using case and control serum from five different sites. Phase 2: Population validation: establish geographic variability and robustness in a large cross-sectional study among different sample populations. Phase 3: Clinical validation: validate serum protein expression profiling coupled with a learning algorithm as a means for early detection of prostate cancer using longitudinal PCPT samples. We have assembled a cohesive multi-institutional team for completing these studies in a timely and efficient manner. The team consists of five EDRN laboratories, the DMCC and the CBI, and the proposed budget reflects the total involvement.

  8. Assessing the impact of the Indian Ocean tsunami on households: a modified domestic assets index approach.

    PubMed

    Arlikatti, Sudha; Peacock, Walter Gillis; Prater, Carla S; Grover, Himanshu; Sekar, Arul S Gnana

    2010-07-01

    This paper offers a potential measurement solution for assessing disaster impacts and subsequent recovery at the household level by using a modified domestic assets index (MDAI) approach. Assessment of the utility of the domestic assets index first proposed by Bates, Killian and Peacock (1984) has been confined to earthquake areas in the Americas and southern Europe. This paper modifies and extends the approach to the Indian sub-continent and to coastal surge hazards utilizing data collected from 1,000 households impacted by the Indian Ocean tsunami (2004) in the Nagapattinam district of south-eastern India. The analyses suggest that the MDAI scale is a reliable and valid measure of household living conditions and is useful in assessing disaster impacts and tracking recovery efforts over time. It can facilitate longitudinal studies, encourage cross-cultural, cross-national comparisons of disaster impacts and inform national and international donors of the itemized monetary losses from disasters at the household level.

  9. Development of a cross-cultural item bank for measuring quality of life related to mental health in multiple sclerosis patients.

    PubMed

    Michel, Pierre; Auquier, Pascal; Baumstarck, Karine; Pelletier, Jean; Loundou, Anderson; Ghattas, Badih; Boyer, Laurent

    2015-09-01

    Quality of life (QoL) measurements are considered important outcome measures both for research on multiple sclerosis (MS) and in clinical practice. Computerized adaptive testing (CAT) can improve the precision of measurements made using QoL instruments while reducing the burden of testing on patients. Moreover, a cross-cultural approach is also necessary to guarantee the wide applicability of CAT. The aim of this preliminary study was to develop a calibrated item bank that is available in multiple languages and measures QoL related to mental health by combining one generic (SF-36) and one disease-specific questionnaire (MusiQoL). Patients with MS were enrolled in this international, multicenter, cross-sectional study. The psychometric properties of the item bank were based on classical test and item response theories and approaches, including the evaluation of unidimensionality, item response theory model fitting, and analyses of differential item functioning (DIF). Convergent and discriminant validities of the item bank were examined according to socio-demographic, clinical, and QoL features. A total of 1992 patients with MS and from 15 countries were enrolled in this study to calibrate the 22-item bank developed in this study. The strict monotonicity of the Cronbach's alpha curve, the high eigenvalue ratio estimator (5.50), and the adequate CFA model fit (RMSEA = 0.07 and CFI = 0.95) indicated that a strong assumption of unidimensionality was warranted. The infit mean square statistic ranged from 0.76 to 1.27, indicating a satisfactory item fit. DIF analyses revealed no item biases across geographical areas, confirming the cross-cultural equivalence of the item bank. External validity testing revealed that the item bank scores correlated significantly with QoL scores but also showed discriminant validity for socio-demographic and clinical characteristics. This work demonstrated satisfactory psychometric characteristics for a QoL item bank for MS in multiple languages. This work may offer a common measure for the assessment of QoL in different cultural contexts and for international studies conducted on MS.

  10. Cross-Validation of easyCBM Reading Cut Scores in Washington: 2009-2010. Technical Report #1109

    ERIC Educational Resources Information Center

    Irvin, P. Shawn; Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    This technical report presents results from a cross-validation study designed to identify optimal cut scores when using easyCBM[R] reading tests in Washington state. The cross-validation study analyzes data from the 2009-2010 academic year for easyCBM[R] reading measures. A sample of approximately 900 students per grade, randomly split into two…

  11. Criterion for evaluating the predictive ability of nonlinear regression models without cross-validation.

    PubMed

    Kaneko, Hiromasa; Funatsu, Kimito

    2013-09-23

    We propose predictive performance criteria for nonlinear regression models without cross-validation. The proposed criteria are the determination coefficient and the root-mean-square error for the midpoints between k-nearest-neighbor data points. These criteria can be used to evaluate predictive ability after the regression models are updated, whereas cross-validation cannot be performed in such a situation. The proposed method is effective and helpful in handling big data when cross-validation cannot be applied. By analyzing data from numerical simulations and quantitative structural relationships, we confirm that the proposed criteria enable the predictive ability of the nonlinear regression models to be appropriately quantified.
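
    My reading of the criterion from this abstract is sketched below under stated assumptions (synthetic data, a support vector regressor as the nonlinear model, and k = 5): midpoints are formed between each training sample and its k nearest neighbors in descriptor space, the fitted model is evaluated at those midpoints, and the determination coefficient and RMSE are computed against the averaged responses of the corresponding endpoint pairs.

      # Sketch of the midpoint idea as read from the abstract (synthetic data; SVR
      # as the nonlinear model): score model predictions at midpoints of
      # k-nearest-neighbor pairs against the averages of the measured responses.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.neighbors import NearestNeighbors
      from sklearn.metrics import r2_score, mean_squared_error

      rng = np.random.default_rng(3)
      X = rng.uniform(-3, 3, size=(150, 4))
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=150)

      model = SVR(C=10.0, gamma=0.3).fit(X, y)

      k = 5
      nn = NearestNeighbors(n_neighbors=k + 1).fit(X)     # +1 because each point is its own neighbor
      _, idx = nn.kneighbors(X)

      mid_X, mid_y = [], []
      for i in range(len(X)):
          for j in idx[i, 1:]:                            # skip the point itself
              mid_X.append((X[i] + X[j]) / 2.0)           # midpoint in descriptor space
              mid_y.append((y[i] + y[j]) / 2.0)           # interpolated reference response
      mid_X, mid_y = np.array(mid_X), np.array(mid_y)

      pred = model.predict(mid_X)
      print("midpoint R2  :", round(r2_score(mid_y, pred), 3))
      print("midpoint RMSE:", round(float(np.sqrt(mean_squared_error(mid_y, pred))), 3))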

  12. Supervised group Lasso with applications to microarray data analysis

    PubMed Central

    Ma, Shuangge; Song, Xiao; Huang, Jian

    2007-01-01

    Background: A tremendous amount of effort has been devoted to identifying genes for diagnosis and prognosis of diseases using microarray gene expression data. It has been demonstrated that gene expression data have cluster structure, where the clusters consist of co-regulated genes which tend to have coordinated functions. However, most available statistical methods for gene selection do not take into consideration the cluster structure. Results: We propose a supervised group Lasso approach that takes into account the cluster structure in gene expression data for gene selection and predictive model building. For gene expression data without biological cluster information, we first divide genes into clusters using the K-means approach and determine the optimal number of clusters using the Gap method. The supervised group Lasso consists of two steps. In the first step, we identify important genes within each cluster using the Lasso method. In the second step, we select important clusters using the group Lasso. Tuning parameters are determined using V-fold cross-validation at both steps to allow for further flexibility. Prediction performance is evaluated using leave-one-out cross-validation. We apply the proposed method to disease classification and survival analysis with microarray data. Conclusion: We analyze four microarray data sets using the proposed approach: two cancer data sets with binary cancer occurrence as outcomes and two lymphoma data sets with survival outcomes. The results show that the proposed approach is capable of identifying a small number of influential gene clusters and important genes within those clusters, and has better prediction performance than existing methods. PMID:17316436
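
    A much-simplified sketch of the two-step idea is given below with synthetic expression data: genes are clustered by K-means and the Lasso selects genes within each cluster, while the second step here greedily keeps clusters by cross-validated R² rather than fitting a true group Lasso (which would require a dedicated solver); all data and parameter choices are invented.

      # Simplified sketch of the two-step idea (synthetic data; the second step here
      # substitutes a greedy, CV-scored cluster selection for the group Lasso).
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import LassoCV, LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)
      n, p = 100, 200
      X = rng.normal(size=(n, p))
      X[:, :20] += rng.normal(size=(n, 1))                # a co-regulated block (cluster structure)
      y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n)

      # Step 0: cluster the genes (features) with K-means on their expression profiles.
      gene_clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X.T)

      # Step 1: Lasso within each cluster to pick candidate genes.
      selected = {}
      for c in np.unique(gene_clusters):
          cols = np.where(gene_clusters == c)[0]
          coef = LassoCV(cv=5).fit(X[:, cols], y).coef_
          kept = cols[coef != 0]
          if kept.size:
              selected[c] = kept

      # Step 2 (stand-in for the group Lasso): keep clusters whose selected genes
      # improve the 5-fold cross-validated R2 when added greedily.
      chosen, best_score = [], -np.inf
      for c in sorted(selected):
          trial = np.concatenate([selected[k] for k in chosen + [c]])
          score = cross_val_score(LinearRegression(), X[:, trial], y, cv=5, scoring="r2").mean()
          if score > best_score:
              chosen.append(c)
              best_score = score

      print("clusters kept:", chosen)
      print("genes kept   :", sorted(int(g) for c in chosen for g in selected[c]))
      print("CV R2        :", round(best_score, 2))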

  13. Development and Validation of a Musculoskeletal Model of the Fully Articulated Thoracolumbar Spine and Rib Cage

    PubMed Central

    Bruno, Alexander G.; Bouxsein, Mary L.; Anderson, Dennis E.

    2015-01-01

    We developed and validated a fully articulated model of the thoracolumbar spine in opensim that includes the individual vertebrae, ribs, and sternum. To ensure trunk muscles in the model accurately represent muscles in vivo, we used a novel approach to adjust muscle cross-sectional area (CSA) and position using computed tomography (CT) scans of the trunk sampled from a community-based cohort. Model predictions of vertebral compressive loading and trunk muscle tension were highly correlated to previous in vivo measures of intradiscal pressure (IDP), vertebral loading from telemeterized implants and trunk muscle myoelectric activity recorded by electromyography (EMG). PMID:25901907

  14. Temporal cross-correlation asymmetry and departure from equilibrium in a bistable chemical system.

    PubMed

    Bianca, C; Lemarchand, A

    2014-06-14

    This paper aims at determining sustained reaction fluxes in a nonlinear chemical system driven in a nonequilibrium steady state. The method relies on the computation of cross-correlation functions for the internal fluctuations of chemical species concentrations. By employing Langevin-type equations, we derive approximate analytical formulas for the cross-correlation functions associated with nonlinear dynamics. Kinetic Monte Carlo simulations of the chemical master equation are performed in order to check the validity of the Langevin equations for a bistable chemical system. The two approaches are found in excellent agreement, except for critical parameter values where the bifurcation between monostability and bistability occurs. From the theoretical point of view, the results imply that the behavior of cross-correlation functions cannot be exploited to measure sustained reaction fluxes in a specific nonlinear system without the prior knowledge of the associated chemical mechanism and the rate constants.

  15. Do recent observations of very large electromagnetic dissociation cross sections signify a transition towards non-perturbative QED?

    NASA Technical Reports Server (NTRS)

    Norbury, John W.

    1992-01-01

    The very large electromagnetic dissociation (EMD) cross sections recently observed by Hill, Wohn, Schwellenbach, and Smith do not agree with Weizsacker-Williams (WW) theory or any simple modification thereof. Calculations are presented for the reaction probabilities for this experiment and the entire single and double nucleon removal EMD data set. It is found that for those few reactions where theory and experiment disagree, the probabilities are exceptionally large. This indicates that WW theory is not valid for these reactions and that one must consider higher order corrections and perhaps even a non-perturbative approach to quantum electrodynamics (QED).

  16. Assessing Saudi medical students learning approach using the revised two-factor study process questionnaire.

    PubMed

    Shaik, Shaffi Ahamed; Almarzuqi, Ahmed; Almogheer, Rakan; Alharbi, Omar; Jalal, Abdulaziz; Alorainy, Majed

    2017-08-17

    To assess the learning approaches of 1st-, 2nd-, and 3rd-year medical students using the revised two-factor study process questionnaire, and to assess the reliability and validity of the questionnaire. This cross-sectional study was conducted at the College of Medicine, Riyadh, Saudi Arabia in 2014. The revised two-factor study process questionnaire (R-SPQ-2F) was completed by 610 medical students of both genders, from the foundation (first year), central nervous system (second year), and medicine and surgery (third year) courses. The study process was evaluated by computing mean scores for the two study approaches (deep and surface) using Student's t-test and one-way analysis of variance. The mean score for the deep approach was significantly higher than for the surface approach among participants (t(770) = 7.83, p < 0.001) across the four courses. Mean deep-approach scores were also significantly higher among participants with a higher grade point average (F(2,768) = 13.31, p = 0.001) and among those reporting more study hours (F(2,768) = 20.08, p = 0.001). Cronbach's α values of 0.70 indicate good internal consistency of the questionnaire, and factor analysis confirms the two factors (deep and surface approaches) of the R-SPQ-2F. The deep approach to learning was the primary approach among 1st-, 2nd-, and 3rd-year King Saud University medical students. This study confirms the reliability and validity of the revised two-factor study process questionnaire. Medical educators could use the results of such studies to make required changes in the curriculum.

  17. Predicting protein-binding regions in RNA using nucleotide profiles and compositions.

    PubMed

    Choi, Daesik; Park, Byungkyu; Chae, Hanju; Lee, Wook; Han, Kyungsook

    2017-03-14

    Motivated by the increased amount of data on protein-RNA interactions and the availability of complete genome sequences of several organisms, many computational methods have been proposed to predict binding sites in protein-RNA interactions. However, most computational methods are limited to finding RNA-binding sites in proteins instead of protein-binding sites in RNAs. Predicting protein-binding sites in RNA is more challenging than predicting RNA-binding sites in proteins. Recent computational methods for finding protein-binding sites in RNAs have several drawbacks for practical use. We developed a new support vector machine (SVM) model for predicting protein-binding regions in mRNA sequences. The model uses sequence profiles constructed from log-odds scores of mono- and di-nucleotides and nucleotide compositions. The model was evaluated by standard 10-fold cross validation, leave-one-protein-out (LOPO) cross validation and independent testing. Since actual mRNA sequences have more non-binding regions than protein-binding regions, we tested the model on several datasets with different ratios of protein-binding regions to non-binding regions. The best performance of the model was obtained in a balanced dataset of positive and negative instances. 10-fold cross validation with a balanced dataset achieved a sensitivity of 91.6%, a specificity of 92.4%, an accuracy of 92.0%, a positive predictive value (PPV) of 91.7%, a negative predictive value (NPV) of 92.3% and a Matthews correlation coefficient (MCC) of 0.840. LOPO cross validation showed a lower performance than the 10-fold cross validation, but the performance remained high (87.6% accuracy and 0.752 MCC). In testing the model on independent datasets, it achieved an accuracy of 82.2% and an MCC of 0.656. Testing of our model and other state-of-the-art methods on the same dataset showed that our model is better than the others. Sequence profiles of log-odds scores of mono- and di-nucleotides were much more powerful features than nucleotide compositions in finding protein-binding regions in RNA sequences. However, a slight performance gain was obtained when using the sequence profiles along with nucleotide compositions. These are preliminary results of ongoing research, but demonstrate the potential of our approach as a powerful predictor of protein-binding regions in RNA. The program and supporting data are available at http://bclab.inha.ac.kr/RBPbinding .
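
    The leave-one-protein-out scheme can be expressed with scikit-learn's grouped cross-validation utilities; the features, labels, and protein groupings below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for sequence-profile features of RNA regions; `groups`
# records which (hypothetical) protein each region belongs to.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = rng.integers(0, 2, size=300)          # 1 = protein-binding region
groups = rng.integers(0, 15, size=300)    # 15 hypothetical proteins

clf = SVC(kernel="rbf", C=1.0, gamma="scale")

# Standard 10-fold cross-validation mixes regions of all proteins across folds.
print("10-fold CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())

# Leave-one-protein-out (LOPO): each fold holds out every region of one protein,
# so the score reflects generalisation to proteins unseen during training.
lopo = LeaveOneGroupOut()
print("LOPO CV accuracy:   ", cross_val_score(clf, X, y, groups=groups, cv=lopo).mean())
```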

  18. Screening for postdeployment conditions: development and cross-validation of an embedded validity scale in the neurobehavioral symptom inventory.

    PubMed

    Vanderploeg, Rodney D; Cooper, Douglas B; Belanger, Heather G; Donnell, Alison J; Kennedy, Jan E; Hopewell, Clifford A; Scott, Steven G

    2014-01-01

    To develop and cross-validate internal validity scales for the Neurobehavioral Symptom Inventory (NSI). Four existing data sets were used: (1) outpatient clinical traumatic brain injury (TBI)/neurorehabilitation database from a military site (n = 403), (2) National Department of Veterans Affairs TBI evaluation database (n = 48 175), (3) Florida National Guard nonclinical TBI survey database (n = 3098), and (4) a cross-validation outpatient clinical TBI/neurorehabilitation database combined across 2 military medical centers (n = 206). Secondary analysis of existing cohort data to develop (study 1) and cross-validate (study 2) internal validity scales for the NSI. The NSI, Mild Brain Injury Atypical Symptoms, and Personality Assessment Inventory scores. Study 1: Three NSI validity scales were developed, composed of 5 unusual items (Negative Impression Management [NIM5]), 6 low-frequency items (LOW6), and the combination of 10 nonoverlapping items (Validity-10). Cut scores maximizing sensitivity and specificity on these measures were determined, using a Mild Brain Injury Atypical Symptoms score of 8 or more as the criterion for invalidity. Study 2: The same validity scale cut scores again resulted in the highest classification accuracy and optimal balance between sensitivity and specificity in the cross-validation sample, using a Personality Assessment Inventory Negative Impression Management scale with a T score of 75 or higher as the criterion for invalidity. The NSI is widely used in the Department of Defense and Veterans Affairs as a symptom-severity assessment following TBI, but is subject to symptom overreporting or exaggeration. This study developed embedded NSI validity scales to facilitate the detection of invalid response styles. The NSI Validity-10 scale appears to hold considerable promise for validity assessment when the NSI is used as a population-screening tool.
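
    Cut scores that balance sensitivity and specificity are often chosen by maximising Youden's J; the sketch below illustrates that generic procedure on made-up scale scores and an arbitrary invalidity criterion, and is not the authors' exact method.

```python
import numpy as np

def best_cut_score(scale_scores, invalid, candidate_cuts):
    """Pick the cut score maximising Youden's J = sensitivity + specificity - 1.

    `invalid` is a boolean criterion (e.g. exceeding an external symptom-validity
    threshold); the criterion used here is only a placeholder."""
    best = None
    for cut in candidate_cuts:
        flagged = scale_scores >= cut
        sens = np.mean(flagged[invalid])          # flagged among truly invalid responders
        spec = np.mean(~flagged[~invalid])        # not flagged among valid responders
        j = sens + spec - 1.0
        if best is None or j > best[0]:
            best = (j, cut, sens, spec)
    return best

# Toy validity-scale scores: 400 presumed-valid and 60 presumed-invalid profiles.
rng = np.random.default_rng(3)
scores = np.concatenate([rng.poisson(3, 400), rng.poisson(9, 60)])
truth = np.concatenate([np.zeros(400, bool), np.ones(60, bool)])
print(best_cut_score(scores, truth, candidate_cuts=range(1, 20)))
```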

  19. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    PubMed

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
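
    The contrast between averaging a statistic over folds and computing it once on the pooled held-out samples can be illustrated with scikit-learn's cross_val_predict; the classifier and data below are generic stand-ins for the survival setting described above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold, cross_val_predict, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

# Usual cross-validation: one metric per fold, then averaged.
per_fold = cross_val_score(clf, X, y, cv=cv)
print("averaged over folds:", per_fold.mean())

# "Combined" flavour: pool the held-out predictions from all folds and compute
# one statistic on the combined test samples (in the survival setting this would
# be, e.g., a log-rank test on the pooled in-box / out-of-box assignments).
pooled = cross_val_predict(clf, X, y, cv=cv)
print("pooled over folds:  ", accuracy_score(y, pooled))
```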

  20. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called “Patient Recursive Survival Peeling” is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called “combined” cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication. PMID:26997922

  1. Cross-cultural adaptation of the Work Role Functioning Questionnaire 2.0 to Norwegian and Danish.

    PubMed

    Johansen, Thomas; Lund, Thomas; Jensen, Chris; Momsen, Anne-Mette Hedeager; Eftedal, Monica; Øyeflaten, Irene; Braathen, Tore N; Stapelfeldt, Christina M; Amick, Ben; Labriola, Merete

    2018-01-01

    A healthy and productive working life has attracted attention owing to future employment and demographic challenges. The aim was to translate and adapt the Work Role Functioning Questionnaire (WRFQ) 2.0 to Norwegian and Danish. The WRFQ is a self-administered tool developed to identify health-related work limitations. Standardised cross-cultural adaptation procedures were followed in both countries' translation processes. Direct translation, synthesis, back translation and consolidation were carried out successfully. A pre-test among 78 employees who had returned to work after sickness absence found idiomatic issues requiring reformulation in the instructions, four items in the Norwegian version, and three items in the Danish version, respectively. In the final versions, seven items were adjusted in each country. Psychometric properties were analysed for the Norwegian sample (n = 40) and preliminary Cronbach's alpha coefficients were satisfactory. A final consensus process was performed to achieve similar titles and introductions. The WRFQ 2.0 cross-cultural adaptation to Norwegian and Danish was performed and consensus was obtained. Future validation studies will examine validity, reliability, responsiveness and differential item response. The WRFQ can be used to elucidate both individual and work environmental factors leading to a more holistic approach in work rehabilitation.

  2. Genomic prediction of reproduction traits for Merino sheep.

    PubMed

    Bolormaa, S; Brown, D J; Swan, A A; van der Werf, J H J; Hayes, B J; Daetwyler, H D

    2017-06-01

    Economically important reproduction traits in sheep, such as number of lambs weaned and litter size, are expressed only in females and later in life after most selection decisions are made, which makes them ideal candidates for genomic selection. Accurate genomic predictions would lead to greater genetic gain for these traits by enabling accurate selection of young rams with high genetic merit. The aim of this study was to design and evaluate the accuracy of a genomic prediction method for female reproduction in sheep using daughter trait deviations (DTD) for sires and ewe phenotypes (when individual ewes were genotyped) for three reproduction traits: number of lambs born (NLB), litter size (LSIZE) and number of lambs weaned. Genomic best linear unbiased prediction (GBLUP), BayesR and pedigree BLUP analyses of the three reproduction traits measured on 5340 sheep (4503 ewes and 837 sires) with real and imputed genotypes for 510 174 SNPs were performed. The prediction of breeding values using both sire and ewe trait records was validated in Merino sheep. Prediction accuracy was evaluated by across sire family and random cross-validations. Accuracies of genomic estimated breeding values (GEBVs) were assessed as the mean Pearson correlation adjusted by the accuracy of the input phenotypes. The addition of sire DTD into the prediction analysis resulted in higher accuracies compared with using only ewe records in genomic predictions or pedigree BLUP. Using GBLUP, the average accuracy based on the combined records (ewes and sire DTD) was 0.43 across traits, but the accuracies varied by trait and type of cross-validations. The accuracies of GEBVs from random cross-validations (range 0.17-0.61) were higher than were those from sire family cross-validations (range 0.00-0.51). The GEBV accuracies of 0.41-0.54 for NLB and LSIZE based on the combined records were amongst the highest in the study. Although BayesR was not significantly different from GBLUP in prediction accuracy, it identified several candidate genes which are known to be associated with NLB and LSIZE. The approach provides a way to make use of all data available in genomic prediction for traits that have limited recording. © 2017 Stichting International Foundation for Animal Genetics.
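
    A rough sketch of how cross-validated accuracy can be computed as the Pearson correlation between predicted and observed values, using ridge regression as a GBLUP-like stand-in on synthetic genotypes; the family labels and all parameter values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold, KFold

# Synthetic SNP genotypes (0/1/2) and phenotypes; sire families are made up.
rng = np.random.default_rng(7)
n, p = 800, 1000
geno = rng.integers(0, 3, size=(n, p)).astype(float)
beta = rng.normal(scale=0.05, size=p)
pheno = geno @ beta + rng.normal(scale=1.0, size=n)
family = rng.integers(0, 40, size=n)

def cv_accuracy(splitter, groups=None):
    accs = []
    for tr, te in splitter.split(geno, pheno, groups):
        model = Ridge(alpha=500.0).fit(geno[tr], pheno[tr])   # ridge as a GBLUP-like stand-in
        gebv = model.predict(geno[te])
        accs.append(np.corrcoef(gebv, pheno[te])[0, 1])       # Pearson correlation
    return float(np.mean(accs))

# Random folds mix families; grouped folds hold out whole families, which is
# the harder, family-wise validation described above.
print("random cross-validation:     ", cv_accuracy(KFold(5, shuffle=True, random_state=0)))
print("family-wise cross-validation:", cv_accuracy(GroupKFold(5), groups=family))
```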

  3. Cross-Validation of easyCBM Reading Cut Scores in Oregon: 2009-2010. Technical Report #1108

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    This technical report presents results from a cross-validation study designed to identify optimal cut scores when using easyCBM[R] reading tests in Oregon. The cross-validation study analyzes data from the 2009-2010 academic year for easyCBM[R] reading measures. A sample of approximately 2,000 students per grade, randomly split into two groups of…

  4. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge-loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.

  5. Imputing data that are missing at high rates using a boosting algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cauthen, Katherine Regina; Lambert, Gregory; Ray, Jaideep

    Traditional multiple imputation approaches may perform poorly for datasets with high rates of missingness unless many m imputations are used. This paper implements an alternative machine learning-based approach to imputing data that are missing at high rates. Here, we use boosting to create a strong learner from a weak learner fitted to a dataset missing many observations. This approach may be applied to a variety of types of learners (models). The approach is demonstrated by application to a spatiotemporal dataset for predicting dengue outbreaks in India from meteorological covariates. A Bayesian spatiotemporal CAR model is boosted to produce imputations, and the overall RMSE from a k-fold cross-validation is used to assess imputation accuracy.
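
    A simplified sketch of the imputation-plus-cross-validation workflow, using gradient-boosted trees in place of the boosted Bayesian spatiotemporal CAR model described above; all variables are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold

# Toy data: impute a target column that is missing at a high rate from fully
# observed covariates (variable roles are only illustrative).
rng = np.random.default_rng(2)
covars = rng.normal(size=(1000, 6))                  # e.g. meteorological covariates
target = covars @ rng.normal(size=6) + rng.normal(scale=0.3, size=1000)
missing = rng.random(1000) < 0.6                     # 60% of the target is missing

obs_X, obs_y = covars[~missing], target[~missing]

# k-fold CV on the observed part gives an overall RMSE for the imputation model.
rmses = []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(obs_X):
    gbm = GradientBoostingRegressor(random_state=0).fit(obs_X[tr], obs_y[tr])
    pred = gbm.predict(obs_X[te])
    rmses.append(np.sqrt(np.mean((pred - obs_y[te]) ** 2)))
print("cross-validated imputation RMSE:", np.mean(rmses))

# Final imputations for the missing entries, fitted on all observed rows.
imputed = GradientBoostingRegressor(random_state=0).fit(obs_X, obs_y).predict(covars[missing])
```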

  6. Analysis of Nonequivalent Assessments across Different Linguistic Groups Using a Mixed Methods Approach: Understanding the Causes of Differential Item Functioning by Cognitive Interviewing

    ERIC Educational Resources Information Center

    Benítez, Isabel; Padilla, José-Luis

    2014-01-01

    Differential item functioning (DIF) can undermine the validity of cross-lingual comparisons. While a lot of efficient statistics for detecting DIF are available, few general findings have been found to explain DIF results. The objective of the article was to study DIF sources by using a mixed method design. The design involves a quantitative phase…

  7. A Data Mining Approach for Acoustic Diagnosis of Cardiopulmonary Disease

    DTIC Science & Technology

    2008-06-01

    This thesis was prepared at The Charles Stark Draper Laboratory, Inc., under Internal Company Research Project 21796...very expensive to perform. New medical technology has been the primary cause for the rising health care costs and insurance premiums. There are two...empirical risk minimization (ERM) principle. Generalization error can be minimized by using cross validation to select the best parameters for the

  8. Predictability and Coupled Dynamics of MJO During DYNAMO

    DTIC Science & Technology

    2013-09-30

    ...Model (LIM) for MJO predictions and apply it in retrospective cross-validated forecast mode to the DYNAMO time period. APPROACH: We are working as...a team to study MJO dynamics and predictability using several models as team members of the ONR DRI associated with the DYNAMO experiment. This is a

  9. Enhancing moderate-resolution ocean color products over coastal/inland waters (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Pahlevan, Nima; Schott, John R.; Zibordi, Giuseppe

    2016-10-01

    With the successful launch of Landsat-8 in 2013, followed by the very recent launch of Sentinel-2A, we are entering a new era in which frequent moderate-resolution water quality products over coastal/inland waters will be available to scientists and operational agencies. Although designed for land observations, the Operational Land Imager (OLI) has proven to provide high-fidelity products in these aquatic systems where coarse-resolution ocean color imagers fail to provide valid observations. High-quality, multi-scale ocean color products can give insights into the biogeochemical/physical processes from upstream watersheds, through near-shore regions, and further out into ocean basins. In this research, we describe a robust cross-calibration approach, which facilitates seamless ocean color products at multiple scales. The top-of-atmosphere (TOA) OLI imagery is cross-calibrated against near-simultaneous MODIS and VIIRS ocean color observations in high-latitude regions. This allows not only for examining the overall relative performance of OLI but also for characterizing non-uniformity (i.e., banding) across its swath. The uncertainty of this approach is, on average, found to be less than 0.5% in the blue channels. The adjustments made to OLI TOA reflectance products are then validated against in-situ measurements of remote sensing reflectance collected during research cruises or at AERONET-OC sites.

  10. Spatiotemporal estimation of historical PM2.5 concentrations using PM10, meteorological variables, and spatial effect

    NASA Astrophysics Data System (ADS)

    Li, Lianfa; Wu, Anna H.; Cheng, Iona; Chen, Jiu-Chiuan; Wu, Jun

    2017-10-01

    Monitoring of fine particulate matter with diameter <2.5 μm (PM2.5) started in 1999 in the US and even later in many other countries. The lack of historical PM2.5 data limits epidemiological studies of long-term exposure to PM2.5 and health outcomes such as cancer. In this study, we aimed to design a flexible approach to reliably estimate historical PM2.5 concentrations by incorporating spatial effects and the measurements of existing co-pollutants such as particulate matter with diameter <10 μm (PM10) and meteorological variables. Monitoring data of PM10, PM2.5, and meteorological variables covering the entire state of California were obtained from 1999 through 2013. We developed a spatiotemporal model that quantified non-linear associations between PM2.5 concentrations and the following predictor variables: spatiotemporal factors (PM10 and meteorological variables), spatial factors (land-use patterns, traffic, elevation, distance to shorelines, and spatial autocorrelation), and season. Our model accounted for regional- (county-) scale spatial autocorrelation, using a spatial weight matrix, and local-scale spatiotemporal variability, using local covariates in an additive non-linear model. The spatiotemporal model was evaluated using leave-one-site-month-out cross validation. Our final daily model had an R2 of 0.81, with PM10, meteorological variables, and spatial autocorrelation explaining 55%, 10%, and 10% of the variance in PM2.5 concentrations, respectively. The model had a cross-validation R2 of 0.83 for monthly PM2.5 concentrations (N = 8170) and 0.79 for daily PM2.5 concentrations (N = 51,421), with few extreme values in prediction. Further, the incorporation of spatial effects reduced bias in predictions. Our approach achieved a cross-validation R2 of 0.61 for the daily model when PM10 was replaced by total suspended particulate. Our model can robustly estimate historical PM2.5 concentrations in California when PM2.5 measurements were not available.
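
    Leave-one-site-month-out validation amounts to grouped cross-validation where each group is a site-month combination; the sketch below shows only that splitting logic, with synthetic data and a generic regressor rather than the study's spatiotemporal model.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

# Toy monitoring records; column names mimic the kinds of predictors described above.
rng = np.random.default_rng(4)
n = 1600
df = pd.DataFrame({
    "pm10": rng.gamma(4.0, 10.0, n),
    "temp": rng.normal(15.0, 8.0, n),
    "wind": rng.gamma(2.0, 2.0, n),
    "site": rng.integers(0, 8, n),
    "month": rng.integers(1, 13, n),
})
df["pm25"] = 0.5 * df["pm10"] + 0.3 * df["temp"] - 1.5 * df["wind"] + rng.normal(0, 5, n)

# Each held-out fold is one site-month combination, so the model is always
# evaluated on observations it has not seen for that site in that month.
groups = df["site"].astype(str) + "-" + df["month"].astype(str)
X, y = df[["pm10", "temp", "wind"]], df["pm25"]

pred = cross_val_predict(RandomForestRegressor(n_estimators=50, random_state=0),
                         X, y, cv=LeaveOneGroupOut(), groups=groups)
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("leave-one-site-month-out cross-validation R^2:", round(r2, 3))
```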

  11. Using thermochronology to validate a balanced cross section along the Karnali River, far-western Nepal

    NASA Astrophysics Data System (ADS)

    Battistella, C.; Robinson, D.; McQuarrie, N.; Ghoshal, S.

    2017-12-01

    Multiple valid balanced cross sections can be produced from mapped surface and subsurface data. By integrating low-temperature thermochronologic data, we are better able to predict subsurface geometries. Existing valid balanced cross sections for far-western Nepal are few (Robinson et al., 2006) and do not incorporate thermochronologic data because the data did not exist. The data published along the Simikot cross section along the Karnali River since then include muscovite Ar, zircon U-Th/He, and apatite fission track. We present new mapping and a new valid balanced cross section that takes into account the new field data as well as the limitations that thermochronologic data place on the kinematics of the cross section. Additional constraints include new geomorphology data acquired since 2006 that indicate areas of increased vertical uplift, which point to locations of buried ramps in the Main Himalayan thrust and guide the locations of Lesser Himalayan ramps in the balanced cross section. Future work will include flexural modeling, new low-temperature thermochronometric data, and 2-D thermokinematic models from sequentially forward-modeled balanced cross sections in far-western Nepal.

  12. In silico toxicity prediction by support vector machine and SMILES representation-based string kernel.

    PubMed

    Cao, D-S; Zhao, J-C; Yang, Y-N; Zhao, C-X; Yan, J; Liu, S; Hu, Q-N; Xu, Q-S; Liang, Y-Z

    2012-01-01

    There is a great need to assess the harmful effects or toxicities of chemicals to which man is exposed. In the present paper, the simplified molecular input line entry specification (SMILES) representation-based string kernel, together with the state-of-the-art support vector machine (SVM) algorithm, was used to classify the toxicity of chemicals from the US Environmental Protection Agency Distributed Structure-Searchable Toxicity (DSSTox) database network. In this method, the molecular structure can be directly encoded by a series of SMILES substrings that represent the presence of certain chemical elements and different kinds of chemical bonds (double, triple and stereochemistry) in the molecules. Thus, the SMILES string kernel can accurately and directly measure the similarities of molecules via local information hidden in the molecules. Two model validation approaches, five-fold cross-validation and an independent validation set, were used to assess the predictive capability of the developed models. The results obtained indicate that SVM based on the SMILES string kernel can be regarded as a very promising and alternative modelling approach for potential toxicity prediction of chemicals.
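
    A toy illustration of feeding a string kernel to an SVM via a precomputed Gram matrix; the k-spectrum kernel used here is a simplified stand-in for the paper's SMILES string kernel, and the molecules and labels are made up.

```python
import numpy as np
from collections import Counter
from sklearn.svm import SVC

def spectrum_kernel(smiles_a, smiles_b, k=2):
    """Simplified k-spectrum kernel: inner product of substring-count vectors.
    (A stand-in for the SMILES string kernel described above.)"""
    gram = np.zeros((len(smiles_a), len(smiles_b)))
    counts_b = [Counter(s[i:i + k] for i in range(len(s) - k + 1)) for s in smiles_b]
    for i, s in enumerate(smiles_a):
        ca = Counter(s[j:j + k] for j in range(len(s) - k + 1))
        for j, cb in enumerate(counts_b):
            gram[i, j] = sum(v * cb.get(sub, 0) for sub, v in ca.items())
    return gram

# Tiny illustrative dataset (SMILES strings and toxicity labels are fabricated).
smiles = ["CCO", "CCN", "c1ccccc1O", "c1ccccc1N", "CC(=O)O", "CC(=O)N"]
toxic = np.array([0, 1, 0, 1, 0, 1])

K = spectrum_kernel(smiles, smiles, k=2)
clf = SVC(kernel="precomputed", C=1.0).fit(K, toxic)        # train on the Gram matrix
K_test = spectrum_kernel(["CCCN"], smiles, k=2)             # test-vs-train kernel rows
print(clf.predict(K_test))
```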

  13. Determining Protein Complex Structures Based on a Bayesian Model of in Vivo Förster Resonance Energy Transfer (FRET) Data*

    PubMed Central

    Bonomi, Massimiliano; Pellarin, Riccardo; Kim, Seung Joong; Russel, Daniel; Sundin, Bryan A.; Riffle, Michael; Jaschob, Daniel; Ramsden, Richard; Davis, Trisha N.; Muller, Eric G. D.; Sali, Andrej

    2014-01-01

    The use of in vivo Förster resonance energy transfer (FRET) data to determine the molecular architecture of a protein complex in living cells is challenging due to data sparseness, sample heterogeneity, signal contributions from multiple donors and acceptors, unequal fluorophore brightness, photobleaching, flexibility of the linker connecting the fluorophore to the tagged protein, and spectral cross-talk. We addressed these challenges by using a Bayesian approach that produces the posterior probability of a model, given the input data. The posterior probability is defined as a function of the dependence of our FRET metric FRETR on a structure (forward model), a model of noise in the data, as well as prior information about the structure, relative populations of distinct states in the sample, forward model parameters, and data noise. The forward model was validated against kinetic Monte Carlo simulations and in vivo experimental data collected on nine systems of known structure. In addition, our Bayesian approach was validated by a benchmark of 16 protein complexes of known structure. Given the structures of each subunit of the complexes, models were computed from synthetic FRETR data with a distance root-mean-squared deviation error of 14 to 17 Å. The approach is implemented in the open-source Integrative Modeling Platform, allowing us to determine macromolecular structures through a combination of in vivo FRETR data and data from other sources, such as electron microscopy and chemical cross-linking. PMID:25139910

  14. Information flow between interacting human brains: Identification, validation, and relationship to social expertise

    PubMed Central

    Bilek, Edda; Ruf, Matthias; Schäfer, Axel; Akdeniz, Ceren; Calhoun, Vince D.; Schmahl, Christian; Demanuele, Charmaine; Tost, Heike; Kirsch, Peter; Meyer-Lindenberg, Andreas

    2015-01-01

    Social interactions are fundamental for human behavior, but the quantification of their neural underpinnings remains challenging. Here, we used hyperscanning functional MRI (fMRI) to study information flow between brains of human dyads during real-time social interaction in a joint attention paradigm. In a hardware setup enabling immersive audiovisual interaction of subjects in linked fMRI scanners, we characterize cross-brain connectivity components that are unique to interacting individuals, identifying information flow between the sender’s and receiver’s temporoparietal junction. We replicate these findings in an independent sample and validate our methods by demonstrating that cross-brain connectivity relates to a key real-world measure of social behavior. Together, our findings support a central role of human-specific cortical areas in the brain dynamics of dyadic interactions and provide an approach for the noninvasive examination of the neural basis of healthy and disturbed human social behavior with minimal a priori assumptions. PMID:25848050

  15. sNebula, a network-based algorithm to predict binding between human leukocyte antigens and peptides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Heng; Ye, Hao; Ng, Hui Wen

    Understanding the binding between human leukocyte antigens (HLAs) and peptides is important to understand the functioning of the immune system. Since it is time-consuming and costly to measure the binding between large numbers of HLAs and peptides, computational methods including machine learning models and network approaches have been developed to predict HLA-peptide binding. However, there are several limitations for the existing methods. We developed a network-based algorithm called sNebula to address these limitations. We curated qualitative Class I HLA-peptide binding data and demonstrated the prediction performance of sNebula on this dataset using leave-one-out cross-validation and five-fold cross-validations. Furthermore, this algorithm can predict not only peptides of different lengths and different types of HLAs, but also the peptides or HLAs that have no existing binding data. We believe sNebula is an effective method to predict HLA-peptide binding and thus improve our understanding of the immune system.

  16. sNebula, a network-based algorithm to predict binding between human leukocyte antigens and peptides

    PubMed Central

    Luo, Heng; Ye, Hao; Ng, Hui Wen; Sakkiah, Sugunadevi; Mendrick, Donna L.; Hong, Huixiao

    2016-01-01

    Understanding the binding between human leukocyte antigens (HLAs) and peptides is important to understand the functioning of the immune system. Since it is time-consuming and costly to measure the binding between large numbers of HLAs and peptides, computational methods including machine learning models and network approaches have been developed to predict HLA-peptide binding. However, there are several limitations for the existing methods. We developed a network-based algorithm called sNebula to address these limitations. We curated qualitative Class I HLA-peptide binding data and demonstrated the prediction performance of sNebula on this dataset using leave-one-out cross-validation and five-fold cross-validations. This algorithm can predict not only peptides of different lengths and different types of HLAs, but also the peptides or HLAs that have no existing binding data. We believe sNebula is an effective method to predict HLA-peptide binding and thus improve our understanding of the immune system. PMID:27558848

  17. sNebula, a network-based algorithm to predict binding between human leukocyte antigens and peptides

    DOE PAGES

    Luo, Heng; Ye, Hao; Ng, Hui Wen; ...

    2016-08-25

    Understanding the binding between human leukocyte antigens (HLAs) and peptides is important to understand the functioning of the immune system. Since it is time-consuming and costly to measure the binding between large numbers of HLAs and peptides, computational methods including machine learning models and network approaches have been developed to predict HLA-peptide binding. However, there are several limitations for the existing methods. We developed a network-based algorithm called sNebula to address these limitations. We curated qualitative Class I HLA-peptide binding data and demonstrated the prediction performance of sNebula on this dataset using leave-one-out cross-validation and five-fold cross-validations. Furthermore, this algorithm can predict not only peptides of different lengths and different types of HLAs, but also the peptides or HLAs that have no existing binding data. We believe sNebula is an effective method to predict HLA-peptide binding and thus improve our understanding of the immune system.

  18. Tuning support vector machines for minimax and Neyman-Pearson classification.

    PubMed

    Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D

    2010-10-01

    This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.

  19. Development of novel in silico model for developmental toxicity assessment by using naïve Bayes classifier method.

    PubMed

    Zhang, Hui; Ren, Ji-Xia; Kang, Yan-Li; Bo, Peng; Liang, Jun-Yu; Ding, Lan; Kong, Wei-Bao; Zhang, Ji

    2017-08-01

    Toxicological testing associated with developmental toxicity endpoints is very expensive, time consuming and labor intensive. Thus, developing alternative approaches for developmental toxicity testing is an important and urgent task in the drug development field. In this investigation, the naïve Bayes classifier was applied to develop a novel prediction model for developmental toxicity. The established prediction model was evaluated by internal 5-fold cross validation and an external test set. The overall prediction accuracies for the internal 5-fold cross validation of the training set and for the external test set were 96.6% and 82.8%, respectively. In addition, four simple descriptors and some representative substructures of developmental toxicants were identified. Thus, we hope the established in silico prediction model can be used as an alternative method for toxicological assessment. The molecular information obtained could afford a deeper understanding of developmental toxicants and provide guidance for medicinal chemists working in drug discovery and lead optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
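
    The validation scheme above (internal 5-fold cross-validation plus an external test set) can be sketched as follows with a scikit-learn naïve Bayes classifier on synthetic binary descriptors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import BernoulliNB

# Synthetic stand-in for binary molecular descriptors/fingerprints.
X, y = make_classification(n_samples=800, n_features=50, random_state=0)
X = (X > 0).astype(int)

# Hold out an external test set, then assess the model on the training portion
# with internal 5-fold cross-validation, mirroring the scheme described above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
nb = BernoulliNB()
print("internal 5-fold CV accuracy:", cross_val_score(nb, X_tr, y_tr, cv=5).mean())
print("external test accuracy:     ", nb.fit(X_tr, y_tr).score(X_te, y_te))
```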

  20. Model selection for pion photoproduction

    DOE PAGES

    Landay, J.; Doring, M.; Fernandez-Ramirez, C.; ...

    2017-01-12

    Partial-wave analysis of meson and photon-induced reactions is needed to enable the comparison of many theoretical approaches to data. In both energy-dependent and independent parametrizations of partial waves, the selection of the model amplitude is crucial. Principles of the S matrix are implemented to a different degree in different approaches, but an often overlooked aspect concerns the selection of undetermined coefficients and functional forms for fitting, leading to a minimal yet sufficient parametrization. We present an analysis of low-energy neutral pion photoproduction using the least absolute shrinkage and selection operator (LASSO) in combination with criteria from information theory and K-fold cross validation. These methods are not yet widely known in the analysis of excited hadrons but will become relevant in the era of precision spectroscopy. The principle is first illustrated with synthetic data; then, its feasibility for real data is demonstrated by analyzing the latest available measurements of differential cross sections (dσ/dΩ), photon-beam asymmetries (Σ), and target asymmetry differential cross sections (dσ_T/dΩ ≡ T·dσ/dΩ) in the low-energy regime.
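
    A minimal sketch of combining LASSO with K-fold cross-validation and an information criterion for selecting a sparse set of fit parameters; the synthetic design below only mimics the "minimal yet sufficient parametrization" idea and has nothing to do with the actual photoproduction amplitudes.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LassoLarsIC
from sklearn.preprocessing import StandardScaler

# Synthetic expansion: the response depends on only a few of many candidate
# terms, and the task is to recover a minimal yet sufficient set of them.
rng = np.random.default_rng(5)
n, p = 150, 40
X = StandardScaler().fit_transform(rng.normal(size=(n, p)))
coef = np.zeros(p)
coef[[0, 3, 7]] = [1.5, -2.0, 1.0]
y = X @ coef + rng.normal(scale=0.3, size=n)

# LASSO penalty strength chosen by K-fold cross-validation...
cv_fit = LassoCV(cv=5, random_state=0).fit(X, y)
# ...or by an information criterion (BIC), the two ingredients combined above.
bic_fit = LassoLarsIC(criterion="bic").fit(X, y)

print("non-zero terms (CV): ", np.flatnonzero(cv_fit.coef_))
print("non-zero terms (BIC):", np.flatnonzero(bic_fit.coef_))
```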

  1. Indicators of Ecological Change

    DTIC Science & Technology

    2005-03-01

    ...cross-validation procedure. The cross-validation analysis determines the percentage of observations correctly classified. In essence, a cross-

  2. The regulatory framework for preventing cross-contamination of pharmaceutical products: History and considerations for the future.

    PubMed

    Sargent, Edward V; Flueckiger, Andreas; Barle, Ester Lovsin; Luo, Wendy; Molnar, Lance R; Sandhu, Reena; Weideman, Patricia A

    2016-08-01

    Cross-contamination in multi-product pharmaceutical manufacturing facilities can impact both product safety and quality. This issue has been recognized by regulators and industry for some time, leading to publication of a number of continually evolving guidelines. This manuscript provides a historical overview of the regulatory framework for managing cross-contamination in multi-product facilities to provide context for current approaches. Early guidelines focused on the types of pharmaceuticals for which dedicated facilities and control systems were needed, and stated the requirements for cleaning validation. More recent guidelines have promoted the idea of using Acceptable Daily Exposures (ADEs) to establish cleaning limits for actives and other potentially hazardous substances. The ADE approach is considered superior to previous methods for setting cleaning limits such as using a predetermined general limit (e.g., 10 ppm or a fraction of the median lethal dose (LD50) or therapeutic dose). The ADEs can be used to drive the cleaning process and as part of the overall assessment of whether dedicated production facilities are required. While great strides have been made in using the ADE approach, work remains to update good manufacturing practices (GMPs) to ensure that the approaches are clear, consistent with the state-of-the-science, and broadly applicable yet flexible enough for adaptation to unique products and situations. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Validation of a heteroscedastic hazards regression model.

    PubMed

    Wu, Hong-Dar Isaac; Hsieh, Fushing; Chen, Chen-Hsin

    2002-03-01

    A Cox-type regression model accommodating heteroscedasticity, with a power factor of the baseline cumulative hazard, is investigated for analyzing data with crossing hazards behavior. Since the approach of partial likelihood cannot eliminate the baseline hazard, an overidentified estimating equation (OEE) approach is introduced in the estimation procedure. Its by-product, a model-checking statistic, is presented to test for the overall adequacy of the heteroscedastic model. Further, under the heteroscedastic model setting, we propose two statistics to test the proportional hazards assumption. Implementation of this model is illustrated in a data analysis of a cancer clinical trial.

  4. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720
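
    The run-to-run variability of a single m-fold cross-validation can be seen by simply repeating the selection with different fold assignments; the sketch uses LassoCV as a convenient stand-in for SCAD or the Adaptive Lasso, on a weak-signal synthetic design.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

# SNP-like setting: many weak signals, so the variable set chosen by a single
# 10-fold CV run can vary considerably between runs.
rng = np.random.default_rng(6)
n, p = 200, 300
X = rng.normal(size=(n, p))
coef = np.zeros(p)
coef[:10] = 0.25                               # sparse but weak true signals
y = X @ coef + rng.normal(size=n)

n_selected = []
for seed in range(10):                         # 10 repetitions of 10-fold CV
    cv = KFold(n_splits=10, shuffle=True, random_state=seed)
    fit = LassoCV(cv=cv, n_alphas=30).fit(X, y)
    n_selected.append(int(np.sum(fit.coef_ != 0)))
print("selected-variable counts across repeated CV runs:", n_selected)
```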

  5. Machine Learning Meta-analysis of Large Metagenomic Datasets: Tools and Biological Insights.

    PubMed

    Pasolli, Edoardo; Truong, Duy Tin; Malik, Faizan; Waldron, Levi; Segata, Nicola

    2016-07-01

    Shotgun metagenomic analysis of the human associated microbiome provides a rich set of microbial features for prediction and biomarker discovery in the context of human diseases and health conditions. However, the use of such high-resolution microbial features presents new challenges, and validated computational tools for learning tasks are lacking. Moreover, classification rules have scarcely been validated in independent studies, posing questions about the generality and generalization of disease-predictive models across cohorts. In this paper, we comprehensively assess approaches to metagenomics-based prediction tasks and for quantitative assessment of the strength of potential microbiome-phenotype associations. We develop a computational framework for prediction tasks using quantitative microbiome profiles, including species-level relative abundances and presence of strain-specific markers. A comprehensive meta-analysis, with particular emphasis on generalization across cohorts, was performed in a collection of 2424 publicly available metagenomic samples from eight large-scale studies. Cross-validation revealed good disease-prediction capabilities, which were in general improved by feature selection and use of strain-specific markers instead of species-level taxonomic abundance. In cross-study analysis, models transferred between studies were in some cases less accurate than models tested by within-study cross-validation. Interestingly, the addition of healthy (control) samples from other studies to training sets improved disease prediction capabilities. Some microbial species (most notably Streptococcus anginosus) seem to characterize general dysbiotic states of the microbiome rather than connections with a specific disease. Our results in modelling features of the "healthy" microbiome can be considered a first step toward defining general microbial dysbiosis. The software framework, microbiome profiles, and metadata for thousands of samples are publicly available at http://segatalab.cibio.unitn.it/tools/metaml.
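
    Cross-study (leave-one-dataset-out) assessment differs from within-study cross-validation only in how the folds are formed; a sketch with synthetic cohorts and a generic classifier, not the paper's framework, is shown below.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic stand-in for species-abundance profiles pooled from several studies;
# `study` labels which cohort each sample comes from (all values fabricated).
rng = np.random.default_rng(9)
X = rng.dirichlet(np.ones(50), size=600)          # relative abundances per sample
y = rng.integers(0, 2, size=600)                  # disease vs. control
study = rng.integers(0, 8, size=600)              # 8 hypothetical cohorts

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Within-study style assessment: ordinary 10-fold cross-validation.
print("10-fold CV accuracy:        ", cross_val_score(clf, X, y, cv=10).mean())

# Cross-study assessment: train on all-but-one study, test on the held-out study.
lodo = LeaveOneGroupOut()
print("leave-one-dataset-out score:", cross_val_score(clf, X, y, groups=study, cv=lodo).mean())
```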

  6. Cross-Validation of Predictor Equations for Armor Crewman Performance

    DTIC Science & Technology

    1980-01-01

    Technical Report 447: Cross-Validation of Predictor Equations for Armor Crewman Performance. Maitland, Anthony J.; Eaton, Newell K.; Neft, Janet F.

  7. Comparing ordinary kriging and inverse distance weighting for soil As pollution in Beijing.

    PubMed

    Qiao, Pengwei; Lei, Mei; Yang, Sucai; Yang, Jun; Guo, Guanghui; Zhou, Xiaoyong

    2018-06-01

    Spatial interpolation methods are the basis of soil heavy metal pollution assessment and remediation. Existing evaluation indices for interpolation accuracy are not tied to the actual situation, and the selection of an interpolation method needs to be based on the specific research purpose and the characteristics of the object under study. In this paper, As (arsenic) pollution in soils of Beijing was taken as an example. The prediction accuracies of ordinary kriging (OK) and inverse distance weighting (IDW) were evaluated based on cross-validation results and the spatial distribution characteristics of influencing factors. The results showed that, for a given level of spatial correlation, the cross-validation results of OK and IDW for every soil point and the prediction accuracy of the spatial distribution trend are similar. However, OK predicts the maxima and minima less accurately than IDW and identifies fewer high-pollution areas. OK therefore struggles to identify the high-pollution areas fully, reflecting its pronounced smoothing effect. In addition, as the spatial correlation of the As concentration increases, the cross-validation errors of OK and IDW decrease, and the high-pollution areas identified by OK approach the IDW result, which identifies the high-pollution areas more comprehensively. However, because constructing the semivariogram for OK is more subjective and requires a larger number of soil samples, IDW is more suitable for spatial prediction of heavy metal pollution in soils.
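
    A compact illustration of leave-one-out cross-validation for a spatial interpolator, using a hand-rolled IDW; ordinary kriging (e.g. via the pykrige package) would be scored the same way and the two RMSEs compared. Sample locations and concentrations below are fabricated.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation at the query locations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ z_known) / w.sum(axis=1)

# Toy soil-sample locations and As concentrations (values are illustrative only).
rng = np.random.default_rng(8)
xy = rng.uniform(0, 100, size=(150, 2))
z = 10 + 0.1 * xy[:, 0] + rng.lognormal(mean=0, sigma=0.5, size=150)

# Leave-one-out cross-validation: predict each sample from all the others.
loo_pred = np.array([
    idw(np.delete(xy, i, axis=0), np.delete(z, i), xy[i:i + 1])[0]
    for i in range(len(z))
])
print("LOO-CV RMSE (IDW):", np.sqrt(np.mean((loo_pred - z) ** 2)))
```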

  8. Dental attrition models predicting temporomandibular joint disease or masticatory muscle pain versus asymptomatic controls.

    PubMed

    Seligman, D A; Pullinger, A G

    2006-11-01

    To determine whether patients with temporomandibular joint disease or masticatory muscle pain can be usefully differentiated from asymptomatic controls using multifactorial classification tree models of attrition severity and/or rates. Measures of attrition severity and rates in patients diagnosed with disc displacement (n = 52), osteoarthrosis (n = 74), or masticatory muscle pain only (n = 43) were compared against those in asymptomatic controls (n = 132). Cross-validated classification tree models were tested for fit with sensitivity, specificity, accuracy and log likelihood accountability. The model for identifying asymptomatic controls only required the three measures of attrition severity (anterior, mediotrusive and laterotrusive posterior) to be differentiated from the patients with a 74.2 ± 3.8% cross-validation accuracy. This compared with cross-validation accuracies of 69.7 ± 3.7% for differentiating disc displacement using anterior and laterotrusive attrition severity, 68.7 ± 3.9% for differentiating disc displacement using anterior and laterotrusive attrition rates, 70.9 ± 3.3% for differentiating osteoarthrosis using anterior attrition severity and rates, 94.6 ± 2.1% for differentiating myofascial pain using mediotrusive and laterotrusive attrition severity, and 92.0 ± 2.1% for differentiating myofascial pain using mediotrusive and anterior attrition rates. The myofascial pain models exceeded the ≥75% sensitivity and ≥90% specificity thresholds recommended for diagnostic tests, and the asymptomatic control model approached these thresholds. Multifactorial models using attrition severity and rates may differentiate masticatory muscle pain patients from asymptomatic controls, and have some predictive value for differentiating intracapsular temporomandibular disorder patients as well.

  9. High-definition fiber tractography of the human brain: neuroanatomical validation and neurosurgical applications.

    PubMed

    Fernandez-Miranda, Juan C; Pathak, Sudhir; Engh, Johnathan; Jarbo, Kevin; Verstynen, Timothy; Yeh, Fang-Cheng; Wang, Yibao; Mintz, Arlan; Boada, Fernando; Schneider, Walter; Friedlander, Robert

    2012-08-01

    High-definition fiber tracking (HDFT) is a novel combination of processing, reconstruction, and tractography methods that can track white matter fibers from cortex, through complex fiber crossings, to cortical and subcortical targets with subvoxel resolution. To perform neuroanatomical validation of HDFT and to investigate its neurosurgical applications. Six neurologically healthy adults and 36 patients with brain lesions were studied. Diffusion spectrum imaging data were reconstructed with a Generalized Q-Ball Imaging approach. Fiber dissection studies were performed in 20 human brains, and selected dissection results were compared with tractography. HDFT provides accurate replication of known neuroanatomical features such as the gyral and sulcal folding patterns, the characteristic shape of the claustrum, the segmentation of the thalamic nuclei, the decussation of the superior cerebellar peduncle, the multiple fiber crossing at the centrum semiovale, the complex angulation of the optic radiations, the terminal arborization of the arcuate tract, and the cortical segmentation of the dorsal Broca area. From a clinical perspective, we show that HDFT provides accurate structural connectivity studies in patients with intracerebral lesions, allowing qualitative and quantitative white matter damage assessment, aiding in understanding lesional patterns of white matter structural injury, and facilitating innovative neurosurgical applications. High-grade gliomas produce significant disruption of fibers, and low-grade gliomas cause fiber displacement. Cavernomas cause both displacement and disruption of fibers. Our HDFT approach provides an accurate reconstruction of white matter fiber tracts with unprecedented detail in both the normal and pathological human brain. Further studies to validate the clinical findings are needed.

  10. Using the risk behaviour diagnosis scale to understand Australian Aboriginal smoking — A cross-sectional validation survey in regional New South Wales

    PubMed Central

    Gould, Gillian Sandra; Watt, Kerrianne; Cadet-James, Yvonne; Clough, Alan R.

    2014-01-01

    Objective To validate, for the first time, the Risk Behaviour Diagnosis (RBD) Scale for Aboriginal Australian tobacco smokers, based on the Extended Parallel Process Model (EPPM). Despite high smoking prevalence, little is known about how Indigenous peoples assess their smoking risks. Methods In a cross-sectional study of 121 aboriginal smokers aged 18–45 in regional New South Wales, in 2014, RBD subscales were assessed for internal consistency. Scales included measures of perceived threat (susceptibility to and severity of smoking risks) and perceived efficacy (response efficacy and self-efficacy for quitting). An Aboriginal community panel appraised face and content validity. EPPM constructs of danger control (protective motivation) and fear control (defensive motivation) were assessed for cogency. Results Scales had acceptable to good internal consistency (Cronbach's alpha = 0.65–1.0). Most participants demonstrated high-perceived threat (77%, n = 93); and half had high-perceived efficacy (52%, n = 63). High-perceived efficacy with high-threat appeared consistent with danger control dominance; low-perceived efficacy with high-threat was consistent with fear control dominance. Conclusions In these Aboriginal smokers of reproductive age, the RBD Scale appeared valid and reliable. Further research is required to assess whether the RBD Scale and EPPM can predict quit attempts and assist with tailored approaches to counselling and targeted health promotion campaigns. PMID:26844043

  11. Single station monitoring of volcanoes using seismic ambient noise

    NASA Astrophysics Data System (ADS)

    De Plaen, R. S.; Lecocq, T.; Caudron, C.; Ferrazzini, V.; Francis, O.

    2016-12-01

    During volcanic eruptions, magma transport causes gas release, pressure perturbations and fracturing in the plumbing system. The potential subsequent surface deformation can be detected using geodetic techniques, and the deep mechanical processes associated with magma pressurization and/or migration, along with their spatial-temporal evolution, can be monitored with volcanic seismicity. However, these techniques suffer, respectively, from limited sensitivity to deep changes and from a temporal distribution too short-term to expose early aseismic processes such as magma pressurisation. Seismic ambient noise cross-correlation uses the multiple scattering of seismic vibrations by heterogeneities in the crust to retrieve the Green's function for surface waves between two stations by cross-correlating these diffuse wavefields. Seismic velocity changes are then typically measured from the cross-correlation functions, with applications to volcanoes, large magnitude earthquakes in the far field and smaller magnitude earthquakes at smaller distances. This technique is increasingly used as a non-destructive way to continuously monitor small seismic velocity changes (~0.1%) associated with volcanic activity, although it is usually limited to volcanoes equipped with large and dense networks of broadband stations. The single-station approach may provide a powerful and reliable alternative to the classical "cross-stations" approach when measuring variation of seismic velocities. We implemented it on the Piton de la Fournaise in Reunion Island, a very active volcano with remarkable multi-disciplinary continuous monitoring. Over the past decade, this volcano has been increasingly studied using the traditional cross-station approach and therefore represents a unique laboratory to validate our approach. Our results, tested on stations located up to 3.5 km from the eruptive site, performed as well as the classical approach in detecting the volcanic eruption in the 1-2 Hz frequency band. This opens new perspectives for successfully forecasting volcanic activity at volcanoes equipped with a single 3-component seismometer.

  12. Cross-Validation of the Africentrism Scale.

    ERIC Educational Resources Information Center

    Kwate, Naa Oyo A.

    2003-01-01

    Cross-validated the Africentrism Scale, investigating the relationship between Africentrism and demographic variables in a diverse sample of individuals of African descent. Results indicated that the scale demonstrated solid internal consistency and convergent validity. Age and education related to Africentrism, with younger and less educated…

  13. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music

    PubMed Central

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V.; Jeong, Jaeseung

    2017-01-01

    Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation in relation to audio and visual integration conditions between modalities (namely congruent, complementary, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (−) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted emotional responses in subjects) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects. PMID:28421007
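
    The record above uses frontal alpha power asymmetry as a proposed EEG index of valence. As an illustration only (channel names, sampling rate, band limits and signals are assumptions, not the authors' pipeline), the sketch below computes the usual asymmetry index, ln(alpha power at F4) minus ln(alpha power at F3), with Welch's method.

    ```python
    # Hedged sketch (synthetic signals, assumed channels F3/F4 and 8-13 Hz band):
    # frontal alpha asymmetry as a proposed valence index.
    import numpy as np
    from scipy.signal import welch

    def frontal_alpha_asymmetry(f3: np.ndarray, f4: np.ndarray, fs: float) -> float:
        """Positive values indicate relatively greater left-frontal activation,
        often read as more approach-related / positive valence."""
        def alpha_power(x):
            freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))
            band = (freqs >= 8) & (freqs <= 13)          # alpha band
            return np.trapz(psd[band], freqs[band])
        return np.log(alpha_power(f4)) - np.log(alpha_power(f3))

    # Toy usage with synthetic 10 Hz activity sampled at 256 Hz.
    fs = 256.0
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(1)
    f3 = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
    f4 = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
    print(frontal_alpha_asymmetry(f3, f4, fs))   # > 0 for this synthetic example
    ```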

  14. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music.

    PubMed

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V; Jeong, Jaeseung

    2017-01-01

    Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation in relation to audio and visual integration conditions between modalities (namely congruent, complementary, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (-) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted emotional responses in subjects) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects.

  15. Device-scale CFD modeling of gas-liquid multiphase flow and amine absorption for CO 2 capture: Original Research Article: Device-scale CFD modeling of gas-liquid multiphase flow and amine absorption for CO 2 capture

    DOE PAGES

    Pan, Wenxiao; Galvin, Janine; Huang, Wei Ling; ...

    2018-03-25

    In this paper we aim to develop a validated device-scale CFD model that can predict quantitatively both hydrodynamics and CO 2 capture efficiency for an amine-based solvent absorber column with random Pall ring packing. A Eulerian porous-media approach and a two-fluid model were employed, in which the momentum and mass transfer equations were closed by literature-based empirical closure models. We proposed a hierarchical approach for calibrating the parameters in the closure models to make them accurate for the packed column. Specifically, a parameter for momentum transfer in the closure was first calibrated based on data from a single experiment. With this calibrated parameter, a parameter in the closure for mass transfer was next calibrated under a single operating condition. Last, the closure of the wetting area was calibrated for each gas velocity at three different liquid flow rates. For each calibration, cross validations were pursued using the experimental data under operating conditions different from those used for calibrations. This hierarchical approach can be generally applied to develop validated device-scale CFD models for different absorption columns.
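
    The hierarchical calibration described above is sequential: one closure parameter is fitted against a single hydrodynamic measurement, then, with that parameter frozen, a mass-transfer parameter is fitted against a capture-efficiency measurement. The sketch below illustrates only that sequencing, using placeholder model functions and made-up target values; it is not the authors' CFD workflow.

    ```python
    # Hedged sketch of sequential (hierarchical) calibration with placeholder
    # models standing in for CFD runs; functions, parameters and targets are
    # illustrative assumptions only.
    from scipy.optimize import minimize_scalar

    def pressure_drop_model(c_momentum):
        # Placeholder for a CFD run returning a hydrodynamic observable
        # (e.g., pressure drop) as a function of a momentum-closure parameter.
        return 120.0 * c_momentum ** 0.8

    def capture_efficiency_model(c_momentum, c_mass):
        # Placeholder for a CFD run returning CO2 capture efficiency given the
        # (already fixed) momentum parameter and a mass-transfer parameter.
        return 0.9 * (1.0 - 1.0 / (1.0 + c_mass * c_momentum))

    measured_dp, measured_eff = 95.0, 0.72   # single-experiment targets (made up)

    # Step 1: calibrate the momentum-closure parameter against one hydrodynamic datum.
    res1 = minimize_scalar(lambda c: (pressure_drop_model(c) - measured_dp) ** 2,
                           bounds=(0.1, 10.0), method="bounded")
    c_momentum = res1.x

    # Step 2: with the momentum parameter fixed, calibrate the mass-transfer
    # parameter against a capture efficiency from a single operating condition.
    res2 = minimize_scalar(lambda c: (capture_efficiency_model(c_momentum, c) - measured_eff) ** 2,
                           bounds=(0.01, 50.0), method="bounded")
    c_mass = res2.x
    print(c_momentum, c_mass)
    ```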

  16. Characterization of shrubland ecosystem components as continuous fields in the northwest United States

    USGS Publications Warehouse

    Xian, George Z.; Homer, Collin G.; Rigge, Matthew B.; Shi, Hua; Meyer, Debbie

    2015-01-01

    Accurate and consistent estimates of shrubland ecosystem components are crucial to a better understanding of ecosystem conditions in arid and semiarid lands. An innovative approach was developed by integrating multiple sources of information to quantify shrubland components as continuous field products within the National Land Cover Database (NLCD). The approach consists of several procedures including field sample collections, high-resolution mapping of shrubland components using WorldView-2 imagery and regression tree models, Landsat 8 radiometric balancing and phenological mosaicking, medium-resolution estimates of shrubland components following different climate zones using Landsat 8 phenological mosaics and regression tree models, and product validation. Fractional covers of nine shrubland components were estimated: annual herbaceous, bare ground, big sagebrush, herbaceous, litter, sagebrush, shrub, sagebrush height, and shrub height. Our study area included the footprint of six Landsat 8 scenes in the northwestern United States. Results show that most components have relatively significant correlations with validation data, have small normalized root mean square errors, and correspond well with expected ecological gradients. While some uncertainties remain with height estimates, the model formulated in this study provides a cross-validated, unbiased, and cost-effective approach to quantify shrubland components at a regional scale and advances knowledge of horizontal and vertical variability of these components.
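
    As a hedged illustration of the regression-tree modeling and cross-validated error reporting mentioned above (a random forest of regression trees stands in for the study's models, and all predictors and cover fractions are synthetic), the following sketch estimates fractional cover and reports a normalized RMSE.

    ```python
    # Illustrative sketch (not the NLCD production code): regression-tree-based
    # fractional cover estimation with cross-validated, normalized RMSE.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    n = 500
    X = rng.uniform(0.0, 1.0, size=(n, 6))    # e.g., band reflectances / indices (synthetic)
    true_cover = 60 * X[:, 0] - 30 * X[:, 1] + 10 * X[:, 2] * X[:, 3]
    y = np.clip(true_cover + rng.normal(0, 3, n), 0, 100)   # % shrub cover, 0-100

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    pred = cross_val_predict(model, X, y, cv=5)   # 5-fold cross-validated predictions

    rmse = np.sqrt(np.mean((pred - y) ** 2))
    nrmse = rmse / (y.max() - y.min())            # normalized RMSE, as in the abstract
    print(f"cross-validated nRMSE: {nrmse:.3f}")
    ```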

  17. Device-scale CFD modeling of gas-liquid multiphase flow and amine absorption for CO 2 capture: Original Research Article: Device-scale CFD modeling of gas-liquid multiphase flow and amine absorption for CO 2 capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenxiao; Galvin, Janine; Huang, Wei Ling

    In this paper we aim to develop a validated device-scale CFD model that can predict quantitatively both hydrodynamics and CO 2 capture efficiency for an amine-based solvent absorber column with random Pall ring packing. A Eulerian porous-media approach and a two-fluid model were employed, in which the momentum and mass transfer equations were closed by literature-based empirical closure models. We proposed a hierarchical approach for calibrating the parameters in the closure models to make them accurate for the packed column. Specifically, a parameter for momentum transfer in the closure was first calibrated based on data from a single experiment. With this calibrated parameter, a parameter in the closure for mass transfer was next calibrated under a single operating condition. Last, the closure of the wetting area was calibrated for each gas velocity at three different liquid flow rates. For each calibration, cross validations were pursued using the experimental data under operating conditions different from those used for calibrations. This hierarchical approach can be generally applied to develop validated device-scale CFD models for different absorption columns.

  18. Measurement properties of the QuickDASH (disabilities of the arm, shoulder and hand) outcome measure and cross-cultural adaptations of the QuickDASH: a systematic review.

    PubMed

    Kennedy, Carol A; Beaton, Dorcas E; Smith, Peter; Van Eerd, Dwayne; Tang, Kenneth; Inrig, Taucha; Hogg-Johnson, Sheilah; Linton, Denise; Couban, Rachel

    2013-11-01

    To identify and synthesize evidence for the measurement properties of the QuickDASH, a shortened version of the 30-item DASH (Disabilities of the Arm, Shoulder and Hand) instrument. This systematic review used a best evidence synthesis approach to critically appraise the measurement properties [using COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN)] of the QuickDASH and cross-cultural adaptations. A standard search strategy was conducted between 2005 (year of first publication of the QuickDASH) and March 2011 in MEDLINE, EMBASE and CINAHL. The search identified 14 studies to include in the best evidence synthesis of the QuickDASH. A further 11 studies were identified on eight cross-cultural adaptation versions. Most of the QuickDASH's measurement properties have been evaluated, many of them in multiple studies. The best evidence synthesis of the QuickDASH English version suggests that this tool is performing well, with strong positive evidence for reliability and validity (hypothesis testing) and moderate positive evidence for structural validity testing. Strong negative evidence was found for responsiveness due to lower correlations with global estimates of change. Information about the measurement properties of the cross-cultural adaptation versions is still lacking, or the available information is of poor overall methodological quality.

  19. Combined High Spectral Resolution Lidar and Millimeter Wavelength Radar Measurement of Ice Crystal Precipitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eloranta, Edwin

    The goal of this research has been to improve measurements of snowfall using a combination of millimeter-wavelength radar and High Spectral Resolution Lidar (HSRL) observations. Snowflakes are large compared to the 532 nm HSRL wavelength and small compared to the 3.2 and 8.6 mm wavelength radars used in this study. This places the particles in the optical scattering regime of the HSRL, where the extinction cross-section is proportional to the projected area of the particles, and in the Rayleigh regime for the radar, where the backscatter cross-section is proportional to the mass-squared of the particles. Forming a ratio of the radar-measured cross-section to the HSRL-measured cross-section eliminates any dependence on the number of scattering particles, yielding a quantity proportional to the average mass-squared of the snowflakes over the average area of the flakes. Using simultaneous radar measurements of particle fall velocities, which depend on particle mass and cross-sectional area, it is possible to derive the average mass of the snowflakes and, with the radar-measured fall velocities, compute the snowfall rate. Since this retrieval requires the optical extinction cross-section, we began by considering errors in this quantity. The HSRL is particularly good at measuring the backscatter cross-section. In previous studies of snowfall in the high Arctic we were able to estimate the extinction cross-section directly as a fixed ratio to the backscatter cross-section. Measurements acquired in the STORMVEX experiment in Colorado showed that this approach was not valid in mid-latitude snowfalls and that direct measurement of the extinction cross-section is required. Attempts to measure the extinction directly uncovered shortcomings in the thermal regulation and mechanical stability of the newly deployed DOE HSRL systems. These problems were largely mitigated by modifications installed in both of the DOE systems. We also investigated other sources of error in the HSRL direct measurement of extinction (see Appendix II of this report) and developed improved algorithms to extract extinction from the HSRL data. These have been installed in the standard HSRL data processing software and are now available to all users of HSRL data. Validation of snowfall measurements has proven difficult due to the unreliability of conventional snowfall measurements coupled with the complexity of the vast variety of snowflake geometries. It was difficult to tell how well the algorithm's approach to accommodating differences in snowflakes was working without good measurements for comparison. As a result, we decided to apply this approach to the somewhat simpler, but scientifically important, problem of drizzle measurement. Here the particle shape is known and the conventional measurements are more reliable. These algorithms were successfully applied to drizzle data acquired during the ARM MAGIC study of marine stratus clouds between California and Hawaii (see Appendix I). This technique is likely to become a powerful tool for studying the lifetime of the climatically important marine stratus clouds.

  20. Developing the Polish Educational Needs Assessment Tool (Pol-ENAT) in rheumatoid arthritis and systemic sclerosis: a cross-cultural validation study using Rasch analysis.

    PubMed

    Sierakowska, Matylda; Sierakowski, Stanisław; Sierakowska, Justyna; Horton, Mike; Ndosi, Mwidimi

    2015-03-01

    To undertake cross-cultural adaptation and validation of the educational needs assessment tool (ENAT) for use with people with rheumatoid arthritis (RA) and systemic sclerosis (SSc) in Poland. The study involved two main phases: (1) cross-cultural adaptation of the ENAT from English into Polish and (2) cross-cultural validation of the Polish Educational Needs Assessment Tool (Pol-ENAT). The first phase followed an established process of cross-cultural adaptation of self-report measures. The second phase involved completion of the Pol-ENAT by patients and subjecting the data to Rasch analysis to assess the construct validity, unidimensionality, internal consistency and cross-cultural invariance. An adequate conceptual equivalence was achieved following the adaptation process. The dataset for validation comprised a total of 278 patients, 237 (85.3%) of whom were female. In each disease group (145 RA and 133 SSc), the 7 domains of the Pol-ENAT were found to fit the Rasch model, χ²(df) = 16.953 (14), p = 0.259 and 8.132 (14), p = 0.882 for RA and SSc, respectively. Internal consistency of the Pol-ENAT was high (patient separation index = 0.85 and 0.89 for SSc and RA, respectively), and unidimensionality was confirmed. Cross-cultural differential item functioning (DIF) was detected in some subscales, and DIF-adjusted conversion tables were calibrated to enable cross-cultural comparison of data between Poland and the UK. Using a standard process in cross-cultural adaptation, conceptual equivalence was achieved between the original (UK) ENAT and the adapted Pol-ENAT. Fit to the Rasch model confirmed that the construct validity, unidimensionality and internal consistency of the ENAT have been preserved.

  1. Resolving the double tension: Toward a new approach to measurement modeling in cross-national research

    NASA Astrophysics Data System (ADS)

    Medina, Tait Runnfeldt

    The increasing global reach of survey research provides sociologists with new opportunities to pursue theory building and refinement through comparative analysis. However, comparison across a broad array of diverse contexts introduces methodological complexities related to the development of constructs (i.e., measurement modeling) that if not adequately recognized and properly addressed undermine the quality of research findings and cast doubt on the validity of substantive conclusions. The motivation for this dissertation arises from a concern that the availability of cross-national survey data has outpaced sociologists' ability to appropriately analyze and draw meaningful conclusions from such data. I examine the implicit assumptions and detail the limitations of three commonly used measurement models in cross-national analysis---summative scale, pooled factor model, and multiple-group factor model with measurement invariance. Using the orienting lens of the double tension I argue that a new approach to measurement modeling that incorporates important cross-national differences into the measurement process is needed. Two such measurement models---multiple-group factor model with partial measurement invariance (Byrne, Shavelson and Muthen 1989) and the alignment method (Asparouhov and Muthen 2014; Muthen and Asparouhov 2014)---are discussed in detail and illustrated using a sociologically relevant substantive example. I demonstrate that the former approach is vulnerable to an identification problem that arbitrarily impacts substantive conclusions. I conclude that the alignment method is built on model assumptions that are consistent with theoretical understandings of cross-national comparability and provides an approach to measurement modeling and construct development that is uniquely suited for cross-national research. The dissertation makes three major contributions: First, it provides theoretical justification for a new cross-national measurement model and explicates a link between theoretical conceptions of cross-national comparability and a statistical method. Second, it provides a clear and detailed discussion of model identification in multiple-group confirmatory factor analysis that is missing from the literature. This discussion sets the stage for the introduction of the identification problem within multiple-group confirmatory factor analysis with partial measurement invariance and the alternative approach to model identification employed by the alignment method. Third, it offers the first pedagogical presentation of the alignment method using a sociologically relevant example.

  2. Rapid prediction of ochratoxin A-producing strains of Penicillium on dry-cured meat by MOS-based electronic nose.

    PubMed

    Lippolis, Vincenzo; Ferrara, Massimo; Cervellieri, Salvatore; Damascelli, Anna; Epifani, Filomena; Pascale, Michelangelo; Perrone, Giancarlo

    2016-02-02

    The availability of rapid diagnostic methods for monitoring ochratoxigenic species during the seasoning processes for dry-cured meats is crucial and constitutes a key stage in preventing the risk of ochratoxin A (OTA) contamination. A rapid, easy-to-perform and non-invasive method using an electronic nose (e-nose) based on metal oxide semiconductors (MOS) was developed to discriminate dry-cured meat samples into two classes based on the fungal contamination: class P (samples contaminated by OTA-producing Penicillium strains) and class NP (samples contaminated by OTA non-producing Penicillium strains). Two OTA-producing strains of Penicillium nordicum and two OTA non-producing strains, of Penicillium nalgiovense and Penicillium salamii, were tested. The feasibility of this approach was initially evaluated by e-nose analysis of 480 samples of both yeast extract sucrose (YES) and meat-based agar media inoculated with the tested Penicillium strains and incubated for up to 14 days. The high recognition percentages (higher than 82%) obtained by Discriminant Function Analysis (DFA), both in calibration and in cross-validation (leave-more-out approach), for YES and meat-based samples demonstrated the validity of the approach. The e-nose method was subsequently developed and validated for the analysis of dry-cured meat samples. A total of 240 e-nose analyses were carried out using inoculated sausages, seasoned by a laboratory-scale process and sampled at 5, 7, 10 and 14 days. DFA provided calibration models that permitted discrimination of dry-cured meat samples after only 5 days of seasoning, with mean recognition percentages in calibration and cross-validation of 98 and 88%, respectively. A further validation of the developed e-nose method was performed using 60 dry-cured meat samples produced by an industrial-scale seasoning process, showing a total recognition percentage of 73%. The pattern of volatile compounds of dry-cured meat samples was identified and characterized by a purpose-developed HS-SPME/GC-MS method. Seven volatile compounds (2-methyl-1-butanol, octane, 1R-α-pinene, d-limonene, undecane, tetradecanal, 9-(Z)-octadecenoic acid methyl ester) allowed discrimination between dry-cured meat samples of classes P and NP. These results demonstrate that a MOS-based electronic nose can be a useful tool for rapid screening to prevent OTA contamination in the cured meat supply chain. Copyright © 2015 Elsevier B.V. All rights reserved.
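
    The classification step above is Discriminant Function Analysis validated by repeated random hold-outs ("leave-more-out"). The sketch below illustrates that combination with scikit-learn; the sensor features and class labels are synthetic stand-ins, not the e-nose data.

    ```python
    # Hedged sketch (synthetic "sensor" data): linear discriminant classification
    # with repeated random subsampling cross-validation.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import ShuffleSplit, cross_val_score

    rng = np.random.default_rng(42)
    n_per_class = 120
    # Synthetic MOS sensor responses: class P (OTA producers) vs class NP.
    X_p = rng.normal(loc=1.0, scale=0.8, size=(n_per_class, 10))
    X_np = rng.normal(loc=0.0, scale=0.8, size=(n_per_class, 10))
    X = np.vstack([X_p, X_np])
    y = np.array(["P"] * n_per_class + ["NP"] * n_per_class)

    lda = LinearDiscriminantAnalysis()
    # "Leave-more-out": repeatedly hold out a random 25% of samples for testing.
    cv = ShuffleSplit(n_splits=50, test_size=0.25, random_state=0)
    scores = cross_val_score(lda, X, y, cv=cv)
    print(f"mean recognition rate in cross-validation: {scores.mean():.2%}")
    ```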

  3. Methods to compute reliabilities for genomic predictions of feed intake

    USDA-ARS?s Scientific Manuscript database

    For new traits without historical reference data, cross-validation is often the preferred method to validate reliability (REL). Time truncation is less useful because few animals gain substantial REL after the truncation point. Accurate cross-validation requires separating genomic gain from pedigree...

  4. Automated detection of exudative age-related macular degeneration in spectral domain optical coherence tomography using deep learning.

    PubMed

    Treder, Maximilian; Lauermann, Jost Lennart; Eter, Nicole

    2018-02-01

    Our purpose was to use deep learning for the automated detection of age-related macular degeneration (AMD) in spectral domain optical coherence tomography (SD-OCT). A total of 1112 cross-section SD-OCT images of patients with exudative AMD and a healthy control group were used for this study. In the first step, an open-source multi-layer deep convolutional neural network (DCNN), which was pretrained with 1.2 million images from ImageNet, was trained and validated with 1012 cross-section SD-OCT scans (AMD: 701; healthy: 311). During this procedure, training accuracy, validation accuracy and cross-entropy were computed. The open-source deep learning framework TensorFlow™ (Google Inc., Mountain View, CA, USA) was used to accelerate the deep learning process. In the last step, the resulting DCNN classifier was tested on 100 previously unseen cross-section SD-OCT images (AMD: 50; healthy: 50). For each image an AMD score was computed, with a score of 0.98 or higher taken to indicate AMD. After 500 training steps, the training and validation accuracies were 100% and the cross-entropy was 0.005. The average AMD scores were 0.997 ± 0.003 in the AMD testing group and 0.9203 ± 0.085 in the healthy comparison group. The difference between the two groups was highly significant (p < 0.001). With a deep learning-based approach using TensorFlow™, it is possible to detect AMD in SD-OCT with high sensitivity and specificity. With more image data, this classifier could be extended to other macular diseases or to finer distinctions within AMD, suggesting an application for this model as a support in clinical decisions. Another possible future application would involve the individual prediction of the progress and success of therapy for different diseases by automatically detecting hidden image information.
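
    The workflow above is transfer learning: an ImageNet-pretrained network is re-trained for a binary AMD-versus-healthy decision. The sketch below shows the general pattern in TensorFlow/Keras; the backbone choice (MobileNetV2), the hyperparameters, and the image directories are assumptions for illustration, not the study's exact setup.

    ```python
    # Hedged transfer-learning sketch (backbone, hyperparameters, and data paths
    # are assumptions, not the study's configuration).
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(weights="imagenet",
                                             include_top=False,
                                             input_shape=(224, 224, 3))
    base.trainable = False   # first train only the new classification head

    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # scale to [-1, 1]
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.2)(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)      # P(AMD)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])

    # Hypothetical directory layout: oct_images/{train,val}/{amd,healthy}/*.png
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "oct_images/train", image_size=(224, 224), batch_size=32, label_mode="binary")
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "oct_images/val", image_size=(224, 224), batch_size=32, label_mode="binary")

    model.fit(train_ds, validation_data=val_ds, epochs=5)
    # Scores above a chosen threshold (0.98 in the study) would flag AMD.
    ```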

  5. Issues in cross-cultural validity: example from the adaptation, reliability, and validity testing of a Turkish version of the Stanford Health Assessment Questionnaire.

    PubMed

    Küçükdeveci, Ayse A; Sahin, Hülya; Ataman, Sebnem; Griffiths, Bridget; Tennant, Alan

    2004-02-15

    Guidelines have been established for cross-cultural adaptation of outcome measures. However, invariance across cultures must also be demonstrated through analysis of Differential Item Functioning (DIF). This is tested in the context of a Turkish adaptation of the Health Assessment Questionnaire (HAQ). Internal construct validity of the adapted HAQ is assessed by Rasch analysis; reliability, by internal consistency and the intraclass correlation coefficient; external construct validity, by association with impairments and American College of Rheumatology functional stages. Cross-cultural validity is tested through DIF by comparison with data from the UK version of the HAQ. The adapted version of the HAQ demonstrated good internal construct validity through fit of the data to the Rasch model (mean item fit 0.205; SD 0.998). Reliability was excellent (alpha = 0.97) and external construct validity was confirmed by expected associations. DIF for culture was found in only 1 item. Cross-cultural validity was found to be sufficient for use in international studies between the UK and Turkey. Future adaptation of instruments should include analysis of DIF at the field testing stage in the adaptation process.

  6. Cross-Cultural Competence in the Department of Defense: An Annotated Bibliography

    DTIC Science & Technology

    2014-04-01

    computer toward the best possible strategy. The article outlines, in detail, how the game is played in theory as well as how it was played in this...validation of the CQS: The cultural intelligence scale. In S. Ang & L. Van Dyne (Eds.), Handbook of cultural intelligence: Theory, measurement, and...weaknesses of various approaches, general learning theory, and the utility of employing civilian-style education to prepare Soldiers to interact in

  7. Defilade, Stationary Target and Moving Target Embankment, Low Water Crossing, and Course Road Designs for Soil Loss Prevention

    DTIC Science & Technology

    2006-11-01

    Avenue, Urbana, IL 61801-4797 Final report Approved for public release; distribution is unlimited. Prepared for U.S. Army Corps of Engineers...allow validation of each structure's effectiveness. Approach A research team consisting of members from the University of Illinois, Urbana ...and preferential drainage channel. Furthermore, dry weather conditions on unimproved roads generate large volumes of airborne

  8. The Cross Validation of the Attitudes toward Mainstreaming Scale (ATMS).

    ERIC Educational Resources Information Center

    Berryman, Joan D.; Neal, W. R. Jr.

    1980-01-01

    Reliability and factorial validity of the Attitudes Toward Mainstreaming Scale was supported in a cross-validation study with teachers. Three factors emerged: learning capability, general mainstreaming, and traditional limiting disabilities. Factor intercorrelations varied from .42 to .55; correlations between total scores and individual factors…

  9. A land use regression model for ambient ultrafine particles in Montreal, Canada: A comparison of linear regression and a machine learning approach.

    PubMed

    Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne

    2016-04-01

    Existing evidence suggests that ambient ultrafine particles (UFPs) (<0.1µm) may contribute to acute cardiorespiratory morbidity. However, few studies have examined the long-term health effects of these pollutants owing in part to a need for exposure surfaces that can be applied in large population-based studies. To address this need, we developed a land use regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development including standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R² = 0.58 vs. 0.55) or a cross-validation procedure (R² = 0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
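
    To make the comparison above concrete, the sketch below contrasts ordinary multivariable linear regression with kernel ridge regression (used here as a stand-in for KRLS) under cross-validated R²; the predictors and UFP response are simulated, not the Montreal monitoring data.

    ```python
    # Hedged sketch (simulated data): linear regression vs an RBF kernel ridge
    # model, compared by cross-validated R^2.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 414                                    # number of monitored road segments
    X = rng.normal(size=(n, 7))                # e.g., population density, temperature, ...
    y = 2.0 * X[:, 0] + np.sin(2.0 * X[:, 1]) + 0.5 * X[:, 2] ** 2 + rng.normal(0, 0.5, n)

    cv = KFold(n_splits=10, shuffle=True, random_state=0)
    linear = make_pipeline(StandardScaler(), LinearRegression())
    krr = make_pipeline(StandardScaler(), KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1))

    r2_linear = cross_val_score(linear, X, y, cv=cv, scoring="r2").mean()
    r2_krr = cross_val_score(krr, X, y, cv=cv, scoring="r2").mean()
    print(f"linear CV R^2: {r2_linear:.2f}, kernel ridge CV R^2: {r2_krr:.2f}")
    ```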

  10. Contrasting analytical and data-driven frameworks for radiogenomic modeling of normal tissue toxicities in prostate cancer.

    PubMed

    Coates, James; Jeyaseelan, Asha K; Ybarra, Norma; David, Marc; Faria, Sergio; Souhami, Luis; Cury, Fabio; Duclos, Marie; El Naqa, Issam

    2015-04-01

    We explore analytical and data-driven approaches to investigate the integration of genetic variations (single nucleotide polymorphisms [SNPs] and copy number variations [CNVs]) with dosimetric and clinical variables in modeling radiation-induced rectal bleeding (RB) and erectile dysfunction (ED) in prostate cancer patients. Sixty-two patients who underwent curative hypofractionated radiotherapy (66 Gy in 22 fractions) between 2002 and 2010 were retrospectively genotyped for CNV and SNP rs5489 in the xrcc1 DNA repair gene. Fifty-four patients had full dosimetric profiles. Two parallel modeling approaches were compared to assess the risk of severe RB (Grade⩾3) and ED (Grade⩾1): maximum-likelihood-estimated generalized Lyman-Kutcher-Burman (LKB) modeling and logistic regression. Statistical resampling based on cross-validation was used to evaluate model predictive power and generalizability to unseen data. Integration of the biological variables xrcc1 CNV and SNP improved the fit of the RB and ED analytical and data-driven models. Cross-validation of the generalized LKB models yielded increases in classification performance of 27.4% for RB and 14.6% for ED when xrcc1 CNV and SNP were included, respectively. Biological variables added to logistic regression modeling improved classification performance over standard dosimetric models by 33.5% for RB and 21.2% for ED models. As a proof of concept, we demonstrated that the combination of genetic and dosimetric variables can provide significant improvement in NTCP prediction using analytical and data-driven approaches. The improvement in prediction performance was more pronounced in the data-driven approaches. Moreover, we have shown that CNVs, in addition to SNPs, may be useful structural genetic variants in predicting radiation toxicities. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
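
    As a toy illustration of the data-driven arm described above (simulated covariates and outcomes, not the study cohort), the sketch below fits a logistic NTCP-style model with and without the genetic covariates and compares cross-validated AUC.

    ```python
    # Hedged sketch (simulated cohort): logistic NTCP-style modeling with
    # dosimetric-only vs dosimetric-plus-genetic covariates, cross-validated AUC.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    n = 62
    mean_dose = rng.normal(50, 8, n)            # dosimetric variable (Gy), synthetic
    xrcc1_snp = rng.integers(0, 2, n)           # SNP carrier status (0/1), synthetic
    xrcc1_cnv = rng.integers(0, 2, n)           # CNV indicator (0/1), synthetic
    logit = -10 + 0.15 * mean_dose + 1.2 * xrcc1_snp + 0.8 * xrcc1_cnv
    toxicity = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated toxicity events

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    X_dose = mean_dose.reshape(-1, 1)
    X_full = np.column_stack([mean_dose, xrcc1_snp, xrcc1_cnv])

    auc_dose = cross_val_score(LogisticRegression(max_iter=1000), X_dose, toxicity,
                               cv=cv, scoring="roc_auc").mean()
    auc_full = cross_val_score(LogisticRegression(max_iter=1000), X_full, toxicity,
                               cv=cv, scoring="roc_auc").mean()
    print(f"dosimetry-only AUC: {auc_dose:.2f}, dosimetry+genetics AUC: {auc_full:.2f}")
    ```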

  11. Validation of geometric measurements of the left atrium and pulmonary veins for analysis of reverse structural remodeling following ablation therapy

    NASA Astrophysics Data System (ADS)

    Rettmann, M. E.; Holmes, D. R., III; Gunawan, M. S.; Ge, X.; Karwoski, R. A.; Breen, J. F.; Packer, D. L.; Robb, R. A.

    2012-03-01

    Geometric analysis of the left atrium and pulmonary veins is important for studying reverse structural remodeling following cardiac ablation therapy. It has been shown that the left atrium decreases in volume and the pulmonary vein ostia decrease in diameter following ablation therapy. Most analysis techniques, however, require laborious manual tracing of image cross-sections. Pulmonary vein diameters are typically measured at the junction between the left atrium and pulmonary veins, called the pulmonary vein ostia, with manually drawn lines on volume renderings or on image cross-sections. In this work, we describe a technique for making semi-automatic measurements of the left atrium and pulmonary vein ostial diameters from high resolution CT scans and multi-phase datasets. The left atrium and pulmonary veins are segmented from a CT volume using a 3D volume approach and cut planes are interactively positioned to separate the pulmonary veins from the body of the left atrium. The cut plane is also used to compute the pulmonary vein ostial diameter. Validation experiments are presented which demonstrate the ability to repeatedly measure left atrial volume and pulmonary vein diameters from high resolution CT scans, as well as the feasibility of this approach for analyzing dynamic, multi-phase datasets. In the high resolution CT scans the left atrial volume measurements show high repeatability with approximately 4% intra-rater repeatability and 8% inter-rater repeatability. Intra- and inter-rater repeatability for pulmonary vein diameter measurements range from approximately 2 to 4 mm. For the multi-phase CT datasets, differences in left atrial volumes between a standard slice-by-slice approach and the proposed 3D volume approach are small, with percent differences on the order of 3% to 6%.

  12. Comparing health system performance assessment and management approaches in the Netherlands and Ontario, Canada

    PubMed Central

    Tawfik-Shukor, Ali R; Klazinga, Niek S; Arah, Onyebuchi A

    2007-01-01

    Background Given the proliferation and the growing complexity of performance measurement initiatives in many health systems, the Netherlands and Ontario, Canada expressed interest in cross-national comparisons in an effort to promote knowledge transfer and best practice. To support this cross-national learning, a study was undertaken to compare health system performance approaches in The Netherlands with Ontario, Canada. Methods We explored the performance assessment framework and system of each constituency, the embeddedness of performance data in management and policy processes, and the interrelationships between the frameworks. Methods used included analysing governmental strategic planning and policy documents, literature and internet searches, comparative descriptive tables, and schematics. Data collection and analysis took place in Ontario and The Netherlands. A workshop to validate and discuss the findings was conducted in Toronto, adding important insights to the study. Results Both Ontario and The Netherlands conceive health system performance within supportive frameworks. However, they differ in their assessment approaches. Ontario's Scorecard links performance measurement with strategy, aimed at health system integration. The Dutch Health Care Performance Report (Zorgbalans) does not explicitly link performance with strategy, and focuses on the technical quality of healthcare by measuring dimensions of quality, access, and cost against healthcare needs. A backbone 'five diamond' framework maps both frameworks and articulates the interrelations and overlap between their goals, themes, dimensions and indicators. The workshop yielded more contextual insights and further validated the comparative values of each constituency's performance assessment system. Conclusion To compare the health system performance approaches between The Netherlands and Ontario, Canada, several important conceptual and contextual issues must be addressed, before even attempting any future content comparisons and benchmarking. Such issues would lend relevant interpretational credibility to international comparative assessments of the two health systems. PMID:17319947

  13. Deep facial analysis: A new phase I epilepsy evaluation using computer vision.

    PubMed

    Ahmedt-Aristizabal, David; Fookes, Clinton; Nguyen, Kien; Denman, Simon; Sridharan, Sridha; Dionisio, Sasha

    2018-05-01

    Semiology observation and characterization play a major role in the presurgical evaluation of epilepsy. However, the interpretation of patient movements has subjective and intrinsic challenges. In this paper, we develop approaches to automatically extract and classify semiological patterns from facial expressions. We address limitations of existing computer-based analytical approaches to epilepsy monitoring, where facial movements have largely been ignored. This is an area that has seen limited advances in the literature. Inspired by recent advances in deep learning, we propose two deep learning models, landmark-based and region-based, to quantitatively identify changes in facial semiology in patients with mesial temporal lobe epilepsy (MTLE) from spontaneous expressions during phase I monitoring. A dataset has been collected from the Mater Advanced Epilepsy Unit (Brisbane, Australia) and is used to evaluate our proposed approach. Our experiments show that a landmark-based approach achieves promising results in analyzing facial semiology, where movements can be effectively marked and tracked when the face is viewed frontally. However, the region-based counterpart with spatiotemporal features achieves more accurate results when confronted with extreme head positions. A multifold cross-validation of the region-based approach exhibited an average test accuracy of 95.19% and an average area under the ROC curve of 0.98. Conversely, a leave-one-subject-out cross-validation scheme for the same approach revealed a reduction in accuracy for the model, as it is affected by data limitations, and achieved an average test accuracy of 50.85%. Overall, the proposed deep learning models have shown promise in quantifying ictal facial movements in patients with MTLE. In turn, this may serve to enhance the automated presurgical epilepsy evaluation by allowing for standardization, mitigating bias, and assessing key features. The computer-aided diagnosis may help to support clinical decision-making and prevent erroneous localization and surgery. Copyright © 2018 Elsevier Inc. All rights reserved.
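
    The accuracy gap above comes down to the validation scheme: ordinary k-fold splits can place clips from the same patient in both the training and test sets, while leave-one-subject-out keeps each patient's clips together. The sketch below reproduces that contrast on synthetic, subject-correlated features; it is not the authors' model or data.

    ```python
    # Hedged sketch (synthetic features): k-fold vs leave-one-subject-out accuracy
    # when samples from the same subject are strongly correlated.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(0)
    n_subjects, clips_per_subject, n_features = 12, 20, 32
    subjects = np.repeat(np.arange(n_subjects), clips_per_subject)
    subject_label = rng.integers(0, 2, n_subjects)        # per-subject class (toy)
    y = subject_label[subjects]
    # Subject-specific offsets make clips from one subject look alike, which is
    # exactly what inflates k-fold accuracy relative to leave-one-subject-out.
    subject_offset = rng.normal(0, 1.5, size=(n_subjects, n_features))[subjects]
    X = subject_offset + 0.3 * y[:, None] + rng.normal(0, 1.0, size=(subjects.size, n_features))

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    kfold_acc = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0)).mean()
    loso_acc = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=subjects).mean()
    print(f"5-fold accuracy: {kfold_acc:.2f}, leave-one-subject-out accuracy: {loso_acc:.2f}")
    ```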

  14. Unsupervised Metric Fusion Over Multiview Data by Graph Random Walk-Based Cross-View Diffusion.

    PubMed

    Wang, Yang; Zhang, Wenjie; Wu, Lin; Lin, Xuemin; Zhao, Xiang

    2017-01-01

    Learning an ideal metric is crucial to many tasks in computer vision. Diverse feature representations can address this problem from different aspects, as visual data objects described by multiple features can be decomposed into multiple views, which often provide complementary information. In this paper, we propose a cross-view fusion algorithm that leads to a similarity metric for multiview data by systematically fusing multiple similarity measures. Unlike existing paradigms, we focus on learning a distance measure by exploiting a graph structure of data samples, where an input similarity matrix can be improved through propagation of a graph random walk. In particular, we construct multiple graphs, with each one corresponding to an individual view, and a cross-view fusion approach based on graph random walk is presented to derive an optimal distance measure by fusing multiple metrics. Our method is scalable to a large amount of data by enforcing sparsity through an anchor graph representation. To adaptively control the effects of different views, we dynamically learn view-specific coefficients, which are leveraged into the graph random walk to balance the multiple views. However, such a strategy may lead to an over-smooth similarity metric, where affinities between dissimilar samples may be enlarged by excessively conducting cross-view fusion. Thus, we devise a heuristic approach to controlling the iteration number in the fusion process in order to avoid over-smoothness. Extensive experiments conducted on real-world data sets validate the effectiveness and efficiency of our approach.
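
    As a rough, assumption-laden illustration of the idea above (not the paper's exact algorithm), the sketch below fuses two view-specific similarity matrices by a random-walk-style cross-diffusion, in which each view's transition matrix propagates the other view's current similarity estimate, stopped after a few iterations to limit over-smoothing.

    ```python
    # Toy sketch (not the published method): cross-diffusion of two view-specific
    # similarity matrices via row-stochastic random-walk operators.
    import numpy as np

    def row_normalize(S):
        """Turn a nonnegative similarity matrix into a row-stochastic transition matrix."""
        return S / S.sum(axis=1, keepdims=True)

    def cross_diffuse(S1, S2, n_iter=10, eta=0.5):
        """Each view's walk operator propagates the other view's similarity estimate;
        a small number of iterations limits over-smoothing."""
        P1, P2 = row_normalize(S1), row_normalize(S2)
        F1, F2 = P1.copy(), P2.copy()
        identity = np.eye(S1.shape[0])
        for _ in range(n_iter):
            F1, F2 = (eta * P1 @ F2 @ P1.T + (1 - eta) * identity,
                      eta * P2 @ F1 @ P2.T + (1 - eta) * identity)
        return (F1 + F2) / 2.0          # fused similarity metric

    # Usage: two noisy views of the same three-cluster structure (15 samples).
    rng = np.random.default_rng(0)
    block = np.kron(np.eye(3), np.ones((5, 5)))        # ideal block-diagonal similarity
    noise1, noise2 = rng.random((15, 15)), rng.random((15, 15))
    S1 = block + 0.3 * (noise1 + noise1.T) / 2.0
    S2 = block + 0.3 * (noise2 + noise2.T) / 2.0
    fused = cross_diffuse(S1, S2)
    print(fused.round(2))
    ```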

  15. Testing for carryover effects after cessation of treatments: a design approach.

    PubMed

    Sturdevant, S Gwynn; Lumley, Thomas

    2016-08-02

    Recently, trials addressing noisy measurements with diagnosis occurring by exceeding thresholds (such as diabetes and hypertension) have been published which attempt to measure carryover, the impact that treatment has on an outcome after cessation. The design of these trials has been criticised, and simulations have been conducted which suggest that the parallel designs used are not adequate to test this hypothesis; two proposed solutions are a different parallel design or a cross-over design, either of which could allow carryover to be diagnosed. We undertook a systematic simulation study to determine the ability of a cross-over or a parallel-group trial design to detect carryover effects on incident hypertension in a population with prehypertension. We simulated blood pressure and focused on varying criteria to diagnose systolic hypertension. Using the difference in cumulative incidence of hypertension to analyse parallel-group or cross-over trials resulted in none of the designs having an acceptable Type I error rate: under the null hypothesis of no carryover, the error rate is well above the nominal 5% level. When a treatment is effective during the intervention period, reliable testing for a carryover effect is difficult. Neither parallel-group nor cross-over designs using the difference in cumulative incidence appear to be a feasible approach. Future trials should ensure their design and analysis is validated by simulation.

  16. CrossLink: a novel method for cross-condition classification of cancer subtypes.

    PubMed

    Ma, Chifeng; Sastry, Konduru S; Flore, Mario; Gehani, Salah; Al-Bozom, Issam; Feng, Yusheng; Serpedin, Erchin; Chouchane, Lotfi; Chen, Yidong; Huang, Yufei

    2016-08-22

    We considered the prediction of cancer classes (e.g. subtypes) using patient gene expression profiles that contain both systematic and condition-specific biases when compared with the training reference dataset. The conventional normalization-based approaches cannot guarantee that the gene signatures in the reference and prediction datasets always have the same distribution for all different conditions as the class-specific gene signatures change with the condition. Therefore, the trained classifier would work well under one condition but not under another. To address the problem of current normalization approaches, we propose a novel algorithm called CrossLink (CL). CL recognizes that there is no universal, condition-independent normalization mapping of signatures. In contrast, it exploits the fact that the signature is unique to its associated class under any condition and thus employs an unsupervised clustering algorithm to discover this unique signature. We assessed the performance of CL for cross-condition predictions of PAM50 subtypes of breast cancer by using a simulated dataset modeled after TCGA BRCA tumor samples with a cross-validation scheme, and datasets with known and unknown PAM50 classification. CL achieved prediction accuracy >73 %, highest among other methods we evaluated. We also applied the algorithm to a set of breast cancer tumors derived from Arabic population to assign a PAM50 classification to each tumor based on their gene expression profiles. A novel algorithm CrossLink for cross-condition prediction of cancer classes was proposed. In all test datasets, CL showed robust and consistent improvement in prediction performance over other state-of-the-art normalization and classification algorithms.

  17. Management Approaches to Stomal and Peristomal Complications: A Narrative Descriptive Study.

    PubMed

    Beitz, Janice M; Colwell, Janice C

    2016-01-01

    The purpose of this study was to identify optimal interventions for selected complications based on WOC nurse experts' judgment/expertise. A cross-sectional quantitative descriptive design with qualitative, narrative-type components was used for this study. Following validation rating of appropriateness of interventions and quantitative rankings of first-, second-, and third-line approaches, participants provided substantive handwritten narrative comments about listed interventions. Comments were organized and prioritized using frequency count. Narrative comments reflected the quantitative rankings of efficacy of approaches. Clinicians offered further specific suggestions regarding product use and progression of care for selected complications. Narrative analysis using descriptive quantitative frequency count supported the rankings of most preferred treatments of selected stomal and peristomal complications. Findings add to the previous research on prioritized approaches and evidence-based practice in ostomy care.

  18. Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2009-01-01

    In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…

  19. A Hybrid Approach Using Case-Based Reasoning and Rule-Based Reasoning to Support Cancer Diagnosis: A Pilot Study.

    PubMed

    Saraiva, Renata M; Bezerra, João; Perkusich, Mirko; Almeida, Hyggo; Siebra, Clauirton

    2015-01-01

    Recently there has been an increasing interest in applying information technology to support the diagnosis of diseases such as cancer. In this paper, we present a hybrid approach using case-based reasoning (CBR) and rule-based reasoning (RBR) to support cancer diagnosis. We used symptoms, signs, and personal information from patients as inputs to our model. To form specialized diagnoses, we used rules to define the input factors' importance according to the patient's characteristics. The model's output presents the probability of the patient having a type of cancer. To carry out this research, we had the approval of the ethics committee at Napoleão Laureano Hospital, in João Pessoa, Brazil. To define our model's cases, we collected real patient data at Napoleão Laureano Hospital. To define our model's rules and weights, we researched specialized literature and interviewed health professionals. To validate our model, we used K-fold cross-validation with the data collected at Napoleão Laureano Hospital. The results showed that our approach is an effective CBR system to diagnose cancer.
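
    The validation step above is K-fold cross-validation of a case-retrieval classifier. In the hedged sketch below, a k-nearest-neighbour model stands in for case-based reasoning and fixed per-feature weights mimic the rule-derived importance of input factors; all data and weights are invented for illustration.

    ```python
    # Hedged sketch (synthetic data, illustrative weights): K-fold cross-validation
    # of a nearest-neighbour "case retrieval" classifier with rule-style weighting.
    import numpy as np
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    n = 300
    X = rng.normal(size=(n, 8))                 # symptoms, signs, personal factors (synthetic)
    y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1, n)) > 0.5   # toy diagnosis label

    # Rule-derived importance weights for the input factors (illustrative values).
    rule_weights = np.array([2.0, 1.5, 1.0, 1.0, 0.5, 0.5, 0.5, 0.5])
    X_weighted = X * rule_weights               # emphasize factors the rules deem important

    knn = KNeighborsClassifier(n_neighbors=7)   # retrieve the 7 most similar past cases
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(knn, X_weighted, y, cv=cv)
    print(f"10-fold cross-validated accuracy: {scores.mean():.2%}")
    ```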

  20. New approaches to provide feedback from nuclear and covariance data adjustment for effective improvement of evaluated nuclear data files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmiotti, Giuseppe; Salvatores, Massimo; Hursin, Mathieu

    2016-11-01

    A critical examination of the role of uncertainty assessment, target accuracies, the role of integral experiments for validation and, consequently, of data adjustment methods has been underway for several years at OECD-NEA, the objective being to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and experimentalists, in order to improve without ambiguities the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications and to meet new requirements and constraints for innovative reactor and fuel cycle system design. An approach will be described that expands as much as possible the use in the adjustment procedure of selected integral experiments that provide information on “elementary” phenomena, on separated individual physics effects related to specific isotopes or on specific energy ranges. An application to a large experimental database has been performed and the results are discussed in the perspective of new evaluation projects like the CIELO initiative.

  1. New approaches to provide feedback from nuclear and covariance data adjustment for effective improvement of evaluated nuclear data files

    NASA Astrophysics Data System (ADS)

    Palmiotti, Giuseppe; Salvatores, Massimo; Hursin, Mathieu; Kodeli, Ivo; Gabrielli, Fabrizio; Hummel, Andrew

    2017-09-01

    A critical examination of the role of uncertainty assessment, target accuracies, the role of integral experiments for validation and, consequently, of data adjustment methods has been underway for several years at OECD-NEA, the objective being to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and experimentalists, in order to improve without ambiguities the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications and to meet new requirements and constraints for innovative reactor and fuel cycle system design. An approach will be described that expands as much as possible the use in the adjustment procedure of selected integral experiments that provide information on "elementary" phenomena, on separated individual physics effects related to specific isotopes or on specific energy ranges. An application to a large experimental database has been performed and the results are discussed in the perspective of new evaluation projects like the CIELO initiative.

  2. Bayesian approach to transforming public gene expression repositories into disease diagnosis databases.

    PubMed

    Huang, Haiyan; Liu, Chun-Chi; Zhou, Xianghong Jasmine

    2010-04-13

    The rapid accumulation of gene expression data has offered unprecedented opportunities to study human diseases. The National Center for Biotechnology Information Gene Expression Omnibus is currently the largest database that systematically documents the genome-wide molecular basis of diseases. However, thus far, this resource has been far from fully utilized. This paper describes the first study to transform public gene expression repositories into an automated disease diagnosis database. Particularly, we have developed a systematic framework, including a two-stage Bayesian learning approach, to achieve the diagnosis of one or multiple diseases for a query expression profile along a hierarchical disease taxonomy. Our approach, including standardizing cross-platform gene expression data and heterogeneous disease annotations, allows analyzing both sources of information in a unified probabilistic system. A high level of overall diagnostic accuracy was shown by cross validation. It was also demonstrated that the power of our method can increase significantly with the continued growth of public gene expression repositories. Finally, we showed how our disease diagnosis system can be used to characterize complex phenotypes and to construct a disease-drug connectivity map.

  3. Prediction of Patient-Controlled Analgesic Consumption: A Multimodel Regression Tree Approach.

    PubMed

    Hu, Yuh-Jyh; Ku, Tien-Hsiung; Yang, Yu-Hung; Shen, Jia-Ying

    2018-01-01

    Several factors contribute to individual variability in postoperative pain, therefore, individuals consume postoperative analgesics at different rates. Although many statistical studies have analyzed postoperative pain and analgesic consumption, most have identified only the correlation and have not subjected the statistical model to further tests in order to evaluate its predictive accuracy. In this study involving 3052 patients, a multistrategy computational approach was developed for analgesic consumption prediction. This approach uses data on patient-controlled analgesia demand behavior over time and combines clustering, classification, and regression to mitigate the limitations of current statistical models. Cross-validation results indicated that the proposed approach significantly outperforms various existing regression methods. Moreover, a comparison between the predictions by anesthesiologists and medical specialists and those of the computational approach for an independent test data set of 60 patients further evidenced the superiority of the computational approach in predicting analgesic consumption because it produced markedly lower root mean squared errors.

  4. Assessing patient-centered care: one approach to health disparities education.

    PubMed

    Wilkerson, LuAnn; Fung, Cha-Chi; May, Win; Elliott, Donna

    2010-05-01

    Patient-centered care has been described as one approach to cultural competency education that could reduce racial and ethnic health disparities by preparing providers to deliver care that is respectful and responsive to the preferences of each patient. In order to evaluate the effectiveness of a curriculum in teaching patient-centered care (PCC) behaviors to medical students, we drew on the work of Kleinman, Eisenberg, and Good to develop a scale that could be embedded across cases in an objective structured clinical examination (OSCE). To compare the reliability, validity, and feasibility of an embedded patient-centered care scale with the use of a single culturally challenging case in measuring students' use of PCC behaviors as part of a comprehensive OSCE. A total of 322 students from two California medical schools participated in the OSCE as beginning seniors. Cronbach's alpha was used to assess the internal consistency of each approach. Construct validity was addressed by establishing convergent and divergent validity using the cultural challenge case total score and OSCE component scores. Feasibility assessment considered cost and training needs for the standardized patients (SPs). Medical students demonstrated a moderate level of patient-centered skill (mean = 63%, SD = 11%). The PCC Scale demonstrated an acceptable level of internal consistency (alpha = 0.68) over the single case scale (alpha = 0.60). Both convergent and divergent validities were established through low to moderate correlation coefficients. The insertion of PCC items across multiple cases in a comprehensive OSCE can provide a reliable estimate of students' use of PCC behaviors without incurring extra costs associated with implementing a special cross-cultural OSCE. This approach is particularly feasible when an OSCE is already part of the standard assessment of clinical skills. Reliability may be increased with an additional investment in SP training.

  5. Guidelines To Validate Control of Cross-Contamination during Washing of Fresh-Cut Leafy Vegetables.

    PubMed

    Gombas, D; Luo, Y; Brennan, J; Shergill, G; Petran, R; Walsh, R; Hau, H; Khurana, K; Zomorodi, B; Rosen, J; Varley, R; Deng, K

    2017-02-01

    The U.S. Food and Drug Administration requires food processors to implement and validate processes that will result in significantly minimizing or preventing the occurrence of hazards that are reasonably foreseeable in food production. During production of fresh-cut leafy vegetables, microbial contamination that may be present on the product can spread throughout the production batch when the product is washed, thus increasing the risk of illnesses. The use of antimicrobials in the wash water is a critical step in preventing such water-mediated cross-contamination; however, many factors can affect antimicrobial efficacy in the production of fresh-cut leafy vegetables, and the procedures for validating this key preventive control have not been articulated. Producers may consider three options for validating antimicrobial washing as a preventive control for cross-contamination. Option 1 involves the use of a surrogate for the microbial hazard and the demonstration that cross-contamination is prevented by the antimicrobial wash. Option 2 involves the use of antimicrobial sensors and the demonstration that a critical antimicrobial level is maintained during worst-case operating conditions. Option 3 validates the placement of the sensors in the processing equipment with the demonstration that a critical antimicrobial level is maintained at all locations, regardless of operating conditions. These validation options developed for fresh-cut leafy vegetables may serve as examples for validating processes that prevent cross-contamination during washing of other fresh produce commodities.

  6. Assessment and measurement of patient-centered medical home implementation: the BCBSM experience.

    PubMed

    Alexander, Jeffrey A; Paustian, Michael; Wise, Christopher G; Green, Lee A; Fetters, Michael D; Mason, Margaret; El Reda, Darline K

    2013-01-01

    Our goal was to describe an approach to patient-centered medical home (PCMH) measurement based on delineating the desired properties of the measurement relative to assumptions about the PCMH and the uses of the measure by Blue Cross Blue Shield of Michigan (BCBSM) and health services researchers. We developed and validated an approach to assess 13 functional domains of PCMHs and 128 capabilities within those domains. A measure of PCMH implementation was constructed using data from the validated self-assessment and then tested on a large sample of primary care practices in Michigan. Our results suggest that the measure adequately addresses the specific requirements and assumptions underlying the BCBSM PCMH program-ability to assess change in level of implementation; ability to compare across practices regardless of size, affiliation, or payer mix; and ability to assess implementation of the PCMH through different sequencing of capabilities and domains. Our experience illustrates that approaches to measuring PCMH should be driven by the measures' intended use(s) and users, and that a one-size-fits-all approach may not be appropriate. Rather than promoting the BCBSM PCMH measure as the gold standard, our study highlights the challenges, strengths, and limitations of developing a standardized approach to PCMH measurement.

  7. Cross-cultural validation of Cancer Communication Assessment Tool in Korea.

    PubMed

    Shin, Dong Wook; Shin, Jooyeon; Kim, So Young; Park, Boram; Yang, Hyung-Kook; Cho, Juhee; Lee, Eun Sook; Kim, Jong Heun; Park, Jong-Hyock

    2015-02-01

    Communication between cancer patients and caregivers is often suboptimal. The Cancer Communication Assessment Tool for Patients and Families (CCAT-PF) is a unique tool developed to measure congruence in patient-family caregiver communication employing a dyadic approach. We aimed to examine the cross-cultural applicability of the CCAT in the Korean healthcare setting. Linguistic validation of the CCAT-PF was performed through a standard forward-backward translation process. Psychometric validation was performed with 990 patient-caregiver dyads recruited from 10 cancer centers. Mean scores of CCAT-P and CCAT-F were similar at 44.8 for both scales. Mean CCAT-PF score was 23.7 (8.66). Concordance of each item between patients and caregivers was low (weighted kappa values <0.20 for all items and Spearman's rho <0.18 for scale scores). Scale scores did not differ significantly across a variety of cancer types and stages. The CCAT-P or CCAT-F score was weakly associated with mental health and quality of life outcomes. The CCAT-PF was correlated weakly with both patient-perceived and caregiver-perceived family avoidance of cancer care scales. The CCAT-PF Korean version showed similar psychometric properties to the original English version in the assessment of communication congruence between cancer patients and family caregivers. Copyright © 2014 John Wiley & Sons, Ltd.
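
    The concordance statistics reported above can be reproduced in outline with standard tools. The sketch below assumes hypothetical dyadic responses and scale scores rather than the CCAT-PF data, and shows one way to obtain a weighted kappa for a single item and a Spearman correlation between patient and caregiver scale scores.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical 5-point responses to one item from 990 patient-caregiver dyads
rng = np.random.default_rng(1)
patient_item = rng.integers(1, 6, size=990)
caregiver_item = rng.integers(1, 6, size=990)

# Item-level concordance: linearly weighted kappa on the ordinal responses
kappa = cohen_kappa_score(patient_item, caregiver_item, weights="linear")

# Scale-level concordance: Spearman correlation between summed scale scores
patient_scale = rng.integers(10, 50, size=990)
caregiver_scale = rng.integers(10, 50, size=990)
rho, p = spearmanr(patient_scale, caregiver_scale)

print(f"weighted kappa = {kappa:.3f}, Spearman rho = {rho:.3f} (p = {p:.3f})")
```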

  8. Patterns of Cognitive Strengths and Weaknesses: Identification Rates, Agreement, and Validity for Learning Disabilities Identification

    PubMed Central

    Miciak, Jeremy; Fletcher, Jack M.; Stuebing, Karla; Vaughn, Sharon; Tolar, Tammy D.

    2014-01-01

    Purpose Few empirical investigations have evaluated LD identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and the cross-battery assessment (XBA) method. Methods Cognitive assessment data for 139 adolescents demonstrating inadequate response to intervention were utilized to empirically classify participants as meeting or not meeting PSW LD identification criteria using the two approaches, permitting an analysis of (1) LD identification rates, (2) agreement between methods, and (3) external validity. Results LD identification rates varied between the two methods depending upon the cut point for low achievement, with low agreement for LD identification decisions. Comparisons of groups that met and did not meet LD identification criteria on external academic variables were largely null, raising questions of external validity. Conclusions This study found low agreement and little evidence of validity for LD identification decisions based on PSW methods. An alternative may be to use multiple measures of academic achievement to guide intervention. PMID:24274155

  9. Factorial validity of the Job Expectations Questionnaire in a sample of Mexican workers.

    PubMed

    Villa-George, Fabiola Itzel; Moreno-Jiménez, Bernardo; Rodríguez-Muñoz, Alfredo; Villalpando Uribe, Jessica

    2011-11-01

    The aim of this study was to examine the factorial validity of the Job Expectations Questionnaire (Cuestionario de Expectativas Laborales CEL) in a sample of Mexican workers. Following a cross-validation approach, two samples were used in the study. The first sample consisted of 380 professionals who mainly performed administrative work in the Health Services in Puebla-Mexico. The second sample comprised 400 health professionals from the Hospital de la Mujer in Puebla-Mexico. Exploratory factor analysis yielded a three-factor solution, accounting for 51.8% of the variance. The results of confirmatory factor analysis indicate that the three-factor model provided the best fit to the data (CFI = .96, GFI = .95, NNFI = .95, RMSEA = .04), maintaining the structure with 12 items. The reliability of the questionnaire and the diverse subscales showed high internal consistency. Significant correlations were found between job expectations and autonomy, vigor, dedication, and absorption, providing evidence of its construct validity. The evaluation of the psychometric qualities confirms this questionnaire as a valid and specific instrument to measure job expectations.

  10. Improving the performance of the mass transfer-based reference evapotranspiration estimation approaches through a coupled wavelet-random forest methodology

    NASA Astrophysics Data System (ADS)

    Shiri, Jalal

    2018-06-01

    Among different reference evapotranspiration (ETo) modeling approaches, mass transfer-based methods have been less studied. These approaches utilize temperature and wind speed records. On the other hand, the empirical equations proposed in this context generally produce weak simulations, except when a local calibration is used for improving their performance. This might be a crucial drawback for those equations when local data for the calibration procedure are scarce. Application of heuristic methods can therefore be considered a substitute for improving the performance accuracy of the mass transfer-based approaches. However, given that wind speed records usually have higher variation magnitudes than the other meteorological parameters, application of a wavelet transform for coupling with heuristic models would be necessary. In the present paper, a coupled wavelet-random forest (WRF) methodology was proposed for the first time to improve the performance accuracy of the mass transfer-based ETo estimation approaches using cross-validation data management scenarios at both local and cross-station scales. The obtained results revealed that the new coupled WRF model (with minimum scatter index values of 0.150 and 0.192 for local and external applications, respectively) improved the performance accuracy of the single RF models as well as the empirical equations to a great extent.
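
    A minimal sketch of the kind of wavelet-plus-random-forest coupling described above is given below, assuming hypothetical temperature and wind speed series and a synthetic ETo target; the wavelet family, decomposition level, helper function, and cross-validation setup are illustrative choices, not taken from the paper.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

def subband_signals(series: np.ndarray, wavelet: str = "db4", level: int = 2) -> np.ndarray:
    """Decompose a 1-D record and reconstruct each sub-band to full length,
    stacking the sub-bands column-wise as model inputs."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    bands = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(kept, wavelet)[: len(series)])
    return np.column_stack(bands)

# Hypothetical daily records: air temperature (C), wind speed (m/s) and a synthetic ETo target
rng = np.random.default_rng(2)
n = 730
temp = 15 + 10 * np.sin(np.linspace(0, 4 * np.pi, n)) + rng.normal(0, 2, n)
wind = np.abs(rng.normal(2.5, 1.0, n))
eto = 0.3 * temp + 0.8 * wind + rng.normal(0, 0.5, n)   # illustration only, not FAO-56 values

X = np.hstack([subband_signals(temp), subband_signals(wind)])
model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, eto,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0),
                         scoring="neg_root_mean_squared_error")
print("RMSE per fold:", np.round(-scores, 3))
```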

  11. Global relative quantification with liquid chromatography-matrix-assisted laser desorption ionization time-of-flight (LC-MALDI-TOF)--cross-validation with LTQ-Orbitrap proves reliability and reveals complementary ionization preferences.

    PubMed

    Hessling, Bernd; Büttner, Knut; Hecker, Michael; Becher, Dörte

    2013-10-01

    Quantitative LC-MALDI is an underrepresented method, especially in large-scale experiments. The additional fractionation step that is needed for most MALDI-TOF-TOF instruments, the comparatively long analysis time, and the very limited number of established software tools for the data analysis render LC-MALDI a niche application for large quantitative analyses beside the widespread LC-electrospray ionization workflows. Here, we used LC-MALDI in a relative quantification analysis of Staphylococcus aureus for the first time on a proteome-wide scale. Samples were analyzed in parallel with an LTQ-Orbitrap, which allowed cross-validation with a well-established workflow. With nearly 850 proteins identified in the cytosolic fraction and quantitative data for more than 550 proteins obtained with the MASCOT Distiller software, we were able to prove that LC-MALDI is able to process highly complex samples. The good correlation of quantities determined via this method and the LTQ-Orbitrap workflow confirmed the high reliability of our LC-MALDI approach for global quantification analysis. Because the existing literature reports differences for MALDI and electrospray ionization preferences and the respective experimental work was limited by technical or methodological constraints, we systematically compared biochemical attributes of peptides identified with either instrument. This genome-wide, comprehensive study revealed biases toward certain peptide properties for both MALDI-TOF-TOF- and LTQ-Orbitrap-based approaches. These biases are based on almost 13,000 peptides and result in a general complementarity of the two approaches that should be exploited in future experiments.

  12. Reliable Digit Span: A Systematic Review and Cross-Validation Study

    ERIC Educational Resources Information Center

    Schroeder, Ryan W.; Twumasi-Ankrah, Philip; Baade, Lyle E.; Marshall, Paul S.

    2012-01-01

    Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these…

  13. Theoretical model for plasmonic photothermal response of gold nanostructures solutions

    NASA Astrophysics Data System (ADS)

    Phan, Anh D.; Nga, Do T.; Viet, Nguyen A.

    2018-03-01

    Photothermal effects of gold core-shell nanoparticles and nanorods dispersed in water are theoretically investigated using the transient bioheat equation and the extended Mie theory. Properly calculating the absorption cross section is crucial for determining the elevation of the solution temperature. The nanostructures are assumed to be randomly and uniformly distributed in the solution. The calculated temperature increase during laser illumination shows reasonable qualitative and quantitative agreement with previous experiments across various systems. This approach can be a highly reliable tool for predicting photothermal effects in experimentally unexplored structures. We also validate our approach and discuss its limitations.

  14. An Informatics Based Approach to Reduce the Grain Size of Cast Hadfield Steel

    NASA Astrophysics Data System (ADS)

    Dey, Swati; Pathak, Shankha; Sheoran, Sumit; Kela, Damodar H.; Datta, Shubhabrata

    2016-04-01

    Materials informatics concepts using computational intelligence-based approaches are employed to identify the significant alloying additions needed to achieve grain refinement in cast Hadfield steel. Castings of Hadfield steel used for railway crossings require a fine-grained austenitic structure. Maintaining the proper grain size of this component is crucial for achieving the desired properties and service life. This work studies the important variables affecting the grain size of such steels, including compositional and processing variables. The computational findings and prior knowledge are used to design the alloy, which is subjected to a few trials to validate the findings.

  15. Hypnosis, Ericksonian hypnotherapy, and Aikido.

    PubMed

    Windle, R; Samko, M

    1992-04-01

    Several key Ericksonian concepts find cross-cultural validation and practical application in the Japanese martial art of Aikido. The Aikido psychophysiological state of centering shares several important attributes with the trance state, particularly in the relational aspects of shared trance. In Aikido methodology for dealing with others, blending is an almost exact parallel to Ericksonian utilization. The Aikido view of resistance offers an increased understanding of strategic/Ericksonian approaches. Therapist training may be enhanced by combining Aikido principles with traditional methods.

  16. Comparison of the LLNL ALE3D and AKTS Thermal Safety Computer Codes for Calculating Times to Explosion in ODTX and STEX Thermal Cookoff Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K

    2006-04-05

    Cross-comparison of the results of two computer codes for the same problem provides a mutual validation of their computational methods. This cross-validation exercise was performed for LLNL's ALE3D code and AKTS's Thermal Safety code, using the thermal ignition of HMX in two standard LLNL cookoff experiments: the One-Dimensional Time to Explosion (ODTX) test and the Scaled Thermal Explosion (STEX) test. The chemical kinetics model used in both codes was the extended Prout-Tompkins model, a relatively new addition to ALE3D. This model was applied using ALE3D's new pseudospecies feature. In addition, an advanced isoconversional kinetic approach was used in the AKTS code. The mathematical constants in the Prout-Tompkins model were calibrated using DSC data from hermetically sealed vessels and the LLNL optimization code Kinetics05. The isoconversional kinetic parameters were optimized using the AKTS Thermokinetics code. We found that the Prout-Tompkins model calculations agree fairly well between the two codes, and the isoconversional kinetic model gives very similar results to the Prout-Tompkins model. We also found that an autocatalytic approach in the beta-delta phase transition model does affect the times to explosion for some conditions, especially STEX-like simulations at ramp rates above 100 C/hr, and further exploration of that effect is warranted.
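
    For orientation, the sketch below integrates the classic Prout-Tompkins autocatalytic rate law with an Arrhenius rate constant under a linear temperature ramp. The extended Prout-Tompkins model used in the study adds further parameters not shown here, and the kinetic constants below are illustrative placeholders, not calibrated HMX values.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # J/(mol K)

def prout_tompkins_rate(t, alpha, A, Ea, n, m, T_of_t):
    """Classic autocatalytic Prout-Tompkins rate law with Arrhenius k(T):
       dalpha/dt = k(T) * (1 - alpha)**n * alpha**m."""
    T = T_of_t(t)
    k = A * np.exp(-Ea / (R * T))
    a = np.clip(alpha[0], 1e-8, 1 - 1e-12)   # keep the autocatalytic term well-defined
    return [k * (1 - a) ** n * a ** m]

# Illustrative (not HMX) parameters and a 1 C/min temperature ramp from 300 K
A, Ea, n, m = 1e12, 1.5e5, 1.0, 0.5
ramp = lambda t: 300.0 + (1.0 / 60.0) * t   # temperature in K, time in seconds

sol = solve_ivp(prout_tompkins_rate, (0, 5 * 3600), [1e-6],
                args=(A, Ea, n, m, ramp), max_step=10.0)
print(f"extent of reaction after {sol.t[-1] / 3600:.1f} h: {sol.y[0, -1]:.3f}")
```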

  17. Cross-cultural acceptability and utility of the strengths and difficulties questionnaire: views of families.

    PubMed

    Kersten, Paula; Dudley, Margaret; Nayar, Shoba; Elder, Hinemoa; Robertson, Heather; Tauroa, Robyn; McPherson, Kathryn M

    2016-10-12

    Screening children for behavioural difficulties requires the use of a tool that is culturally valid. We explored the cross-cultural acceptability and utility of the Strengths and Difficulties Questionnaire for pre-school children (aged 3-5) as perceived by families in New Zealand. A qualitative interpretive descriptive study (focus groups and interviews) was conducted, in which 65 participants from five key ethnic groups (New Zealand European, Māori, Pacific, Asian and other immigrant parents) took part. Thematic analysis using an inductive approach, in which the themes identified are strongly linked to the data, was employed. Many parents reported they were unclear about the purpose of the tool, affecting its perceived value. Participants reported not understanding the context in which they should consider the questions and had difficulty understanding some questions and response options. Māori parents generally did not support the questionnaire-based approach, preferring face-to-face interaction. Parents from Māori, Pacific Island, Asian, and new immigrant groups reported the tool lacked explicit consideration of children in their cultural context. Parents discussed the importance of timing and multiple perspectives when interpreting scores from the tool. In summary, this study posed a number of challenges to the use of the Strengths and Difficulties Questionnaire in New Zealand. Further work is required to develop a tool that is culturally appropriate with good content validity.

  18. Cross-cultural adaptation, validity, and reliability of the Parenting Styles and Dimensions Questionnaire - Short Version (PSDQ) for use in Brazil.

    PubMed

    Oliveira, Thaís D; Costa, Danielle de S; Albuquerque, Maicon R; Malloy-Diniz, Leandro F; Miranda, Débora M; de Paula, Jonas J

    2018-06-11

    The Parenting Styles and Dimensions Questionnaire (PSDQ) is used worldwide to assess three styles (authoritative, authoritarian, and permissive) and seven dimensions of parenting. In this study, we adapted the short version of the PSDQ for use in Brazil and investigated its validity and reliability. Participants were 451 mothers of children aged 3 to 18 years, though sample size varied with analyses. The translation and adaptation of the PSDQ followed a rigorous methodological approach. Then, we investigated the content, criterion, and construct validity of the adapted instrument. The scale content validity index (S-CVI) was considered adequate (0.97). There was evidence of internal validity, with the PSDQ dimensions showing strong correlations with their higher-order parenting styles. Confirmatory factor analysis endorsed the three-factor, second-order solution (i.e., three styles consisting of seven dimensions). The PSDQ showed convergent validity with the validated Brazilian version of the Parenting Styles Inventory (Inventário de Estilos Parentais - IEP), as well as external validity, as it was associated with several instruments measuring sociodemographic and behavioral/emotional-problem variables. The PSDQ is an effective and reliable psychometric instrument to assess childrearing strategies according to Baumrind's model of parenting styles.

  19. Concussion classification via deep learning using whole-brain white matter fiber strains

    PubMed Central

    Cai, Yunliang; Wu, Shaoju; Zhao, Wei; Li, Zhigang; Wu, Zheyang

    2018-01-01

    Developing an accurate and reliable injury predictor is central to the biomechanical studies of traumatic brain injury. State-of-the-art efforts continue to rely on empirical, scalar metrics based on kinematics or model-estimated tissue responses explicitly pre-defined in a specific brain region of interest. They could suffer from loss of information. A single training dataset has also been used to evaluate performance but without cross-validation. In this study, we developed a deep learning approach for concussion classification using implicit features of the entire voxel-wise white matter fiber strains. Using reconstructed American National Football League (NFL) injury cases, leave-one-out cross-validation was employed to objectively compare injury prediction performances against two baseline machine learning classifiers (support vector machine (SVM) and random forest (RF)) and four scalar metrics via univariate logistic regression (Brain Injury Criterion (BrIC), cumulative strain damage measure of the whole brain (CSDM-WB) and the corpus callosum (CSDM-CC), and peak fiber strain in the CC). Feature-based machine learning classifiers including deep learning, SVM, and RF consistently outperformed all scalar injury metrics across all performance categories (e.g., leave-one-out accuracy of 0.828–0.862 vs. 0.690–0.776, and .632+ error of 0.148–0.176 vs. 0.207–0.292). Further, deep learning achieved the best cross-validation accuracy, sensitivity, AUC, and .632+ error. These findings demonstrate the superior performances of deep learning in concussion prediction and suggest its promise for future applications in biomechanical investigations of traumatic brain injury. PMID:29795640

  20. Concussion classification via deep learning using whole-brain white matter fiber strains.

    PubMed

    Cai, Yunliang; Wu, Shaoju; Zhao, Wei; Li, Zhigang; Wu, Zheyang; Ji, Songbai

    2018-01-01

    Developing an accurate and reliable injury predictor is central to the biomechanical studies of traumatic brain injury. State-of-the-art efforts continue to rely on empirical, scalar metrics based on kinematics or model-estimated tissue responses explicitly pre-defined in a specific brain region of interest. They could suffer from loss of information. A single training dataset has also been used to evaluate performance but without cross-validation. In this study, we developed a deep learning approach for concussion classification using implicit features of the entire voxel-wise white matter fiber strains. Using reconstructed American National Football League (NFL) injury cases, leave-one-out cross-validation was employed to objectively compare injury prediction performances against two baseline machine learning classifiers (support vector machine (SVM) and random forest (RF)) and four scalar metrics via univariate logistic regression (Brain Injury Criterion (BrIC), cumulative strain damage measure of the whole brain (CSDM-WB) and the corpus callosum (CSDM-CC), and peak fiber strain in the CC). Feature-based machine learning classifiers including deep learning, SVM, and RF consistently outperformed all scalar injury metrics across all performance categories (e.g., leave-one-out accuracy of 0.828-0.862 vs. 0.690-0.776, and .632+ error of 0.148-0.176 vs. 0.207-0.292). Further, deep learning achieved the best cross-validation accuracy, sensitivity, AUC, and .632+ error. These findings demonstrate the superior performances of deep learning in concussion prediction and suggest its promise for future applications in biomechanical investigations of traumatic brain injury.
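
    The leave-one-out comparison of feature-based classifiers described above can be sketched with scikit-learn as follows; the feature matrix, labels, sample count, and model settings are hypothetical stand-ins, not the reconstructed NFL dataset or the authors' deep-learning pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: 60 impact cases, each described by a flattened vector of
# voxel-wise fiber-strain features, with a binary concussion label
rng = np.random.default_rng(3)
X = rng.random((60, 500))
y = rng.integers(0, 2, size=60)

loo = LeaveOneOut()
models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    # Each leave-one-out fold scores a single held-out case; the mean is the LOO accuracy
    acc = cross_val_score(model, X, y, cv=loo, scoring="accuracy").mean()
    print(f"{name}: leave-one-out accuracy = {acc:.3f}")
```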

  1. Rapid determination of Swiss cheese composition by Fourier transform infrared/attenuated total reflectance spectroscopy.

    PubMed

    Rodriguez-Saona, L E; Koca, N; Harper, W J; Alvarez, V B

    2006-05-01

    There is a need for rapid and simple techniques that can be used to predict the quality of cheese. The aim of this research was to develop a simple and rapid screening tool for monitoring Swiss cheese composition by using Fourier transform infrared spectroscopy. Twenty Swiss cheese samples from different manufacturers and degrees of maturity were evaluated. Direct measurements of Swiss cheese slices (approximately 0.5 g) were made using a MIRacle 3-reflection diamond attenuated total reflectance (ATR) accessory. Reference methods for moisture (vacuum oven), protein content (Kjeldahl), and fat (Babcock) were used. Calibration models were developed based on a cross-validated (leave-one-out approach) partial least squares regression. The information-rich infrared spectral range for Swiss cheese samples was from 3,000 to 2,800 cm(-1) and 1,800 to 900 cm(-1). The performance statistics for cross-validated models gave estimates for standard error of cross-validation of 0.45, 0.25, and 0.21% for moisture, protein, and fat, respectively, and correlation coefficients r > 0.96. Furthermore, the ATR infrared protocol allowed for the classification of cheeses according to manufacturer and aging based on unique spectral information, especially of carbonyl groups, probably due to their distinctive lipid composition. Attenuated total reflectance infrared spectroscopy allowed for the rapid (approximately 3-min analysis time) and accurate analysis of the composition of Swiss cheese. This technique could contribute to the development of simple and rapid protocols for monitoring complex biochemical changes and predicting the final quality of the cheese.
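
    A minimal sketch of leave-one-out cross-validated PLS regression, reporting a standard error of cross-validation (SECV) and a correlation coefficient as in the abstract, is shown below; the spectra, reference values, and number of latent variables are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical stand-in for 20 ATR-FTIR spectra (absorbance at 500 wavenumbers)
# and a moisture reference value (%) per cheese sample
rng = np.random.default_rng(4)
spectra = rng.random((20, 500))
moisture = 38 + 3 * rng.random(20)

pls = PLSRegression(n_components=5)
loo_pred = cross_val_predict(pls, spectra, moisture, cv=LeaveOneOut())

# Standard error of cross-validation (RMSE of the leave-one-out predictions)
secv = np.sqrt(np.mean((moisture - loo_pred.ravel()) ** 2))
r = np.corrcoef(moisture, loo_pred.ravel())[0, 1]
print(f"SECV = {secv:.2f}%, r = {r:.2f}")
```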

  2. Fast scattering simulation tool for multi-energy x-ray imaging

    NASA Astrophysics Data System (ADS)

    Sossin, A.; Tabary, J.; Rebuffel, V.; Létang, J. M.; Freud, N.; Verger, L.

    2015-12-01

    A combination of Monte Carlo (MC) and deterministic approaches was employed as a means of creating a simulation tool capable of providing energy resolved x-ray primary and scatter images within a reasonable time interval. Libraries of Sindbad, a previously developed x-ray simulation software, were used in the development. The scatter simulation capabilities of the tool were validated through simulation with the aid of GATE and through experimentation by using a spectrometric CdTe detector. A simple cylindrical phantom with cavities and an aluminum insert was used. Cross-validation with GATE showed good agreement with a global spatial error of 1.5% and a maximum scatter spectrum error of around 6%. Experimental validation also supported the accuracy of the simulations obtained from the developed software with a global spatial error of 1.8% and a maximum error of around 8.5% in the scatter spectra.

  3. A Resource of Quantitative Functional Annotation for Homo sapiens Genes.

    PubMed

    Taşan, Murat; Drabkin, Harold J; Beaver, John E; Chua, Hon Nian; Dunham, Julie; Tian, Weidong; Blake, Judith A; Roth, Frederick P

    2012-02-01

    The body of human genomic and proteomic evidence continues to grow at ever-increasing rates, while annotation efforts struggle to keep pace. A surprisingly small fraction of human genes have clear, documented associations with specific functions, and new functions continue to be found for characterized genes. Here we assembled an integrated collection of diverse genomic and proteomic data for 21,341 human genes and make quantitative associations of each to 4333 Gene Ontology terms. We combined guilt-by-profiling and guilt-by-association approaches to exploit features unique to the data types. Performance was evaluated by cross-validation, prospective validation, and by manual evaluation with the biological literature. Functional-linkage networks were also constructed, and their utility was demonstrated by identifying candidate genes related to a glioma FLN using a seed network from genome-wide association studies. Our annotations are presented, alongside existing validated annotations, in a publicly accessible and searchable web interface.

  4. On the accuracy of aerosol photoacoustic spectrometer calibrations using absorption by ozone

    NASA Astrophysics Data System (ADS)

    Davies, Nicholas W.; Cotterell, Michael I.; Fox, Cathryn; Szpek, Kate; Haywood, Jim M.; Langridge, Justin M.

    2018-04-01

    In recent years, photoacoustic spectroscopy has emerged as an invaluable tool for the accurate measurement of light absorption by atmospheric aerosol. Photoacoustic instruments require calibration, which can be achieved by measuring the photoacoustic signal generated by known quantities of gaseous ozone. Recent work has questioned the validity of this approach at short visible wavelengths (404 nm), indicating systematic calibration errors of the order of a factor of 2. We revisit this result and test the validity of the ozone calibration method using a suite of multipass photoacoustic cells operating at wavelengths 405, 514 and 658 nm. Using aerosolised nigrosin with mobility-selected diameters in the range 250-425 nm, we demonstrate excellent agreement between measured and modelled ensemble absorption cross sections at all wavelengths, thus demonstrating the validity of the ozone-based calibration method for aerosol photoacoustic spectroscopy at visible wavelengths.

  5. [Not Available].

    PubMed

    Brosseau, Lucie; Laroche, Chantal; Sutton, Anne; Guitard, Paulette; King, Judy; Poitras, Stéphane; Casimiro, Lynn; Tremblay, Manon; Cardinal, Dominique; Cavallo, Sabrina; Laferrière, Lucie; Grisé, Isabelle; Marshall, Lisa; Smith, Jacky R; Lagacé, Josée; Pharand, Denyse; Galipeau, Roseline; Toupin-April, Karine; Loew, Laurianne; Demers, Catrine; Sauvé-Schenk, Katrine; Paquet, Nicole; Savard, Jacinthe; Tourigny, Jocelyne; Vaillancourt, Véronique

    2015-08-01

    To prepare a Canadian French translation of the PEDro Scale under the proposed name l'Échelle PEDro, and to examine the validity of its content. A modified approach of Vallerand's cross-cultural validation methodology was used, beginning with a parallel back-translation of the PEDro scale by both professional translators and clinical researchers. These versions were reviewed by an initial panel of experts (P1), who then created the first experimental version of l'Échelle PEDro. This version was evaluated by a second panel of experts (P2). Finally, 32 clinical researchers evaluated the second experimental version of l'Échelle PEDro, using a 5-point clarity scale, and suggested final modifications. The various items on the final version of l'Échelle PEDro show a high degree of clarity (from 4.0 to 4.7 on the 5-point scale). The four rigorous steps of the translation process have produced a valid Canadian French version of the PEDro scale.

  6. Assessment of patient's experiences across the interface between primary and secondary care: Consumer Quality Index Continuum of care.

    PubMed

    Berendsen, Annette J; Groenier, Klaas H; de Jong, G Majella; Meyboom-de Jong, Betty; van der Veen, Willem Jan; Dekker, Janny; de Waal, Margot W M; Schuling, Jan

    2009-10-01

    The aim was the development and validation of a questionnaire that measures patients' experiences of collaboration between general practitioners (GPs) and specialists. A questionnaire was developed using the method of the consumer quality index and validated in a cross-sectional study among a random sample of patients referred to medical specialists in the Netherlands. Validation included factor analysis and assessment of internal consistency and discriminative ability. The response rate was 65% (1404 patients). Exploratory factor analysis indicated that four domains could be distinguished (i.e. GP Approach; GP Referral; Specialist; Collaboration). Cronbach's alpha coefficients ranged from 0.51 to 0.93, indicating sufficient internal consistency to make comparison of groups of respondents possible. The Pearson correlation coefficients between the domains were <0.4, except between the domains GP Approach and GP Referral. All domains clearly produced discriminating scores for groups with different characteristics. The Consumer Quality Index (CQ-index) Continuum of Care can be a useful instrument to assess aspects of the collaboration between GPs and specialists from the patients' perspective. It can be used to give feedback to both medical professionals and policy makers. Such feedback creates an opportunity for implementing specific improvements and evaluating quality improvement projects. 2009 Elsevier Ireland Ltd.

  7. A metabolic fingerprinting approach based on selected ion flow tube mass spectrometry (SIFT-MS) and chemometrics: A reliable tool for Mediterranean origin-labeled olive oils authentication.

    PubMed

    Bajoub, Aadil; Medina-Rodríguez, Santiago; Ajal, El Amine; Cuadros-Rodríguez, Luis; Monasterio, Romina Paula; Vercammen, Joeri; Fernández-Gutiérrez, Alberto; Carrasco-Pancorbo, Alegría

    2018-04-01

    Selected ion flow tube mass spectrometry (SIFT-MS) in combination with chemometrics was used to authenticate the geographical origin of Mediterranean virgin olive oils (VOOs) produced under geographical origin labels. In particular, 130 oil samples from six different Mediterranean regions (Kalamata (Greece); Toscana (Italy); Meknès and Tyout (Morocco); and Priego de Córdoba and Baena (Spain)) were considered. The headspace volatile fingerprints were measured by SIFT-MS in full scan with H3O+, NO+ and O2+ as precursor ions and the results were subjected to chemometric treatments. Principal Component Analysis (PCA) was used for preliminary multivariate data analysis and Partial Least Squares-Discriminant Analysis (PLS-DA) was applied to build different models (considering the three reagent ions) to classify samples according to the country of origin and regions (within the same country). The multi-class PLS-DA models showed very good performance in terms of fitting accuracy (98.90-100%) and prediction accuracy (96.70-100% accuracy for cross validation and 97.30-100% accuracy for external validation (test set)). Considering the two-class PLS-DA models, the one for the Spanish samples showed 100% sensitivity, specificity and accuracy in calibration, cross validation and external validation; the model for Moroccan oils also showed very satisfactory results (with perfect scores for almost every parameter in all the cases). Copyright © 2017 Elsevier Ltd. All rights reserved.
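
    One common way to implement a multi-class PLS-DA of the kind described above is to regress one-hot class membership on the spectral matrix and assign each sample to the highest-scoring class. The sketch below does this with scikit-learn on hypothetical stand-in data; the channel count, class labels, fold setup, and number of latent variables are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import LabelBinarizer

# Hypothetical stand-in: SIFT-MS full-scan intensities (200 product-ion channels)
# for 130 oils from 6 origin classes
rng = np.random.default_rng(8)
X = rng.random((130, 200))
y = rng.integers(0, 6, size=130)

lb = LabelBinarizer()
Y = lb.fit_transform(y)                          # one-hot class membership matrix

correct = 0
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train, test in cv.split(X, y):
    pls = PLSRegression(n_components=10).fit(X[train], Y[train])
    pred = pls.predict(X[test]).argmax(axis=1)   # assign each oil to the highest-scoring class
    correct += (pred == y[test]).sum()
print(f"cross-validated accuracy = {correct / len(y):.3f}")
```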

  8. Detrended cross-correlation coefficient: Application to predict apoptosis protein subcellular localization.

    PubMed

    Liang, Yunyun; Liu, Sanyang; Zhang, Shengli

    2016-12-01

    Apoptosis, or programed cell death, plays a central role in the development and homeostasis of an organism. Obtaining information on the subcellular location of apoptosis proteins is very helpful for understanding the apoptosis mechanism. The prediction of subcellular localization of an apoptosis protein is still a challenging task, and existing methods are mainly based on protein primary sequences. In this paper, we introduce a new position-specific scoring matrix (PSSM)-based method using the detrended cross-correlation (DCCA) coefficient of non-overlapping windows. A 190-dimensional (190D) feature vector is then constructed on two widely used datasets, CL317 and ZD98, and a support vector machine is adopted as the classifier. To evaluate the proposed method, objective and rigorous jackknife cross-validation tests are performed on the two datasets. The results show that our approach offers a novel and reliable PSSM-based tool for prediction of apoptosis protein subcellular localization. Copyright © 2016 Elsevier Inc. All rights reserved.
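
    The detrended cross-correlation (DCCA) coefficient over non-overlapping windows, the central quantity above, can be computed as in the following sketch; the window length and the illustrative input series are assumptions, not the CL317/ZD98 PSSM features used in the paper.

```python
import numpy as np

def dcca_coefficient(x: np.ndarray, y: np.ndarray, window: int) -> float:
    """Detrended cross-correlation coefficient over non-overlapping windows:
    rho = F2_xy / sqrt(F2_xx * F2_yy), with linear detrending inside each window."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Integrated (profile) series
    px, py = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    n_win = len(x) // window
    f_xy = f_xx = f_yy = 0.0
    t = np.arange(window)
    for k in range(n_win):
        sx = px[k * window:(k + 1) * window]
        sy = py[k * window:(k + 1) * window]
        # Residuals after removing the local linear trend in each window
        rx = sx - np.polyval(np.polyfit(t, sx, 1), t)
        ry = sy - np.polyval(np.polyfit(t, sy, 1), t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx * rx)
        f_yy += np.mean(ry * ry)
    return f_xy / np.sqrt(f_xx * f_yy)

# Illustrative use on two hypothetical numeric profiles sharing a common component
rng = np.random.default_rng(5)
common = rng.normal(size=400)
a = common + 0.5 * rng.normal(size=400)
b = common + 0.5 * rng.normal(size=400)
print(f"rho_DCCA = {dcca_coefficient(a, b, window=20):.3f}")
```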

  9. Validation of standard operating procedures in a multicenter retrospective study to identify -omics biomarkers for chronic low back pain.

    PubMed

    Dagostino, Concetta; De Gregori, Manuela; Gieger, Christian; Manz, Judith; Gudelj, Ivan; Lauc, Gordan; Divizia, Laura; Wang, Wei; Sim, Moira; Pemberton, Iain K; MacDougall, Jane; Williams, Frances; Van Zundert, Jan; Primorac, Dragan; Aulchenko, Yurii; Kapural, Leonardo; Allegri, Massimo

    2017-01-01

    Chronic low back pain (CLBP) is one of the most common medical conditions, ranking as the greatest contributor to global disability and accounting for huge societal costs based on the Global Burden of Disease 2010 study. Large genetic and -omics studies provide a promising avenue for the screening, development and validation of biomarkers useful for personalized diagnosis and treatment (precision medicine). Multicentre studies are needed for such an effort, and a standardized and homogeneous approach is vital for recruitment of large numbers of participants among different centres (clinical and laboratories) to obtain robust and reproducible results. To date, no validated standard operating procedures (SOPs) for genetic/-omics studies in chronic pain have been developed. In this study, we validated an SOP model that will be used in the multicentre (5 centres) retrospective "PainOmics" study, funded by the European Community in the 7th Framework Programme, which aims to develop new biomarkers for CLBP through three different -omics approaches: genomics, glycomics and activomics. The SOPs describe the specific procedures for (1) blood collection, (2) sample processing and storage, (3) shipping details and (4) cross-check testing and validation before assays that all the centres involved in the study have to follow. Multivariate analysis revealed the absolute specificity and homogeneity of the samples collected by the five centres for all genetics, glycomics and activomics analyses. The SOPs used in our multicenter study have been validated. Hence, they could represent an innovative tool for the correct management and collection of reliable samples in other large-omics-based multicenter studies.

  10. An approach to define semantics for BPM systems interoperability

    NASA Astrophysics Data System (ADS)

    Rico, Mariela; Caliusco, María Laura; Chiotti, Omar; Rosa Galli, María

    2015-04-01

    This article proposes defining semantics for Business Process Management systems interoperability through the ontology of Electronic Business Documents (EBD) used to interchange the information required to perform cross-organizational processes. The semantic model generated allows aligning enterprises' business processes to support cross-organizational processes by matching the business ontology of each business partner with the EBD ontology. The result is a flexible software architecture that allows dynamically defining cross-organizational business processes by reusing the EBD ontology. For developing the semantic model, a method is presented, which is based on a strategy for discovering entity features whose interpretation depends on the context, and representing them for enriching the ontology. The proposed method complements ontology learning techniques that cannot infer semantic features not represented in data sources. In order to improve the representation of these entity features, the method proposes using widely accepted ontologies for representing time entities and relations, physical quantities, measurement units, official country names, and currencies and funds, among others. When ontology reuse is not possible, the method proposes identifying whether that feature is simple or complex, and defines a strategy to be followed. An empirical validation of the approach has been performed through a case study.

  11. In silico prediction of ROCK II inhibitors by different classification approaches.

    PubMed

    Cai, Chuipu; Wu, Qihui; Luo, Yunxia; Ma, Huili; Shen, Jiangang; Zhang, Yongbin; Yang, Lei; Chen, Yunbo; Wen, Zehuai; Wang, Qi

    2017-11-01

    ROCK II is an important pharmacological target linked to central nervous system disorders such as Alzheimer's disease. The purpose of this research is to generate ROCK II inhibitor prediction models by machine learning approaches. Firstly, four sets of descriptors were calculated with MOE 2010 and PaDEL-Descriptor, and optimized by F-score and linear forward selection methods. In addition, four classification algorithms (k-nearest neighbors, naïve Bayes, random forest, and support vector machine) were used to initially build 16 classifiers. Furthermore, three sets of structural fingerprint descriptors were introduced to enhance the predictive capacity of the classifiers, which were assessed with fivefold cross-validation, test set validation and external test set validation. The two best models, MFK + MACCS and MLR + SubFP, both have MCC values of 0.925 on the external test set. After that, a privileged substructure analysis was performed to reveal common chemical features of ROCK II inhibitors. Finally, binding modes were analyzed to identify relationships between molecular descriptors and activity, while main interactions were revealed by comparing the docking interactions of the most potent and the weakest ROCK II inhibitors. To the best of our knowledge, this is the first report on ROCK II inhibitors utilizing machine learning approaches, and it provides a new method for discovering novel ROCK II inhibitors.
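
    A hedged sketch of one such classification workflow, fivefold cross-validation of an SVM with univariate feature selection scored by the Matthews correlation coefficient (MCC), is given below; the descriptor matrix and parameter choices are hypothetical, and scikit-learn's ANOVA F-value is used only as a stand-in for the F-score ranking mentioned above.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import make_scorer, matthews_corrcoef
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical descriptor matrix: 600 compounds x 300 molecular descriptors,
# labeled as ROCK II inhibitor (1) or non-inhibitor (0)
rng = np.random.default_rng(6)
X = rng.random((600, 300))
y = rng.integers(0, 2, size=600)

mcc = make_scorer(matthews_corrcoef)
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=50),     # F-value ranking as a stand-in for F-score selection
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, scoring=mcc, cv=cv)
print(f"fivefold cross-validated MCC = {scores.mean():.3f} +/- {scores.std():.3f}")
```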

  12. Screening for Psychosocial Distress amongst War-Affected Children: Cross-Cultural Construct Validity of the CPDS

    ERIC Educational Resources Information Center

    Jordans, M. J. D.; Komproe, I. H.; Tol, W. A.; De Jong, J. T. V. M.

    2009-01-01

    Background: Large-scale psychosocial interventions in complex emergencies call for a screening procedure to identify individuals at risk. To date there are no screening instruments that are developed within low- and middle-income countries and validated for that purpose. The present study assesses the cross-cultural validity of the brief,…

  13. SERS quantitative urine creatinine measurement of human subject

    NASA Astrophysics Data System (ADS)

    Wang, Tsuei Lian; Chiang, Hui-hua K.; Lu, Hui-hsin; Hung, Yung-da

    2005-03-01

    The SERS method for biomolecular analysis has several potential advantages over traditional biochemical approaches, including less specimen contact, non-destructive analysis, and multi-component analysis. Urine is an easily available body fluid for monitoring the metabolites and renal function of the human body. We developed a surface-enhanced Raman scattering (SERS) technique using 50 nm gold colloidal particles for quantitative human urine creatinine measurements. This paper shows that the SERS bands of creatinine (104 mg/dl) in artificial urine lie between 1400 cm-1 and 1500 cm-1, and this region was analyzed for quantitative creatinine measurement. Ten human urine samples were obtained from ten healthy persons and analyzed by the SERS technique. A partial least squares cross-validation (PLSCV) method was used to estimate the creatinine concentration in the clinically relevant (55.9 mg/dl to 208 mg/dl) concentration range. The root-mean-square error of cross-validation (RMSECV) is 26.1 mg/dl. This research demonstrates the feasibility of using SERS for urine creatinine detection in human subjects and establishes the SERS platform technique for bodily fluid measurements.

  14. Parkinson's disease detection based on dysphonia measurements

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2017-04-01

    Assessing dysphonic symptoms is a noninvasive and effective approach to detect Parkinson's disease (PD) in patients. The main purpose of this study is to investigate the effect of different dysphonia measurements on PD detection by support vector machine (SVM). Seven categories of dysphonia measurements are considered. Experimental results from a ten-fold cross-validation technique demonstrate that vocal fundamental frequency statistics yield the highest accuracy of 88 % ± 0.04. When all dysphonia measurements are employed, the SVM classifier achieves 94 % ± 0.03 accuracy. A refinement of the original pattern space by removing dysphonia measurements with similar variation across healthy and PD subjects yields 97.03 % ± 0.03 accuracy. The latter performance is higher than that reported in the literature on the same dataset with the ten-fold cross-validation technique. Finally, it was found that measures of the ratio of noise to tonal components in the voice are the most suitable dysphonic symptoms for detecting PD subjects, as they achieve 99.64 % ± 0.01 specificity. This finding is highly promising for understanding PD symptoms.

  15. A cross-validation Delphi method approach to the diagnosis and treatment of personality disorders in older adults.

    PubMed

    Rosowsky, Erlene; Young, Alexander S; Malloy, Mary C; van Alphen, S P J; Ellison, James M

    2018-03-01

    The Delphi method is a consensus-building technique using expert opinion to formulate a shared framework for understanding a topic with limited empirical support. This cross-validation study replicates one completed in the Netherlands and Belgium, and explores US experts' views on the diagnosis and treatment of older adults with personality disorders (PD). Twenty-one geriatric PD experts participated in a Delphi survey addressing diagnosis and treatment of older adults with PD. The European survey was translated and administered electronically. First-round consensus was reached for 16 out of 18 items relevant to diagnosis and specific mental health programs for personality disorders in older adults. Experts agreed on the usefulness of establishing criteria for specific types of treatments. The majority of psychologists did not initially agree on the usefulness of pharmacotherapy. Expert consensus was reached following two subsequent rounds after clarification addressing medication use. Study results suggest consensus among experts regarding psychosocial treatments. Limited acceptance amongst US psychologists of the suitability of pharmacotherapy for late-life PDs contrasted with the views expressed by experts surveyed in the Netherlands and Belgium studies.

  16. Quantifying Vocal Mimicry in the Greater Racket-Tailed Drongo: A Comparison of Automated Methods and Human Assessment

    PubMed Central

    Agnihotri, Samira; Sundeep, P. V. D. S.; Seelamantula, Chandra Sekhar; Balakrishnan, Rohini

    2014-01-01

    Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis in combination with various distance metrics to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods also misclassified different subsets of calls, and we achieved a maximum accuracy of ninety-five per cent only when we combined the results of both methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential. PMID:24603717
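
    The spectral-feature-plus-distance idea described above can be sketched as follows with MFCCs and a cosine distance. The file names are hypothetical placeholders, and time-averaged MFCC vectors are used here only as a simple summary, not as the authors' exact feature representation.

```python
import numpy as np
import librosa
from scipy.spatial.distance import cosine

def mean_mfcc(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Load a call recording and summarise it by its time-averaged MFCC vector."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical file names: a drongo mimicked call and two putative model-species calls
mimic = mean_mfcc("drongo_mimic.wav")
candidates = {"model_species_A.wav": mean_mfcc("model_species_A.wav"),
              "model_species_B.wav": mean_mfcc("model_species_B.wav")}

# Smaller cosine distance -> greater spectral similarity to the mimicked call
best = min(candidates, key=lambda name: cosine(mimic, candidates[name]))
print(f"closest putative model call: {best}")
```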

  17. An HMM model for coiled-coil domains and a comparison with PSSM-based predictions.

    PubMed

    Delorenzi, Mauro; Speed, Terry

    2002-04-01

    Large-scale sequence data require methods for the automated annotation of protein domains. Many of the predictive methods are based either on a Position Specific Scoring Matrix (PSSM) of fixed length or on a window-less Hidden Markov Model (HMM). The performance of the two approaches is tested for Coiled-Coil Domains (CCDs). The prediction of CCDs is used frequently, and its optimization seems worthwhile. We have conceived MARCOIL, an HMM for the recognition of proteins with a CCD on a genomic scale. A cross-validated study suggests that MARCOIL improves predictions compared to the traditional PSSM algorithm, especially for some protein families and for short CCDs. The study was designed to reveal differences inherent in the two methods. Potential confounding factors such as differences in the dimension of parameter space and in the parameter values were avoided by using the same amino acid propensities and by keeping the transition probabilities of the HMM constant during cross-validation. The prediction program and the databases are available at http://www.wehi.edu.au/bioweb/Mauro/Marcoil

  18. Numerical Analysis of a Pulse Detonation Cross Flow Heat Load Experiment

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Naples, Andrew .; Hoke, John L.; Schauer, Fred

    2011-01-01

    A comparison between experimentally measured and numerically simulated, time-averaged, point heat transfer rates in a pulse detonation (PDE) engine is presented. The comparison includes measurements and calculations for heat transfer to a cylinder in crossflow and to the tube wall itself using a novel spool design. Measurements are obtained at several locations and under several operating conditions. The measured and computed results are shown to be in substantial agreement, thereby validating the modeling approach. The model, which is based in computational fluid dynamics (CFD) is then used to interpret the results. A preheating of the incoming fuel charge is predicted, which results in increased volumetric flow and subsequent overfilling. The effect is validated with additional measurements.

  19. The Validity of the Multi-Informant Approach to Assessing Child and Adolescent Mental Health

    PubMed Central

    De Los Reyes, Andres; Augenstein, Tara M.; Wang, Mo; Thomas, Sarah A.; Drabick, Deborah A.G.; Burgers, Darcy E.; Rabinowitz, Jill

    2015-01-01

    Child and adolescent patients may display mental health concerns within some contexts and not others (e.g., home vs. school). Thus, understanding the specific contexts in which patients display concerns may assist mental health professionals in tailoring treatments to patients' needs. Consequently, clinical assessments often include reports from multiple informants who vary in the contexts in which they observe patients' behavior (e.g., patients, parents, teachers). Previous meta-analyses indicate that informants' reports correlate at low-to-moderate magnitudes. However, is it valid to interpret low correspondence among reports as indicating that patients display concerns in some contexts and not others? We meta-analyzed 341 studies published between 1989 and 2014 that reported cross-informant correspondence estimates, and observed low-to-moderate correspondence (mean internalizing: r = .25; mean externalizing: r = .30; mean overall: r = .28). Informant pair, mental health domain, and measurement method moderated magnitudes of correspondence. These robust findings have informed the development of concepts for interpreting multi-informant assessments, allowing researchers to draw specific predictions about the incremental and construct validity of these assessments. In turn, we critically evaluated research on the incremental and construct validity of the multi-informant approach to clinical child and adolescent assessment. In so doing, we identify crucial gaps in knowledge for future research, and provide recommendations for “best practices” in using and interpreting multi-informant assessments in clinical work and research. This paper has important implications for developing personalized approaches to clinical assessment, with the goal of informing techniques for tailoring treatments to target the specific contexts where patients display concerns. PMID:25915035

  20. A new technique for rapid assessment of eutrophication status of coastal waters using a support vector machine

    NASA Astrophysics Data System (ADS)

    Kong, Xianyu; Che, Xiaowei; Su, Rongguo; Zhang, Chuansong; Yao, Qingzhen; Shi, Xiaoyong

    2017-05-01

    There is an urgent need to develop efficient evaluation tools that use easily measured variables to make rapid and timely eutrophication assessments, which are important for marine health management, and to implement eutrophication monitoring programs. In this study, an approach for rapidly assessing the eutrophication status of coastal waters with three easily measured parameters (turbidity, chlorophyll a and dissolved oxygen) was developed using a grid-search (GS)-optimized support vector machine (SVM), with trophic index TRIX classification results as the reference. With the optimized penalty parameter C = 64 and the kernel parameter γ = 1, the classification accuracy rates reached 89.3% for the training data, 88.3% for the cross-validation, and 88.5% for the validation dataset. Because the developed approach only used three easy-to-measure variables, its application could facilitate the rapid assessment of the eutrophication status of coastal waters, resulting in potential cost savings in marine monitoring programs and assisting in the provision of timely advice for marine management.
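
    A minimal sketch of a grid-search-optimized SVM of the kind described above is given below; the three predictors match the abstract, but the data, label coding, parameter grid, and use of feature scaling are illustrative assumptions rather than the study's setup.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in data: turbidity, chlorophyll a and dissolved oxygen for 400
# stations, with a TRIX-derived eutrophication class (0-3) as the reference label
rng = np.random.default_rng(7)
X = np.column_stack([rng.random(400) * 20,          # turbidity (NTU)
                     rng.random(400) * 10,          # chlorophyll a (ug/L)
                     4 + rng.random(400) * 6])      # dissolved oxygen (mg/L)
y = rng.integers(0, 4, size=400)

# Grid search over the penalty parameter C and RBF kernel parameter gamma, the tuning
# strategy described in the abstract (which selected C = 64 and gamma = 1)
param_grid = {"svc__C": [2 ** p for p in range(-2, 9)],
              "svc__gamma": [2 ** p for p in range(-4, 3)]}
search = GridSearchCV(make_pipeline(StandardScaler(), SVC(kernel="rbf")),
                      param_grid, cv=StratifiedKFold(n_splits=5), scoring="accuracy")
search.fit(X, y)
print("best parameters:", search.best_params_,
      "cross-validated accuracy:", round(search.best_score_, 3))
```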

  1. All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning.

    PubMed

    Airola, Antti; Pyysalo, Sampo; Björne, Jari; Pahikkala, Tapio; Ginter, Filip; Salakoski, Tapio

    2008-11-19

    Automated extraction of protein-protein interactions (PPI) is an important and widely studied task in biomedical text mining. We propose a graph kernel-based approach for this task. In contrast to earlier approaches to PPI extraction, the introduced all-paths graph kernel has the capability to make use of full, general dependency graphs representing the sentence structure. We evaluate the proposed method on five publicly available PPI corpora, providing the most comprehensive evaluation done for a machine learning-based PPI-extraction system. We additionally perform a detailed evaluation of the effects of training and testing on different resources, providing insight into the challenges involved in applying a system beyond the data it was trained on. Our method is shown to achieve state-of-the-art performance with respect to comparable evaluations, with an F-score of 56.4 and an AUC of 84.8 on the AImed corpus. We show that the graph kernel approach performs at a state-of-the-art level in PPI extraction, and note the possible extension to the task of extracting complex interactions. Cross-corpus results provide further insight into how the learning generalizes beyond individual corpora. Further, we identify several pitfalls that can make evaluations of PPI-extraction systems incomparable, or even invalid. These include incorrect cross-validation strategies and problems related to comparing F-score results achieved on different evaluation resources. Recommendations for avoiding these pitfalls are provided.

  2. Matrix cracking in laminated composites under monotonic and cyclic loadings

    NASA Technical Reports Server (NTRS)

    Allen, David H.; Lee, Jong-Won

    1991-01-01

    An analytical model based on the internal state variable (ISV) concept and the strain energy method is proposed for characterizing the monotonic and cyclic response of laminated composites containing matrix cracks. A modified constitutive relation is formulated for angle-ply laminates under general in-plane mechanical loading and constant temperature change. A monotonic matrix cracking criterion is developed for predicting the crack density in cross-ply laminates as a function of the applied laminate axial stress. An initial formulation for a cyclic matrix cracking criterion for cross-ply laminates is also discussed. For the monotonic loading case, a number of experimental data sets and well-known models are compared with the present study for validating the practical applicability of the ISV approach.

  3. New strategy for drug discovery by large-scale association analysis of molecular networks of different species.

    PubMed

    Zhang, Bo; Fu, Yingxue; Huang, Chao; Zheng, Chunli; Wu, Ziyin; Zhang, Wenjuan; Yang, Xiaoyan; Gong, Fukai; Li, Yuerong; Chen, Xiaoyu; Gao, Shuo; Chen, Xuetong; Li, Yan; Lu, Aiping; Wang, Yonghua

    2016-02-25

    The development of modern omics technology has not significantly improved the efficiency of drug development. Rather, precise and targeted drug discovery remains unsolved. Here a large-scale cross-species molecular network association (CSMNA) approach for targeted drug screening from natural sources is presented. The algorithm integrates molecular network omics data from humans and 267 plants and microbes, establishing the biological relationships between them and extracting evolutionarily convergent chemicals. This technique allows the researcher to assess targeted drugs for specific human diseases based on specific plant or microbe pathways. In a prospective validation, connections between the plant Halliwell-Asada (HA) cycle and the human Nrf2-ARE pathway were verified, and the manner in which the HA cycle molecules act on the human Nrf2-ARE pathway as antioxidants was determined. This shows the potential applicability of this approach in drug discovery. The current method integrates disparate evolutionary species into chemico-biologically coherent circuits, suggesting a new cross-species omics analysis strategy for rational drug development.

  4. Facial First Impressions Across Culture: Data-Driven Modeling of Chinese and British Perceivers' Unconstrained Facial Impressions.

    PubMed

    Sutherland, Clare A M; Liu, Xizi; Zhang, Lingshan; Chu, Yingtung; Oldmeadow, Julian A; Young, Andrew W

    2018-04-01

    People form first impressions from facial appearance rapidly, and these impressions can have considerable social and economic consequences. Three dimensions can explain Western perceivers' impressions of Caucasian faces: approachability, youthful-attractiveness, and dominance. Impressions along these dimensions are theorized to be based on adaptive cues to threat detection or sexual selection, making it likely that they are universal. We tested whether the same dimensions of facial impressions emerge across culture by building data-driven models of first impressions of Asian and Caucasian faces derived from Chinese and British perceivers' unconstrained judgments. We then cross-validated the dimensions with computer-generated average images. We found strong evidence for common approachability and youthful-attractiveness dimensions across perceiver and face race, with some evidence of a third dimension akin to capability. The models explained ~75% of the variance in facial impressions. In general, the findings demonstrate substantial cross-cultural agreement in facial impressions, especially on the most salient dimensions.

  5. Using a combined computational-experimental approach to predict antibody-specific B cell epitopes.

    PubMed

    Sela-Culang, Inbal; Benhnia, Mohammed Rafii-El-Idrissi; Matho, Michael H; Kaever, Thomas; Maybeno, Matt; Schlossman, Andrew; Nimrod, Guy; Li, Sheng; Xiang, Yan; Zajonc, Dirk; Crotty, Shane; Ofran, Yanay; Peters, Bjoern

    2014-04-08

    Antibody epitope mapping is crucial for understanding B cell-mediated immunity and required for characterizing therapeutic antibodies. In contrast to T cell epitope mapping, no computational tools are in widespread use for prediction of B cell epitopes. Here, we show that, utilizing the sequence of an antibody, it is possible to identify discontinuous epitopes on its cognate antigen. The predictions are based on residue-pairing preferences and other interface characteristics. We combined these antibody-specific predictions with results of cross-blocking experiments that identify groups of antibodies with overlapping epitopes to improve the predictions. We validate the high performance of this approach by mapping the epitopes of a set of antibodies against the previously uncharacterized D8 antigen, using complementary techniques to reduce method-specific biases (X-ray crystallography, peptide ELISA, deuterium exchange, and site-directed mutagenesis). These results suggest that antibody-specific computational predictions and simple cross-blocking experiments allow for accurate prediction of residues in conformational B cell epitopes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Data analysis strategies for reducing the influence of the bias in cross-cultural research.

    PubMed

    Sindik, Josko

    2012-03-01

    In cross-cultural research, researchers have to adjust the constructs and associated measurement instruments that have been developed in one culture and then imported for use in another culture. Importing concepts from other cultures is often simply reduced to language adjustment of the content in the items of the measurement instruments that define a certain (psychological) construct. In the context of cross-cultural research, test bias can be defined as a generic term for all nuisance factors that threaten the validity of cross-cultural comparisons. Bias can be an indicator that instrument scores based on the same items measure different traits and characteristics across different cultural groups. To reduce construct, method and item bias, the researcher can consider these strategies: (1) simply comparing average results in certain measuring instruments; (2) comparing only the reliability of certain dimensions of the measurement instruments, applied to the "target" and "source" samples of participants, i.e. from different cultures; (3) comparing the "framed" factor structure (fixed number of factors) of the measurement instruments, applied to the samples from the "target" and "source" cultures, using an exploratory factor analysis strategy on separate samples; (4) comparing the complete constructs ("unframed" factor analysis, i.e. unlimited number of factors) in relation to their best psychometric properties and the possibility of interpreting them (best suited to certain cultures, applying an exploratory factor analysis strategy); or (5) checking the similarity of the constructs in the samples from different cultures (using a structural equation modeling approach). The advantages and shortcomings of each approach are discussed.

  7. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    PubMed

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. After presenting its components, we review the application and reporting of model performance evaluation. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross-model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55% of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6% of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.

  8. Prediction of adult height in girls: the Beunen-Malina-Freitas method.

    PubMed

    Beunen, Gaston P; Malina, Robert M; Freitas, Duarte L; Thomis, Martine A; Maia, José A; Claessens, Albrecht L; Gouveia, Elvio R; Maes, Hermine H; Lefevre, Johan

    2011-12-01

    The purpose of this study was to validate and cross-validate the Beunen-Malina-Freitas method for non-invasive prediction of adult height in girls. A sample of 420 girls aged 10-15 years from the Madeira Growth Study were measured at yearly intervals and then 8 years later. Anthropometric dimensions (lengths, breadths, circumferences, and skinfolds) were measured; skeletal age was assessed using the Tanner-Whitehouse 3 method and menarcheal status (present or absent) was recorded. Adult height was measured and predicted using stepwise, forward, and maximum R² regression techniques. Multiple correlations, mean differences, standard errors of prediction, and error boundaries were calculated. A sample of the Leuven Longitudinal Twin Study was used to cross-validate the regressions. Age-specific coefficients of determination (R²) between predicted and measured adult height varied between 0.57 and 0.96, while standard errors of prediction varied between 1.1 and 3.9 cm. The cross-validation confirmed the validity of the Beunen-Malina-Freitas method in girls aged 12-15 years, but at lower ages the cross-validation was less consistent. We conclude that the Beunen-Malina-Freitas method is valid for the prediction of adult height in girls aged 12-15 years. It is applicable to European populations or populations of European ancestry.

  9. Cross-cultural validation of instruments measuring health beliefs about colorectal cancer screening among Korean Americans.

    PubMed

    Lee, Shin-Young; Lee, Eunice E

    2015-02-01

    The purpose of this study was to report the instrument modification and validation processes to make existing health belief model scales culturally appropriate for Korean Americans (KAs) regarding colorectal cancer (CRC) screening utilization. Instrument translation, individual interviews using cognitive interviewing, and expert reviews were conducted during the instrument modification phase, and a pilot test and a cross-sectional survey were conducted during the instrument validation phase. Data analyses of the cross-sectional survey included internal consistency and construct validity using exploratory and confirmatory factor analysis. The main issues identified during the instrument modification phase were (a) cultural and linguistic translation issues and (b) newly developed items reflecting Korean cultural barriers. Cross-sectional survey analyses during the instrument validation phase revealed that all scales demonstrate good internal consistency reliability (Cronbach's alpha=.72~.88). Exploratory factor analysis showed that susceptibility and severity loaded on the same factor, which may indicate a threat variable. Items with low factor loadings in the confirmatory factor analysis may relate to (a) lack of knowledge about fecal occult blood testing and (b) multiple dimensions of the subscales. Methodological, sequential processes of instrument modification and validation, including translation, individual interviews, expert reviews, pilot testing and a cross-sectional survey, were provided in this study. The findings indicate that existing instruments need to be examined for CRC screening research involving KAs.

  10. INTERPRETING PHYSICAL AND BEHAVIORAL HEALTH SCORES FROM NEW WORK DISABILITY INSTRUMENTS

    PubMed Central

    Marfeo, Elizabeth E.; Ni, Pengsheng; Chan, Leighton; Rasch, Elizabeth K.; McDonough, Christine M.; Brandt, Diane E.; Bogusz, Kara; Jette, Alan M.

    2015-01-01

    Objective To develop a system to guide interpretation of scores generated from 2 new instruments measuring work-related physical and behavioral health functioning (Work Disability – Physical Function (WD-PF) and WD – Behavioral Function (WD-BH)). Design Cross-sectional, secondary data from 3 independent samples to develop and validate the functional levels for physical and behavioral health functioning. Subjects Physical group: 999 general adult subjects, 1,017 disability applicants and 497 work-disabled subjects. Behavioral health group: 1,000 general adult subjects, 1,015 disability applicants and 476 work-disabled subjects. Methods A three-phase analytic approach including item mapping, a modified Delphi technique, and known-groups validation analysis was used to develop and validate cut-points for functional levels within each of the WD-PF and WD-BH instruments' scales. Results Four and five functional levels were developed for each of the scales in the WD-PF and WD-BH instruments. Distribution of the comparative samples was in the expected direction: the general adult samples consistently demonstrated scores at higher functional levels compared with the claimant and work-disabled samples. Conclusion Using an item-response theory-based methodology paired with a qualitative process appears to be a feasible and valid approach for translating the WD-BH and WD-PF scores into meaningful levels useful for interpreting a person’s work-related physical and behavioral health functioning. PMID:25729901

  11. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding

    PubMed Central

    2013-01-01

    Background In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. Results The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. Conclusions The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies. PMID:24314298

  12. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.

    PubMed

    Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter

    2013-12-06

    In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies.
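
    As a rough illustration of the indirect accuracy estimate described above (cross-validated predictive ability divided by the square root of heritability), the sketch below simulates marker data and phenotypes, then compares the indirect estimate with the accuracy computed against the simulated true breeding values. The ridge-regression predictor, the simulation settings and all variable names are illustrative assumptions, not the seven methods evaluated in the paper.

```python
# Hedged sketch: indirect predictive accuracy = cor(y_hat, y) / sqrt(h2),
# checked against the "true" accuracy cor(y_hat, g), which is only known
# because the breeding values g are simulated here.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_genotypes, n_markers, h2 = 200, 500, 0.5

X = rng.choice([0.0, 1.0, 2.0], size=(n_genotypes, n_markers))   # marker genotypes
beta = rng.normal(0.0, 0.1, n_markers)                           # marker effects
g = X @ beta                                                      # true breeding values
g = (g - g.mean()) / g.std()
e = rng.normal(0.0, np.sqrt((1.0 - h2) / h2), n_genotypes)        # residuals for the chosen h2
y = g + e                                                         # observed phenotypes

# Cross-validated predictions of breeding values
y_hat = np.zeros_like(y)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=1.0).fit(X[train], y[train])
    y_hat[test] = model.predict(X[test])

predictive_ability = np.corrcoef(y_hat, y)[0, 1]       # cor(predicted, phenotype)
indirect_accuracy = predictive_ability / np.sqrt(h2)   # divide by sqrt(heritability)
true_accuracy = np.corrcoef(y_hat, g)[0, 1]            # available only in simulation

print(f"predictive ability {predictive_ability:.2f}, "
      f"indirect accuracy {indirect_accuracy:.2f}, true accuracy {true_accuracy:.2f}")
```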

  13. Sexual Dysfunction in Breast Cancer Survivors: Cross-Cultural Adaptation of the Sexual Activity Questionnaire for Use in Portugal.

    PubMed

    da Costa, Filipa Alves; Ribeiro, Manuel Castro; Braga, Sofia; Carvalho, Elisabete; Francisco, Fátima; Miranda, Ana Costa; Moreira, António; Fallowfield, Lesley

    2016-09-01

    The growing population of breast cancer survivors has shifted research and practice interest toward the impact of the disease and its treatment on quality of life. The lack of tools available in Portuguese to objectively evaluate sexual function led to the development of this study, which aimed to cross-culturally adapt and validate the Sexual Activity Questionnaire for use in Portugal. The questionnaire was translated and back-translated, refined following face-to-face interviews with seven breast cancer survivors, and then self-administered by a larger sample at baseline and a fortnight later to test validity and reliability. Following cognitive debriefing (n = 7), minor changes were made and the Sexual Activity Questionnaire was then tested with 134 breast cancer survivors. A 3-factor structure explained 75.5% of the variance, comprising the Pleasure, Habit and Discomfort scales, all yielding good internal consistency (Cronbach's α > 0.70). Concurrent validity with the FACT-An and the BCPT checklist was good (Spearman's r > 0.65; p-value < 0.001) and reliability acceptable (Cohen's k > 0.444). The Sexual Activity Questionnaire allowed the identification of 23.9% of sexually inactive women, for whom the main reasons were lack of interest or motivation and not having a partner. Patient-reported outcomes led to a more comprehensive and improved approach to cancer, tackling areas previously abandoned. Future research should focus on the validation of this scale in samples with different characteristics and even in the overall population to enable generalizability of the findings. The adapted Sexual Activity Questionnaire is a valid tool for assessing sexual function in breast cancer survivors in Portugal.

  14. Diagnostic accuracy of eye movements in assessing pedophilia.

    PubMed

    Fromberger, Peter; Jordan, Kirsten; Steinkrauss, Henrike; von Herder, Jakob; Witzel, Joachim; Stolpmann, Georg; Kröner-Herwig, Birgit; Müller, Jürgen Leo

    2012-07-01

    Given that recurrent sexual interest in prepubescent children is one of the strongest single predictors for pedosexual offense recidivism, valid and reliable diagnosis of pedophilia is of particular importance. Nevertheless, current assessment methods still fail to fulfill psychometric quality criteria. The aim of the study was to evaluate the diagnostic accuracy of eye-movement parameters in regard to pedophilic sexual preferences. Eye movements were measured while 22 pedophiles (according to ICD-10 F65.4 diagnosis), 8 non-pedophilic forensic controls, and 52 healthy controls simultaneously viewed the picture of a child and the picture of an adult. Fixation latency was assessed as a parameter for automatic attentional processes and relative fixation time to account for controlled attentional processes. Receiver operating characteristic (ROC) analyses, which are based on calculated age-preference indices, were carried out to determine the classifier performance. Cross-validation using the leave-one-out method was used to test the validity of classifiers. Pedophiles showed significantly shorter fixation latencies and significantly longer relative fixation times for child stimuli than either of the control groups. Classifier performance analysis revealed an area under the curve (AUC) = 0.902 for fixation latency and an AUC = 0.828 for relative fixation time. The eye-tracking method based on fixation latency discriminated between pedophiles and non-pedophiles with a sensitivity of 86.4% and a specificity of 90.0%. Cross-validation demonstrated good validity of eye-movement parameters. Despite some methodological limitations, measuring eye movements seems to be a promising approach to assess deviant pedophilic interests. Eye movements, which represent automatic attentional processes, demonstrated high diagnostic accuracy. © 2012 International Society for Sexual Medicine.
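
    The following sketch illustrates, on invented index values, the kind of evaluation the abstract describes: an ROC/AUC computed for a single preference index and a leave-one-out cross-validation of a one-feature classifier reporting sensitivity and specificity. The group sizes echo the abstract, but the generated numbers are synthetic and the logistic classifier is an assumption, not the authors' procedure.

```python
# Hedged sketch: ROC/AUC on a single index and leave-one-out cross-validation
# of a simple one-feature classifier. All values are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
index_pos = rng.normal(0.6, 0.3, 22)    # hypothetical age-preference indices, group of interest
index_neg = rng.normal(-0.2, 0.3, 60)   # hypothetical indices, pooled control groups
x = np.concatenate([index_pos, index_neg]).reshape(-1, 1)
y = np.concatenate([np.ones(22), np.zeros(60)])

# Discriminative performance of the index itself
print("AUC:", roc_auc_score(y, x.ravel()))

# Leave-one-out cross-validation of a one-feature logistic classifier
pred = cross_val_predict(LogisticRegression(), x, y, cv=LeaveOneOut())
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```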

  15. Law, ethics and pandemic preparedness: the importance of cross-jurisdictional and cross-cultural perspectives.

    PubMed

    Bennett, Belinda; Carney, Terry

    2010-04-01

    To explore social equity, health planning, regulatory and ethical dilemmas in responding to a pandemic influenza (H5N1) outbreak, and the adequacy of protocols and standards such as the International Health Regulations (2005). This paper analyses the role of legal and ethical considerations for pandemic preparedness, including an exploration of the relevance of cross-jurisdictional and cross-cultural perspectives in assessing the validity of goals for harmonisation of laws and policies both within and between nations. Australian and international experience is reviewed in various areas, including distribution of vaccines during a pandemic, the distribution of authority between national and local levels of government, and global and regional equity issues for poorer countries. This paper finds that questions such as those of distributional justice (resource allocation) and regulatory frameworks raise important issues about the cultural and ethical acceptability of planning measures. Serious doubt is cast on a 'one size fits all' approach to international planning for managing a pandemic. It is concluded that a more nuanced approach than that contained in international guidelines may be required if an effective response is to be constructed internationally. The paper commends the wisdom of reliance on 'soft law', international guidance that leaves plenty of room for each nation to construct its response in conformity with its own cultural and value requirements. © 2010 The Authors. Journal Compilation © 2010 Public Health Association of Australia.

  16. Measuring accident risk exposure for pedestrians in different micro-environments.

    PubMed

    Lassarre, Sylvain; Papadimitriou, Eleonora; Yannis, George; Golias, John

    2007-11-01

    Pedestrians are mainly exposed to the risk of road accident when crossing a road in urban areas. Traditionally in the road safety field, the risk of accident for pedestrians is estimated as a rate of accident involvement per unit of time spent on the road network. The objective of this research is to develop an approach to accident risk based on the concept of risk exposure used in environmental epidemiology, such as in the case of exposure to pollutants. This type of indicator would be useful for comparing the effects of urban transportation policy scenarios on pedestrian safety. The first step is to create an indicator of pedestrians' exposure, which is based on motorised vehicles' "concentration" by lane and also takes account of traffic speed and the time taken to cross. This is applied to two specific micro-environments: junctions and mid-block locations. A model of pedestrians' crossing behaviour along a trip is then developed, based on a hierarchical choice between junctions and mid-block locations and taking account of origin and destination, traffic characteristics and pedestrian facilities. Finally, a complete framework is produced for modelling pedestrians' exposure in the light of their crossing behaviour. The feasibility of this approach is demonstrated on an artificial network and a first set of results is obtained from the validation of the models in observational studies.

  17. Deconvolution When Classifying Noisy Data Involving Transformations.

    PubMed

    Carroll, Raymond; Delaigle, Aurore; Hall, Peter

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  18. Estimating Ground-Level PM2.5 Concentrations in the Southeastern United States Using MAIAC AOD Retrievals and a Two-Stage Model

    NASA Technical Reports Server (NTRS)

    Hu, Xuefei; Waller, Lance A.; Lyapustin, Alexei; Wang, Yujie; Al-Hamdan, Mohammad Z.; Crosson, William L.; Estes, Maurice G., Jr.; Estes, Sue M.; Quattrochi, Dale A.; Puttaswamy, Sweta Jinnagara; hide

    2013-01-01

    Previous studies showed that fine particulate matter (PM2.5, particles smaller than 2.5 micrometers in aerodynamic diameter) is associated with various health outcomes. Ground in situ measurements of PM2.5 concentrations are considered to be the gold standard, but are time-consuming and costly. Satellite-retrieved aerosol optical depth (AOD) products have the potential to supplement the ground monitoring networks to provide spatiotemporally-resolved PM2.5 exposure estimates. However, the coarse resolutions (e.g., 10 km) of the satellite AOD products used in previous studies make it very difficult to estimate urban-scale PM2.5 characteristics that are crucial to population-based PM2.5 health effects research. In this paper, a new aerosol product with 1 km spatial resolution derived by the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm was examined using a two-stage spatial statistical model with meteorological fields (e.g., wind speed) and land use parameters (e.g., forest cover, road length, elevation, and point emissions) as ancillary variables to estimate daily mean PM2.5 concentrations. The study area is the southeastern U.S., and data for 2003 were collected from various sources. A cross validation approach was implemented for model validation. We obtained R² of 0.83, mean prediction error (MPE) of 1.89 µg/m³, and square root of the mean squared prediction errors (RMSPE) of 2.73 µg/m³ in model fitting, and R² of 0.67, MPE of 2.54 µg/m³, and RMSPE of 3.88 µg/m³ in cross validation. Both model fitting and cross validation indicate a good fit between the dependent variable and predictor variables. The results showed that 1 km spatial resolution MAIAC AOD can be used to estimate PM2.5 concentrations.
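
    A brief sketch of how the reported cross-validation statistics (R², mean prediction error, and root mean squared prediction error) can be computed from held-out predictions is given below. The random-forest regressor, the synthetic AOD/meteorology/land-use values and the variable names are stand-ins; the paper's actual two-stage spatial statistical model is not reproduced.

```python
# Hedged sketch: 10-fold cross-validated predictions and the R^2 / MPE / RMSPE
# summary statistics. MPE is taken here as the mean absolute prediction error.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n = 500
aod = rng.gamma(2.0, 0.2, n)        # hypothetical 1 km MAIAC AOD values
wind = rng.normal(3.0, 1.0, n)      # meteorological covariate
forest = rng.uniform(0.0, 1.0, n)   # land-use covariate
pm25 = 5.0 + 20.0 * aod - 0.8 * wind + 3.0 * forest + rng.normal(0.0, 2.5, n)

X = np.column_stack([aod, wind, forest])
pred = cross_val_predict(RandomForestRegressor(n_estimators=200, random_state=0),
                         X, pm25, cv=10)

r2 = np.corrcoef(pred, pm25)[0, 1] ** 2
mpe = np.mean(np.abs(pred - pm25))              # mean (absolute) prediction error
rmspe = np.sqrt(np.mean((pred - pm25) ** 2))    # root mean squared prediction error
print(f"CV R^2={r2:.2f}, MPE={mpe:.2f} ug/m3, RMSPE={rmspe:.2f} ug/m3")
```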

  19. Assessing cross-cultural validity of scales: a methodological review and illustrative example.

    PubMed

    Beckstead, Jason W; Yang, Chiu-Yueh; Lengacher, Cecile A

    2008-01-01

    In this article, we assessed the cross-cultural validity of the Women's Role Strain Inventory (WRSI), a multi-item instrument that assesses the degree of strain experienced by women who juggle the roles of working professional, student, wife and mother. Cross-cultural validity is evinced by demonstrating the measurement invariance of the WRSI. Measurement invariance is the extent to which items of multi-item scales function in the same way across different samples of respondents. We assessed measurement invariance by comparing a sample of working women in Taiwan with a similar sample from the United States. Structural equation models (SEMs) were employed to determine the invariance of the WRSI and to estimate the unique validity variance of its items. This article also provides nurse-researchers with the necessary underlying measurement theory and illustrates how SEMs may be applied to assess cross-cultural validity of instruments used in nursing research. Overall performance of the WRSI was acceptable but our analysis showed that some items did not display invariance properties across samples. Item analysis is presented and recommendations for improving the instrument are discussed.

  20. Theoretical description of RESPIRATION-CP

    NASA Astrophysics Data System (ADS)

    Nielsen, Anders B.; Tan, Kong Ooi; Shankar, Ravi; Penzel, Susanne; Cadalbert, Riccardo; Samoson, Ago; Meier, Beat H.; Ernst, Matthias

    2016-02-01

    We present a quintuple-mode operator-based Floquet approach to describe arbitrary amplitude modulated cross polarization experiments under magic-angle spinning (MAS). The description is used to analyze variants of the RESPIRATION approach (RESPIRATIONCP) where recoupling conditions and the corresponding first-order effective Hamiltonians are calculated, validated numerically and compared to experimental results for 15N-13C coherence transfer in uniformly 13C,15N-labeled alanine and in uniformly 2H,13C,15N-labeled (deuterated and 100% back-exchanged) ubiquitin at spinning frequencies of 16.7 and 90.9 kHz. Similarities and differences between different implementations of the RESPIRATIONCP sequence using either CW irradiation or small flip-angle pulses are discussed.

  1. Structure–activity relationships study of mTOR kinase inhibition using QSAR and structure-based drug design approaches

    PubMed Central

    Lakhlili, Wiame; Yasri, Abdelaziz; Ibrahimi, Azeddine

    2016-01-01

    The discovery of clinically relevant inhibitors of mammalian target of rapamycin (mTOR) for anticancer therapy has proved to be a challenging task. The quantitative structure–activity relationship (QSAR) approach is a very useful and widespread technique for ligand-based drug design, which can be used to identify novel and potent mTOR inhibitors. In this study, we performed two-dimensional QSAR tests and molecular docking validation tests of a series of mTOR ATP-competitive inhibitors to elucidate their structural properties associated with their activity. The QSAR tests were performed using the partial least squares method, with a correlation coefficient of r²=0.799 and a cross-validated q²=0.714. The chemical library screening was done by associating a ligand-based with a structure-based approach using the three-dimensional structure of mTOR developed by homology modeling. We were able to select 22 compounds from two databases as inhibitors of the mTOR kinase active site. We believe that the method and applications highlighted in this study will help future efforts toward the design of selective ATP-competitive inhibitors. PMID:27980424
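
    The sketch below shows, on synthetic descriptors, how a PLS-based QSAR model yields the two statistics quoted above: a fitted r² and a leave-one-out cross-validated q², computed as 1 − PRESS/SS. The descriptor matrix, activity values and the choice of three PLS components are illustrative assumptions, not the authors' dataset or settings.

```python
# Hedged sketch: PLS regression fit (r^2) plus leave-one-out cross-validated q^2.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)
n_compounds, n_descriptors = 40, 12
X = rng.normal(size=(n_compounds, n_descriptors))                       # 2D molecular descriptors
activity = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, n_compounds)    # pIC50-like values

ss = np.sum((activity - activity.mean()) ** 2)

pls = PLSRegression(n_components=3).fit(X, activity)
fitted = pls.predict(X).ravel()
r2 = 1.0 - np.sum((activity - fitted) ** 2) / ss                         # fitted r^2

loo_pred = cross_val_predict(PLSRegression(n_components=3), X, activity,
                             cv=LeaveOneOut()).ravel()
press = np.sum((activity - loo_pred) ** 2)                               # predictive residual sum of squares
q2 = 1.0 - press / ss                                                    # cross-validated q^2
print(f"r^2={r2:.3f}, q^2={q2:.3f}")
```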

  2. A linear programming computational framework integrates phospho-proteomics and prior knowledge to predict drug efficacy.

    PubMed

    Ji, Zhiwei; Wang, Bing; Yan, Ke; Dong, Ligang; Meng, Guanmin; Shi, Lei

    2017-12-21

    In recent years, the integration of 'omics' technologies, high-performance computation, and mathematical modeling of biological processes indicates that systems biology has started to fundamentally change how drug discovery is approached. The LINCS public data warehouse provides detailed information about cell responses to various genetic and environmental stressors. It can be of great help in developing new drugs and therapeutics, as well as in addressing the lack of effective drugs, drug resistance and relapse in cancer therapies. In this study, we developed a Ternary status based Integer Linear Programming (TILP) method to infer cell-specific signaling pathway networks and predict compounds' treatment efficacy. The novelty of our study is that phospho-proteomic data and prior knowledge are combined for modeling and optimizing the signaling network. To test the power of our approach, a generic pathway network was constructed for the human breast cancer cell line MCF7, and the TILP model was used to infer MCF7-specific pathways from a set of phospho-proteomic data collected for ten representative small-molecule chemical compounds (most of them studied in breast cancer treatment). Cross-validation indicated that the MCF7-specific pathway network inferred by TILP was reliable in predicting a compound's efficacy. Finally, we applied TILP to re-optimize the inferred cell-specific pathways and predict the outcomes of five small compounds (carmustine, doxorubicin, GW-8510, daunorubicin, and verapamil), which are rarely used in the clinic for breast cancer. In the simulation, the proposed approach allowed us to identify a compound's treatment efficacy qualitatively and quantitatively, and the cross-validation analysis indicated good accuracy in predicting the effects of the five compounds. In summary, the TILP model is useful for discovering new drugs for clinical use and for elucidating the potential mechanisms linking a compound to its targets.

  3. Triaging Patient Complaints: Monte Carlo Cross-Validation of Six Machine Learning Classifiers

    PubMed Central

    Cooper, William O; Catron, Thomas F; Karrass, Jan; Zhang, Zhe; Singh, Munindar P

    2017-01-01

    Background Unsolicited patient complaints can be a useful service recovery tool for health care organizations. Some patient complaints contain information that may necessitate further action on the part of the health care organization and/or the health care professional. Current approaches depend on the manual processing of patient complaints, which can be costly, slow, and challenging in terms of scalability. Objective The aim of this study was to evaluate automatic patient triage, which can potentially improve response time and provide much-needed scale, thereby enhancing opportunities to encourage physicians to self-regulate. Methods We implemented a comparison of several well-known machine learning classifiers to detect whether a complaint was associated with a physician or his/her medical practice. We compared these classifiers using a real-life dataset containing 14,335 patient complaints associated with 768 physicians that was extracted from patient complaints collected by the Patient Advocacy Reporting System developed at Vanderbilt University and associated institutions. We conducted a 10-split Monte Carlo cross-validation to validate our results. Results We achieved an accuracy of 82% and an F-score of 81% in correctly classifying patient complaints with sensitivity and specificity of 0.76 and 0.87, respectively. Conclusions We demonstrate that natural language processing methods based on modeling patient complaint text can be effective in identifying those patient complaints requiring physician action. PMID:28760726
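
    A minimal sketch of Monte Carlo cross-validation (repeated random train/test splits) for a text classifier is given below, mirroring the evaluation strategy described above. The toy complaint snippets, the single TF-IDF plus logistic-regression pipeline, and the 30% test fraction are assumptions; the study's dataset and the six classifiers it compared are not reproduced.

```python
# Hedged sketch: 10 Monte Carlo (stratified random) splits of a tiny invented
# text dataset, averaging accuracy and F-score over the splits.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.pipeline import make_pipeline

texts = np.array(["rude during visit", "billing question", "refused to explain risks",
                  "parking was difficult", "dismissed my symptoms", "appointment ran late"] * 20)
labels = np.array([1, 0, 1, 0, 1, 0] * 20)   # 1 = complaint concerns the physician/practice

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
splitter = StratifiedShuffleSplit(n_splits=10, test_size=0.3, random_state=0)

accs, f1s = [], []
for train, test in splitter.split(texts, labels):
    clf.fit(texts[train], labels[train])
    pred = clf.predict(texts[test])
    accs.append(accuracy_score(labels[test], pred))
    f1s.append(f1_score(labels[test], pred))

print(f"accuracy {np.mean(accs):.2f}, F-score {np.mean(f1s):.2f} over 10 random splits")
```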

  4. Cross-cultural adaptation and psychometric evaluation of the Juvenile Arthritis Multidimensional Assessment Report (JAMAR) in 54 languages across 52 countries: review of the general methodology.

    PubMed

    Bovis, Francesca; Consolaro, Alessandro; Pistorio, Angela; Garrone, Marco; Scala, Silvia; Patrone, Elisa; Rinaldi, Mariangela; Villa, Luca; Martini, Alberto; Ravelli, Angelo; Ruperto, Nicolino

    2018-04-01

    The aim of this project was to cross-culturally adapt and validate the Juvenile Arthritis Multidimensional Assessment Report (JAMAR) questionnaire in 54 languages across 52 different countries that are members of the Paediatric Rheumatology International Trials Organisation (PRINTO). This effort was part of a wider project named Epidemiology and Outcome of Children with Arthritis (EPOCA) to obtain information on the frequency of juvenile idiopathic arthritis (JIA) categories in different geographic areas, the therapeutic approaches adopted, and the disease status of children with JIA currently followed worldwide. A total of 13,843 subjects were enrolled from the 49 countries that took part both in the cross-cultural adaptation phase and in the related validation and data collection: Algeria, Argentina, Belgium, Brazil, Bulgaria, Canada, Chile, Colombia, Croatia, Czech Republic, Denmark, Ecuador, Egypt, Estonia, Finland, France, Georgia, Germany, Greece, Hungary, India, Islamic Republic of Iran, Israel, Italy, Latvia, Libya, Lithuania, Mexico, Netherlands, Norway, Oman, Paraguay, Poland, Portugal, Romania, Russian Federation, Saudi Arabia, Serbia, Slovakia, Slovenia, South Africa, Spain, Sweden, Switzerland, Thailand, Turkey, Ukraine, United Kingdom and United States of America. 9021 patients had JIA (10.7% systemic arthritis, 41.9% oligoarthritis, 23.5% RF negative polyarthritis, 4.2% RF positive polyarthritis, 3.4% psoriatic arthritis, 10.6% enthesitis-related arthritis and 5.7% undifferentiated arthritis) while 4822 were healthy children. This introductory paper describes the overall methodology; results pertaining to each country are fully described in the accompanying manuscripts. In conclusion, the JAMAR translations were found to have satisfactory psychometric properties and it is thus a reliable and valid tool for the multidimensional assessment of children with JIA.

  5. Multiple receptor conformation docking and dock pose clustering as tool for CoMFA and CoMSIA analysis - a case study on HIV-1 protease inhibitors.

    PubMed

    Sivan, Sree Kanth; Manga, Vijjulatha

    2012-02-01

    Multiple receptor conformation docking (MRCD) and clustering of dock poses allow seamless incorporation of receptor-bound conformations across a wide range of ligands with varied structural scaffolds. The accuracy of the approach was tested on a set of 120 cyclic urea molecules having HIV-1 protease inhibitory activity, using 12 high-resolution X-ray crystal structures and one NMR-resolved conformation of HIV-1 protease extracted from the Protein Data Bank. A cross-validation was performed on 25 non-cyclic urea HIV-1 protease inhibitors having varied structures. The comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) models were generated using 60 molecules in the training set by applying the leave-one-out cross-validation method; r²(LOO) values of 0.598 and 0.674, and non-cross-validated regression coefficient r² values of 0.983 and 0.985, were obtained for CoMFA and CoMSIA respectively. The predictive ability of these models was determined using a test set of 60 cyclic urea molecules, which gave predictive correlations (r²pred) of 0.684 and 0.64 for CoMFA and CoMSIA respectively, indicating good internal predictive ability. Based on this information, 25 non-cyclic urea molecules were taken as a test set to check the external predictive ability of these models. This gave a remarkable outcome, with r²pred of 0.61 and 0.53 for CoMFA and CoMSIA respectively. The results invariably show that this method is useful for performing 3D QSAR analysis on molecules having different structural motifs.

  6. Validity of Hansen-Roach cross sections in low-enriched uranium systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busch, R.D.; O'Dell, R.D.

    Within the nuclear criticality safety community, the Hansen-Roach 16-group cross section set has been the "standard" for use in k_eff calculations over the past 30 years. Yet even with its widespread acceptance, there are still questions about its validity and adequacy, about the proper procedure for calculating the potential scattering cross section, σ_p, for uranium and plutonium, and about the concept of resonance self-shielding and its impact on cross sections. This paper attempts to address these questions. It provides a brief background on the Hansen-Roach cross sections. Next is presented a review of resonances in cross sections, self-shielding of these resonances, and the use of σ_p to characterize resonance self-shielding. Three prescriptions for calculating σ_p are given. Finally, results of several calculations of k_eff on low-enriched uranium systems are provided to confirm the validity of the Hansen-Roach cross sections when applied to such systems.

  7. Cross-cultural adaptation and validation of Persian Achilles tendon Total Rupture Score.

    PubMed

    Ansari, Noureddin Nakhostin; Naghdi, Soofia; Hasanvand, Sahar; Fakhari, Zahra; Kordi, Ramin; Nilsson-Helander, Katarina

    2016-04-01

    To cross-culturally adapt the Achilles tendon Total Rupture Score (ATRS) to the Persian language and to preliminarily evaluate the reliability and validity of a Persian ATRS. A cross-sectional and prospective cohort study was conducted to translate and cross-culturally adapt the ATRS to Persian (ATRS-Persian) following steps described in guidelines. Thirty patients with total Achilles tendon rupture and 30 healthy subjects participated in this study. Psychometric properties of floor/ceiling effects (responsiveness), internal consistency reliability, test-retest reliability, standard error of measurement (SEM), smallest detectable change (SDC), construct validity, and discriminant validity were tested. Factor analysis was performed to determine the ATRS-Persian structure. There were no floor or ceiling effects, indicating adequate content and responsiveness of the ATRS-Persian. Internal consistency was high (Cronbach's α 0.95). Item-total correlations exceeded the acceptable standard of 0.3 for all items (0.58-0.95). The test-retest reliability was excellent (ICC agreement = 0.98). SEM and SDC were 3.57 and 9.9, respectively. Construct validity was supported by a significant correlation between the ATRS-Persian total score and the Persian Foot and Ankle Outcome Score (PFAOS) total score and PFAOS subscales (r = 0.55-0.83). The ATRS-Persian significantly discriminated between patients and healthy subjects. Exploratory factor analysis revealed one component. The ATRS was cross-culturally adapted to Persian and demonstrated to be a reliable and valid instrument to measure functional outcomes in Persian patients with Achilles tendon rupture. Level of evidence: II.
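
    The reported SEM and SDC are linked by standard formulas (SEM = SD·√(1 − ICC); SDC = 1.96·√2·SEM), and the small sketch below reproduces the quoted SDC of about 9.9 from the reported SEM of 3.57. The baseline standard deviation used in the SEM line is a hypothetical value, since the abstract does not give it.

```python
# Hedged sketch of the standard reliability formulas; only ICC = 0.98 and
# SEM = 3.57 are taken from the abstract, the baseline SD is assumed.
import math

icc = 0.98
sd_baseline = 25.0                        # hypothetical between-subject SD of ATRS scores
sem = sd_baseline * math.sqrt(1 - icc)    # standard error of measurement
sdc = 1.96 * math.sqrt(2) * 3.57          # smallest detectable change from the reported SEM
print(f"SEM (from assumed SD): {sem:.2f}; SDC from reported SEM: {sdc:.1f}")   # SDC ~ 9.9
```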

  8. HAMDA: Hybrid Approach for MiRNA-Disease Association prediction.

    PubMed

    Chen, Xing; Niu, Ya-Wei; Wang, Guang-Hui; Yan, Gui-Ying

    2017-12-01

    For decades, a large body of experimental research has collectively indicated that microRNA (miRNA) could play indispensable roles in many critical biological processes and thus also in the pathogenesis of complex human diseases. Because traditional biology experiments are costly and time-consuming, increasing attention has been paid to the development of effective and feasible computational methods for predicting potential associations between diseases and miRNAs. In this study, we developed a computational model, the Hybrid Approach for MiRNA-Disease Association prediction (HAMDA), which uses a hybrid graph-based recommendation algorithm to reveal novel miRNA-disease associations by integrating experimentally verified miRNA-disease associations, disease semantic similarity, miRNA functional similarity, and Gaussian interaction profile kernel similarity into a recommendation algorithm. HAMDA took not only network structure and information propagation but also node attributes into consideration, resulting in satisfactory prediction performance. Specifically, HAMDA obtained AUCs of 0.9035 and 0.8395 in the frameworks of global and local leave-one-out cross-validation, respectively. Meanwhile, HAMDA also achieved good performance with an AUC of 0.8965 ± 0.0012 in 5-fold cross-validation. Additionally, we conducted case studies of three important human cancers to evaluate the performance of HAMDA. As a result, 90% (lymphoma), 86% (prostate cancer) and 92% (kidney cancer) of the top 50 predicted miRNAs were confirmed by recent experimental literature, which showed the reliable prediction ability of HAMDA. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Predicting drug-induced liver injury in human with Naïve Bayes classifier approach.

    PubMed

    Zhang, Hui; Ding, Lan; Zou, Yi; Hu, Shui-Qing; Huang, Hai-Guo; Kong, Wei-Bao; Zhang, Ji

    2016-10-01

    Drug-induced liver injury (DILI) is one of the major safety concerns in drug development. Although various toxicological studies assessing DILI risk have been developed, these methods are not sufficient for predicting DILI in humans. Thus, developing new tools and approaches to better predict DILI risk in humans has become an important and urgent task. In this study, we aimed to develop a computational model for assessing DILI risk using a larger-scale human dataset and a Naïve Bayes classifier. The established Naïve Bayes prediction model was evaluated by 5-fold cross-validation and an external test set. For the training set, the overall prediction accuracy of the 5-fold cross-validation was 94.0%. The sensitivity, specificity, positive predictive value and negative predictive value were 97.1%, 89.2%, 93.5% and 95.1%, respectively. For the test set, the concordance was 72.6%, with a sensitivity of 72.5%, specificity of 72.7%, positive predictive value of 80.4%, and negative predictive value of 63.2%. Furthermore, some important molecular descriptors related to DILI risk and some toxic/non-toxic fragments were identified. Thus, we hope the prediction model established here will be employed for the assessment of human DILI risk, and the obtained molecular descriptors and substructures should be taken into consideration in the design of new candidate compounds to help medicinal chemists rationally select the chemicals with the best prospects of being effective and safe.
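
    As a hedged illustration of the evaluation scheme described above, the sketch below runs a Naïve Bayes classifier through 5-fold cross-validation on synthetic data and reports the same statistics (accuracy, sensitivity, specificity, positive and negative predictive values). The generated features are stand-ins for molecular descriptors, not the study's DILI dataset.

```python
# Hedged sketch: Gaussian Naive Bayes with 5-fold cross-validated predictions,
# summarised by the usual confusion-matrix-derived statistics.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=400, n_features=20, weights=[0.4, 0.6],
                           random_state=0)   # stand-in for descriptors / DILI labels

pred = cross_val_predict(GaussianNB(), X, y,
                         cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)    # positive predictive value
npv = tn / (tn + fn)    # negative predictive value
print(accuracy, sensitivity, specificity, ppv, npv)
```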

  10. Classification based upon gene expression data: bias and precision of error rates.

    PubMed

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
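
    The sketch below illustrates two of the safeguards discussed above on synthetic high-dimensional data: two-level (nested) cross-validation, in which parameter tuning is confined to the inner loop, and a label-permutation check of the resulting error rate. The linear SVM, the parameter grid and the data dimensions are assumptions, not the classifiers or datasets analysed in the paper.

```python
# Hedged sketch: nested (two-level) cross-validation plus a permuted-label check.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=100, n_features=2000, n_informative=20,
                           random_state=0)   # stand-in for a gene-expression matrix

inner = GridSearchCV(LinearSVC(dual=False), {"C": [0.01, 0.1, 1, 10]},
                     cv=StratifiedKFold(5))            # level 1: choose C on training data only
outer = StratifiedKFold(5, shuffle=True, random_state=0)
acc = cross_val_score(inner, X, y, cv=outer)           # level 2: estimate the error rate
print("nested-CV error rate:", 1 - acc.mean())

# Permutation check: with shuffled labels the error rate should sit near 50%;
# a much lower value would signal that the evaluation itself leaks information.
rng = np.random.default_rng(0)
perm_acc = cross_val_score(inner, X, rng.permutation(y), cv=outer)
print("permuted-label error rate:", 1 - perm_acc.mean())
```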

  11. Early detection of lung cancer recurrence after stereotactic ablative radiation therapy: radiomics system design

    NASA Astrophysics Data System (ADS)

    Dammak, Salma; Palma, David; Mattonen, Sarah; Senan, Suresh; Ward, Aaron D.

    2018-02-01

    Stereotactic ablative radiotherapy (SABR) is the standard treatment recommendation for Stage I non-small cell lung cancer (NSCLC) patients who are inoperable or who refuse surgery. This option is well tolerated by even unfit patients and has a low recurrence risk post-treatment. However, SABR induces changes in the lung parenchyma that can appear similar to those of recurrence, and at an early follow-up time point the two are not easily distinguishable even for an expert physician. We hypothesized that a radiomics signature derived from standard-of-care computed tomography (CT) imaging can detect cancer recurrence within six months of SABR treatment. This study reports on the design phase of our work, with external validation planned in future work. In this study, we performed cross-validation experiments with four feature selection approaches and seven classifiers on an 81-patient data set. We extracted 104 radiomics features from the consolidative and the peri-consolidative regions on the follow-up CT scans. The best results were achieved using the sum of estimated Mahalanobis distances (Maha) for supervised forward feature selection and a trainable automatic radial basis support vector classifier (RBSVC). This system produced an area under the receiver operating characteristic curve (AUC) of 0.84, an error rate of 16.4%, a false negative rate of 12.7%, and a false positive rate of 20.0% for leave-one-patient-out cross-validation. This suggests that once validated on an external data set, radiomics could reliably detect post-SABR recurrence and form the basis of a tool assisting physicians in making salvage treatment decisions.

  12. Integration and validation testing for PhEDEx, DBS and DAS with the PhEDEx LifeCycle agent

    NASA Astrophysics Data System (ADS)

    Boeser, C.; Chwalek, T.; Giffels, M.; Kuznetsov, V.; Wildish, T.

    2014-06-01

    The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among different systems within CMS experiment at LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The LifeCycle agent provides a framework for customising the test workflow in arbitrary ways, and can scale to levels of activity well beyond those seen in normal running. This means we can run realistic performance tests at scales not likely to be seen by the experiment for some years, or with custom topologies to examine particular situations that may cause concern some time in the future. The LifeCycle agent has recently been enhanced to become a general purpose integration and validation testing tool for major CMS services. It allows cross-system integration tests of all three components to be performed in controlled environments, without interfering with production services. In this paper we discuss the design and implementation of the LifeCycle agent. We describe how it is used for small-scale debugging and validation tests, and how we extend that to large-scale tests of whole groups of sub-systems. We show how the LifeCycle agent can emulate the action of operators, physicists, or software agents external to the system under test, and how it can be scaled to large and complex systems.

  13. Using the NANA toolkit at home to predict older adults' future depression.

    PubMed

    Andrews, J A; Harrison, R F; Brown, L J E; MacLean, L M; Hwang, F; Smith, T; Williams, E A; Timon, C; Adlam, T; Khadra, H; Astell, A J

    2017-04-15

    Depression is currently underdiagnosed among older adults. As part of the Novel Assessment of Nutrition and Aging (NANA) validation study, 40 older adults self-reported their mood using a touchscreen computer over three, one-week periods. Here, we demonstrate the potential of these data to predict future depression status. We analysed data from the NANA validation study using a machine learning approach. We applied the least absolute shrinkage and selection operator with a logistic model to averages of six measures of mood, with depression status according to the Geriatric Depression Scale 10 weeks later as the outcome variable. We tested multiple values of the selection parameter in order to produce a model with low deviance. We used a cross-validation framework to avoid overspecialisation, and receiver operating characteristic (ROC) curve analysis to determine the quality of the fitted model. The model we report contained coefficients for two variables: sadness and tiredness, as well as a constant. The cross-validated area under the ROC curve for this model was 0.88 (CI: 0.69-0.97). While results are based on a small sample, the methodology for the selection of variables appears suitable for the problem at hand, suggesting promise for a wider study and ultimate deployment with older adults at increased risk of depression. We have identified self-reported scales of sadness and tiredness as sensitive measures which have the potential to predict future depression status in older adults, partially addressing the problem of underdiagnosis. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
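
    A minimal sketch of the analysis strategy described above, assuming synthetic data: an L1-penalised (lasso) logistic model over six mood averages, with the penalty strength chosen over a grid and the ROC curve evaluated on cross-validated predictions. The variable names and the simulated relationship between sadness, tiredness and later depression status are illustrative only, not the NANA study data.

```python
# Hedged sketch: lasso-penalised logistic regression with a cross-validated AUC.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict

rng = np.random.default_rng(4)
n = 40
mood = rng.normal(size=(n, 6))                        # averages of six self-reported mood items
depressed = (0.9 * mood[:, 0] + 0.7 * mood[:, 1]      # e.g. sadness and tiredness columns
             + rng.normal(0, 0.8, n)) > 0.5           # later depression status (simulated)

# L1 penalty, with its strength chosen over a grid by internal cross-validation
lasso_logit = LogisticRegressionCV(penalty="l1", solver="liblinear",
                                   Cs=20, cv=5, scoring="neg_log_loss")
prob = cross_val_predict(lasso_logit, mood, depressed, method="predict_proba",
                         cv=StratifiedKFold(5, shuffle=True, random_state=0))[:, 1]
print("cross-validated AUC:", roc_auc_score(depressed, prob))

lasso_logit.fit(mood, depressed)
print("retained coefficients:", np.round(lasso_logit.coef_.ravel(), 2))
```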

  14. Cross-validation of clinical characteristics and treatment patterns associated with phenotypes for lithium response defined by the Alda scale.

    PubMed

    Scott, Jan; Geoffroy, Pierre Alexis; Sportiche, Sarah; Brichant-Petit-Jean, Clara; Gard, Sebastien; Kahn, Jean-Pierre; Azorin, Jean-Michel; Henry, Chantal; Etain, Bruno; Bellivier, Frank

    2017-01-15

    It is increasingly recognised that reliable and valid assessments of lithium response are needed in order to target more efficiently the use of this medication in bipolar disorders (BD) and to identify genotypes, endophenotypes and biomarkers of response. In a large, multi-centre, clinically representative sample of 300 cases of BD, we assess external clinical validators of lithium response phenotypes as defined using three different recommended approaches to scoring the Alda lithium response scale. The scale comprises an A scale (rating lithium response) and a B scale (assessing confounders). Analysis of the two continuous scoring methods (A scale score minus the B scale score, or A scale score in those with a low B scale score) demonstrated that 21-23% of the explained variance in lithium response was accounted for by a positive family history of BD I and the early introduction of lithium. Categorical definitions of response suggest poor response is also associated with a positive history of alcohol and/or substance use comorbidities. High B scale scores were significantly associated with longer duration of illness prior to receiving lithium and the presence of psychotic symptoms. The original sample was not recruited specifically to study lithium response. The Alda scale is designed to assess response retrospectively. This cross-validation study identifies different clinical phenotypes of lithium response when defined by continuous or categorical measures. Future clinical, genetic and biomarker studies should report both the findings and the method employed to assess lithium response according to the Alda scale. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Agility performance in high-level junior basketball players: the predictive value of anthropometrics and power qualities.

    PubMed

    Sisic, Nedim; Jelicic, Mario; Pehar, Miran; Spasic, Miodrag; Sekulic, Damir

    2016-01-01

    In basketball, anthropometric status is an important factor when identifying and selecting talent, while agility is one of the most vital motor performances. The aim of this investigation was to evaluate the influence of anthropometric variables and power capacities on different preplanned agility performances. The participants were 92 high-level, junior-age basketball players (16-17 years of age; 187.6±8.72 cm body height, 78.40±12.26 kg body mass), randomly divided into a validation and a cross-validation subsample. The predictor set consisted of 16 anthropometric variables and three tests of power capacities (Sargent jump, broad jump and medicine-ball throw). The criteria were three tests of agility: a T-Shape Test, a Zig-Zag Test, and a test of running with a 180-degree turn (T180). Forward stepwise multiple regressions were calculated for the validation subsample and then cross-validated. Cross-validation included correlations between observed and predicted scores, dependent-samples t-tests between predicted and observed scores, and Bland-Altman plots. Analysis of variance identified centres as superior in most of the anthropometric indices and in the medicine-ball throw (all at P<0.05), with no significant between-position differences for the other motor performances studied. Multiple regression models originally calculated for the validation subsample were then cross-validated and confirmed for the Zig-Zag Test (R of 0.71 and 0.72 for the validation and cross-validation subsample, respectively). Anthropometrics were not strongly related to agility performance, but leg length was found to be negatively associated with performance in basketball-specific agility. Power capacities were confirmed to be an important factor in agility. The results highlight the importance of sport-specific tests when studying pre-planned agility performance in basketball. Improvement in power capacities will probably result in improved agility in basketball athletes, while anthropometric indices should be used to identify those athletes who can achieve superior agility performance.
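
    The short sketch below illustrates, with synthetic predictors, the validation/cross-validation workflow the abstract describes: a regression fitted on one subsample, predictions made for the other, and agreement between observed and predicted scores checked via correlation, a dependent-samples t-test, and Bland-Altman statistics. The linear model and simulated data are assumptions, not the study's stepwise regressions.

```python
# Hedged sketch: fit on a validation subsample, predict the cross-validation
# subsample, then compare observed and predicted scores.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 92
X = rng.normal(size=(n, 4))                                   # e.g. leg length, jump and throw tests
agility = 0.6 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0, 0.5, n)

half = n // 2
fit = LinearRegression().fit(X[:half], agility[:half])        # validation subsample
pred = fit.predict(X[half:])                                   # cross-validation subsample
obs = agility[half:]

r = np.corrcoef(obs, pred)[0, 1]                               # observed vs predicted correlation
t, p = stats.ttest_rel(obs, pred)                              # dependent-samples t-test
bias = np.mean(obs - pred)                                     # Bland-Altman mean difference
loa = 1.96 * np.std(obs - pred, ddof=1)                        # limits of agreement
print(f"r={r:.2f}, t={t:.2f} (p={p:.3f}), bias={bias:.2f} +/- {loa:.2f}")
```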

  16. TENI: A comprehensive battery for cognitive assessment based on games and technology.

    PubMed

    Delgado, Marcela Tenorio; Uribe, Paulina Arango; Alonso, Andrés Aparicio; Díaz, Ricardo Rosas

    2016-01-01

    TENI (Test de Evaluación Neuropsicológica Infantil) is an instrument developed to assess cognitive abilities in children between 3 and 9 years of age. It is based on a model that incorporates games and technology as tools to improve the assessment of children's capacities. The test was standardized with two Chilean samples of 524 and 82 children living in urban zones. Evidence of reliability and validity based on current standards is presented. Data show good levels of reliability for all subtests. Some evidence of validity in terms of content, test structure, and association with other variables is presented. This instrument represents a novel approach and a new frontier in cognitive assessment. Further studies with clinical, rural, and cross-cultural populations are required.

  17. Evaluation of using the Chinese version of the Spirituality Index of Well-Being (SIWB) scale in Taiwanese elders.

    PubMed

    Lee, Yi-Hui; Salman, Ali

    2016-11-01

    Spirituality and spiritual well-being have emerged as important indicators of quality of life and health outcomes. Nursing as a profession is concerned with a holistic approach to improving health and overall well-being. To evaluate the outcomes of holistic nursing interventions, valid and reliable instruments for assessing spiritual well-being are necessary. There is a lack of instruments for measuring spiritual well-being in Chinese populations, and little is known about the feasibility of using the Spirituality Index of Well-Being (SIWB) in Taiwanese elders. The purpose of this cross-sectional study was to evaluate the use of the translated Chinese version of the Spirituality Index of Well-Being (SIWB-C) with Taiwanese elders. A total of 150 individuals who were 65 years old or older and living in southern Taiwan were recruited from a public community center. A four-step procedure was used to translate the English version of the SIWB into traditional Chinese. Internal consistency, factor analysis, and correlation analyses were conducted to evaluate the reliability and validity of the SIWB-C. The SIWB-C demonstrated high internal consistency, with a Cronbach's alpha of .95. The construct validity of the SIWB-C was supported by factor analysis and by significant correlations with its subscales and the CES-D scale. The psychometric analysis indicates that the SIWB-C is a valid and reliable instrument for measuring spiritual well-being and provides a feasible approach for assessing Taiwanese elders' spiritual well-being in the future. Copyright © 2016 Elsevier Inc. All rights reserved.
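    Several records in this listing report internal consistency as Cronbach's alpha. The following minimal sketch shows how that coefficient is computed from an item-response matrix using the standard formula; the data are simulated and purely illustrative.

    ```python
    # Minimal sketch of Cronbach's alpha for a multi-item scale
    # (rows = respondents, columns = items). Simulated data only.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: array of shape (n_respondents, n_items)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)       # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed score
        return k / (k - 1) * (1 - item_variances.sum() / total_variance)

    rng = np.random.default_rng(0)
    # Simulated responses: a shared factor plus item-specific noise, so the items correlate.
    responses = rng.normal(size=(150, 1)) + 0.8 * rng.normal(size=(150, 12))
    print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
    ```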

  18. Developing and validating risk prediction models in an individual participant data meta-analysis

    PubMed Central

    2014-01-01

    Background Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. Methods A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. Results The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model’s applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. Conclusions An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using ‘internal-external cross-validation’ to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction. PMID:24397587
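    The 'internal-external cross-validation' idea mentioned in the conclusions can be sketched directly: the risk model is refitted with each study omitted in turn and its discrimination is checked in the held-out study. The sketch below uses an ordinary logistic regression with hypothetical column names and, for brevity, does not model the study-specific intercepts that the authors recommend.

    ```python
    # Minimal sketch of internal-external cross-validation on pooled IPD:
    # develop on all studies but one, test in the omitted study, and rotate.
    # File and column names are hypothetical; study-specific intercepts are not modelled.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import LeaveOneGroupOut

    ipd = pd.read_csv("ipd.csv")                    # pooled individual participant data
    predictors = ["age", "sex", "sbp", "smoker"]    # hypothetical predictor columns
    X, y, study = ipd[predictors], ipd["event"], ipd["study_id"]

    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=study):
        model = LogisticRegression(max_iter=1000).fit(X.iloc[train_idx], y.iloc[train_idx])
        auc = roc_auc_score(y.iloc[test_idx],
                            model.predict_proba(X.iloc[test_idx])[:, 1])
        print(f"held-out study {study.iloc[test_idx].iloc[0]}: AUC = {auc:.2f}")
    ```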

  19. Mental State Assessment and Validation Using Personalized Physiological Biometrics

    PubMed Central

    Patel, Aashish N.; Howard, Michael D.; Roach, Shane M.; Jones, Aaron P.; Bryant, Natalie B.; Robinson, Charles S. H.; Clark, Vincent P.; Pilly, Praveen K.

    2018-01-01

    Mental state monitoring is a critical component of current and future human-machine interfaces, including semi-autonomous driving and flying, air traffic control, decision aids, training systems, and will soon be integrated into ubiquitous products like cell phones and laptops. Current mental state assessment approaches supply quantitative measures, but their only frame of reference is generic population-level ranges. What is needed are physiological biometrics that are validated in the context of task performance of individuals. Using curated intake experiments, we are able to generate personalized models of three key biometrics as useful indicators of mental state; namely, mental fatigue, stress, and attention. We demonstrate improvements to existing approaches through the introduction of new features. Furthermore, addressing the current limitations in assessing the efficacy of biometrics for individual subjects, we propose and employ a multi-level validation scheme for the biometric models by means of k-fold cross-validation for discrete classification and regression testing for continuous prediction. The paper not only provides a unified pipeline for extracting a comprehensive mental state evaluation from a parsimonious set of sensors (only EEG and ECG), but also demonstrates the use of validation techniques in the absence of empirical data. Furthermore, as an example of the application of these models to novel situations, we evaluate the significance of correlations of personalized biometrics to the dynamic fluctuations of accuracy and reaction time on an unrelated threat detection task using a permutation test. Our results provide a path toward integrating biometrics into augmented human-machine interfaces in a judicious way that can help to maximize task performance.
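    A minimal sketch of the two validation steps named above, under simulated data: k-fold cross-validation of a discrete mental-state classifier, and a permutation test for the significance of a correlation between a continuous biometric and task performance. The features, labels, and classifier are stand-ins, not the authors' pipeline.

    ```python
    # Minimal sketch: (1) k-fold cross-validation of a discrete state classifier,
    # (2) permutation test for the correlation between a biometric and task accuracy.
    # All data are simulated.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    X = rng.normal(size=(300, 20))        # simulated EEG/ECG features per epoch
    y = rng.integers(0, 2, size=300)      # simulated state labels (e.g. fatigued vs. alert)

    scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
    print(f"5-fold cross-validated accuracy: {scores.mean():.2f}")

    # Permutation test: how often does a shuffled biometric correlate with task
    # accuracy at least as strongly as the observed biometric does?
    biometric = rng.normal(size=200)
    task_accuracy = 0.3 * biometric + rng.normal(size=200)
    observed_r = abs(np.corrcoef(biometric, task_accuracy)[0, 1])
    null_rs = np.array([abs(np.corrcoef(rng.permutation(biometric), task_accuracy)[0, 1])
                        for _ in range(5000)])
    p_value = (np.sum(null_rs >= observed_r) + 1) / (null_rs.size + 1)
    print(f"observed |r| = {observed_r:.2f}, permutation p = {p_value:.4f}")
    ```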

  20. Mental State Assessment and Validation Using Personalized Physiological Biometrics.

    PubMed

    Patel, Aashish N; Howard, Michael D; Roach, Shane M; Jones, Aaron P; Bryant, Natalie B; Robinson, Charles S H; Clark, Vincent P; Pilly, Praveen K

    2018-01-01

    Mental state monitoring is a critical component of current and future human-machine interfaces, including semi-autonomous driving and flying, air traffic control, decision aids, training systems, and will soon be integrated into ubiquitous products like cell phones and laptops. Current mental state assessment approaches supply quantitative measures, but their only frame of reference is generic population-level ranges. What is needed are physiological biometrics that are validated in the context of task performance of individuals. Using curated intake experiments, we are able to generate personalized models of three key biometrics as useful indicators of mental state; namely, mental fatigue, stress, and attention. We demonstrate improvements to existing approaches through the introduction of new features. Furthermore, addressing the current limitations in assessing the efficacy of biometrics for individual subjects, we propose and employ a multi-level validation scheme for the biometric models by means of k-fold cross-validation for discrete classification and regression testing for continuous prediction. The paper not only provides a unified pipeline for extracting a comprehensive mental state evaluation from a parsimonious set of sensors (only EEG and ECG), but also demonstrates the use of validation techniques in the absence of empirical data. Furthermore, as an example of the application of these models to novel situations, we evaluate the significance of correlations of personalized biometrics to the dynamic fluctuations of accuracy and reaction time on an unrelated threat detection task using a permutation test. Our results provide a path toward integrating biometrics into augmented human-machine interfaces in a judicious way that can help to maximize task performance.

  1. Validation and psychometric evaluation of physical activity belief scale among patients with type 2 diabetes mellitus: an application of health action process approach.

    PubMed

    Rohani, Hosein; Eslami, Ahmad Ali; Ghaderi, Arsalan; Jafari-Koshki, Tohid; Sadeghi, Erfan; Bidkhori, Mohammad; Raei, Mehdi

    2016-01-01

    A moderate increase in physical activity (PA) may be helpful in preventing or postponing the complications of type 2 diabetes mellitus (T2DM). The aim of this study was to assess the psychometric properties of a health action process approach (HAPA)-based PA inventory among T2DM patients. In 2015, this cross-sectional study was carried out on 203 participants recruited by convenience sampling in Isfahan, Iran. Content and face validity were confirmed by a panel of experts, and the comments noted by 9 outpatients on the inventory were also investigated. The items were then administered to 203 T2DM patients. Construct validity was assessed using exploratory factor analysis and confirmatory factor analysis within structural equation modeling. Reliability was assessed with Cronbach's alpha and the intraclass correlation coefficient (ICC). Content validity was acceptable (CVR = 0.62, CVI = 0.89). Exploratory factor analysis extracted seven factors (risk perception, action self-efficacy, outcome expectancies, maintenance self-efficacy, action and coping planning, behavioral intention, and recovery self-efficacy) explaining 82.23% of the variation. The HAPA model had an acceptable fit to the data (χ2 = 3.21, df = 3, P = 0.38; RMSEA = 0.06; AGFI = 0.90; PGFI = 0.12). Cronbach's alpha for the scales ranged from about 0.63 to 0.97 and the ICC from 0.862 to 0.988. The findings of the present study provide initial support for the reliability and validity of the HAPA-based PA inventory among patients with T2DM.

  2. Multivariate Adaptive Regression Splines (Preprint)

    DTIC Science & Technology

    1990-08-01

    fold cross-validation would take about ten times as long, and MARS is not all that fast to begin with. Friedman has a number of examples showing... standardized mean squared error of prediction (MSEP), the generalized cross-validation (GCV), and the number of selected terms (TERMS). In accordance with... and the mi = 10 case were almost exclusively spurious cross-product terms and terms involving the nuisance variables x6 through x10. This large number of
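    The snippet above refers to the generalized cross-validation (GCV) score that MARS uses for model selection. For reference, the commonly cited form of this criterion, stated here from general knowledge of the method rather than quoted from the report, is:

    ```latex
    % Generalized cross-validation criterion used by MARS for model selection;
    % C(M) is the effective number of parameters of the M-term model and N the
    % number of observations (standard form).
    \mathrm{GCV}(M) \;=\;
      \frac{\tfrac{1}{N}\sum_{i=1}^{N}\bigl(y_i - \hat f_M(x_i)\bigr)^{2}}
           {\bigl(1 - C(M)/N\bigr)^{2}}
    ```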

  3. Cross-coupled control for all-terrain rovers.

    PubMed

    Reina, Giulio

    2013-01-08

    Mobile robots are increasingly being used in challenging outdoor environments for applications that include construction, mining, agriculture, military and planetary exploration. In order to accomplish the planned task, it is critical that the motion control system ensure accuracy and robustness. The achievement of high performance on rough terrain is tightly connected with the minimization of vehicle-terrain dynamics effects such as slipping and skidding. This paper presents a cross-coupled controller for a 4-wheel-drive/4-wheel-steer robot, which optimizes the wheel motors' control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating on agricultural terrain, are presented to validate the system. It is shown that the proposed approach is effective in reducing slippage and vehicle posture errors.

  4. SNP markers tightly linked to root knot nematode resistance in grapevine (Vitis cinerea) identified by a genotyping-by-sequencing approach followed by Sequenom MassARRAY validation

    PubMed Central

    Morales, Norma B.; Moskwa, Sam; Clingeleffer, Peter R.; Thomas, Mark R.

    2018-01-01

    Plant parasitic nematodes, including root knot nematode Meloidogyne species, cause extensive damage to agricultural and horticultural crops. As Vitis vinifera cultivars are susceptible to root knot nematode parasitism, rootstocks resistant to these soil pests provide a sustainable approach to maintaining grapevine production. Currently, most of the commercially available root knot nematode resistant rootstocks are highly vigorous and take up excess potassium, which reduces wine quality. As a result, there is a pressing need to breed new root knot nematode resistant rootstocks that have no impact on wine quality. To develop molecular markers that predict root knot nematode resistance for marker-assisted breeding, a genetic approach was employed to identify a root knot nematode resistance locus in grapevine. To this end, a Meloidogyne javanica resistant Vitis cinerea accession was crossed to the susceptible Vitis vinifera cultivar Riesling, and results from screening the F1 individuals support a model in which root knot nematode resistance is conferred by a single dominant allele, referred to as MELOIDOGYNE JAVANICA RESISTANCE1 (MJR1). Further, MJR1 resistance appears to be mediated by a hypersensitive response that occurs in the root apical meristem. Single nucleotide polymorphisms (SNPs) were identified using genotyping-by-sequencing, and results from association and genetic mapping identified the MJR1 locus, which is located on chromosome 18 in the Vitis cinerea accession. Validation of the SNPs linked to the MJR1 locus using a Sequenom MassARRAY platform found that only 50% could be validated. The validated SNPs that flank and co-segregate with the MJR1 locus can be used for marker-assisted selection for Meloidogyne javanica resistance in grapevine. PMID:29462210

  5. SNP markers tightly linked to root knot nematode resistance in grapevine (Vitis cinerea) identified by a genotyping-by-sequencing approach followed by Sequenom MassARRAY validation.

    PubMed

    Smith, Harley M; Smith, Brady P; Morales, Norma B; Moskwa, Sam; Clingeleffer, Peter R; Thomas, Mark R

    2018-01-01

    Plant parasitic nematodes, including root knot nematode Meloidogyne species, cause extensive damage to agricultural and horticultural crops. As Vitis vinifera cultivars are susceptible to root knot nematode parasitism, rootstocks resistant to these soil pests provide a sustainable approach to maintaining grapevine production. Currently, most of the commercially available root knot nematode resistant rootstocks are highly vigorous and take up excess potassium, which reduces wine quality. As a result, there is a pressing need to breed new root knot nematode resistant rootstocks that have no impact on wine quality. To develop molecular markers that predict root knot nematode resistance for marker-assisted breeding, a genetic approach was employed to identify a root knot nematode resistance locus in grapevine. To this end, a Meloidogyne javanica resistant Vitis cinerea accession was crossed to the susceptible Vitis vinifera cultivar Riesling, and results from screening the F1 individuals support a model in which root knot nematode resistance is conferred by a single dominant allele, referred to as MELOIDOGYNE JAVANICA RESISTANCE1 (MJR1). Further, MJR1 resistance appears to be mediated by a hypersensitive response that occurs in the root apical meristem. Single nucleotide polymorphisms (SNPs) were identified using genotyping-by-sequencing, and results from association and genetic mapping identified the MJR1 locus, which is located on chromosome 18 in the Vitis cinerea accession. Validation of the SNPs linked to the MJR1 locus using a Sequenom MassARRAY platform found that only 50% could be validated. The validated SNPs that flank and co-segregate with the MJR1 locus can be used for marker-assisted selection for Meloidogyne javanica resistance in grapevine.

  6. The validity of the multi-informant approach to assessing child and adolescent mental health.

    PubMed

    De Los Reyes, Andres; Augenstein, Tara M; Wang, Mo; Thomas, Sarah A; Drabick, Deborah A G; Burgers, Darcy E; Rabinowitz, Jill

    2015-07-01

    Child and adolescent patients may display mental health concerns within some contexts and not others (e.g., home vs. school). Thus, understanding the specific contexts in which patients display concerns may assist mental health professionals in tailoring treatments to patients' needs. Consequently, clinical assessments often include reports from multiple informants who vary in the contexts in which they observe patients' behavior (e.g., patients, parents, teachers). Previous meta-analyses indicate that informants' reports correlate at low-to-moderate magnitudes. However, is it valid to interpret low correspondence among reports as indicating that patients display concerns in some contexts and not others? We meta-analyzed 341 studies published between 1989 and 2014 that reported cross-informant correspondence estimates, and observed low-to-moderate correspondence (mean internalizing: r = .25; mean externalizing: r = .30; mean overall: r = .28). Informant pair, mental health domain, and measurement method moderated magnitudes of correspondence. These robust findings have informed the development of concepts for interpreting multi-informant assessments, allowing researchers to draw specific predictions about the incremental and construct validity of these assessments. In turn, we critically evaluated research on the incremental and construct validity of the multi-informant approach to clinical child and adolescent assessment. In so doing, we identify crucial gaps in knowledge for future research, and provide recommendations for "best practices" in using and interpreting multi-informant assessments in clinical work and research. This article has important implications for developing personalized approaches to clinical assessment, with the goal of informing techniques for tailoring treatments to target the specific contexts where patients display concerns. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  7. Genomic selection across multiple breeding cycles in applied bread wheat breeding.

    PubMed

    Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann

    2016-06-01

    We evaluated genomic selection across five cycles of an applied bread wheat breeding programme. The bias of within-cycle cross-validation and methods for improving the prediction accuracy were assessed. The prospect of genomic selection has been frequently shown by cross-validation studies using the same genetic material across multiple environments, but studies investigating genomic selection across multiple breeding cycles in applied bread wheat breeding are lacking. We estimated the prediction accuracy of grain yield, protein content and protein yield of 659 inbred lines across five independent breeding cycles and assessed the bias of within-cycle cross-validation. We investigated the influence of outliers on the prediction accuracy and predicted protein yield by its component traits. A high average heritability was estimated for protein content, followed by grain yield and protein yield. The bias of the prediction accuracy estimated from populations of individual cycles using fivefold cross-validation was accordingly substantial for protein yield (17-712%) and less pronounced for protein content (8-86%). Cross-validation using the cycles as folds aimed to avoid this bias and reached a maximum prediction accuracy of r = 0.51 for protein content, r = 0.38 for grain yield and r = 0.16 for protein yield. Dropping outlier cycles increased the prediction accuracy of grain yield to r = 0.41 as estimated by cross-validation, while dropping outlier environments did not have a significant effect on the prediction accuracy. Independent validation suggests, on the other hand, that careful consideration is necessary before an outlier correction that removes lines from the training population is undertaken. Predicting protein yield by multiplying genomic estimated breeding values of grain yield and protein content raised the prediction accuracy to r = 0.19 for this derived trait.
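    The 'cycles as folds' design can be sketched as leave-one-group-out cross-validation: a genomic prediction model is trained on four breeding cycles and used to predict the lines of the remaining cycle, rotating the held-out cycle. The sketch below substitutes ridge regression for genomic BLUP and assumes hypothetical marker and phenotype files.

    ```python
    # Minimal sketch of cross-validation using breeding cycles as folds.
    # Ridge regression stands in for genomic BLUP; file and column names are hypothetical.
    import pandas as pd
    from scipy.stats import pearsonr
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import LeaveOneGroupOut

    pheno = pd.read_csv("phenotypes.csv")            # columns: line, cycle, grain_yield
    markers = pd.read_csv("markers.csv", index_col="line")
    X = markers.loc[pheno["line"]].to_numpy()        # marker matrix aligned to phenotyped lines
    y = pheno["grain_yield"].to_numpy()
    cycle = pheno["cycle"].to_numpy()

    # Train on four cycles, predict the fifth, and rotate.
    for train, test in LeaveOneGroupOut().split(X, y, groups=cycle):
        model = Ridge(alpha=1.0).fit(X[train], y[train])
        r, _ = pearsonr(model.predict(X[test]), y[test])
        print(f"cycle {cycle[test][0]} held out: prediction accuracy r = {r:.2f}")
    ```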

  8. Validating spatiotemporal predictions of an important pest of small grains.

    PubMed

    Merrill, Scott C; Holtzer, Thomas O; Peairs, Frank B; Lester, Philip J

    2015-01-01

    Arthropod pests are typically managed using tactics applied uniformly to the whole field. Precision pest management applies tactics under the assumption that within-field pest pressure differences exist. This approach allows for more precise and judicious use of scouting resources and management tactics. For example, a portion of a field delineated as attractive to pests may be selected to receive extra monitoring attention. Likely because of the high variability in pest dynamics, little attention has been given to developing precision pest prediction models. Here, multimodel synthesis was used to develop a spatiotemporal model predicting the density of a key pest of wheat, the Russian wheat aphid, Diuraphis noxia (Kurdjumov). Spatially implicit and spatially explicit models were synthesized to generate spatiotemporal pest pressure predictions. Cross-validation and field validation were used to confirm model efficacy. A strong within-field signal depicting aphid density was confirmed with low prediction errors. Results show that the within-field model predictions will provide higher-quality information than would be provided by traditional field scouting. With improvements to the broad-scale model component, the model synthesis approach and resulting tool could improve pest management strategy and provide a template for the development of spatially explicit pest pressure models. © 2014 Society of Chemical Industry.

  9. A Metabolomic Approach to Target Compounds from the Asteraceae Family for Dual COX and LOX Inhibition

    PubMed Central

    Chagas-Paula, Daniela A.; Zhang, Tong; Da Costa, Fernando B.; Edrada-Ebel, RuAngelie

    2015-01-01

    The application of metabolomics in phytochemical analysis is an innovative strategy for targeting active compounds from a complex plant extract. Species of the Asteraceae family are well-known to exhibit potent anti-inflammatory (AI) activity. Dual inhibition of the enzymes COX-1 and 5-LOX is essential for the treatment of several inflammatory diseases, but there is not much investigation reported in the literature for natural products. In this study, 57 leaf extracts (EtOH-H2O 7:3, v/v) from different genera and species of the Asteraceae family were tested against COX-1 and 5-LOX while HPLC-ESI-HRMS analysis of the extracts indicated high diversity in their chemical compositions. Using O2PLS-DA (R2 > 0.92; VIP > 1 and positive Y-correlation values), dual inhibition potential of low-abundance metabolites was determined. The O2PLS-DA results exhibited good validation values (cross-validation = Q2 > 0.7 and external validation = P2 > 0.6) with 0% of false positive predictions. The metabolomic approach determined biomarkers for the required biological activity and detected active compounds in the extracts displaying unique mechanisms of action. In addition, the PCA data also gave insights on the chemotaxonomy of the family Asteraceae across its diverse range of genera and tribes. PMID:26184333
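    The reported cross-validation statistic (Q2 > 0.7) can be illustrated with a simplified stand-in: out-of-fold predictions from a PLS regression of activity on metabolite features, summarised as Q2 = 1 - PRESS/TSS. O2PLS-DA itself is not reproduced here, and the data are simulated.

    ```python
    # Minimal sketch of a cross-validated Q2 for a PLS model relating metabolite
    # features to an activity label; a stand-in for the O2PLS-DA workflow described.
    # Simulated data only.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import KFold, cross_val_predict

    rng = np.random.default_rng(1)
    X = rng.normal(size=(57, 300))                   # simulated LC-MS features for 57 extracts
    y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=57) > 0).astype(float)  # activity label

    pred = cross_val_predict(PLSRegression(n_components=3), X, y,
                             cv=KFold(n_splits=7, shuffle=True, random_state=0))
    press = np.sum((y - pred.ravel()) ** 2)          # prediction error sum of squares
    tss = np.sum((y - y.mean()) ** 2)                # total sum of squares
    print(f"cross-validated Q2 = {1 - press / tss:.2f}")
    ```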

  10. An integrated approach for identifying wrongly labelled samples when performing classification in microarray data.

    PubMed

    Leung, Yuk Yee; Chang, Chun Qi; Hung, Yeung Sam

    2012-01-01

    Using a hybrid approach for gene selection and classification is common, as the results obtained are generally better than performing the two tasks independently. Yet, for some microarray datasets, both the classification accuracy and the stability of the gene sets obtained still leave room for improvement. This may be due to the presence of samples with wrong class labels (i.e. outliers). Outlier detection algorithms proposed so far are either not suitable for microarray data, or only solve the outlier detection problem on their own. We tackle the outlier detection problem based on a previously proposed Multiple-Filter-Multiple-Wrapper (MFMW) model, which was demonstrated to yield promising results when compared to other hybrid approaches (Leung and Hung, 2010). To incorporate outlier detection and overcome limitations of the existing MFMW model, three new features are introduced in our proposed MFMW-outlier approach: 1) an unbiased external Leave-One-Out Cross-Validation framework is developed to replace internal cross-validation in the previous MFMW model; 2) wrongly labelled samples are identified within the MFMW-outlier model; and 3) a stable set of genes is selected using an L1-norm SVM that removes any redundant genes present. Six binary-class microarray datasets were tested. Compared with outlier detection studies on the same datasets, MFMW-outlier could detect all the outliers found in the original paper (for which the data was provided for analysis), and the genes selected after outlier removal were shown to have biological relevance. We also compared MFMW-outlier with PRAPIV (Zhang et al., 2006) on the same synthetic datasets; MFMW-outlier gave better average precision and recall values in three different settings. Lastly, artificially flipped microarray datasets were created by removing our detected outliers and flipping some of the remaining samples' labels. Almost all the 'wrong' (artificially flipped) samples were detected, suggesting that MFMW-outlier is sufficiently powerful to detect outliers in high-dimensional microarray datasets.
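    The key design point, an external leave-one-out cross-validation in which gene selection is repeated inside every fold so the held-out sample never influences the selected genes, can be sketched as follows. An L1-penalized linear SVM stands in for the full MFMW-outlier machinery, and the expression matrix is simulated.

    ```python
    # Minimal sketch of external leave-one-out cross-validation with per-fold
    # L1-penalized gene selection, so the left-out sample never informs selection.
    # Simulated expression data; not the MFMW-outlier implementation.
    import numpy as np
    from sklearn.model_selection import LeaveOneOut
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 2000))              # 60 samples x 2000 genes (simulated)
    y = rng.integers(0, 2, size=60)

    clf = make_pipeline(StandardScaler(),
                        LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000))

    correct = 0
    for train, test in LeaveOneOut().split(X):
        clf.fit(X[train], y[train])              # selection + classifier refit in every fold
        correct += int(clf.predict(X[test])[0] == y[test][0])

    print(f"external LOOCV accuracy: {correct / len(y):.2f}")
    ```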

  11. Cross-validation and hypothesis testing in neuroimaging: An irenic comment on the exchange between Friston and Lindquist et al.

    PubMed

    Reiss, Philip T

    2015-08-01

    The "ten ironic rules for statistical reviewers" presented by Friston (2012) prompted a rebuttal by Lindquist et al. (2013), which was followed by a rejoinder by Friston (2013). A key issue left unresolved in this discussion is the use of cross-validation to test the significance of predictive analyses. This note discusses the role that cross-validation-based and related hypothesis tests have come to play in modern data analyses, in neuroimaging and other fields. It is shown that such tests need not be suboptimal and can fill otherwise-unmet inferential needs. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Accelerating cross-validation with total variation and its application to super-resolution imaging

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Ikeda, Shiro; Akiyama, Kazunori; Kabashima, Yoshiyuki

    2017-12-01

    We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ_1-norm and total variation terms, which is based on a perturbative expansion utilizing the largeness of both the data dimensionality and the model. The developed formula allows us to reduce the necessary computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.
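    For contrast with the approximation formula, the following sketch shows the literally conducted K-fold cross-validation error that it is designed to bypass, here for an ℓ1-penalized linear regression. The total-variation term is omitted for brevity, and the measurement matrix is simulated rather than taken from an imaging problem.

    ```python
    # Minimal sketch of literally conducted K-fold cross-validation error (CVE)
    # for an l1-penalized regression; the TV penalty is omitted and data are simulated.
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    A = rng.normal(size=(500, 2000))             # measurement matrix (simulated)
    x_true = np.zeros(2000)
    x_true[:20] = rng.normal(size=20)            # sparse ground truth
    y = A @ x_true + 0.1 * rng.normal(size=500)

    def cv_error(lam: float, k: int = 10) -> float:
        """Average held-out squared error over k folds for penalty strength lam."""
        errs = []
        for train, test in KFold(n_splits=k, shuffle=True, random_state=1).split(A):
            model = Lasso(alpha=lam, max_iter=10000).fit(A[train], y[train])
            errs.append(np.mean((y[test] - model.predict(A[test])) ** 2))
        return float(np.mean(errs))

    for lam in (0.001, 0.01, 0.1):
        print(f"lambda = {lam}: CVE = {cv_error(lam):.4f}")
    ```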

  13. The cross-cultural validity of posttraumatic stress disorder: implications for DSM-5.

    PubMed

    Hinton, Devon E; Lewis-Fernández, Roberto

    2011-09-01

    There is considerable debate about the cross-cultural applicability of the posttraumatic stress disorder (PTSD) category as currently specified. Concerns include the possible status of PTSD as a Western culture-bound disorder and the validity of individual items and criteria thresholds. This review examines various types of cross-cultural validity of the PTSD criteria as defined in DSM-IV-TR, and presents options and preliminary recommendations to be considered for DSM-5. Searches were conducted of the mental health literature, particularly since 1994, regarding cultural-, race-, or ethnicity-related factors that might limit the universal applicability of the diagnostic criteria of PTSD in DSM-IV-TR and the possible criteria for DSM-5. Substantial evidence of the cross-cultural validity of PTSD was found. However, evidence of cross-cultural variability in certain areas suggests the need for further research: the relative salience of avoidance/numbing symptoms, the role of the interpretation of trauma-caused symptoms in shaping symptomatology, and the prevalence of somatic symptoms. This review also indicates the need to modify certain criteria, such as the items on distressing dreams and on foreshortened future, to increase their cross-cultural applicability. Text additions are suggested to increase the applicability of the manual across cultural contexts: specifying that cultural syndromes-such as those indicated in the DSM-IV-TR Glossary-may be a prominent part of the trauma response in certain cultures, and that those syndromes may influence PTSD symptom salience and comorbidity. The DSM-IV-TR PTSD category demonstrates various types of validity. Criteria modification and textual clarifications are suggested to further improve its cross-cultural applicability. © 2010 Wiley-Liss, Inc.

  14. Inverse modeling of flow tomography experiments in fractured media

    NASA Astrophysics Data System (ADS)

    Klepikova, Maria; Le Borgne, Tanguy; Bour, Olivier; de Dreuzy, Jean-Raynald

    2014-05-01

    Inverse modeling of fracture hydraulic properties and connectivity is a very challenging objective due to the strong heterogeneity of the medium at multiple scales and the scarcity of data. Cross-borehole flowmeter tests, which consist of measuring changes in vertical borehole flows when pumping a neighboring borehole, were shown to be an efficient technique to provide information on the properties of the flow zones that connect borehole pairs (Paillet, 1998, Le Borgne et al., 2007). The interpretation of such experiments may, however, be quite uncertain when multiple connections exist. We propose the flow tomography approach (i.e., sequential cross-borehole flowmeter tests) to characterize the connectivity and transmissivity of preferential permeable flow paths in fractured aquifers (Klepikova et al., 2013). An inverse model approach is developed to estimate log-transformed transmissivity values of hydraulically active fractures between the pumping and observation wells by inverting cross-borehole flow and water level data. Here a simplified discrete fracture network approach that highlights main connectivity structures is used. This conceptual model attempts to reproduce fracture network connectivity without taking fracture geometry (length, orientation, dip) into account. We demonstrate that successively exchanging the roles of pumping and observation boreholes improves the quality of available information and reduces the under-determination of the problem. The inverse method is validated for several synthetic flow scenarios. It is shown to provide a good estimation of connectivity patterns and transmissivities of main flow paths. It also allows the estimation of the transmissivity of fractures that connect the flow paths but do not cross the boreholes, although the associated uncertainty may be high for some geometries. The results of this investigation encourage the application of flow tomography to natural fractured aquifers.

  15. On mass and momentum conservation in the variable-parameter Muskingum method

    NASA Astrophysics Data System (ADS)

    Reggiani, Paolo; Todini, Ezio; Meißner, Dennis

    2016-12-01

    In this paper we investigate mass and momentum conservation in one-dimensional routing models. To this end we formulate the conservation equations for a finite-dimensional reach and compute individual terms using three standard Saint-Venant (SV) solvers: SOBEK, HEC-RAS and MIKE11. We also employ two different variable-parameter Muskingum (VPM) formulations: the classical Muskingum-Cunge (MC) and the revised, mass-conservative Muskingum-Cunge-Todini (MCT) approach, whereby geometrical cross sections are treated analytically in both cases. We initially compare the three SV solvers for a straight mild-sloping prismatic channel with geometric cross sections and a synthetic hydrograph as boundary conditions against the analytical MC and MCT solutions. The comparison is substantiated by the fact that in this flow regime the conditions for the parabolic equation model solved by MC and MCT are met. Through this intercomparison we show that all approaches have comparable mass and momentum conservation properties, except the MC. Then we extend the MCT to use natural cross sections for a real irregular river channel forced by an observed triple-peak event and compare the results with SOBEK. The model intercomparison demonstrates that the VPM in the form of MCT can be a computationally efficient, fully mass and momentum conservative approach and therefore constitutes a valid alternative to Saint-Venant based flood wave routing for a wide variety of rivers and channels in the world when downstream boundary conditions or hydraulic structures are non-influential.
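    Both MC and MCT build on the classical Muskingum routing recursion, sketched below with fixed parameters for a synthetic hydrograph. In the variable-parameter schemes, K and X (and hence the routing coefficients) are recomputed at every time step from local hydraulic conditions, which this sketch does not reproduce; parameter values are illustrative only.

    ```python
    # Minimal sketch of fixed-parameter Muskingum routing through a single reach.
    # Variable-parameter MC/MCT recompute K and X each time step, not shown here.
    import numpy as np

    def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
        """Route an inflow hydrograph through one reach (K and dt in the same time units)."""
        denom = 2 * K * (1 - X) + dt
        c0 = (dt - 2 * K * X) / denom
        c1 = (dt + 2 * K * X) / denom
        c2 = (2 * K * (1 - X) - dt) / denom      # note c0 + c1 + c2 = 1
        outflow = np.empty_like(inflow, dtype=float)
        outflow[0] = inflow[0]                   # start from steady state
        for t in range(1, len(inflow)):
            outflow[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1]
        return outflow

    t = np.arange(0, 48.0, 1.0)                                  # hours
    inflow = 100 + 400 * np.exp(-((t - 12) / 4.0) ** 2)          # synthetic single-peak hydrograph
    print(muskingum_route(inflow, K=3.0, X=0.25, dt=1.0)[:5])
    ```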

  16. CROSS-CULTURAL ADAPTATION AND VALIDATION OF THE KOREAN VERSION OF THE CUMBERLAND ANKLE INSTABILITY TOOL.

    PubMed

    Ko, Jupil; Rosen, Adam B; Brown, Cathleen N

    2015-12-01

    The Cumberland Ankle Instability Tool (CAIT) is a valid and reliable patient-reported outcome measure used to assess the presence and severity of chronic ankle instability (CAI). The CAIT has been cross-culturally adapted into other languages for use in non-English-speaking populations. However, there are no valid questionnaires to assess CAI in individuals who speak Korean. The purpose of this study was to translate, cross-culturally adapt, and validate the CAIT for use in a Korean-speaking population with CAI. Cross-cultural reliability study. The CAIT was cross-culturally adapted into Korean according to accepted guidelines and renamed the Cumberland Ankle Instability Tool-Korean (CAIT-K). Twenty-three participants (12 males, 11 females) who were bilingual in English and Korean were recruited and completed the original and adapted versions to assess agreement between versions. An additional 168 national-level Korean athletes (106 males, 62 females; age = 20.3 ± 1.1 yrs), who participated in ≥ 90 minutes of physical activity per week, completed the final version of the CAIT-K twice within 14 days. Their completed questionnaires were assessed for internal consistency, test-retest reliability, criterion validity, and construct validity. For bilingual participants, intra-class correlation coefficients (ICC2,1) between the CAIT and the CAIT-K for test-retest reliability were 0.95 (SEM=1.83) and 0.96 (SEM=1.50) in the right and left limbs, respectively. The Cronbach's alpha coefficients for the CAIT-K were 0.92 and 0.90 in the right and left limbs, respectively. For native Korean speakers, the CAIT-K had high internal consistency (Cronbach's α=0.89), a high intra-class correlation coefficient (ICC2,1 = 0.94, SEM=1.72), and a correlation with the physical component score (rho=0.70, p = 0.001) of the Short-Form Health Survey (SF-36); the Kaiser-Meyer-Olkin score was 0.87. The original CAIT was translated, cross-culturally adapted, and validated from English to Korean. The CAIT-K appears to be valid and reliable and could be useful for assessing the Korean-speaking population with CAI.

  17. Cross-cultural adaptation and validation of the osteoporosis assessment questionnaire short version (OPAQ-SV) for Chinese osteoporotic fracture females.

    PubMed

    Zhang, Yin-Ping; Wei, Huan-Huan; Wang, Wen; Xia, Ru-Yi; Zhou, Xiao-Ling; Porr, Caroline; Lammi, Mikko

    2016-04-01

    The Osteoporosis Assessment Questionnaire Short Version (OPAQ-SV) was cross-culturally adapted to measure health-related quality of life in Chinese osteoporotic fracture females and then validated in China for its psychometric properties. Cross-cultural adaptation, including translation of the original OPAQ-SV into Mandarin Chinese language, was performed according to published guidelines. Validation of the newly cross-culturally adapted OPAQ-SV was conducted by sampling 234 Chinese osteoporotic fracture females and also a control group of 235 Chinese osteoporotic females without fractures, producing robust content, construct, and discriminant validation results. Major categories of reliability were also met: the Cronbach alpha coefficient was 0.975, indicating good internal consistency; the test-retest reliability was 0.80; and principal component analysis resulted in a 6-factor structure explaining 75.847 % of the total variance. Further, the Comparative Fit Index result was 0.922 following the modified model confirmatory factor analysis, and the chi-squared test was 1.98. The root mean squared error of approximation was 0.078. Moreover, significant differences were revealed between females with fractures and those without fractures across all domains (p < 0.001). Overall, the newly cross-culturally adapted OPAQ-SV appears to possess adequate validity and reliability and may be utilized in clinical trials to assess the health-related quality of life in Chinese osteoporotic fracture females.

  18. Cross-cultural adaption and validation of the Persian version of the SWAL-QOL.

    PubMed

    Tarameshlu, Maryam; Azimi, Amir Reza; Jalaie, Shohreh; Ghelichi, Leila; Ansari, Noureddin Nakhostin

    2017-06-01

    The aim of this study was to translate and cross-culturally adapt the swallowing quality-of-life questionnaire (SWAL-QOL) into Persian and to determine the validity and reliability of the Persian version (PSWAL-QOL) in patients with oropharyngeal dysphagia. A cross-sectional survey was designed to translate and cross-culturally adapt the SWAL-QOL into Persian following the steps recommended in guidelines. A total of 142 patients with dysphagia (mean age = 56.7 ± 12.22 years) were selected by a non-probability consecutive sampling method to evaluate construct validity and internal consistency. Thirty patients with dysphagia completed the PSWAL-QOL 2 weeks later for test-retest reliability. The PSWAL-QOL was favorably accepted, with no missing items. The floor effect ranged from 0% to 21% and the ceiling effect from 0% to 16%. Construct validity was established via exploratory factor analysis. Internal consistency was confirmed with Cronbach α >0.7 for all scales except eating duration (α = 0.68). Test-retest reliability was excellent, with intraclass correlation coefficients (ICC) ≥0.75 for all scales. The SWAL-QOL was cross-culturally adapted to Persian and demonstrated to be a valid and reliable self-report questionnaire for measuring the impact of dysphagia on quality of life in Persian patients with oropharyngeal dysphagia.

  19. Cross-cultural adaption and validation of the Persian version of the SWAL-QOL

    PubMed Central

    Tarameshlu, Maryam; Azimi, Amir Reza; Jalaie, Shohreh; Ghelichi, Leila; Ansari, Noureddin Nakhostin

    2017-01-01

    The aim of this study was to translate and cross-culturally adapt the swallowing quality-of-life questionnaire (SWAL-QOL) into Persian and to determine the validity and reliability of the Persian version (PSWAL-QOL) in patients with oropharyngeal dysphagia. A cross-sectional survey was designed to translate and cross-culturally adapt the SWAL-QOL into Persian following the steps recommended in guidelines. A total of 142 patients with dysphagia (mean age = 56.7 ± 12.22 years) were selected by a non-probability consecutive sampling method to evaluate construct validity and internal consistency. Thirty patients with dysphagia completed the PSWAL-QOL 2 weeks later for test–retest reliability. The PSWAL-QOL was favorably accepted, with no missing items. The floor effect ranged from 0% to 21% and the ceiling effect from 0% to 16%. Construct validity was established via exploratory factor analysis. Internal consistency was confirmed with Cronbach α >0.7 for all scales except eating duration (α = 0.68). Test–retest reliability was excellent, with intraclass correlation coefficients (ICC) ≥0.75 for all scales. The SWAL-QOL was cross-culturally adapted to Persian and demonstrated to be a valid and reliable self-report questionnaire for measuring the impact of dysphagia on quality of life in Persian patients with oropharyngeal dysphagia. PMID:28658118

  20. Certification in Structural Health Monitoring Systems

    DTIC Science & Technology

    2011-09-01

    validation [3,8]. This may be accomplished by computing the sum of squares of pure error (SSPE) and its associated squared correlation [3,8]. To compute... these values, a cross-validation sample must be established. In general, if the SSPE is high, the model does not predict well on independent data... plethora of cross-validation methods, some of which are more useful for certain models than others [3,8]. When possible, a disclosure of the SSPE

  1. Comparison and validation of injury risk classifiers for advanced automated crash notification systems.

    PubMed

    Kusano, Kristofer; Gabler, Hampton C

    2014-01-01

    The odds of death for a seriously injured crash victim are drastically reduced if he or she receives care at a trauma center. Advanced automated crash notification (AACN) algorithms are postcrash safety systems that use data measured by the vehicle during the crash to predict the likelihood of occupants being seriously injured. The accuracy of these models is crucial to the success of an AACN. The objective of this study was to compare the predictive performance of competing injury risk models and algorithms: logistic regression, random forest, AdaBoost, naïve Bayes, support vector machine, and classification k-nearest neighbors. This study compared machine learning algorithms to the widely adopted logistic regression modeling approach. Machine learning algorithms have not been commonly studied in the motor vehicle injury literature; they may have higher predictive power than logistic regression, despite the drawback of lacking the ability to perform statistical inference. To evaluate the performance of these algorithms, data on 16,398 vehicles involved in non-rollover collisions were extracted from the NASS-CDS. Vehicles with any occupant having an Injury Severity Score (ISS) of 15 or greater were defined as those requiring victims to be treated at a trauma center. The performance of each model was evaluated using cross-validation, which assesses how a model will perform in the future given new data not used for model training. The crash ΔV (change in velocity during the crash), damage side (struck side of the vehicle), seat belt use, vehicle body type, number of events, occupant age, and occupant sex were used as predictors in each model. Logistic regression slightly outperformed the machine learning algorithms based on the sensitivity and specificity of the models. Previous studies on AACN risk curves used the same data to train and test the models and as a result reported higher sensitivity compared to the cross-validated results from this study. Future studies should account for future data, for example by using cross-validation, or they risk presenting optimistic predictions of field performance. Past algorithms have been criticized for relying on age and sex, which are difficult to measure with vehicle sensors, and for inaccuracies in classifying damage side. The models with accurate damage side and including age/sex did outperform models with less accurate damage side and without age/sex, but the differences were small, suggesting that the success of AACN is not reliant on these predictors.
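    A minimal sketch of the kind of cross-validated comparison described, with logistic regression set against a random forest and sensitivity/specificity computed from out-of-fold predictions. The column names are hypothetical stand-ins for the NASS-CDS predictors, and this is not the authors' pipeline.

    ```python
    # Minimal sketch of a cross-validated comparison of logistic regression and a
    # random forest for predicting serious (ISS 15+) injury. Hypothetical columns.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import StratifiedKFold, cross_val_predict

    crashes = pd.read_csv("crashes.csv")
    predictors = ["delta_v", "damage_side", "belted", "body_type",
                  "n_events", "occ_age", "occ_sex"]
    X = pd.get_dummies(crashes[predictors], drop_first=True)   # encode categorical predictors
    y = crashes["iss15plus"]

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                        ("random forest", RandomForestClassifier(random_state=0))]:
        pred = cross_val_predict(model, X, y, cv=cv)            # out-of-fold class predictions
        tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
        print(f"{name}: sensitivity = {tp/(tp+fn):.2f}, specificity = {tn/(tn+fp):.2f}")
    ```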

  2. [A short form of the positions on nursing diagnosis scale: development and psychometric testing].

    PubMed

    Romero-Sánchez, José Manuel; Paloma-Castro, Olga; Paramio-Cuevas, Juan Carlos; Pastor-Montero, Sonia María; O'Ferrall-González, Cristina; Gabaldón-Bravo, Eva Maria; González-Domínguez, Maria Eugenia; Castro-Yuste, Cristina; Frandsen, Anna J; Martínez-Sabater, Antonio

    2013-06-01

    The Positions on Nursing Diagnosis (PND) is a scale that uses the semantic differential technique to measure nurses' attitudes towards the nursing diagnosis concept. The aim of this study was to develop a shortened form of the Spanish version of this scale and evaluate its psychometric properties and efficiency. A double theoretical-empirical approach was used to obtain a short form of the PND, the PND-7-SV, which would be equivalent to the original. Using a cross-sectional survey design, the reliability (internal consistency and test-retest reliability), construct (exploratory factor analysis, known-groups technique and discriminant validity) and criterion-related validity (concurrent validity), sensitivity to change and efficiency of the PND-7-SV were assessed in a sample of 476 Spanish nursing students. The results endorsed the utility of the PND-7-SV to measure attitudes toward nursing diagnosis in an equivalent manner to the complete form of the scale and in a shorter time.

  3. Approaches for advancing scientific understanding of macrosystems

    USGS Publications Warehouse

    Levy, Ofir; Ball, Becky A.; Bond-Lamberty, Ben; Cheruvelil, Kendra S.; Finley, Andrew O.; Lottig, Noah R.; Surangi W. Punyasena,; Xiao, Jingfeng; Zhou, Jizhong; Buckley, Lauren B.; Filstrup, Christopher T.; Keitt, Tim H.; Kellner, James R.; Knapp, Alan K.; Richardson, Andrew D.; Tcheng, David; Toomey, Michael; Vargas, Rodrigo; Voordeckers, James W.; Wagner, Tyler; Williams, John W.

    2014-01-01

    The emergence of macrosystems ecology (MSE), which focuses on regional- to continental-scale ecological patterns and processes, builds upon a history of long-term and broad-scale studies in ecology. Scientists face the difficulty of integrating the many elements that make up macrosystems, which consist of hierarchical processes at interacting spatial and temporal scales. Researchers must also identify the most relevant scales and variables to be considered, the required data resources, and the appropriate study design to provide the proper inferences. The large volumes of multi-thematic data often associated with macrosystem studies typically require validation, standardization, and assimilation. Finally, analytical approaches need to describe how cross-scale and hierarchical dynamics and interactions relate to macroscale phenomena. Here, we elaborate on some key methodological challenges of MSE research and discuss existing and novel approaches to meet them.

  4. The Content Validity of a Chemotherapy-Induced Peripheral Neuropathy Patient-Reported Outcome Measure

    PubMed Central

    Lavoie Smith, Ellen M.; Haupt, Rylie; Kelly, James P.; Lee, Deborah; Kanzawa-Lee, Grace; Knoerl, Robert; Bridges, Celia; Alberti, Paola; Prasertsri, Nusara; Donohoe, Clare

    2018-01-01

    Purpose/Objectives To test the content validity of a 16-item version of the European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire–Chemotherapy-Induced Peripheral Neuropathy (QLQ-CIPN20). Research Approach Cross-sectional, prospective, qualitative design. Setting Six outpatient oncology clinics within the University of Michigan Health System’s comprehensive cancer center in Ann Arbor. Participants 25 adults with multiple myeloma or breast, gynecologic, gastrointestinal, or head and neck malignancies experiencing peripheral neuropathy caused by neurotoxic chemotherapy. Methodologic Approach Cognitive interviewing methodology was used to evaluate the content validity of a 16-item version of the QLQ-CIPN20 instrument. Findings Minor changes were made to three questions to enhance readability. Twelve questions were revised to define unfamiliar terminology, clarify the location of neuropathy, and emphasize important aspects. One question was deleted because of clinical and conceptual redundancy with other items, as well as concerns regarding generalizability and social desirability. Interpretation Cognitive interviewing methodology revealed inconsistencies between patients’ understanding and researchers’ intent, along with points that required clarification to avoid misunderstanding. Implications for Nursing Patients’ interpretations of the instrument’s items were inconsistent with the intended meanings of the questions. One item was dropped and others were revised, resulting in greater consistency in how patients, clinicians, and researchers interpreted the items’ meanings and improving the instrument’s content validity. Following additional revision and psychometric testing, the QLQ-CIPN20 could evolve into a gold-standard CIPN patient-reported outcome measure. PMID:28820525

  5. Quantitative and Systems Pharmacology. 1. In Silico Prediction of Drug-Target Interactions of Natural Products Enables New Targeted Cancer Therapy.

    PubMed

    Fang, Jiansong; Wu, Zengrui; Cai, Chuipu; Wang, Qi; Tang, Yun; Cheng, Feixiong

    2017-11-27

    Natural products with diverse chemical scaffolds have been recognized as an invaluable source of compounds in drug discovery and development. However, systematic identification of drug targets for natural products at the human proteome level via various experimental assays is highly expensive and time-consuming. In this study, we proposed a systems pharmacology infrastructure to predict new drug targets and anticancer indications of natural products. Specifically, we reconstructed a global drug-target network with 7,314 interactions connecting 751 targets and 2,388 natural products and built predictive network models via a balanced substructure-drug-target network-based inference approach. A high area under receiver operating characteristic curve of 0.96 was yielded for predicting new targets of natural products during cross-validation. The newly predicted targets of natural products (e.g., resveratrol, genistein, and kaempferol) with high scores were validated by various literature studies. We further built the statistical network models for identification of new anticancer indications of natural products through integration of both experimentally validated and computationally predicted drug-target interactions of natural products with known cancer proteins. We showed that the significantly predicted anticancer indications of multiple natural products (e.g., naringenin, disulfiram, and metformin) with new mechanism-of-action were validated by various published experimental evidence. In summary, this study offers powerful computational systems pharmacology approaches and tools for the development of novel targeted cancer therapies by exploiting the polypharmacology of natural products.
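    The cross-validated AUROC reported above can be illustrated with a generic stand-in: a classifier trained on drug-target pair features and scored on held-out folds. The network-based inference method itself is not reproduced; the pair features and labels below are simulated.

    ```python
    # Minimal sketch of cross-validated AUROC for drug-target interaction prediction.
    # A generic classifier over pair feature vectors stands in for the network-based
    # inference approach; all data are simulated.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(0)
    X = rng.normal(size=(4000, 128))      # concatenated drug + target descriptors (simulated)
    y = rng.integers(0, 2, size=4000)     # 1 = known interaction, 0 = sampled negative

    aucs = []
    for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
        model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[train], y[train])
        aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))
    print(f"mean cross-validated AUROC: {np.mean(aucs):.2f}")
    ```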

  6. Calibration and validation of wearable monitors.

    PubMed

    Bassett, David R; Rowlands, Alex; Trost, Stewart G

    2012-01-01

    Wearable monitors are increasingly being used to objectively monitor physical activity in research studies within the field of exercise science. Calibration and validation of these devices are vital to obtaining accurate data. This article is aimed primarily at the physical activity measurement specialist, although the end user who is conducting studies with these devices also may benefit from knowing about this topic. Initially, wearable physical activity monitors should undergo unit calibration to ensure interinstrument reliability. The next step is to simultaneously collect both raw signal data (e.g., acceleration) from the wearable monitors and rates of energy expenditure, so that algorithms can be developed to convert the direct signals into energy expenditure. This process should use multiple wearable monitors and a large and diverse subject group and should include a wide range of physical activities commonly performed in daily life (from sedentary to vigorous). New methods of calibration now use "pattern recognition" approaches to train the algorithms on various activities, and they provide estimates of energy expenditure that are much better than those previously available with the single-regression approach. Once a method of predicting energy expenditure has been established, the next step is to examine its predictive accuracy by cross-validating it in other populations. In this article, we attempt to summarize the best practices for calibration and validation of wearable physical activity monitors. Finally, we conclude with some ideas for future research ideas that will move the field of physical activity measurement forward.
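    The cross-validation step described, checking a calibration model on people it has never seen, can be sketched as subject-wise (leave-one-subject-out) cross-validation of an energy-expenditure regression. The file, feature, and column names below are hypothetical.

    ```python
    # Minimal sketch of subject-wise cross-validation for a wearable-monitor
    # calibration model: energy expenditure is predicted for participants excluded
    # from training. Hypothetical file and column names.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

    data = pd.read_csv("calibration_trials.csv")    # one row per monitoring epoch
    features = ["mean_accel", "accel_sd", "accel_p90", "heart_rate"]
    X, y, subject = data[features], data["mets"], data["subject_id"]

    pred = cross_val_predict(GradientBoostingRegressor(random_state=0),
                             X, y, cv=LeaveOneGroupOut(), groups=subject)
    rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
    print(f"leave-one-subject-out RMSE: {rmse:.2f} METs")
    ```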

  7. Reliability and Construct Validity of the Psychopathic Personality Inventory-Revised in a Swedish Non-Criminal Sample - A Multimethod Approach including Psychophysiological Correlates of Empathy for Pain.

    PubMed

    Sörman, Karolina; Nilsonne, Gustav; Howner, Katarina; Tamm, Sandra; Caman, Shilan; Wang, Hui-Xin; Ingvar, Martin; Edens, John F; Gustavsson, Petter; Lilienfeld, Scott O; Petrovic, Predrag; Fischer, Håkan; Kristiansson, Marianne

    2016-01-01

    Cross-cultural investigation of psychopathy measures is important for clarifying the nomological network surrounding the psychopathy construct. The Psychopathic Personality Inventory-Revised (PPI-R) is one of the most extensively researched self-report measures of psychopathic traits in adults. To date however, it has been examined primarily in North American criminal or student samples. To address this gap in the literature, we examined PPI-R's reliability, construct validity and factor structure in non-criminal individuals (N = 227) in Sweden, using a multimethod approach including psychophysiological correlates of empathy for pain. PPI-R construct validity was investigated in subgroups of participants by exploring its degree of overlap with (i) the Psychopathy Checklist: Screening Version (PCL:SV), (ii) self-rated empathy and behavioral and physiological responses in an experiment on empathy for pain, and (iii) additional self-report measures of alexithymia and trait anxiety. The PPI-R total score was significantly associated with PCL:SV total and factor scores. The PPI-R Coldheartedness scale demonstrated significant negative associations with all empathy subscales and with rated unpleasantness and skin conductance responses in the empathy experiment. The PPI-R higher order Self-Centered Impulsivity and Fearless Dominance dimensions were associated with trait anxiety in opposite directions (positively and negatively, respectively). Overall, the results demonstrated solid reliability (test-retest and internal consistency) and promising but somewhat mixed construct validity for the Swedish translation of the PPI-R.

  8. Reliability and Construct Validity of the Psychopathic Personality Inventory-Revised in a Swedish Non-Criminal Sample – A Multimethod Approach including Psychophysiological Correlates of Empathy for Pain

    PubMed Central

    Sörman, Karolina; Nilsonne, Gustav; Howner, Katarina; Tamm, Sandra; Caman, Shilan; Wang, Hui-Xin; Ingvar, Martin; Edens, John F.; Gustavsson, Petter; Lilienfeld, Scott O; Petrovic, Predrag; Fischer, Håkan; Kristiansson, Marianne

    2016-01-01

    Cross-cultural investigation of psychopathy measures is important for clarifying the nomological network surrounding the psychopathy construct. The Psychopathic Personality Inventory-Revised (PPI-R) is one of the most extensively researched self-report measures of psychopathic traits in adults. To date however, it has been examined primarily in North American criminal or student samples. To address this gap in the literature, we examined PPI-R’s reliability, construct validity and factor structure in non-criminal individuals (N = 227) in Sweden, using a multimethod approach including psychophysiological correlates of empathy for pain. PPI-R construct validity was investigated in subgroups of participants by exploring its degree of overlap with (i) the Psychopathy Checklist: Screening Version (PCL:SV), (ii) self-rated empathy and behavioral and physiological responses in an experiment on empathy for pain, and (iii) additional self-report measures of alexithymia and trait anxiety. The PPI-R total score was significantly associated with PCL:SV total and factor scores. The PPI-R Coldheartedness scale demonstrated significant negative associations with all empathy subscales and with rated unpleasantness and skin conductance responses in the empathy experiment. The PPI-R higher order Self-Centered Impulsivity and Fearless Dominance dimensions were associated with trait anxiety in opposite directions (positively and negatively, respectively). Overall, the results demonstrated solid reliability (test-retest and internal consistency) and promising but somewhat mixed construct validity for the Swedish translation of the PPI-R. PMID:27300292

  9. Calibration of the Dutch-Flemish PROMIS Pain Behavior item bank in patients with chronic pain.

    PubMed

    Crins, M H P; Roorda, L D; Smits, N; de Vet, H C W; Westhovens, R; Cella, D; Cook, K F; Revicki, D; van Leeuwen, J; Boers, M; Dekker, J; Terwee, C B

    2016-02-01

    The aims of the current study were to calibrate the item parameters of the Dutch-Flemish PROMIS Pain Behavior item bank using a sample of Dutch patients with chronic pain and to evaluate cross-cultural validity between the Dutch-Flemish and the US PROMIS Pain Behavior item banks. Furthermore, reliability and construct validity of the Dutch-Flemish PROMIS Pain Behavior item bank were evaluated. The 39 items in the bank were completed by 1042 Dutch patients with chronic pain. To evaluate unidimensionality, a one-factor confirmatory factor analysis (CFA) was performed. A graded response model (GRM) was used to calibrate the items. To evaluate cross-cultural validity, Differential item functioning (DIF) for language (Dutch vs. English) was evaluated. Reliability of the item bank was also examined and construct validity was studied using several legacy instruments, e.g. the Roland Morris Disability Questionnaire. CFA supported the unidimensionality of the Dutch-Flemish PROMIS Pain Behavior item bank (CFI = 0.960, TLI = 0.958), the data also fit the GRM, and demonstrated good coverage across the pain behavior construct (threshold parameters range: -3.42 to 3.54). Analysis showed good cross-cultural validity (only six DIF items), reliability (Cronbach's α = 0.95) and construct validity (all correlations ≥0.53). The Dutch-Flemish PROMIS Pain Behavior item bank was found to have good cross-cultural validity, reliability and construct validity. The development of the Dutch-Flemish PROMIS Pain Behavior item bank will serve as the basis for Dutch-Flemish PROMIS short forms and computer adaptive testing (CAT). © 2015 European Pain Federation - EFIC®

  10. Identifying Active Travel Behaviors in Challenging Environments Using GPS, Accelerometers, and Machine Learning Algorithms.

    PubMed

    Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline

    2014-01-01

    Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel.
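
    As an illustration of the two-stage pipeline described above (window-level classification followed by temporal smoothing of the predictions), the following minimal sketch cross-validates a random forest on per-window features and then smooths the predicted labels. The feature matrix, labels, and filter width are simulated placeholders, and the majority-vote smoother is a simple stand-in for the paper's moving-average output filter.

```python
# Sketch of window-level mode classification followed by temporal smoothing.
# X and y are placeholders, not the study's 49 extracted features or labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 49))              # stand-in for 49 per-minute features
y = rng.integers(0, 5, size=600)            # 5 modes: bike, vehicle, walk, sit, stand

clf = RandomForestClassifier(n_estimators=200, random_state=0)
pred = cross_val_predict(clf, X, y, cv=5)   # cross-validated window predictions

def smooth_labels(labels, width=5):
    """Majority vote over a sliding window (stand-in for the moving-average filter)."""
    out = labels.copy()
    half = width // 2
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        vals, counts = np.unique(labels[lo:hi], return_counts=True)
        out[i] = vals[np.argmax(counts)]
    return out

smoothed = smooth_labels(pred)
print("raw accuracy:     ", np.mean(pred == y))
print("smoothed accuracy:", np.mean(smoothed == y))
```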

  11. Artificial intelligence techniques: An efficient new approach to challenge the assessment of complex clinical fields such as airway clearance techniques in patients with cystic fibrosis?

    PubMed

    Slavici, Titus; Almajan, Bogdan

    2013-04-01

    To construct an artificial intelligence application to assist untrained physiotherapists in determining the appropriate physiotherapy exercises to improve the quality of life of patients with cystic fibrosis. A total of 42 children (21 boys and 21 girls), age range 6-18 years, participated in a clinical survey between 2001 and 2005. Data collected during the clinical survey were entered into a neural network in order to correlate the health state indicators of the patients and the type of physiotherapy exercise to be followed. Cross-validation of the network was carried out by comparing the health state indicators achieved after following a certain physiotherapy exercise and the health state indicators predicted by the network. The lifestyle and health state indicators of the survey participants improved. The network predicted the health state indicators of the participants with an accuracy of 93%. The results of the cross-validation test were within the error margins of the real-life indicators. Using data on the clinical state of individuals with cystic fibrosis, it is possible to determine the most effective type of physiotherapy exercise for improving overall health state indicators.
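
    The record above describes cross-validating a neural network that maps clinical state and exercise type to predicted health-state indicators. The sketch below shows that validation loop in generic form; the data shapes, input encoding, and network architecture are illustrative assumptions, not those used in the study.

```python
# Minimal sketch: cross-validating a neural-network regressor that maps patient
# state plus exercise-type encoding to a health-state indicator.
# All shapes and the architecture are illustrative placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(42, 8))     # clinical state + exercise-type encoding (placeholder)
y = rng.normal(size=42)          # post-exercise health-state indicator (placeholder)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=1)
pred = cross_val_predict(net, X, y, cv=6)    # cross-validated network predictions

rel_error = np.abs(pred - y) / (np.abs(y) + 1e-9)
print("median relative error:", np.median(rel_error))
```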

  12. Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

    PubMed Central

    Rice, John D.; Taylor, Jeremy M. G.

    2016-01-01

    One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
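
    The following is a loose sketch of the idea sketched in the abstract: a logistic fit reweighted by a kernel centered at the fixed classification threshold, with the bandwidth chosen by cross-validated classification error. It approximates the spirit of the approach only; it is not the authors' locally weighted score estimator, and the plain error-rate criterion stands in for their hybrid loss function.

```python
# Loose sketch: kernel-weighted logistic fitting with the weight centered at a
# fixed threshold t, and bandwidth h chosen by cross-validated classification
# error. Not the authors' estimator or their hybrid loss; illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def fit_weighted(X, y, t, h, n_iter=5):
    """Refit logistic regression with weights K((p_hat - t)/h) a few times."""
    model = LogisticRegression()
    w = np.ones(len(y))
    for _ in range(n_iter):
        model.fit(X, y, sample_weight=w)
        p = model.predict_proba(X)[:, 1]
        w = np.exp(-0.5 * ((p - t) / h) ** 2)   # Gaussian kernel centered at threshold
        w = np.clip(w, 1e-3, None)              # keep weights strictly positive
    return model

def cv_error(X, y, t, h, n_splits=5):
    err = []
    for tr, te in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        m = fit_weighted(X[tr], y[tr], t, h)
        pred = (m.predict_proba(X[te])[:, 1] >= t).astype(int)
        err.append(np.mean(pred != y[te]))
    return np.mean(err)

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 3))
y = (X @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=400) > 0).astype(int)
t = 0.3                                          # application-specific threshold
best_h = min([0.05, 0.1, 0.2, 0.5], key=lambda h: cv_error(X, y, t, h))
print("selected bandwidth:", best_h)
```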

  13. Comparative assessment of three standardized robotic surgery training methods.

    PubMed

    Hung, Andrew J; Jayaratna, Isuru S; Teruya, Kara; Desai, Mihir M; Gill, Inderbir S; Goh, Alvin C

    2013-10-01

    To evaluate three standardized robotic surgery training methods, inanimate, virtual reality and in vivo, for their construct validity. To explore the concept of cross-method validity, where the relative performance of each method is compared. Robotic surgical skills were prospectively assessed in 49 participating surgeons who were classified as follows: 'novice/trainee': urology residents, previous experience <30 cases (n = 38) and 'experts': faculty surgeons, previous experience ≥30 cases (n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity). Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = -0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001). We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool. © 2013 BJU International.
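
    A short sketch of the two statistics the study leans on: a Kruskal-Wallis comparison of novice versus expert scores (construct validity) and Spearman correlations between training methods (cross-method validity). All scores below are fabricated placeholders, not the study data.

```python
# Minimal sketch: Kruskal-Wallis for novice-vs-expert differences and
# Spearman's rho between two training methods. Scores are placeholders.
import numpy as np
from scipy.stats import kruskal, spearmanr

rng = np.random.default_rng(3)
novice_scores = rng.normal(10, 2, size=38)   # e.g. performance scores for residents
expert_scores = rng.normal(14, 2, size=11)   # e.g. performance scores for faculty

h_stat, p_construct = kruskal(novice_scores, expert_scores)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_construct:.4f}")

inanimate = rng.normal(size=49)                                # per-surgeon inanimate scores
virtual = inanimate * -0.7 + rng.normal(scale=0.5, size=49)    # correlated by construction
rho, p_cross = spearmanr(inanimate, virtual)
print(f"Spearman rho = {rho:.2f}, p = {p_cross:.4f}")
```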

  14. Attrition from an Adolescent Addiction Treatment Program: A Cross Validation.

    ERIC Educational Resources Information Center

    Mathisen, Kenneth S.; Meyers, Kathleen

    Treatment attrition is a major problem for programs treating adolescent substance abusers. To isolate and cross validate factors which are predictive of addiction treatment attrition among adolescent substance abusers, screening interview and diagnostic variables from 119 adolescent in-patients were submitted to a discriminant equation analysis.…

  15. 75 FR 80870 - Self-Regulatory Organizations; Chicago Stock Exchange, Inc.; Notice of Filing and Order Granting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-23

    ... Proposed Rule Change To Eliminate the Validated Cross Trade Entry Functionality December 16, 2010. Pursuant... eliminate the Validated Cross Trade Entry Functionality for Exchange-registered Institutional Brokers. The... Brokers (``Institutional Brokers'') by eliminating the ability of an Institutional Broker to execute...

  16. Cross-Cultural Validation of TEMAS, a Minority Projective Test.

    ERIC Educational Resources Information Center

    Costantino, Giuseppe; And Others

    The theoretical framework and cross-cultural validation of Tell-Me-A-Story (TEMAS), a projective test developed to measure personality development in ethnic minority children, is presented. The TEMAS test consists of 23 chromatic pictures which incorporate the following characteristics: (1) representation of antithetical concepts which the…

  17. A Cross-Validation Approach to Approximate Basis Function Selection of the Stall Flutter Response of a Rectangular Wing in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Vio, Gareth A.; Andrianne, Thomas; Razak, Norizham Abdul; Dimitriadis, Grigorios

    2012-01-01

    The stall flutter response of a rectangular wing in a low-speed wind tunnel is modelled using a nonlinear difference equation description. Static and dynamic tests are used to select a suitable model structure and basis function. Bifurcation criteria such as the Hopf condition and vibration amplitude variation with airspeed were used to ensure the model was representative of experimentally measured stall flutter phenomena. Dynamic test data were used to estimate model parameters and an approximate basis function.

  18. Advances in the mechanisms and early warning indicators of the postoperative cognitive dysfunction after the extracorporeal circulation.

    PubMed

    Liu, Chao; Han, Jian-ge

    2015-02-01

    The high incidence of postoperative cognitive dysfunction (POCD) after extracorporeal circulation has seriously affected the prognosis and quality of life. Its mechanism may involve the inflammatory response and oxidative stress, the excessive phosphorylation of tau protein, and the decreased blood volume and oxygen in the cerebral cortex. Appropriate early warning indicators of POCD after the extracorporeal circulation should be chosen to facilitate the cross validation of the results obtained with different technical approaches and thus promote the early diagnosis and treatment of POCD.

  19. The evolution of Crew Resource Management training in commercial aviation

    NASA Technical Reports Server (NTRS)

    Helmreich, R. L.; Merritt, A. C.; Wilhelm, J. A.

    1999-01-01

    In this study, we describe changes in the nature of Crew Resource Management (CRM) training in commercial aviation, including its shift from cockpit to crew resource management. Validation of the impact of CRM is discussed. Limitations of CRM, including lack of cross-cultural generality are considered. An overarching framework that stresses error management to increase acceptance of CRM concepts is presented. The error management approach defines behavioral strategies taught in CRM as error countermeasures that are employed to avoid error, to trap errors committed, and to mitigate the consequences of error.

  20. Derivation and Cross-Validation of Cutoff Scores for Patients With Schizophrenia Spectrum Disorders on WAIS-IV Digit Span-Based Performance Validity Measures.

    PubMed

    Glassmire, David M; Toofanian Ross, Parnian; Kinney, Dominique I; Nitch, Stephen R

    2016-06-01

    Two studies were conducted to identify and cross-validate cutoff scores on the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span-based embedded performance validity (PV) measures for individuals with schizophrenia spectrum disorders. In Study 1, normative scores were identified on Digit Span-embedded PV measures among a sample of patients (n = 84) with schizophrenia spectrum diagnoses who had no known incentive to perform poorly and who put forth valid effort on external PV tests. Previously identified cutoff scores resulted in unacceptable false positive rates and lower cutoff scores were adopted to maintain specificity levels ≥90%. In Study 2, the revised cutoff scores were cross-validated within a sample of schizophrenia spectrum patients (n = 96) committed as incompetent to stand trial. Performance on Digit Span PV measures was significantly related to Full Scale IQ in both studies, indicating the need to consider the intellectual functioning of examinees with psychotic spectrum disorders when interpreting scores on Digit Span PV measures. © The Author(s) 2015.
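
    The abstract's central step is lowering a cutoff on a known-valid normative sample until the false-positive rate keeps specificity at or above 90%. The sketch below shows that logic with simulated scores; the score distribution and sample size are placeholders, only the ≥90% target mirrors the abstract.

```python
# Sketch of choosing an embedded-validity cutoff so that specificity in a
# known-valid normative sample stays at or above 90%. Scores are placeholders.
import numpy as np

rng = np.random.default_rng(4)
valid_effort_scores = rng.integers(4, 17, size=84)   # Digit Span-based scores in valid responders

def cutoff_for_specificity(scores, target_spec=0.90):
    """Return the highest 'fail if score <= cutoff' value keeping specificity >= target."""
    for cutoff in sorted(np.unique(scores))[::-1]:
        false_positive_rate = np.mean(scores <= cutoff)
        if 1.0 - false_positive_rate >= target_spec:
            return cutoff, 1.0 - false_positive_rate
    return None, None

cutoff, spec = cutoff_for_specificity(valid_effort_scores)
print(f"cutoff: fail if score <= {cutoff}, specificity = {spec:.2f}")
```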

  1. Application of Multivariable Analysis and FTIR-ATR Spectroscopy to the Prediction of Properties in Campeche Honey

    PubMed Central

    Pat, Lucio; Ali, Bassam; Guerrero, Armando; Córdova, Atl V.; Garduza, José P.

    2016-01-01

    Attenuated total reflectance-Fourier transform infrared spectrometry combined with chemometric models was used for the determination of physicochemical properties (pH, redox potential, free acidity, electrical conductivity, moisture, total soluble solids (TSS), ash, and HMF) in honey samples. The reference values of 189 honey samples of different botanical origin were determined using the methods of the Association of Official Analytical Chemists (AOAC, 1990), the Codex Alimentarius (2001), and the International Honey Commission (2002). Multivariate calibration models were built using partial least squares (PLS) for the measurands studied. The developed models were validated using cross-validation and external validation; several statistical parameters were obtained to determine the robustness of the calibration models: the optimum number of principal components (PCs), the standard error of cross-validation (SECV), the coefficient of determination of cross-validation (R²cal), the standard error of validation (SEP), the coefficient of determination for external validation (R²val), and the coefficient of variation (CV). The prediction accuracy for pH, redox potential, electrical conductivity, moisture, TSS, and ash was good, while for free acidity and HMF it was poor. The results demonstrate that attenuated total reflectance-Fourier transform infrared spectrometry is a valuable, rapid, and nondestructive tool for the quantification of physicochemical properties of honey. PMID:28070445
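
    A minimal sketch of the PLS calibration-with-cross-validation workflow described above: choose the number of latent components by cross-validated error (SECV) and report the corresponding R². The spectra and reference values are simulated stand-ins, not the honey data.

```python
# Sketch of a PLS calibration workflow: pick the number of latent components by
# cross-validated error (SECV) and report R². Data are simulated placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
spectra = rng.normal(size=(189, 300))        # stand-in for ATR-FTIR absorbances
reference = spectra[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=189)  # e.g. moisture

best = None
for n_comp in range(1, 11):
    pred = cross_val_predict(PLSRegression(n_components=n_comp), spectra, reference, cv=10).ravel()
    secv = np.sqrt(np.mean((pred - reference) ** 2))    # standard error of cross-validation
    if best is None or secv < best[1]:
        best = (n_comp, secv, r2_score(reference, pred))

n_comp, secv, r2cv = best
print(f"optimum PCs = {n_comp}, SECV = {secv:.3f}, R2(cv) = {r2cv:.3f}")
```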

  2. Exciton transport in the PE545 complex: insight from atomistic QM/MM-based quantum master equations and elastic network models

    NASA Astrophysics Data System (ADS)

    Pouyandeh, Sima; Iubini, Stefano; Jurinovich, Sandro; Omar, Yasser; Mennucci, Benedetta; Piazza, Francesco

    2017-12-01

    In this paper, we work out a parameterization of environmental noise within the Haken-Strobl-Reineker (HSR) model for the PE545 light-harvesting complex, based on atomic-level quantum mechanics/molecular mechanics (QM/MM) simulations. We use this approach to investigate the role of various auto- and cross-correlations in the HSR noise tensor, confirming that site-energy autocorrelation (pure dephasing) terms dominate the noise-induced exciton mobility enhancement, followed by site energy-coupling cross-correlations for specific triplets of pigments. Interestingly, several cross-correlations of the latter kind, together with coupling-coupling cross-correlations, display clear low-frequency signatures in their spectral densities in the 30-70 cm-1 region. These slow components lie at the limits of validity of the HSR approach, which requires that environmental fluctuations be faster than typical exciton transfer time scales. We show that a simple coarse-grained elastic-network-model (ENM) analysis of the PE545 protein naturally spotlights collective normal modes in this frequency range that represent specific concerted motions of the subnetwork of cysteines covalently linked to the pigments. This analysis strongly suggests that protein scaffolds in light-harvesting complexes are able to express specific collective, low-frequency normal modes providing a fold-rooted blueprint of exciton transport pathways. We speculate that ENM-based mixed quantum classical methods, such as Ehrenfest dynamics, might be promising tools to disentangle the fundamental design principles of these dynamical processes in natural and artificial light-harvesting structures.

  3. Sentence alignment using feed forward neural network.

    PubMed

    Fattah, Mohamed Abdel; Ren, Fuji; Kuroiwa, Shingo

    2006-12-01

    Parallel corpora have become an essential resource for work in multilingual natural language processing. However, sentence-aligned parallel corpora are more efficient than non-aligned parallel corpora for cross-language information retrieval and machine translation applications. In this paper, we present a new approach to align sentences in bilingual parallel corpora based on a feed-forward neural network classifier. A feature parameter vector is extracted from the text pair under consideration. This vector contains text features such as length, punctuation score, and cognate score values. A set of manually prepared training data was used to train the feed-forward neural network; another set of data was used for testing. Using this new approach, we achieved an error reduction of 60% over a length-based approach when applied to English-Arabic parallel documents. Moreover, this new approach is valid for any language pair and is quite flexible, since the feature parameter vector may contain more, fewer, or different features than those used in our system, such as a lexical match feature.
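
    As an illustration of the classifier stage described above, the sketch below trains a small feed-forward network on per-sentence-pair feature vectors (length ratio, punctuation score, cognate score). The feature values and labels are simulated, not extracted from a parallel corpus, and the network size is an arbitrary choice.

```python
# Sketch of the alignment classifier stage: a small feed-forward network over
# per-sentence-pair features. Feature values below are simulated placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_pairs = 2000
features = rng.uniform(size=(n_pairs, 3))      # [length_ratio, punct_score, cognate_score]
is_aligned = (features @ np.array([2.0, 1.0, 3.0])
              + rng.normal(scale=0.5, size=n_pairs) > 3.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, is_aligned, test_size=0.25, random_state=6)
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=6)
net.fit(X_train, y_train)
print("held-out accuracy:", net.score(X_test, y_test))
```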

  4. Profiling of components and validated determination of iridoids in Gardenia Jasminoides Ellis fruit by a high-performance-thin-layer- chromatography/mass spectrometry approach.

    PubMed

    Coran, Silvia A; Mulas, Stefano; Vasconi, Alessio

    2014-01-17

    A novel method was set up with the aim of obtaining a simultaneous cross-comparative evaluation of different Gardenia Jasminoides Ellis fruits by the HPTLC fingerprint approach. The main components among the iridoid, hydroxycinnamic derivative and crocin classes were identified by TLC-MS ancillary techniques. The iridoids geniposide, gardenoside and genipin-1-β-d-gentiobioside were also quantitated by densitometric scanning at 240 nm. LiChrospher HPTLC Silica gel 60 RP-18 W F254, 20 cm × 10 cm plates were used with acetonitrile:formic acid 0.1% (40:60 v/v) as the mobile phase. The method was validated, giving rise to a dependable and high-throughput procedure well suited to routine applications. Iridoids were quantified in the range of 240-1140 ng, with RSDs of repeatability and intermediate precision between 0.9 and 2.5% and accuracy with a bias of 1.6-2.6%. The method was tested on six commercial Gardenia Jasminoides fruit samples. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. A Novel Hybrid Intelligent Indoor Location Method for Mobile Devices by Zones Using Wi-Fi Signals

    PubMed Central

    Castañón–Puga, Manuel; Salazar, Abby Stephanie; Aguilar, Leocundo; Gaxiola-Pacheco, Carelia; Licea, Guillermo

    2015-01-01

    The increasing use of mobile devices in indoor spaces brings challenges to location methods. This work presents a hybrid intelligent method based on data mining and Type-2 fuzzy logic to locate mobile devices in an indoor space by zones using Wi-Fi signals from selected access points (APs). This approach takes advantage of wireless local area networks (WLANs) over other types of architectures and implements the complete method in a mobile application using the developed tools. Besides, the proposed approach is validated by experimental data obtained from case studies and the cross-validation technique. For the purpose of generating the fuzzy rules that conform to the Takagi–Sugeno fuzzy system structure, a semi-supervised data mining technique called subtractive clustering is used. This algorithm finds centers of clusters from the radius map given by the collected signals from APs. Measurements of Wi-Fi signals can be noisy due to several factors mentioned in this work, so this method proposed the use of Type-2 fuzzy logic for modeling and dealing with such uncertain information. PMID:26633417
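
    The record above names subtractive clustering as the step that finds cluster centres in the Wi-Fi signal map to seed the Takagi-Sugeno rules. Below is a compact, generic implementation of the standard subtractive-clustering potential update; it is not the authors' code, the fuzzy-system stage is omitted, and the radii and stopping ratio are conventional defaults rather than the paper's values.

```python
# Generic subtractive clustering used to seed fuzzy rules: each point's
# potential is a sum of Gaussian contributions from all points; the
# highest-potential point becomes a centre and nearby potentials are reduced.
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=0.75, stop_ratio=0.15):
    alpha = 4.0 / ra ** 2
    beta = 4.0 / rb ** 2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    potential = np.exp(-alpha * d2).sum(axis=1)
    centers = []
    first_potential = potential.max()
    while True:
        idx = int(np.argmax(potential))
        if potential[idx] < stop_ratio * first_potential:
            break
        centers.append(X[idx])
        # subtract the new centre's influence from every point's potential
        potential = potential - potential[idx] * np.exp(-beta * ((X - X[idx]) ** 2).sum(-1))
        potential = np.clip(potential, 0.0, None)
    return np.array(centers)

rng = np.random.default_rng(7)
rssi = np.vstack([rng.normal(loc, 0.05, size=(40, 3))       # normalised RSSI from 3 APs
                  for loc in ([0.2, 0.8, 0.4], [0.7, 0.3, 0.9])])
print("centres found:\n", subtractive_clustering(rssi))
```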

  6. A Novel Hybrid Intelligent Indoor Location Method for Mobile Devices by Zones Using Wi-Fi Signals.

    PubMed

    Castañón-Puga, Manuel; Salazar, Abby Stephanie; Aguilar, Leocundo; Gaxiola-Pacheco, Carelia; Licea, Guillermo

    2015-12-02

    The increasing use of mobile devices in indoor spaces brings challenges to location methods. This work presents a hybrid intelligent method based on data mining and Type-2 fuzzy logic to locate mobile devices in an indoor space by zones using Wi-Fi signals from selected access points (APs). This approach takes advantage of wireless local area networks (WLANs) over other types of architectures and implements the complete method in a mobile application using the developed tools. Besides, the proposed approach is validated by experimental data obtained from case studies and the cross-validation technique. For the purpose of generating the fuzzy rules that conform to the Takagi-Sugeno fuzzy system structure, a semi-supervised data mining technique called subtractive clustering is used. This algorithm finds centers of clusters from the radius map given by the collected signals from APs. Measurements of Wi-Fi signals can be noisy due to several factors mentioned in this work, so this method proposed the use of Type-2 fuzzy logic for modeling and dealing with such uncertain information.

  7. Multiplex PCR method for use in real-time PCR for identification of fish fillets from grouper (Epinephelus and Mycteroperca species) and common substitute species.

    PubMed

    Trotta, Michele; Schönhuth, Susana; Pepe, Tiziana; Cortesi, M Luisa; Puyet, Antonio; Bautista, José M

    2005-03-23

    Mitochondrial 16S rRNA sequences from morphological validated grouper (Epinephelus aeneus, E. caninus, E. costae, and E. marginatus; Mycteroperca fusca and M. rubra), Nile perch (Lates niloticus), and wreck fish (Polyprion americanus) were used to develop an analytical system for group diagnosis based on two alternative Polymerase Chain Reaction (PCR) approaches. The first includes conventional multiplex PCR in which electrophoretic migration of different sizes of bands allowed identification of the fish species. The second approach, involving real-time PCR, produced a single amplicon from each species that showed different Tm values allowing the fish groups to be directly identified. Real-time PCR allows the quick differential diagnosis of the three groups of species and high-throughput screening of multiple samples. Neither PCR system cross-reacted with DNA samples from 41 common marketed fish species, thus conforming to standards for species validation. The use of these two PCR-based methods makes it now possible to discriminate grouper from substitute fish species.

  8. Cross sections for electron impact excitation of the C 1Π and D 1Σ+ electronic states in N2O

    NASA Astrophysics Data System (ADS)

    Kawahara, H.; Suzuki, D.; Kato, H.; Hoshino, M.; Tanaka, H.; Ingólfsson, O.; Campbell, L.; Brunger, M. J.

    2009-09-01

    Differential and integral cross sections for electron-impact excitation of the dipole-allowed C Π1 and D Σ1+ electronic states of nitrous oxide have been measured. The differential cross sections were determined by analysis of normalized energy-loss spectra obtained using a crossed-beam apparatus at six electron energies in the range 15-200 eV. Integral cross sections were subsequently derived from these data. The present work was undertaken in order to check both the validity of the only other comprehensive experimental study into these excitation processes [Marinković et al., J. Phys. B 32, 1949 (1998)] and to extend the energy range of those data. Agreement with the earlier data, particularly at the lower common energies, was typically found to be fair. In addition, the BEf-scaling approach [Kim, J. Chem. Phys. 126, 064305 (2007)] is used to calculate integral cross sections for the C Π1 and D Σ1+ states, from their respective thresholds to 5000 eV. In general, good agreement is found between the experimental integral cross sections and those calculated within the BEf-scaling paradigm, the only exception being at the lowest energies of this study. Finally, optical oscillator strengths, also determined as a part of the present investigations, were found to be in fair accordance with previous corresponding determinations.

  9. Improving human activity recognition and its application in early stroke diagnosis.

    PubMed

    Villar, José R; González, Silvia; Sedano, Javier; Chira, Camelia; Trejo-Gabriel-Galan, Jose M

    2015-06-01

    The development of efficient stroke-detection methods is of significant importance in today's society due to the effects and impact of stroke on health and economy worldwide. This study focuses on Human Activity Recognition (HAR), which is a key component in developing an early stroke-diagnosis tool. An overview of the proposed global approach able to discriminate normal resting from stroke-related paralysis is detailed. The main contributions include an extension of the Genetic Fuzzy Finite State Machine (GFFSM) method and a new hybrid feature selection (FS) algorithm involving Principal Component Analysis (PCA) and a voting scheme putting the cross-validation results together. Experimental results show that the proposed approach is a well-performing HAR tool that can be successfully embedded in devices.

  10. Model selection for anomaly detection

    NASA Astrophysics Data System (ADS)

    Burnaev, E.; Erofeev, P.; Smolyakov, D.

    2015-12-01

    Anomaly detection based on one-class classification algorithms is broadly used in many applied domains like image processing (e.g. detection of whether a patient is "cancerous" or "healthy" from a mammography image), network intrusion detection, etc. Performance of an anomaly detection algorithm crucially depends on the kernel used to measure similarity in a feature space. The standard approaches to kernel selection (e.g. cross-validation) used in two-class classification problems cannot be used directly due to the specific nature of the data (the absence of data from a second, abnormal class). In this paper we generalize several kernel selection methods from the binary-class case to the one-class case and perform an extensive comparison of these approaches using both synthetic and real-world data.
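
    To make the selection problem concrete: with only normal data, a kernel parameter cannot be scored by ordinary two-class cross-validation. The sketch below shows one simple proxy (acceptance of held-out normal data penalised by acceptance of uniformly sampled pseudo-outliers) for a one-class SVM. This heuristic is purely illustrative and is not one of the paper's generalized selection methods.

```python
# Sketch of the one-class model-selection problem: score each candidate kernel
# width by how much held-out normal data it accepts minus how much uniformly
# sampled pseudo-outlier data it accepts. Illustration only, not the paper's method.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
normal = rng.normal(size=(500, 2))                       # one-class training data
train, held_out = train_test_split(normal, test_size=0.3, random_state=8)
pseudo_outliers = rng.uniform(-6, 6, size=(500, 2))      # uniform background sample

def proxy_score(gamma):
    model = OneClassSVM(kernel="rbf", gamma=gamma, nu=0.05).fit(train)
    accept_normal = np.mean(model.predict(held_out) == 1)
    accept_outlier = np.mean(model.predict(pseudo_outliers) == 1)
    return accept_normal - accept_outlier

gammas = [0.01, 0.1, 1.0, 10.0]
best_gamma = max(gammas, key=proxy_score)
print("selected gamma:", best_gamma)
```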

  11. On the importance of full-dimensionality in low-energy molecular scattering calculations

    PubMed Central

    Faure, Alexandre; Jankowski, Piotr; Stoecklin, Thierry; Szalewicz, Krzysztof

    2016-01-01

    Scattering of H2 on CO is of great importance in astrophysics and also is a benchmark system for comparing theory to experiment. We present here a new 6-dimensional potential energy surface for the ground electronic state of H2-CO with an estimated uncertainty of about 0.6 cm−1 in the global minimum region, several times smaller than achieved earlier. This potential has been used in nearly exact 6-dimensional quantum scattering calculations to compute state-to-state cross-sections measured in low-energy crossed-beam experiments. Excellent agreement between theory and experiment has been achieved in all cases. We also show that the fully 6-dimensional approach is not needed with the current accuracy of experimental data since an equally good agreement with experiment was obtained using only a 4-dimensional treatment, which validates the rigid-rotor approach widely used in scattering calculations. This finding, which disagrees with some literature statements, is important since for larger systems full-dimensional scattering calculations are currently not possible. PMID:27333870

  12. A novel interpolation approach for the generation of 3D-geometric digital bone models from image stacks

    PubMed Central

    Mittag, U.; Kriechbaumer, A.; Rittweger, J.

    2017-01-01

    The authors propose a new 3D interpolation algorithm for the generation of digital geometric 3D-models of bones from existing image stacks obtained by peripheral Quantitative Computed Tomography (pQCT) or Magnetic Resonance Imaging (MRI). The technique is based on the interpolation of radial gray value profiles of the pQCT cross sections. The method has been validated by using an ex-vivo human tibia and by comparing interpolated pQCT images with images from scans taken at the same position. A diversity index of <0.4 (1 meaning maximal diversity) even for the structurally complex region of the epiphysis, along with the good agreement of mineral-density-weighted cross-sectional moment of inertia (CSMI), demonstrate the high quality of our interpolation approach. Thus the authors demonstrate that this interpolation scheme can substantially improve the generation of 3D models from sparse scan sets, not only with respect to the outer shape but also with respect to the internal gray-value derived material property distribution. PMID:28574415

  13. Joint passive radar tracking and target classification using radar cross section

    NASA Astrophysics Data System (ADS)

    Herman, Shawn M.

    2004-01-01

    We present a recursive Bayesian solution for the problem of joint tracking and classification of airborne targets. In our system, we allow for complications due to multiple targets, false alarms, and missed detections. More importantly, though, we utilize the full benefit of a joint approach by implementing our tracker using an aerodynamically valid flight model that requires aircraft-specific coefficients such as wing area and vehicle mass, which are provided by our classifier. A key feature that bridges the gap between tracking and classification is radar cross section (RCS). By modeling the true deterministic relationship that exists between RCS and target aspect, we are able to gain both valuable class information and an estimate of target orientation. However, the lack of a closed-form relationship between RCS and target aspect prevents us from using the Kalman filter or its variants. Instead, we rely upon a sequential Monte Carlo-based approach known as particle filtering. In addition to allowing us to include RCS as a measurement, the particle filter also simplifies the implementation of our nonlinear non-Gaussian flight model.
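
    The abstract motivates particle filtering by the lack of a closed-form RCS-aspect relationship. The sketch below is a generic bootstrap particle filter (propagate, weight, resample) for a toy one-dimensional state with a nonlinear measurement; the aerodynamic flight model and the RCS-based likelihood of the paper are replaced by simple stand-ins.

```python
# Generic bootstrap particle filter for a toy 1-D state with a nonlinear
# measurement. The dynamics and likelihood are stand-ins, not the paper's models.
import numpy as np

rng = np.random.default_rng(9)
T, n_particles = 50, 1000
true_x, xs, zs = 0.0, [], []
for t in range(T):                                        # simulate a toy trajectory
    true_x = 0.9 * true_x + rng.normal(scale=0.5)
    xs.append(true_x)
    zs.append(true_x + 0.1 * true_x ** 3 + rng.normal(scale=0.3))  # nonlinear measurement

particles = rng.normal(scale=1.0, size=n_particles)
estimates = []
for z in zs:
    particles = 0.9 * particles + rng.normal(scale=0.5, size=n_particles)     # propagate
    expected_z = particles + 0.1 * particles ** 3
    weights = np.exp(-0.5 * ((z - expected_z) / 0.3) ** 2)                    # weight by likelihood
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))                             # posterior mean
    idx = rng.choice(n_particles, size=n_particles, p=weights)                # resample
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(xs)) ** 2))
print(f"filter RMSE over {T} steps: {rmse:.3f}")
```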

  14. Joint passive radar tracking and target classification using radar cross section

    NASA Astrophysics Data System (ADS)

    Herman, Shawn M.

    2003-12-01

    We present a recursive Bayesian solution for the problem of joint tracking and classification of airborne targets. In our system, we allow for complications due to multiple targets, false alarms, and missed detections. More importantly, though, we utilize the full benefit of a joint approach by implementing our tracker using an aerodynamically valid flight model that requires aircraft-specific coefficients such as wing area and vehicle mass, which are provided by our classifier. A key feature that bridges the gap between tracking and classification is radar cross section (RCS). By modeling the true deterministic relationship that exists between RCS and target aspect, we are able to gain both valuable class information and an estimate of target orientation. However, the lack of a closed-form relationship between RCS and target aspect prevents us from using the Kalman filter or its variants. Instead, we rely upon a sequential Monte Carlo-based approach known as particle filtering. In addition to allowing us to include RCS as a measurement, the particle filter also simplifies the implementation of our nonlinear non-Gaussian flight model.

  15. Dysthymia in a cross-cultural perspective.

    PubMed

    Gureje, Oye

    2011-01-01

    Dysthymia is a relatively less-studied condition within the spectrum of depressive disorders. New and important information about its status has emerged in recent scientific literature. This review highlights some of the findings of that literature. Even though studies addressing the cross-cultural validity of dysthymia are still awaited, results of studies using comparable ascertainment procedures suggest that the lifetime and 12-month estimates of the condition may be higher in high-income than in low- and middle-income countries. However, the disorder is associated with elevated risks of suicidal outcomes and comparable levels of disability wherever it occurs. Dysthymia commonly carries a worse prognosis than major depressive disorder and comparable or worse clinical outcome than other forms of chronic depression. Whereas there is some evidence that psychotherapy may be less effective than pharmacotherapy in the treatment of dysthymia, the best treatment approach is one that combines both forms of treatment. Dysthymia is a condition of considerable public health importance. Our current understanding suggests that it should receive more clinical and research attention. Specifically, the development of better treatment approaches, especially those that can be implemented in diverse populations, deserves research attention.

  16. Three-dimensional analysis of flow-chemical interaction within a single square channel of a lean NO x trap catalyst.

    PubMed

    Fornarelli, Francesco; Dadduzio, Ruggiero; Torresi, Marco; Camporeale, Sergio Mario; Fortunato, Bernardo

    2018-02-01

    A fully 3D, unsteady Computational Fluid Dynamics (CFD) approach coupled with heterogeneous reaction chemistry is presented in order to study the behavior of a single square channel as part of a lean NOx trap (LNT). The reliability of the numerical tool has been validated against literature data considering only active BaO sites. Even though the input/output performance of such a catalyst is well known, here the spatial distribution within a single channel is investigated in detail. The square channel geometry influences the flow field and the catalyst performance, since the flow velocity distribution over the cross section is non-homogeneous. The mutual interaction between the flow and the active catalyst walls influences the spatial distribution of the volumetric species. Low-velocity regions near the square corners and transversal secondary flows are shown in several cross sections along the streamwise direction at different instants. The results shed light on the three-dimensional character of both the flow field and the species distribution within a single square channel of the catalyst, in contrast with 0-1D approaches.

  17. A Multilayer Network Approach for Guiding Drug Repositioning in Neglected Diseases

    PubMed Central

    Chernomoretz, Ariel; Agüero, Fernán

    2016-01-01

    Drug development for neglected diseases has been historically hampered due to lack of market incentives. The advent of public domain resources containing chemical information from high throughput screenings is changing the landscape of drug discovery for these diseases. In this work we took advantage of data from extensively studied organisms like human, mouse, E. coli and yeast, among others, to develop a novel integrative network model to prioritize and identify candidate drug targets in neglected pathogen proteomes, and bioactive drug-like molecules. We modeled genomic (proteins) and chemical (bioactive compounds) data as a multilayer weighted network graph that takes advantage of bioactivity data across 221 species, chemical similarities between 1.7 × 10⁵ compounds and several functional relations among 1.67 × 10⁵ proteins. These relations comprised orthology, sharing of protein domains, and shared participation in defined biochemical pathways. We showcase the application of this network graph to the problem of prioritization of new candidate targets, based on the information available in the graph for known compound-target associations. We validated this strategy by performing a cross validation procedure for known mouse and Trypanosoma cruzi targets and showed that our approach outperforms classic alignment-based approaches. Moreover, our model provides additional flexibility as two different network definitions could be considered, finding in both cases qualitatively different but sensible candidate targets. We also showcase the application of the network to suggest targets for orphan compounds that are active against Plasmodium falciparum in high-throughput screens. In this case our approach provided a reduced prioritization list of target proteins for the query molecules and showed the ability to propose new testable hypotheses for each compound. Moreover, we found that some predictions highlighted by our network model were supported by independent experimental validations as found post-facto in the literature. PMID:26735851

  18. A Multilayer Network Approach for Guiding Drug Repositioning in Neglected Diseases.

    PubMed

    Berenstein, Ariel José; Magariños, María Paula; Chernomoretz, Ariel; Agüero, Fernán

    2016-01-01

    Drug development for neglected diseases has been historically hampered due to lack of market incentives. The advent of public domain resources containing chemical information from high throughput screenings is changing the landscape of drug discovery for these diseases. In this work we took advantage of data from extensively studied organisms like human, mouse, E. coli and yeast, among others, to develop a novel integrative network model to prioritize and identify candidate drug targets in neglected pathogen proteomes, and bioactive drug-like molecules. We modeled genomic (proteins) and chemical (bioactive compounds) data as a multilayer weighted network graph that takes advantage of bioactivity data across 221 species, chemical similarities between 1.7 × 10⁵ compounds and several functional relations among 1.67 × 10⁵ proteins. These relations comprised orthology, sharing of protein domains, and shared participation in defined biochemical pathways. We showcase the application of this network graph to the problem of prioritization of new candidate targets, based on the information available in the graph for known compound-target associations. We validated this strategy by performing a cross validation procedure for known mouse and Trypanosoma cruzi targets and showed that our approach outperforms classic alignment-based approaches. Moreover, our model provides additional flexibility as two different network definitions could be considered, finding in both cases qualitatively different but sensible candidate targets. We also showcase the application of the network to suggest targets for orphan compounds that are active against Plasmodium falciparum in high-throughput screens. In this case our approach provided a reduced prioritization list of target proteins for the query molecules and showed the ability to propose new testable hypotheses for each compound. Moreover, we found that some predictions highlighted by our network model were supported by independent experimental validations as found post-facto in the literature.

  19. Cross-Cultural Adaptation and Initial Validation of the Stroke-Specific Quality of Life Scale into the Yoruba Language

    ERIC Educational Resources Information Center

    Akinpelu, Aderonke O.; Odetunde, Marufat O.; Odole, Adesola C.

    2012-01-01

    Stroke-Specific Quality of Life 2.0 (SS-QoL 2.0) scale is used widely and has been cross-culturally adapted to many languages. This study aimed at the cross-cultural adaptation of SS-QoL 2.0 to Yoruba, the indigenous language of south-western Nigeria, and to carry out an initial investigation on its validity. English SS-QoL 2.0 was first adapted…

  20. Recommendations for elaboration, transcultural adaptation and validation process of tests in Speech, Hearing and Language Pathology.

    PubMed

    Pernambuco, Leandro; Espelt, Albert; Magalhães, Hipólito Virgílio; Lima, Kenio Costa de

    2017-06-08

    To present a guide with recommendations for translation, adaptation, elaboration and process of validation of tests in Speech and Language Pathology. The recommendations were based on international guidelines with a focus on the elaboration, translation, cross-cultural adaptation and validation process of tests. The recommendations were grouped into two charts, one of them with procedures for translation and transcultural adaptation and the other for obtaining evidence of validity, reliability and measures of accuracy of the tests. A guide with norms for the organization and systematization of the process of elaboration, translation, cross-cultural adaptation and validation process of tests in Speech and Language Pathology was created.

  1. Comparing flood loss models of different complexity

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2013-04-01

    Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate cost-effectiveness of mitigation measures, to assess vulnerability, for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both concerning the data basis and the methodological approaches used for the development of flood loss models. Despite this, flood loss models remain an important source of uncertainty. Likewise, the temporal and spatial transferability of flood loss models is still limited. This contribution investigates the predictive capability of different flood loss models in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explaining variables, are learned from a set of damage records that was obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006 for which damage records are available from surveys after the flood events. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, as well as novel model approaches which are derived using the data-mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explaining variables.
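
    The validation logic described above can be sketched compactly: train a loss model on damage records from one event and region, then score its predictions on records from a spatially and temporally distinct event. The regression tree below is a stand-in for the tree-based and Bayesian-network models named in the abstract, and all damage records are simulated placeholders.

```python
# Sketch of split-sample, cross-regional validation of a flood loss model:
# learn on records from one event and predict losses for another.
# Records below are simulated stand-ins, not survey data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(10)

def simulate_records(n):
    water_depth = rng.uniform(0, 3, n)          # m
    duration = rng.uniform(1, 96, n)            # h
    building_value = rng.uniform(50, 500, n)    # k EUR
    loss_ratio = np.clip(0.15 * water_depth + 0.001 * duration
                         + rng.normal(0, 0.05, n), 0, 1)
    X = np.column_stack([water_depth, duration, building_value])
    return X, loss_ratio * building_value

X_train_event, loss_train_event = simulate_records(1200)   # "training" flood event
X_other_event, loss_other_event = simulate_records(600)    # distinct validation event

model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_train_event, loss_train_event)
pred = model.predict(X_other_event)
print(f"cross-regional MAE (k EUR): {mean_absolute_error(loss_other_event, pred):.2f}")
```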

  2. A novel method for expediting the development of patient-reported outcome measures and an evaluation across several populations

    PubMed Central

    Garrard, Lili; Price, Larry R.; Bott, Marjorie J.; Gajewski, Byron J.

    2016-01-01

    Item response theory (IRT) models provide an appropriate alternative to the classical ordinal confirmatory factor analysis (CFA) during the development of patient-reported outcome measures (PROMs). Current literature has identified the assessment of IRT model fit as both challenging and underdeveloped (Sinharay & Johnson, 2003; Sinharay, Johnson, & Stern, 2006). This study evaluates the performance of Ordinal Bayesian Instrument Development (OBID), a Bayesian IRT model with a probit link function approach, through applications in two breast cancer-related instrument development studies. The primary focus is to investigate an appropriate method for comparing Bayesian IRT models in PROMs development. An exact Bayesian leave-one-out cross-validation (LOO-CV) approach (Vehtari & Lampinen, 2002) is implemented to assess prior selection for the item discrimination parameter in the IRT model and subject content experts’ bias (in a statistical sense and not to be confused with psychometric bias as in differential item functioning) toward the estimation of item-to-domain correlations. Results support the utilization of content subject experts’ information in establishing evidence for construct validity when sample size is small. However, the incorporation of subject experts’ content information in the OBID approach can be sensitive to the level of expertise of the recruited experts. More stringent efforts need to be invested in the appropriate selection of subject experts to efficiently use the OBID approach and reduce potential bias during PROMs development. PMID:27667878

  3. A novel method for expediting the development of patient-reported outcome measures and an evaluation across several populations.

    PubMed

    Garrard, Lili; Price, Larry R; Bott, Marjorie J; Gajewski, Byron J

    2016-10-01

    Item response theory (IRT) models provide an appropriate alternative to the classical ordinal confirmatory factor analysis (CFA) during the development of patient-reported outcome measures (PROMs). Current literature has identified the assessment of IRT model fit as both challenging and underdeveloped (Sinharay & Johnson, 2003; Sinharay, Johnson, & Stern, 2006). This study evaluates the performance of Ordinal Bayesian Instrument Development (OBID), a Bayesian IRT model with a probit link function approach, through applications in two breast cancer-related instrument development studies. The primary focus is to investigate an appropriate method for comparing Bayesian IRT models in PROMs development. An exact Bayesian leave-one-out cross-validation (LOO-CV) approach (Vehtari & Lampinen, 2002) is implemented to assess prior selection for the item discrimination parameter in the IRT model and subject content experts' bias (in a statistical sense and not to be confused with psychometric bias as in differential item functioning) toward the estimation of item-to-domain correlations. Results support the utilization of content subject experts' information in establishing evidence for construct validity when sample size is small. However, the incorporation of subject experts' content information in the OBID approach can be sensitive to the level of expertise of the recruited experts. More stringent efforts need to be invested in the appropriate selection of subject experts to efficiently use the OBID approach and reduce potential bias during PROMs development.

  4. Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps

    NASA Astrophysics Data System (ADS)

    Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine

    2015-08-01

    We present new measurements of the cosmic infrared background (CIB) anisotropies and their first likelihood using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and a likelihood analysis using the CAM-SPEC package, rather than map-based template removal of foregrounds as done in previous Planck CIB analyses. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood used in CMB analysis to higher frequencies, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the dataset uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data-selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first is based on a simple parametric model which describes the cross-frequency power using amplitudes, correlation coefficients, and a known multipole dependence. The second is based on physical models for galaxy clustering and the evolution of the infrared emission of galaxies. The new approaches fit all auto- and cross-power spectra very well, with a best-fit reduced χ² of 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.

  5. An ensemble-ANFIS based uncertainty assessment model for forecasting multi-scalar standardized precipitation index

    NASA Astrophysics Data System (ADS)

    Ali, Mumtaz; Deo, Ravinesh C.; Downs, Nathan J.; Maraseni, Tek

    2018-07-01

    Forecasting drought by means of the World Meteorological Organization-approved Standardized Precipitation Index (SPI) is considered a fundamental task for supporting socio-economic initiatives and effectively mitigating climate risk. This study aims to develop a robust drought modelling strategy to forecast multi-scalar SPI in drought-rich regions of Pakistan, where statistically significant lagged combinations of antecedent SPI are used to forecast future SPI. With an ensemble Adaptive Neuro-Fuzzy Inference System ('ensemble-ANFIS') executed via a 10-fold cross-validation procedure, a model is constructed from randomly partitioned input-target data. The resulting 10-member ensemble-ANFIS outputs, judged by mean square error and correlation coefficient in the training period, are averaged to attain the optimal forecasts, and the model is benchmarked against the M5 Model Tree and Minimax Probability Machine Regression (MPMR). The results show that the proposed ensemble-ANFIS model's accuracy was notably better (in terms of the root mean square and mean absolute errors, including Willmott's, Nash-Sutcliffe and Legates-McCabe's indices) for the 6- and 12-month forecasts compared to the 3-month forecasts, as verified by the largest proportion of errors registering in the smallest error band. Applying the 10-member simulations, the ensemble-ANFIS model was validated for its ability to forecast the severity (S), duration (D) and intensity (I) of drought (including the error bound). This enabled uncertainty between the multiple models to be rationalized more efficiently, leading to a reduction in forecast error caused by stochasticity in drought behaviours. Through cross-validations at diverse sites, a geographic signature in the modelled uncertainties was also calculated. Considering the superiority of the ensemble-ANFIS approach and its ability to generate uncertainty-based information, the study advocates the versatility of a multi-model approach for drought-risk forecasting and its prime importance for estimating drought properties over confidence intervals to generate better information for strategic decision-making.
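
    ANFIS itself requires a dedicated library, so the sketch below shows only the ensemble mechanics the abstract describes: one member model trained per fold of a 10-fold split over lagged-SPI inputs, with the final forecast taken as the average of the 10 members. The gradient-boosting regressor is a stand-in for ANFIS, and the lag-1 to lag-3 inputs are illustrative, not the paper's statistically significant lags.

```python
# Sketch of the ensemble mechanics only: one member model per fold of a 10-fold
# split, with forecasts averaged across the 10 members.
# GradientBoostingRegressor stands in for ANFIS; lags 1-3 are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, train_test_split

rng = np.random.default_rng(11)
spi = np.convolve(rng.normal(size=620), np.ones(6) / 6, mode="valid")   # smooth SPI-like series
lagged = np.column_stack([spi[2:-1], spi[1:-2], spi[:-3]])               # lags 1-3 as predictors
target = spi[3:]

X_dev, X_test, y_dev, y_test = train_test_split(lagged, target, test_size=0.2, shuffle=False)
members = []
for train_idx, _ in KFold(n_splits=10, shuffle=True, random_state=11).split(X_dev):
    members.append(GradientBoostingRegressor(random_state=11).fit(X_dev[train_idx], y_dev[train_idx]))

ensemble_forecast = np.mean([m.predict(X_test) for m in members], axis=0)
rmse = np.sqrt(np.mean((ensemble_forecast - y_test) ** 2))
print(f"ensemble RMSE on hold-out: {rmse:.3f}")
```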

  6. An Evaluation of the Cross-Cultural Validity of Holland's Theory: Career Choices by Workers in India.

    ERIC Educational Resources Information Center

    Leong, Frederick T. L.; Austin, James T.; Sekaran, Uma; Komarraju, Meera

    1998-01-01

    Natives of India (n=172) completed Holland's Vocational Preference Inventory and job satisfaction measures. The inventory did not exhibit high external validity with this population. Congruence, consistency, and differentiation did not predict job or occupational satisfaction, suggesting cross-cultural limits on Holland's theory. (SK)

  7. Psychometric Evaluation of the Exercise Identity Scale among Greek Adults and Cross-Cultural Validity

    ERIC Educational Resources Information Center

    Vlachopoulos, Symeon P.; Kaperoni, Maria; Moustaka, Frederiki C.; Anderson, Dean F.

    2008-01-01

    The present study reported on translating the Exercise Identity Scale (EIS: Anderson & Cychosz, 1994) into Greek and examining its psychometric properties and cross-cultural validity based on U.S. individuals' EIS responses. Using four samples comprising 33, 103, and 647 Greek individuals, including exercisers and nonexercisers, and a similar…

  8. Cross-Validation of FITNESSGRAM® Health-Related Fitness Standards in Hungarian Youth

    ERIC Educational Resources Information Center

    Laurson, Kelly R.; Saint-Maurice, Pedro F.; Karsai, István; Csányi, Tamás

    2015-01-01

    Purpose: The purpose of this study was to cross-validate FITNESSGRAM® aerobic and body composition standards in a representative sample of Hungarian youth. Method: A nationally representative sample (N = 405) of Hungarian adolescents from the Hungarian National Youth Fitness Study (ages 12-18.9 years) participated in an aerobic capacity assessment…

  9. Caregivers' Agreement and Validity of Indirect Functional Analysis: A Cross Cultural Evaluation across Multiple Problem Behavior Topographies

    ERIC Educational Resources Information Center

    Virues-Ortega, Javier; Segui-Duran, David; Descalzo-Quero, Alberto; Carnerero, Jose Julio; Martin, Neil

    2011-01-01

    The Motivation Assessment Scale is an aid for hypothesis-driven functional analysis. This study presents its Spanish cross-cultural validation while examining psychometric attributes not yet explored. The study sample comprised 80 primary caregivers of children with autism. Acceptability, scaling assumptions, internal consistency, factor…

  10. Cross-Cultural Validation of the Counselor Burnout Inventory in Hong Kong

    ERIC Educational Resources Information Center

    Shin, Hyojung; Yuen, Mantak; Lee, Jayoung; Lee, Sang Min

    2013-01-01

    This study investigated the cross-cultural validation of the Chinese translation of the Counselor Burnout Inventory (CBI) with a sample of school counselors in Hong Kong. Specifically, this study examined the CBI's factor structure using confirmatory factor analysis and calculated the effect size, to compare burnout scores among the counselors of…

  11. Cross-Validation of a Short Form of the Marlowe-Crowne Social Desirability Scale.

    ERIC Educational Resources Information Center

    Zook, Avery, II; Sipps, Gary J.

    1985-01-01

    Presents a cross-validation of Reynolds' short form of the Marlowe-Crowne Social Desirability Scale (N=233). Researchers administered 13 items as a separate entity, calculated Cronbach's Alpha for each sex, and computed test-retest correlation for one group. Concluded that the short form is a viable alternative. (Author/NRB)

  12. Selection of Marine Corps Drill Instructors

    DTIC Science & Technology

    1980-03-01

    Key-Construction and Cross-Validation Statistics for Drill Instructor School Performance Success Keys... Race, and School Attrition... Key-Construction and Cross-Validation Statistics for Drill... constructed form, the Alternation Ranking of Series Drill Instructors. In this form, DIs in a Series are ranked from highest to lowest in terms of their…

  13. Studying Cross-Cultural Differences in Temperament in the First Year of Life: United States and Italy

    ERIC Educational Resources Information Center

    Montirosso, Rosario; Cozzi, Patrizia; Putnam, Samuel P.; Gartstein, Maria A.; Borgatti, Renato

    2011-01-01

    An Italian translation of the Infant Behavior Questionnaire-Revised (IBQ-R) was developed and evaluated with 110 infants, demonstrating satisfactory internal consistency, discriminant validity, and construct validity in the form of gender and age differences, as well as factorial integrity. Cross-cultural differences were subsequently evaluated…

  14. The Halpern Critical Thinking Assessment and Real-World Outcomes: Cross-National Applications

    ERIC Educational Resources Information Center

    Butler, Heather A.; Dwyer, Christopher P.; Hogan, Michael J.; Franco, Amanda; Rivas, Silvia F.; Saiz, Carlos; Almeida, Leandro S.

    2012-01-01

    The Halpern Critical Thinking Assessment (HCTA) is a reliable measure of critical thinking that has been validated with numerous qualitatively different samples and measures of academic success (Halpern, 2010a). This paper presents several cross-national applications of the assessment, and recent work to expand the validation of the HCTA with…

  15. Quantitative Cross-Species Extrapolation between Humans and Fish: The Case of the Anti-Depressant Fluoxetine

    PubMed Central

    Margiotta-Casaluci, Luigi; Owen, Stewart F.; Cumming, Rob I.; de Polo, Anna; Winter, Matthew J.; Panter, Grace H.; Rand-Weaver, Mariann; Sumpter, John P.

    2014-01-01

    Fish are an important model for the pharmacological and toxicological characterization of human pharmaceuticals in drug discovery, drug safety assessment and environmental toxicology. However, do fish respond to pharmaceuticals as humans do? To address this question, we provide a novel quantitative cross-species extrapolation approach (qCSE) based on the hypothesis that similar plasma concentrations of pharmaceuticals cause comparable target-mediated effects in both humans and fish at a similar level of biological organization (Read-Across Hypothesis). To validate this hypothesis, the behavioural effects of the anti-depressant drug fluoxetine on the fish model fathead minnow (Pimephales promelas) were used as a test case. Fish were exposed for 28 days to a range of measured water concentrations of fluoxetine (0.1, 1.0, 8.0, 16, 32, 64 µg/L) to produce plasma concentrations below, equal to, and above the range of Human Therapeutic Plasma Concentrations (HTPCs). Fluoxetine and its metabolite, norfluoxetine, were quantified in the plasma of individual fish and linked to behavioural anxiety-related endpoints. The minimum drug plasma concentrations that elicited anxiolytic responses in fish were above the upper value of the HTPC range, whereas no effects were observed at plasma concentrations below the HTPCs. In vivo metabolism of fluoxetine in humans and fish was similar, and displayed bi-phasic concentration-dependent kinetics driven by the auto-inhibitory dynamics and saturation of the enzymes that convert fluoxetine into norfluoxetine. The sensitivity of fish to fluoxetine was not so dissimilar from that of patients affected by general anxiety disorders. These results represent the first direct evidence of a measured internal dose-response effect of a pharmaceutical in fish, hence validating the Read-Across hypothesis applied to fluoxetine. Overall, this study demonstrates that the qCSE approach, anchored to internal drug concentrations, is a powerful tool to guide the assessment of the sensitivity of fish to pharmaceuticals, and strengthens the translational power of the cross-species extrapolation. PMID:25338069

  16. Research lessons from implementing a national nursing workforce study.

    PubMed

    Brzostek, T; Brzyski, P; Kózka, M; Squires, A; Przewoźniak, L; Cisek, M; Gajda, K; Gabryś, T; Ogarek, M

    2015-09-01

    National nursing workforce studies are important for evidence-based policymaking to improve nursing human resources globally. Survey instrument translation and contextual adaptation, along with the level of experience of the research team, are key factors that will influence study implementation and results in countries new to health workforce studies. This study's aim was to describe the pre-data collection instrument adaptation challenges when designing the first national nursing workforce study in Poland while participating in the Nurse Forecasting: Human Resources Planning in Nursing project. A descriptive analysis of the pre-data collection phase of the study. Instrument adaptation was conducted through a two-phase content validity indexing process and pilot testing from 2009 to September 2010 in preparation for primary study implementation in December 2010. Means of both content validation phases were compared with pilot study results to assess for significant patterns in the data. The initial review demonstrated that the instrument had a poor level of cross-cultural relevance and multiple translation issues. After revising the translation and re-evaluating using the same process, instrument scores improved significantly. Pilot study results showed floor and ceiling effects on relevance score correlations in each phase of the study. The cross-cultural adaptation process was developed specifically for this study and is, therefore, new. It may require additional replication to further enhance the method. The approach used by the Polish team helped identify potential problems early in the study. This critical step improved the rigour of the results and improved comparability for between-country analyses, conserving both money and resources. This approach is advised for cross-cultural adaptation of instruments to be used in national nursing workforce studies. Countries seeking to conduct national nursing workforce surveys to improve nursing human resources policies may find the insights provided by this paper useful to guide national-level nursing workforce study implementation. © 2015 International Council of Nurses.

  17. Genomic Prediction of Single Crosses in the Early Stages of a Maize Hybrid Breeding Pipeline.

    PubMed

    Kadam, Dnyaneshwar C; Potts, Sarah M; Bohn, Martin O; Lipka, Alexander E; Lorenz, Aaron J

    2016-09-19

    Prediction of single-cross performance has been a major goal of plant breeders since the beginning of hybrid breeding. Recently, genomic prediction has been shown to be a promising approach, but only a limited number of studies have examined the accuracy of predicting single-cross performance. Moreover, no studies have examined the potential of predicting single crosses among random inbreds derived from a series of biparental families, which resembles the structure of germplasm comprising the initial stages of a hybrid maize breeding pipeline. The main objectives of this study were to evaluate the potential of genomic prediction for identifying superior single crosses early in the hybrid breeding pipeline and to optimize its application. To accomplish these objectives, we designed and analyzed a novel population of single crosses representing the Iowa Stiff Stalk Synthetic/Non-Stiff Stalk heterotic pattern commonly used in the development of North American commercial maize hybrids. The performance of single crosses was predicted using parental combining ability and covariance among single crosses. Prediction accuracies were estimated using cross-validation and ranged from 0.28 to 0.77 for grain yield, 0.53 to 0.91 for plant height, and 0.49 to 0.94 for staygreen, depending on the number of tested parents of the single cross and genomic prediction method used. The genomic estimated general and specific combining abilities showed an advantage over genomic covariances among single crosses when one or both parents of the single cross were untested. Overall, our results suggest that genomic prediction of single crosses in the early stages of a hybrid breeding pipeline holds great potential to re-design hybrid breeding and increase its efficiency. Copyright © 2016 Author et al.

  18. Cross-Laboratory Analysis of Brain Cell Type Transcriptomes with Applications to Interpretation of Bulk Tissue Data

    PubMed Central

    Toker, Lilah; Rocco, Brad; Sibille, Etienne

    2017-01-01

    Establishing the molecular diversity of cell types is crucial for the study of the nervous system. We compiled a cross-laboratory database of mouse brain cell type-specific transcriptomes from 36 major cell types from across the mammalian brain using rigorously curated published data from pooled cell type microarray and single-cell RNA-sequencing (RNA-seq) studies. We used these data to identify cell type-specific marker genes, discovering a substantial number of novel markers, many of which we validated using computational and experimental approaches. We further demonstrate that summarized expression of marker gene sets (MGSs) in bulk tissue data can be used to estimate the relative cell type abundance across samples. To facilitate use of this expanding resource, we provide a user-friendly web interface at www.neuroexpresso.org. PMID:29204516

  19. Cross-Coupled Control for All-Terrain Rovers

    PubMed Central

    Reina, Giulio

    2013-01-01

    Mobile robots are increasingly being used in challenging outdoor environments for applications that include construction, mining, agriculture, military and planetary exploration. In order to accomplish the planned task, it is critical that the motion control system ensure accuracy and robustness. The achievement of high performance on rough terrain is tightly connected with the minimization of vehicle-terrain dynamics effects such as slipping and skidding. This paper presents a cross-coupled controller for a 4-wheel-drive/4-wheel-steer robot, which optimizes the wheel motors' control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating on agricultural terrain, are presented to validate the system. It is shown that the proposed approach is effective in reducing slippage and vehicle posture errors. PMID:23299625

  20. The New Parent Checklist: A Tool to Promote Parental Reflection.

    PubMed

    Keys, Elizabeth M; McNeil, Deborah A; Wallace, Donna A; Bostick, Jason; Churchill, A Jocelyn; Dodd, Maureen M

    To design and establish content and face validity of an evidence-informed tool that promotes parental self-reflection during the transition to parenthood. The New Parent Checklist was developed using a three-phase sequential approach: Phase 1, a scoping review and expert consultation to develop and refine a prototype tool; Phase 2, content analysis of parent focus groups; and Phase 3, assessment of utility in a cross-sectional sample of parents completing the New Parent Checklist and a questionnaire. The initial version of the checklist was considered by experts to contain key information. Focus group participants found it useful, appropriate, and nonjudgmental, and offered suggestions to enhance readability and utility, as well as face and content validity. In the cross-sectional survey, 83% of the participants rated the New Parent Checklist as "helpful" or "very helpful" and 90% found the New Parent Checklist "very easy" to use. Open-ended survey responses included predominantly positive feedback. Notable differences existed for some items based on respondents' first language, age, and sex. Results and feedback from all three phases informed the current version, available for download online. The New Parent Checklist is a comprehensive evidence-informed self-reflective tool with promising content and face validity. Depending on parental characteristics and infant age, certain items of the New Parent Checklist have particular utility but may also require further adaptation and testing. Local resources for information and/or support are included in the tool and could be easily adapted by other regions to incorporate their own local resources.

  1. Development of a QSAR Model for Thyroperoxidase Inhibition ...

    EPA Pesticide Factsheets

    Thyroid hormones (THs) are involved in multiple biological processes and are critical modulators of fetal development. Even moderate changes in maternal or fetal TH levels can produce irreversible neurological deficits in children, such as lower IQ. The enzyme thyroperoxidase (TPO) plays a key role in the synthesis of THs, and inhibition of TPO by xenobiotics results in decreased TH synthesis. Recently, a high-throughput screening assay for TPO inhibition (AUR-TPO) was developed and used to test the ToxCast Phase I and II chemicals. In the present study, we used the results from AUR-TPO to develop a Quantitative Structure-Activity Relationship (QSAR) model for TPO inhibition. The training set consisted of 898 discrete organic chemicals: 134 inhibitors and 764 non-inhibitors. A five times two-fold cross-validation of the model was performed, yielding a balanced accuracy of 78.7%. More recently, an additional ~800 chemicals were tested in the AUR-TPO assay. These data were used for a blinded external validation of the QSAR model, demonstrating a balanced accuracy of 85.7%. Overall, the cross- and external validation indicate a robust model with high predictive performance. Next, we used the QSAR model to predict 72,526 REACH pre-registered substances. The model could predict 49.5% (35,925) of the substances in its applicability domain, and of these, 8,863 (24.7%) were predicted to be TPO inhibitors. Predictions from this screening can be used in a tiered approach to…
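
    As a rough illustration of the five times two-fold cross-validation scheme and the balanced-accuracy metric reported above, the following Python sketch runs repeated two-fold cross-validation on synthetic stand-in data; the random forest classifier, feature matrix and class labels are illustrative assumptions, not the AUR-TPO data or the authors' QSAR descriptors.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import balanced_accuracy_score
        from sklearn.model_selection import RepeatedStratifiedKFold

        # Synthetic stand-in for a QSAR training set: 898 "chemicals" x 20 descriptors.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(898, 20))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=898) > 1.5).astype(int)

        # Five repeats of two-fold cross-validation ("5 times 2-fold").
        cv = RepeatedStratifiedKFold(n_splits=2, n_repeats=5, random_state=0)
        scores = []
        for train_idx, test_idx in cv.split(X, y):
            model = RandomForestClassifier(n_estimators=200, random_state=0)
            model.fit(X[train_idx], y[train_idx])
            scores.append(balanced_accuracy_score(y[test_idx], model.predict(X[test_idx])))

        # Balanced accuracy averages sensitivity and specificity, which matters here
        # because inhibitors are a small minority of the training set.
        print(f"5x2-fold cross-validated balanced accuracy: {np.mean(scores):.3f}")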

  2. Quantitative structure activity relationship (QSAR) of piperine analogs for bacterial NorA efflux pump inhibitors.

    PubMed

    Nargotra, Amit; Sharma, Sujata; Koul, Jawahir Lal; Sangwan, Pyare Lal; Khan, Inshad Ali; Kumar, Ashwani; Taneja, Subhash Chander; Koul, Surrinder

    2009-10-01

    Quantitative structure activity relationship (QSAR) analysis of piperine analogs as inhibitors of the efflux pump NorA from Staphylococcus aureus has been performed in order to obtain a highly accurate model enabling prediction of S. aureus NorA inhibition by new chemical entities from natural sources as well as synthetic ones. An algorithm based on the genetic function approximation method of variable selection in Cerius2 was used to generate the model. Among several types of descriptors considered in generating the QSAR model (viz. topological, spatial, thermodynamic, information content and E-state indices), three descriptors (the partial negative surface area of the compounds, the area of the molecular shadow in the XZ plane, and the heat of formation of the molecules) resulted in a statistically significant model with r(2)=0.962 and cross-validation parameter q(2)=0.917. The validation of the QSAR models was done by cross-validation, leave-25%-out and external test set prediction. The theoretical approach indicates that the increase in the exposed partial negative surface area increases the inhibitory activity of the compound against NorA whereas the area of the molecular shadow in the XZ plane is inversely proportional to the inhibitory activity. This model also explains the relationship of the heat of formation of the compound with the inhibitory activity. The model is not only able to predict the activity of new compounds but also explains the important regions in the molecules in a quantitative manner.
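
    A worked sketch of the cross-validated q(2) statistic mentioned above, in the common form q² = 1 − PRESS/TSS, using leave-25%-out (four-fold) splits on synthetic data; the linear model and descriptor values are placeholders, not the Cerius2 descriptors or the published coefficients.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import KFold

        # Synthetic compounds: three descriptors per compound and a continuous activity.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 3))
        y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=60)

        # PRESS: squared prediction errors accumulated over left-out 25% blocks.
        press, tss = 0.0, np.sum((y - y.mean()) ** 2)
        for train_idx, test_idx in KFold(n_splits=4, shuffle=True, random_state=1).split(X):
            model = LinearRegression().fit(X[train_idx], y[train_idx])
            press += np.sum((y[test_idx] - model.predict(X[test_idx])) ** 2)

        q2 = 1.0 - press / tss
        print(f"leave-25%-out q^2 = {q2:.3f}")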

  3. Development of a five-year mortality model in systemic sclerosis patients by different analytical approaches.

    PubMed

    Beretta, Lorenzo; Santaniello, Alessandro; Cappiello, Francesca; Chawla, Nitesh V; Vonk, Madelon C; Carreira, Patricia E; Allanore, Yannick; Popa-Diaconu, D A; Cossu, Marta; Bertolotti, Francesca; Ferraccioli, Gianfranco; Mazzone, Antonino; Scorza, Raffaella

    2010-01-01

    Systemic sclerosis (SSc) is a multiorgan disease with high mortality rates. Several clinical features have been associated with poor survival in different populations of SSc patients, but no clear and reproducible prognostic model to assess individual survival prediction in scleroderma patients has ever been developed. We used Cox regression and three data mining-based classifiers (Naïve Bayes Classifier [NBC], Random Forests [RND-F] and logistic regression [Log-Reg]) to develop a robust and reproducible 5-year prognostic model. All the models were built and internally validated by means of 5-fold cross-validation on a population of 558 Italian SSc patients. Their predictive ability and capability of generalisation was then tested on an independent population of 356 patients recruited from 5 external centres and finally compared to the predictions made by two SSc domain experts on the same population. The NBC outperformed the Cox-based classifier and the other data mining algorithms after internal cross-validation (area under the receiver operating characteristic curve, AUROC: NBC=0.759; RND-F=0.736; Log-Reg=0.754 and Cox=0.724). The NBC also showed a better trade-off between sensitivity and specificity (i.e., balanced accuracy, BA) than the Cox-based classifier when tested on an independent population of SSc patients (BA: NBC=0.769, Cox=0.622). The NBC was also superior to domain experts in predicting 5-year survival in this population (AUROC=0.829 vs. AUROC=0.788 and BA=0.769 vs. BA=0.67). We provide a model to make consistent 5-year prognostic predictions in SSc patients. Its internal validity, as well as its capability of generalisation and reduced uncertainty compared to human experts, support its use at the bedside. Available at: http://www.nd.edu/~nchawla/survival.xls.

  4. The development and validation of a clinical prediction model to determine the probability of MODY in patients with young-onset diabetes.

    PubMed

    Shields, B M; McDonald, T J; Ellard, S; Campbell, M J; Hyde, C; Hattersley, A T

    2012-05-01

    Diagnosing MODY is difficult. To date, selection for molecular genetic testing for MODY has used discrete cut-offs of limited clinical characteristics with varying sensitivity and specificity. We aimed to use multiple, weighted, clinical criteria to determine an individual's probability of having MODY, as a crucial tool for rational genetic testing. We developed prediction models using logistic regression on data from 1,191 patients with MODY (n = 594), type 1 diabetes (n = 278) and type 2 diabetes (n = 319). Model performance was assessed by receiver operating characteristic (ROC) curves, cross-validation and validation in a further 350 patients. The models defined an overall probability of MODY using a weighted combination of the most discriminative characteristics. For MODY, compared with type 1 diabetes, these were: lower HbA(1c), parent with diabetes, female sex and older age at diagnosis. MODY was discriminated from type 2 diabetes by: lower BMI, younger age at diagnosis, female sex, lower HbA(1c), parent with diabetes, and not being treated with oral hypoglycaemic agents or insulin. Both models showed excellent discrimination (c-statistic = 0.95 and 0.98, respectively), low rates of cross-validated misclassification (9.2% and 5.3%), and good performance on the external test dataset (c-statistic = 0.95 and 0.94). Using the optimal cut-offs, the probability models improved the sensitivity (91% vs 72%) and specificity (94% vs 91%) for identifying MODY compared with standard criteria of diagnosis <25 years and an affected parent. The models are now available online at www.diabetesgenes.org . We have developed clinical prediction models that calculate an individual's probability of having MODY. This allows an improved and more rational approach to determine who should have molecular genetic testing.
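
    To make the idea of a weighted, multi-criteria probability model concrete, the sketch below fits a logistic regression to simulated clinical variables and reports a cross-validated c-statistic; the variables, coefficients and sample are invented for illustration and do not reproduce the published MODY calculator.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(2)
        n = 500
        X = np.column_stack([
            rng.normal(50, 10, n),    # illustrative HbA1c-like marker
            rng.integers(0, 2, n),    # parent with diabetes (0/1)
            rng.integers(0, 2, n),    # female sex (0/1)
            rng.normal(20, 6, n),     # age at diagnosis
        ])
        logit = -4 - 0.05 * X[:, 0] + 1.2 * X[:, 1] + 0.4 * X[:, 2] + 0.08 * X[:, 3]
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))       # simulated diagnosis label

        # Cross-validated predicted probabilities, then the c-statistic (AUROC) on those.
        model = LogisticRegression(max_iter=1000)
        proba = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
        print(f"cross-validated c-statistic: {roc_auc_score(y, proba):.3f}")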

  5. Pharmacological Validation of Candidate Causal Sleep Genes Identified in an N2 Cross

    PubMed Central

    Brunner, Joseph I.; Gotter, Anthony L.; Millstein, Joshua; Garson, Susan; Binns, Jacquelyn; Fox, Steven V.; Savitz, Alan T.; Yang, He S.; Fitzpatrick, Karrie; Zhou, Lili; Owens, Joseph R.; Webber, Andrea L.; Vitaterna, Martha H.; Kasarskis, Andrew; Uebele, Victor N.; Turek, Fred; Renger, John J.; Winrow, Christopher J.

    2013-01-01

    Despite the substantial impact of sleep disturbances on human health and the many years of study dedicated to understanding sleep pathologies, the underlying genetic mechanisms that govern sleep and wake largely remain unknown. Recently, we completed large-scale genetic and gene expression analyses in a segregating inbred mouse cross and identified candidate causal genes that regulate the mammalian sleep-wake cycle, across multiple traits including total sleep time, amounts of REM and non-REM sleep, sleep bout duration, and sleep fragmentation. Here we describe a novel approach toward validating candidate causal genes, while also identifying potential targets for sleep-related indications. Select small molecule antagonists and agonists were used to interrogate candidate causal gene function in rodent sleep polysomnography assays to determine impact on overall sleep architecture and to evaluate alignment with associated sleep-wake traits. Significant effects on sleep architecture were observed in validation studies using compounds targeting the muscarinic acetylcholine receptor M3 subunit (Chrm3) (wake promotion), nicotinic acetylcholine receptor alpha4 subunit (Chrna4) (wake promotion), dopamine receptor D5 subunit (Drd5) (sleep induction), serotonin 1D receptor (Htr1d) (altered REM fragmentation), glucagon-like peptide-1 receptor (Glp1r) (light sleep promotion and reduction of deep sleep), and the calcium channel, voltage-dependent, T type, alpha 1I subunit (Cacna1i) (increased slow-wave sleep bout duration). Taken together, these results show the complexity of genetic components that regulate sleep-wake traits and highlight the importance of evaluating this complex behavior at a systems level. Pharmacological validation of genetically identified putative targets provides a rapid alternative to generating knockout or transgenic animal models, and may ultimately lead towards new therapeutic opportunities. PMID:22091728

  6. How to test validity in orthodontic research: a mixed dentition analysis example.

    PubMed

    Donatelli, Richard E; Lee, Shin-Jae

    2015-02-01

    The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
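
    A small sketch of the comparison described above, contrasting leave-one-out cross-validation with a single hold-out split for a regression model on a limited sample; the data are simulated and the linear model is a stand-in for the mixed dentition prediction equation.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_absolute_error
        from sklearn.model_selection import LeaveOneOut, train_test_split

        rng = np.random.default_rng(3)
        X = rng.normal(size=(40, 2))
        y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.4, size=40)

        # Leave-one-out: every sample is predicted by a model trained on all the others.
        loo_pred = np.empty_like(y)
        for train_idx, test_idx in LeaveOneOut().split(X):
            loo_pred[test_idx] = LinearRegression().fit(X[train_idx], y[train_idx]).predict(X[test_idx])

        # Simple hold-out: a single 70/30 split, typically a noisier estimate for small samples.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)
        holdout_err = mean_absolute_error(y_te, LinearRegression().fit(X_tr, y_tr).predict(X_te))

        print(f"LOO error: {mean_absolute_error(y, loo_pred):.3f}  hold-out error: {holdout_err:.3f}")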

  7. Joint use of over- and under-sampling techniques and cross-validation for the development and assessment of prediction models.

    PubMed

    Blagus, Rok; Lusa, Lara

    2015-11-04

    Prediction models are used in clinical research to develop rules that can be used to accurately predict the outcome of patients based on some of their characteristics. They represent a valuable tool in the decision-making process of clinicians and health policy makers, as they enable them to estimate the probability that patients have or will develop a disease, will respond to a treatment, or that their disease will recur. The interest devoted to prediction models in the biomedical community has been growing in the last few years. Often the data used to develop the prediction models are class-imbalanced, as only a few patients experience the event (and therefore belong to the minority class). Prediction models developed using class-imbalanced data tend to achieve sub-optimal predictive accuracy in the minority class. This problem can be diminished by using sampling techniques aimed at balancing the class distribution. These techniques include under- and oversampling, where a fraction of the majority class samples are retained in the analysis or new samples from the minority class are generated. The correct assessment of how the prediction model is likely to perform on independent data is of crucial importance; in the absence of an independent data set, cross-validation is normally used. While the importance of correct cross-validation is well documented in the biomedical literature, the challenges posed by the joint use of sampling techniques and cross-validation have not been addressed. We show that care must be taken to ensure that cross-validation is performed correctly on sampled data, and that the risk of overestimating the predictive accuracy is greater when oversampling techniques are used. Examples based on the re-analysis of real datasets and simulation studies are provided. We identify results from the biomedical literature where cross-validation was performed incorrectly and where we expect that the performance of oversampling techniques was heavily overestimated.
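
    The sketch below illustrates the central caution of this paper: random oversampling applied to the whole data set before splitting lets duplicated minority samples appear in both training and test folds and inflates the cross-validated estimate, whereas resampling inside each training fold does not. The data, classifier and oversampling routine are simple stand-ins, not the authors' examples.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import StratifiedKFold

        rng = np.random.default_rng(4)
        X = rng.normal(size=(300, 5))
        y = (X[:, 0] + rng.normal(scale=2.0, size=300) > 2.2).astype(int)   # imbalanced outcome

        def oversample(X_part, y_part, rng):
            # Naive random oversampling: duplicate minority samples until classes balance.
            minority = np.flatnonzero(y_part == 1)
            extra = rng.choice(minority, size=(y_part == 0).sum() - minority.size, replace=True)
            idx = np.concatenate([np.arange(y_part.size), extra])
            return X_part[idx], y_part[idx]

        def auc_correct():
            # Correct: oversample only the training fold; test folds stay untouched.
            aucs = []
            for tr, te in StratifiedKFold(5, shuffle=True, random_state=4).split(X, y):
                X_tr, y_tr = oversample(X[tr], y[tr], rng)
                clf = RandomForestClassifier(n_estimators=100, random_state=4).fit(X_tr, y_tr)
                aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
            return np.mean(aucs)

        def auc_leaky():
            # WRONG: oversample first, then cross-validate the already-resampled data,
            # so copies of the same minority sample can sit in both train and test folds.
            X_all, y_all = oversample(X, y, rng)
            aucs = []
            for tr, te in StratifiedKFold(5, shuffle=True, random_state=4).split(X_all, y_all):
                clf = RandomForestClassifier(n_estimators=100, random_state=4).fit(X_all[tr], y_all[tr])
                aucs.append(roc_auc_score(y_all[te], clf.predict_proba(X_all[te])[:, 1]))
            return np.mean(aucs)

        print(f"correct CV AUC: {auc_correct():.3f}   leaky CV AUC: {auc_leaky():.3f}")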

  8. [Practical aspects for minimizing errors in the cross-cultural adaptation and validation of quality of life questionnaires].

    PubMed

    Lauffer, A; Solé, L; Bernstein, S; Lopes, M H; Francisconi, C F

    2013-01-01

    The development and validation of questionnaires for evaluating quality of life (QoL) has become an important area of research. However, there is a proliferation of non-validated measuring instruments in the health setting that do not contribute to advances in scientific knowledge. To present, through the analysis of available validated questionnaires, a checklist of the practical aspects of how to carry out the cross-cultural adaptation of QoL questionnaires (generic or disease-specific) so that no step is overlooked in the evaluation process, and thus help prevent the elaboration of insufficient or incomplete validations. We have consulted basic textbooks and PubMed databases using the following keywords: quality of life, questionnaires, and gastroenterology, confined to «validation studies» in English, Spanish, and Portuguese, and with no time limit, for the purpose of analyzing the translation and validation of the questionnaires available through the Mapi Institute and PROQOLID websites. A checklist is presented to aid in the planning and carrying out of the cross-cultural adaptation of QoL questionnaires, in conjunction with a glossary of key terms in the area of knowledge. The acronym DSTAC was used, which refers to each of the 5 stages involved in the recommended procedure. In addition, we provide a table of the QoL instruments that have been validated into Spanish. This article provides information on how to adapt QoL questionnaires from a cross-cultural perspective, as well as how to minimize common errors. Copyright © 2012 Asociación Mexicana de Gastroenterología. Published by Masson Doyma México S.A. All rights reserved.

  9. Predicting risk of substantial weight gain in German adults-a multi-center cohort approach.

    PubMed

    Bachlechner, Ursula; Boeing, Heiner; Haftenberger, Marjolein; Schienkiewitz, Anja; Scheidt-Nave, Christa; Vogt, Susanne; Thorand, Barbara; Peters, Annette; Schipf, Sabine; Ittermann, Till; Völzke, Henry; Nöthlings, Ute; Neamat-Allah, Jasmine; Greiser, Karin-Halina; Kaaks, Rudolf; Steffen, Annika

    2017-08-01

    A risk-targeted prevention strategy may efficiently utilize limited resources available for prevention of overweight and obesity. Likewise, more efficient intervention trials could be designed if selection of subjects was based on risk. The aim of the study was to develop a risk score predicting substantial weight gain among German adults. We developed the risk score using information on 15 socio-demographic, dietary and lifestyle factors from 32 204 participants of five population-based German cohort studies. Substantial weight gain was defined as gaining ≥10% of weight between baseline and follow-up (>6 years apart). The cases were censored according to the theoretical point in time when the threshold of 10% baseline-based weight gain was crossed assuming linearity of weight gain. Beta coefficients derived from proportional hazards regression were used as weights to compute the risk score as a linear combination of the predictors. Cross-validation was used to evaluate the score's discriminatory accuracy. The cross-validated c index (95% CI) was 0.71 (0.67-0.75). A cutoff value of ≥475 score points yielded a sensitivity of 71% and a specificity of 63%. The corresponding positive and negative predictive values were 10.4% and 97.6%, respectively. The proposed risk score may support healthcare providers in decision making and referral and facilitate an efficient selection of subjects into intervention trials. © The Author 2016. Published by Oxford University Press on behalf of the European Public Health Association.
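
    A toy sketch of how a points-based risk score of this kind is applied: predictors are weighted by coefficients (in the study, beta coefficients from proportional hazards regression), summed into a score, and a cutoff is evaluated by sensitivity and specificity. All numbers below, including the placement of the 475-point cutoff, are invented for illustration and are not the published weights.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 1000
        age = rng.normal(45, 12, n)
        smoker = rng.integers(0, 2, n)
        activity = rng.normal(0, 1, n)
        # Simulated outcome: "substantial weight gain" over follow-up.
        gained = (0.02 * (50 - age) + 0.6 * smoker - 0.4 * activity
                  + rng.normal(scale=1.0, size=n)) > 0.8

        # Weights would normally be beta coefficients from a proportional hazards model.
        betas = {"age": -2.0, "smoker": 60.0, "activity": -40.0}
        score = (400 + betas["age"] * (age - 45)
                 + betas["smoker"] * smoker
                 + betas["activity"] * activity)

        cutoff = 475
        flagged = score >= cutoff
        sensitivity = (flagged & gained).sum() / gained.sum()
        specificity = (~flagged & ~gained).sum() / (~gained).sum()
        print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")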

  10. Predicting risk of substantial weight gain in German adults—a multi-center cohort approach

    PubMed Central

    Bachlechner, Ursula; Boeing, Heiner; Haftenberger, Marjolein; Schienkiewitz, Anja; Scheidt-Nave, Christa; Vogt, Susanne; Thorand, Barbara; Peters, Annette; Schipf, Sabine; Ittermann, Till; Völzke, Henry; Nöthlings, Ute; Neamat-Allah, Jasmine; Greiser, Karin-Halina; Kaaks, Rudolf

    2017-01-01

    Abstract Background A risk-targeted prevention strategy may efficiently utilize limited resources available for prevention of overweight and obesity. Likewise, more efficient intervention trials could be designed if selection of subjects was based on risk. The aim of the study was to develop a risk score predicting substantial weight gain among German adults. Methods We developed the risk score using information on 15 socio-demographic, dietary and lifestyle factors from 32 204 participants of five population-based German cohort studies. Substantial weight gain was defined as gaining ≥10% of weight between baseline and follow-up (>6 years apart). The cases were censored according to the theoretical point in time when the threshold of 10% baseline-based weight gain was crossed assuming linearity of weight gain. Beta coefficients derived from proportional hazards regression were used as weights to compute the risk score as a linear combination of the predictors. Cross-validation was used to evaluate the score’s discriminatory accuracy. Results The cross-validated c index (95% CI) was 0.71 (0.67–0.75). A cutoff value of ≥475 score points yielded a sensitivity of 71% and a specificity of 63%. The corresponding positive and negative predictive values were 10.4% and 97.6%, respectively. Conclusions The proposed risk score may support healthcare providers in decision making and referral and facilitate an efficient selection of subjects into intervention trials. PMID:28013243

  11. Modeling spanwise nonuniformity in the cross-sectional analysis of composite beams

    NASA Astrophysics Data System (ADS)

    Ho, Jimmy Cheng-Chung

    Spanwise nonuniformity effects are modeled in the cross-sectional analysis of beam theory. This modeling adheres to an established numerical framework on cross-sectional analysis of uniform beams with arbitrary cross-sections. This framework is based on two concepts: decomposition of the rotation tensor and the variational-asymptotic method. Allowance of arbitrary materials and geometries in the cross-section is from discretization of the warping field by finite elements. By this approach, dimensional reduction from three-dimensional elasticity is performed rigorously and the sectional strain energy is derived to be asymptotically-correct. Elastic stiffness matrices are derived for inputs into the global beam analysis. Recovery relations for the displacement, stress, and strain fields are also derived with care to be consistent with the energy. Spanwise nonuniformity effects appear in the form of pointwise and sectionwise derivatives, which are approximated by finite differences. The formulation also accounts for the effects of spanwise variations in initial twist and/or curvature. A linearly tapered isotropic strip is analyzed to demonstrate spanwise nonuniformity effects on the cross-sectional analysis. The analysis is performed analytically by the variational-asymptotic method. Results from beam theory are validated against solutions from plane stress elasticity. These results demonstrate that spanwise nonuniformity effects become significant as the rate at which the cross-sections vary increases. The modeling of transverse shear modes of deformation is accomplished by transforming the strain energy into generalized Timoshenko form. Approximations in this transformation procedure from previous works, when applied to uniform beams, are identified. The approximations are not used in the present work so as to retain more accuracy. Comparison of present results with those previously published shows that these approximations sometimes change the results measurably and thus are inappropriate. Static and dynamic results, from the global beam analysis, are calculated to show the differences between using stiffness constants from previous works and the present work. As a form of validation of the transformation procedure, calculations from the global beam analysis of initially twisted isotropic beams from using curvilinear coordinate axes featuring twist are shown to be equivalent to calculations using Cartesian coordinates.

  12. Cross-cultural adaptation and validation of the neonatal/infant Braden Q risk assessment scale.

    PubMed

    de Lima, Edson Luiz; de Brito, Maria José Azevedo; de Souza, Diba Maria Sebba Tosta; Salomé, Geraldo Magela; Ferreira, Lydia Masako

    2016-02-01

    To translate into Brazilian Portuguese and cross-culturally adapt the Neonatal/Infant Braden Q Risk Assessment Scale (Neonatal/Infant Braden Q Scale), and test the psychometric properties, reproducibility and validity of the instrument. There is a lack of studies on the development of pressure ulcers in children, especially in neonates. Thirty professionals participated in the cross-cultural adaptation of the Brazilian-Portuguese version of the scale. Fifty neonates of both sexes were assessed between July 2013 and June 2014. Reliability and reproducibility were tested in 20 neonates and construct validity was measured by correlating the Neonatal/Infant Braden Q Scale with the Braden Q Risk Assessment Scale (Braden Q Scale). Discriminant validity was assessed by comparing the scores of neonates with and without ulcers. The scale showed inter-rater reliability (ICC = 0.98; P < 0.001) and intra-rater reliability (ICC = 0.79; P < 0.001). A strong correlation was found between the Neonatal/Infant Braden Q Scale and Braden Q Scale (r = 0.96; P < 0.001). The cross-culturally adapted Brazilian version of the Neonatal/Infant Braden Q Scale is a reliable instrument, showing face, content and construct validity. Copyright © 2015 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved.

  13. Bayesian cross-entropy methodology for optimal design of validation experiments

    NASA Astrophysics Data System (ADS)

    Jiang, X.; Mahadevan, S.

    2006-07-01

    An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
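
    A compressed sketch of the core computation described above: an expected cross entropy between an assumed experimental-output density and a model-prediction density is treated as the objective and searched over the input setting with an annealing optimiser. The Gaussian forms, the toy response functions and the use of SciPy's dual_annealing are illustrative assumptions, not the authors' formulation.

        import numpy as np
        from scipy.optimize import dual_annealing
        from scipy.stats import norm

        def cross_entropy_utility(x):
            # Cross entropy H(p, q) = -integral p(t) log q(t) dt between the assumed
            # experimental-output density p and the model-prediction density q at input x.
            grid = np.linspace(-15.0, 15.0, 3001)
            dt = grid[1] - grid[0]
            p = norm.pdf(grid, loc=3.2 * np.sin(x[0]) + 0.45 * x[0], scale=1.0)   # "experiment"
            q = norm.pdf(grid, loc=3.0 * np.sin(x[0]) + 0.50 * x[0], scale=0.8)   # "model"
            return float(-np.sum(p * np.log(q + 1e-300)) * dt)

        # Annealing-style search for the input setting that minimises the expected
        # cross entropy, i.e. where model prediction and experiment agree most closely.
        result = dual_annealing(cross_entropy_utility, bounds=[(0.0, 6.0)], seed=6)
        print(f"selected experimental input: x = {result.x[0]:.3f}, "
              f"cross entropy = {result.fun:.3f}")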

  14. A Systematic Approach to Predicting Spring Force for Sagittal Craniosynostosis Surgery.

    PubMed

    Zhang, Guangming; Tan, Hua; Qian, Xiaohua; Zhang, Jian; Li, King; David, Lisa R; Zhou, Xiaobo

    2016-05-01

    Spring-assisted surgery (SAS) can effectively treat scaphocephaly by reshaping crania with the appropriate spring force. However, it is difficult to accurately estimate spring force without considering biomechanical properties of tissues. This study presents and validates a reliable system to accurately predict the spring force for sagittal craniosynostosis surgery. The authors randomly chose 23 patients who underwent SAS and had been followed for at least 2 years. An elastic model was designed to characterize the biomechanical behavior of calvarial bone tissue for each individual. After simulating the contact force on accurate position of the skull strip with the springs, the finite element method was applied to calculating the stress of each tissue node based on the elastic model. A support vector regression approach was then used to model the relationships between biomechanical properties generated from spring force, bone thickness, and the change of cephalic index after surgery. Therefore, for a new patient, the optimal spring force can be predicted based on the learned model with virtual spring simulation and dynamic programming approach prior to SAS. Leave-one-out cross-validation was implemented to assess the accuracy of our prediction. As a result, the mean prediction accuracy of this model was 93.35%, demonstrating the great potential of this model as a useful adjunct for preoperative planning tool.
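
    A minimal sketch of the validation scheme reported above: support vector regression with leave-one-out cross-validation on a sample of 23. The two predictors and the target are simulated placeholders; the study's finite-element-derived features and actual spring-force data are not reproduced here.

        import numpy as np
        from sklearn.model_selection import LeaveOneOut
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(7)
        n = 23                                               # one record per patient
        X = np.column_stack([rng.normal(8.0, 2.0, n),        # simulated spring force
                             rng.normal(2.5, 0.5, n)])       # simulated bone thickness
        y = 0.6 * X[:, 0] - 1.1 * X[:, 1] + rng.normal(scale=0.5, size=n)   # simulated change in cephalic index

        # Leave-one-out: each patient is predicted from a model trained on the other 22.
        pred = np.empty(n)
        for tr, te in LeaveOneOut().split(X):
            model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
            pred[te] = model.fit(X[tr], y[tr]).predict(X[te])

        print(f"leave-one-out mean absolute error: {np.mean(np.abs(y - pred)):.3f}")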

  15. Cross-cultural validity of the ABILOCO questionnaire for individuals with stroke, based on Rasch analysis.

    PubMed

    Avelino, Patrick Roberto; Magalhães, Lívia Castro; Faria-Fortini, Iza; Basílio, Marluce Lopes; Menezes, Kênia Kiefer Parreiras; Teixeira-Salmela, Luci Fuscaldi

    2018-06-01

    The purpose of this study was to evaluate the cross-cultural validity of the Brazilian version of the ABILOCO questionnaire for stroke subjects. Cross-cultural adaptation of the original English version of the ABILOCO to the Brazilian-Portuguese language followed standardized procedures. The adapted version was administered to 136 stroke subjects and its measurement properties were assessed using Rash analysis. Cross-cultural validity was based on cultural invariance analyses. Goodness-of-fit analysis revealed one misfitting item. The principal component analysis of the residuals showed that the first dimension explained 45% of the variance in locomotion ability; however, the eigenvalue was 1.92. The ABILOCO-Brazil divided the sample into two levels of ability and the items into about seven levels of difficulty. The item-person map showed some ceiling effect. Cultural invariance analyses revealed that although there were differences in the item calibrations between the ABILOCO-original and ABILOCO-Brazil, they did not impact the measures of locomotion ability. The ABILOCO-Brazil demonstrated satisfactory measurement properties to be used within both clinical and research contexts in Brazil, as well cross-cultural validity to be used in international/multicentric studies. However, the presence of ceiling effect suggests that it may not be appropriate for the assessment of individuals with high levels of locomotion ability. Implications for rehabilitation Self-report measures of locomotion ability are clinically important, since they describe the abilities of the individuals within real life contexts. The ABILOCO questionnaire, specific for stroke survivors, demonstrated satisfactory measurement properties, but may not be most appropriate to assess individuals with high levels of locomotion ability The results of the cross-cultural validity showed that the ABILOCO-Original and the ABILOCO-Brazil calibrations may be used interchangeable.

  16. Chronic subdural hematoma: Surgical management and outcome in 986 cases: A classification and regression tree approach

    PubMed Central

    Rovlias, Aristedis; Theodoropoulos, Spyridon; Papoutsakis, Dimitrios

    2015-01-01

    Background: Chronic subdural hematoma (CSDH) is one of the most common clinical entities in daily neurosurgical practice, and it carries a most favorable prognosis. However, because of the advanced age and medical problems of patients, surgical therapy is frequently associated with various complications. This study evaluated the clinical features, radiological findings, and neurological outcome in a large series of patients with CSDH. Methods: A classification and regression tree (CART) technique was employed in the analysis of data from 986 patients who were operated on at Asclepeion General Hospital of Athens from January 1986 to December 2011. Burr hole evacuation with closed-system drainage has been the operative technique of first choice at our institution for 29 consecutive years. A total of 27 prognostic factors were examined to predict the outcome at 3 months postoperatively. Results: Our results indicated that neurological status on admission was the best predictor of outcome. With regard to the other data, age, brain atrophy, thickness and density of hematoma, subdural accumulation of air, and antiplatelet and anticoagulant therapy were found to correlate significantly with prognosis. The overall cross-validated predictive accuracy of the CART model was 85.34%, with a cross-validated relative error of 0.326. Conclusions: Methodologically, the CART technique is quite different from the more commonly used methods, with the primary benefit of illustrating the important prognostic variables as related to outcome. Since the ideal therapy for the treatment of CSDH is still under debate, this technique may prove useful in developing new therapeutic strategies and approaches for patients with CSDH. PMID:26257985
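
    For readers unfamiliar with the technique, the sketch below grows a small classification tree on simulated admission variables and reports a cross-validated predictive accuracy, mirroring the kind of CART analysis described above; the variables, effect sizes and tree settings are illustrative only and do not reproduce the study's data.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(8)
        n = 986
        X = np.column_stack([
            rng.integers(3, 16, n),        # simulated admission neurological score
            rng.normal(75, 10, n),         # simulated age
            rng.normal(18, 5, n),          # simulated hematoma thickness (mm)
        ])
        logit = 0.5 * (X[:, 0] - 9) - 0.05 * (X[:, 1] - 75) - 0.03 * (X[:, 2] - 18)
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))        # favourable outcome yes/no

        # A shallow CART-style tree keeps the splits interpretable as prognostic rules.
        tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20, random_state=8)
        acc = cross_val_score(tree, X, y, cv=10).mean()
        print(f"cross-validated predictive accuracy: {acc:.3f}")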

  17. Classification of Focal and Non Focal Epileptic Seizures Using Multi-Features and SVM Classifier.

    PubMed

    Sriraam, N; Raghu, S

    2017-09-02

    Identifying epileptogenic zones prior to surgery is an essential step in treating patients with pharmacoresistant focal epilepsy. Electroencephalogram (EEG) is a significant measurement benchmark to assess patients suffering from epilepsy. This paper investigates the application of multi-features derived from different domains to recognize the focal and non focal epileptic seizures obtained from pharmacoresistant focal epilepsy patients from the Bern-Barcelona database. From the dataset, five different classification tasks were formed. A total of 26 features were extracted from focal and non focal EEG. Significant features were selected using the Wilcoxon rank sum test with p < 0.05 and |z| > 1.96 at the 95% significance level. It was hypothesized that removing outliers improves the classification accuracy. Tukey's range test was adopted for pruning outliers from the feature set. Finally, 21 features were classified using an optimized support vector machine (SVM) classifier with 10-fold cross validation. A Bayesian optimization technique was adopted to minimize the cross-validation loss. From the simulation results, the highest sensitivity, specificity, and classification accuracy achieved were 94.56%, 89.74%, and 92.15%, respectively, which was better than the state-of-the-art approaches. Further, it was observed that the classification accuracy improved from 80.2% with outliers to 92.15% without outliers. The classifier performance metrics ensure the suitability of the proposed multi-features with the optimized SVM classifier. It can be concluded that the proposed approach can be applied for recognition of focal EEG signals to localize epileptogenic zones.
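
    A compact sketch of the pipeline described above: Wilcoxon rank-sum feature screening followed by an RBF-SVM evaluated with 10-fold cross-validation. The EEG features are simulated, and, unlike the abstract (which screens before validation), the screening here is refit inside each fold to avoid selection bias; the Bayesian hyperparameter search is omitted.

        import numpy as np
        from scipy.stats import ranksums
        from sklearn.model_selection import StratifiedKFold
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(9)
        X = rng.normal(size=(400, 26))                       # 26 candidate features per segment
        y = rng.integers(0, 2, 400)                          # focal vs non focal label
        X[y == 1, :5] += 0.8                                  # only 5 features carry signal

        accs = []
        for tr, te in StratifiedKFold(10, shuffle=True, random_state=9).split(X, y):
            # Rank-sum screening on the training fold only.
            pvals = np.array([ranksums(X[tr][y[tr] == 0, j], X[tr][y[tr] == 1, j]).pvalue
                              for j in range(X.shape[1])])
            keep = pvals < 0.05                               # retain significant features
            model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
            model.fit(X[tr][:, keep], y[tr])
            accs.append(model.score(X[te][:, keep], y[te]))

        print(f"10-fold cross-validated accuracy: {np.mean(accs):.3f}")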

  18. Four not six: Revealing culturally common facial expressions of emotion.

    PubMed

    Jack, Rachael E; Sun, Wei; Delis, Ioannis; Garrod, Oliver G B; Schyns, Philippe G

    2016-06-01

    As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data questions the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. The Causal Meaning of Genomic Predictors and How It Affects Construction and Comparison of Genome-Enabled Selection Models

    PubMed Central

    Valente, Bruno D.; Morota, Gota; Peñagaricano, Francisco; Gianola, Daniel; Weigel, Kent; Rosa, Guilherme J. M.

    2015-01-01

    The term “effect” in additive genetic effect suggests a causal meaning. However, inferences of such quantities for selection purposes are typically viewed and conducted as a prediction task. Predictive ability as tested by cross-validation is currently the most acceptable criterion for comparing models and evaluating new methodologies. Nevertheless, it does not directly indicate if predictors reflect causal effects. Such evaluations would require causal inference methods that are not typical in genomic prediction for selection. This suggests that the usual approach to infer genetic effects contradicts the label of the quantity inferred. Here we investigate if genomic predictors for selection should be treated as standard predictors or if they must reflect a causal effect to be useful, requiring causal inference methods. Conducting the analysis as a prediction or as a causal inference task affects, for example, how covariates of the regression model are chosen, which may heavily affect the magnitude of genomic predictors and therefore selection decisions. We demonstrate that selection requires learning causal genetic effects. However, genomic predictors from some models might capture noncausal signal, providing good predictive ability but poorly representing true genetic effects. Simulated examples are used to show that aiming for predictive ability may lead to poor modeling decisions, while causal inference approaches may guide the construction of regression models that better infer the target genetic effect even when they underperform in cross-validation tests. In conclusion, genomic selection models should be constructed to aim primarily for identifiability of causal genetic effects, not for predictive ability. PMID:25908318

  20. Micro Blowing Simulations Using a Coupled Finite-Volume Lattice-Boltzmann LES Approach

    NASA Technical Reports Server (NTRS)

    Menon, S.; Feiz, H.

    1990-01-01

    Three dimensional large-eddy simulations (LES) of single and multiple jet-in-cross-flow (JICF) are conducted using the 19-bit Lattice Boltzmann Equation (LBE) method coupled with a conventional finite-volume (FV) scheme. In this coupled LBE-FV approach, the LBE-LES is employed to simulate the flow inside the jet nozzles while the FV-LES is used to simulate the crossflow. The key application of this technique is the study of the micro-blowing technique (MBT) for drag control, similar to the recent experiments at NASA/GRC. It is necessary to resolve the flow inside the micro-blowing and suction holes with high resolution without being constrained by the FV time step. The coupled LBE-FV-LES approach achieves this objective in a computationally efficient manner. A single jet in crossflow case is used for validation purposes and the results are compared with experimental data and a full LBE-LES simulation. Good agreement with data is obtained. Subsequently, MBT over a flat plate with porosity of 25% is simulated using 9 jets in a compressible cross flow at a Mach number of 0.4. It is shown that MBT suppresses the near-wall vortices and reduces the skin friction by up to 50 percent. This is in good agreement with experimental data.

  1. Translation, Cross-cultural Adaptation and Psychometric Validation of the Korean-Language Cardiac Rehabilitation Barriers Scale (CRBS-K).

    PubMed

    Baek, Sora; Park, Hee-Won; Lee, Yookyung; Grace, Sherry L; Kim, Won-Seok

    2017-10-01

    To perform a translation and cross-cultural adaptation of the Cardiac Rehabilitation Barriers Scale (CRBS) for use in Korea, followed by psychometric validation. The CRBS was developed to assess patients' perception of the degree to which patient, provider and health system-level barriers affect their cardiac rehabilitation (CR) participation. The CRBS consists of 21 items (barriers to adherence) rated on a 5-point Likert scale. The first phase was to translate and cross-culturally adapt the CRBS to the Korean language. After back-translation, both versions were reviewed by a committee. The face validity was assessed, through semi-structured interviews, in a sample of Korean patients (n=53) with a history of acute myocardial infarction who did not participate in CR. The second phase was to assess the construct and criterion validity of the Korean translation as well as internal reliability, through administration of the translated version in 104 patients, principal component analysis with varimax rotation and cross-referencing against CR use, respectively. The length, readability, and clarity of the questionnaire were rated well, demonstrating face validity. Analysis revealed a six-factor solution, demonstrating construct validity. Cronbach's alpha was greater than 0.65. Barriers rated highest included not knowing about CR and not being contacted by a program. The mean CRBS score was significantly higher among non-attendees (2.71±0.26) than CR attendees (2.51±0.18) (p<0.01). The Korean version of the CRBS has demonstrated face, content and criterion validity, suggesting it may be useful for assessing barriers to CR utilization in Korea.

  2. PCA as a practical indicator of OPLS-DA model reliability.

    PubMed

    Worley, Bradley; Powers, Robert

    Principal Component Analysis (PCA) and Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) are powerful statistical modeling tools that provide insights into separations between experimental groups based on high-dimensional spectral measurements from NMR, MS or other analytical instrumentation. However, when used without validation, these tools may lead investigators to statistically unreliable conclusions. This danger is especially real for Partial Least Squares (PLS) and OPLS, which aggressively force separations between experimental groups. As a result, OPLS-DA is often used as an alternative method when PCA fails to expose group separation, but this practice is highly dangerous. Without rigorous validation, OPLS-DA can easily yield statistically unreliable group separation. A Monte Carlo analysis of PCA group separations and OPLS-DA cross-validation metrics was performed on NMR datasets with statistically significant separations in scores-space. A linearly increasing amount of Gaussian noise was added to each data matrix followed by the construction and validation of PCA and OPLS-DA models. With increasing added noise, the PCA scores-space distance between groups rapidly decreased and the OPLS-DA cross-validation statistics simultaneously deteriorated. A decrease in correlation between the estimated loadings (added noise) and the true (original) loadings was also observed. While the validity of the OPLS-DA model diminished with increasing added noise, the group separation in scores-space remained basically unaffected. Supported by the results of Monte Carlo analyses of PCA group separations and OPLS-DA cross-validation metrics, we provide practical guidelines and cross-validatory recommendations for reliable inference from PCA and OPLS-DA models.
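
    A condensed sketch of the Monte Carlo idea described above: add increasing Gaussian noise to a two-group data matrix and watch the between-group centroid distance in PCA scores space shrink. Only the PCA half of the analysis is illustrated; the OPLS-DA cross-validation metrics would require a PLS implementation and are not reproduced here. The synthetic "spectra" are stand-ins for NMR data.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(10)
        n, p = 40, 200                                # 40 "spectra", 200 variables
        base = rng.normal(size=(n, p))
        base[:20, :10] += 2.0                         # group 1 shifted in 10 variables
        groups = np.array([1] * 20 + [0] * 20)

        for noise_sd in [0.0, 1.0, 2.0, 4.0, 8.0]:
            noisy = base + rng.normal(scale=noise_sd, size=(n, p))
            scores = PCA(n_components=2).fit_transform(noisy)
            d = np.linalg.norm(scores[groups == 1].mean(axis=0) - scores[groups == 0].mean(axis=0))
            print(f"noise sd {noise_sd:4.1f}  centroid distance in scores space: {d:6.2f}")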

  3. Health Service Quality Scale: Brazilian Portuguese translation, reliability and validity.

    PubMed

    Rocha, Luiz Roberto Martins; Veiga, Daniela Francescato; e Oliveira, Paulo Rocha; Song, Elaine Horibe; Ferreira, Lydia Masako

    2013-01-17

    The Health Service Quality Scale is a multidimensional hierarchical scale that is based on an interdisciplinary approach. This instrument was specifically created for measuring health service quality based on marketing and health care concepts. The aim of this study was to translate and culturally adapt the Health Service Quality Scale into Brazilian Portuguese and to assess the validity and reliability of the Brazilian Portuguese version of the instrument. We conducted a cross-sectional, observational study with public health system patients in a Brazilian university hospital. Validity was assessed using Pearson's correlation coefficient to measure the strength of the association between the Brazilian Portuguese version of the instrument and the SERVQUAL scale. Internal consistency was evaluated using Cronbach's alpha coefficient; the intraclass correlation coefficient (ICC) and Pearson's correlation coefficient were used for test-retest reliability. One hundred and sixteen consecutive postoperative patients completed the questionnaire. Pearson's correlation coefficient for validity was 0.20. Cronbach's alpha for the first and second administrations of the final version of the instrument were 0.982 and 0.986, respectively. For test-retest reliability, Pearson's correlation coefficient was 0.89 and ICC was 0.90. The culturally adapted, Brazilian Portuguese version of the Health Service Quality Scale is a valid and reliable instrument to measure health service quality.

  4. Data splitting for artificial neural networks using SOM-based stratified sampling.

    PubMed

    May, R J; Maier, H R; Dandy, G C

    2010-03-01

    Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
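
    A rough sketch of SOM-based stratified sampling with Neyman allocation, assuming the third-party minisom package is available; the SOM grid size, the synthetic function being approximated and the allocation target below are illustrative choices rather than the settings used in the paper.

        import numpy as np
        from minisom import MiniSom

        rng = np.random.default_rng(11)
        X = rng.normal(size=(1000, 4))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)
        data = np.column_stack([X, y])                        # stratify on inputs and output

        # Train a small SOM and assign each sample to the stratum of its winning neuron.
        som = MiniSom(4, 4, data.shape[1], sigma=1.0, learning_rate=0.5, random_seed=11)
        som.train_random(data, 5000)
        strata = np.array([np.ravel_multi_index(som.winner(row), (4, 4)) for row in data])

        # Neyman allocation: sample each stratum in proportion to its size times its spread
        # (rounding may shift the total test-set size slightly).
        n_test = 200
        labels = np.unique(strata)
        weights = np.array([(strata == s).sum() * y[strata == s].std() for s in labels])
        alloc = np.round(n_test * weights / weights.sum()).astype(int)

        test_idx = []
        for s, k in zip(labels, alloc):
            members = np.flatnonzero(strata == s)
            test_idx.extend(rng.choice(members, size=min(k, members.size), replace=False))

        test_idx = np.array(test_idx)
        train_idx = np.setdiff1d(np.arange(len(data)), test_idx)
        print(f"train {train_idx.size} samples, test {test_idx.size} samples")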

  5. Adaptation and testing of instruments to measure cervical cancer screening factors among Vietnamese immigrant women.

    PubMed

    Nguyen-Truong, Connie K Y; Leo, Michael C; Lee-Lin, Frances; Gedaly-Duff, Vivian; Nail, Lillian M; Gregg, Jessica; Le, Tuong Vy; Tran, Tuyen

    2015-05-01

    Vietnamese American women diagnosed with cervical cancer are more likely to have advanced cancer than non-Hispanic White women. We sought to (a) develop a culturally sensitive Vietnamese translation of the Revised Susceptibility, Benefits, and Barriers Scale; Cultural Barriers to Screening Inventory; Confidentiality Issues Scale; and Quality of Care from the Health Care System Scale and (b) examine the psychometric properties. Cross-sectional study with 201 Vietnamese immigrant women from the Portland, Oregon, metropolitan area. We used a community-based participatory research approach and the U.S. Census Bureau's team approach to translation. Cronbach's alpha ranged from .57 to .91. The incremental fit index ranged from .83 to .88. The instruments demonstrated moderate to strong subscale internal consistency. Further research to assess structural validity is needed. Our approaches to translation and psychometric examination support use of the instruments in Vietnamese immigrant women. © The Author(s) 2014.

  6. SEE rate estimation based on diffusion approximation of charge collection

    NASA Astrophysics Data System (ADS)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.

    2018-03-01

    The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.

  7. Computer-aided detection and quantification of endolymphatic hydrops within the mouse cochlea in vivo using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Liu, George S.; Kim, Jinkyung; Applegate, Brian E.; Oghalai, John S.

    2017-07-01

    Diseases that cause hearing loss and/or vertigo in humans such as Meniere's disease are often studied using animal models. The volume of endolymph within the inner ear varies with these diseases. Here, we used a mouse model of increased endolymph volume, endolymphatic hydrops, to develop a computer-aided objective approach to measure endolymph volume from images collected in vivo using optical coherence tomography. The displacement of Reissner's membrane from its normal position was measured in cochlear cross sections. We validated our computer-aided measurements with manual measurements and with trained observer labels. This approach allows for computer-aided detection of endolymphatic hydrops in mice, with test performance showing sensitivity of 91% and specificity of 87% using a running average of five measurements. These findings indicate that this approach is accurate and reliable for classifying endolymphatic hydrops and quantifying endolymph volume.
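
    As a rough illustration of the reported detection metrics, the sketch below computes sensitivity and specificity after smoothing a series of measurements with a five-point running average; the displacement values, labels, and threshold are hypothetical, not data from the study.

    ```python
    import numpy as np

    def sens_spec(y_true, y_pred):
        """Sensitivity and specificity for binary labels (1 = hydrops)."""
        tp = np.sum((y_true == 1) & (y_pred == 1))
        fn = np.sum((y_true == 1) & (y_pred == 0))
        tn = np.sum((y_true == 0) & (y_pred == 0))
        fp = np.sum((y_true == 0) & (y_pred == 1))
        return tp / (tp + fn), tn / (tn + fp)

    # hypothetical membrane-displacement measurements and ground-truth labels
    disp = np.array([1.0, 4.8, 5.2, 5.0, 4.9, 5.1, 0.8, 1.1, 0.9, 1.0])
    labels = np.array([0, 1, 1, 1, 1, 1, 0, 0, 0, 0])

    smoothed = np.convolve(disp, np.ones(5) / 5, mode="same")  # 5-point running average
    calls = (smoothed > 3.0).astype(int)                       # illustrative threshold
    print(sens_spec(labels, calls))
    ```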

  8. A combined ligand-based and target-based drug design approach for G-protein coupled receptors: application to salvinorin A, a selective kappa opioid receptor agonist

    NASA Astrophysics Data System (ADS)

    Singh, Nidhi; Chevé, Gwénaël; Ferguson, David M.; McCurdy, Christopher R.

    2006-08-01

    Combined ligand-based and target-based drug design approaches provide a synergistic advantage over either method individually. Therefore, we set out to develop a powerful virtual screening model to identify novel molecular scaffolds as potential leads for the human KOP (hKOP) receptor by employing a combined approach. Utilizing a set of recently reported derivatives of salvinorin A, a structurally unique KOP receptor agonist, a pharmacophore model was developed that consisted of two hydrogen bond acceptor features and three hydrophobic features. The model was cross-validated by randomizing the data using the CatScramble technique. Further validation was carried out using a test set that performed well in classifying active and inactive molecules correctly. Simultaneously, a bovine rhodopsin-based "agonist-bound" hKOP receptor model was also generated. The model provided more accurate information about the putative binding site of salvinorin A-based ligands. Several protein structure-checking programs were used to validate the model. In addition, this model was in agreement with the mutation experiments carried out on the KOP receptor. The predictive ability of the model was evaluated by docking a set of known KOP receptor agonists into the active site of this model. The docked scores correlated reasonably well with experimental pKi values. It is hypothesized that the integration of these two independently generated models would enable a swift and reliable identification of new lead compounds that could reduce the time and cost of hit finding within the drug discovery and development process, particularly in the case of GPCRs.

  9. Cross-cultural adaptation and validation of the VISA-A questionnaire for German-speaking achilles tendinopathy patients.

    PubMed

    Lohrer, Heinz; Nauck, Tanja

    2009-10-30

    Achilles tendinopathy is the predominant overuse injury in runners. To further investigate this overload injury in transverse and longitudinal studies, a valid, responsive and reliable outcome measure is required. Most questionnaires have been developed for English-speaking populations. This is also true for the VISA-A score, so far representing the only valid, reliable, and disease-specific questionnaire for Achilles tendinopathy. To compare research results internationally, to perform multinational studies, or to exclude bias originating from subpopulations speaking different languages within one country, an equivalent instrument is required in different languages. The aim of this study was therefore to cross-culturally adapt and validate the VISA-A questionnaire for German-speaking Achilles tendinopathy patients. According to the "guidelines for the process of cross-cultural adaptation of self-report measures", the VISA-A score was cross-culturally adapted into German (VISA-A-G) using six steps: translation, synthesis, back translation, expert committee review, pretesting (n = 77), and appraisal of the adaptation process by an advisory committee determining the adequacy of the cross-cultural adaptation. The resulting VISA-A-G was then subjected to an analysis of reliability, validity, and internal consistency in 30 Achilles tendinopathy patients and 79 asymptomatic people. Concurrent validity was tested against a generic tendon grading system (Percy and Conochie) and against a classification system for the effect of pain on athletic performance (Curwin and Stanish). The advisory committee rated the translation of the VISA-A-G questionnaire as "acceptable". The VISA-A-G questionnaire showed moderate to excellent test-retest reliability (ICC = 0.60 to 0.97). Concurrent validity showed good coherence when correlated with the grading system of Curwin and Stanish (rho = -0.95) and with the Percy and Conochie grade of severity (rho = 0.95). Internal consistency (Cronbach's alpha) for the total VISA-A-G scores of the patients was calculated to be 0.737. The VISA-A questionnaire was successfully cross-culturally adapted and validated for use in German-speaking populations. The psychometric properties of the VISA-A-G questionnaire are similar to those of the original English version. It can therefore be recommended as a sufficiently robust tool for measuring the clinical severity of Achilles tendinopathy in German-speaking patients in future studies.

  10. Cross-cultural adaptation and validation of the VISA-A questionnaire for German-speaking Achilles tendinopathy patients

    PubMed Central

    Lohrer, Heinz; Nauck, Tanja

    2009-01-01

    Background Achilles tendinopathy is the predominant overuse injury in runners. To further investigate this overload injury in transverse and longitudinal studies, a valid, responsive and reliable outcome measure is required. Most questionnaires have been developed for English-speaking populations. This is also true for the VISA-A score, so far representing the only valid, reliable, and disease-specific questionnaire for Achilles tendinopathy. To compare research results internationally, to perform multinational studies, or to exclude bias originating from subpopulations speaking different languages within one country, an equivalent instrument is required in different languages. The aim of this study was therefore to cross-culturally adapt and validate the VISA-A questionnaire for German-speaking Achilles tendinopathy patients. Methods According to the "guidelines for the process of cross-cultural adaptation of self-report measures", the VISA-A score was cross-culturally adapted into German (VISA-A-G) using six steps: translation, synthesis, back translation, expert committee review, pretesting (n = 77), and appraisal of the adaptation process by an advisory committee determining the adequacy of the cross-cultural adaptation. The resulting VISA-A-G was then subjected to an analysis of reliability, validity, and internal consistency in 30 Achilles tendinopathy patients and 79 asymptomatic people. Concurrent validity was tested against a generic tendon grading system (Percy and Conochie) and against a classification system for the effect of pain on athletic performance (Curwin and Stanish). Results The advisory committee rated the translation of the VISA-A-G questionnaire as "acceptable". The VISA-A-G questionnaire showed moderate to excellent test-retest reliability (ICC = 0.60 to 0.97). Concurrent validity showed good coherence when correlated with the grading system of Curwin and Stanish (rho = -0.95) and with the Percy and Conochie grade of severity (rho = 0.95). Internal consistency (Cronbach's alpha) for the total VISA-A-G scores of the patients was calculated to be 0.737. Conclusion The VISA-A questionnaire was successfully cross-culturally adapted and validated for use in German-speaking populations. The psychometric properties of the VISA-A-G questionnaire are similar to those of the original English version. It can therefore be recommended as a sufficiently robust tool for measuring the clinical severity of Achilles tendinopathy in German-speaking patients in future studies. PMID:19878572

  11. Development of Islamic Spiritual Health Scale (ISHS).

    PubMed

    Khorashadizadeh, Fatemeh; Heydari, Abbas; Nabavi, Fatemeh Heshmati; Mazlom, Seyed Reza; Ebrahimi, Mahdi; Esmaili, Habibollah

    2017-03-01

    To develop and psychometrically assess a spiritual health scale based on the Islamic view in Iran. The cross-sectional study was conducted at Imam Ali and Quem hospitals in Mashhad and Imam Ali and Imam Reza hospitals in Bojnurd, Iran, from 2015 to 2016. In the first stage, an 81-item Likert-type scale was developed using a qualitative approach. The second stage comprised the quantitative component. The scale's impact factor, content validity ratio, content validity index, face validity and exploratory factor analysis were calculated. Test-retest and internal consistency were used to examine the reliability of the instrument. Data analysis was done using SPSS 11. Of the 81 items in the scale, those with an impact factor above 1.5, a content validity ratio above 0.62, and a content validity index above 0.79 were considered valid and the rest were discarded, resulting in a 61-item scale. Exploratory factor analysis reduced the list of items to 30, which were divided into seven groups with a minimum eigenvalue of 1 for each factor. According to the scatter plot, the attributes of the concept of spiritual health included love of the Creator, duty-based life, religious rationality, psychological balance, and attention to the afterlife. Internal reliability of the scale, calculated as Cronbach's alpha coefficient, was 0.91. There was solid evidence of the strong factor structure and reliability of the Islamic Spiritual Health Scale, which provides a unique way for spiritual health assessment of Muslims.
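
    The two item-retention statistics used above follow simple formulas. Below is a minimal sketch assuming Lawshe's content validity ratio and a conventional item-level content validity index computed on a 4-point relevance scale; the panel size and ratings are hypothetical.

    ```python
    def content_validity_ratio(n_essential: int, n_experts: int) -> float:
        """Lawshe's CVR = (n_e - N/2) / (N/2)."""
        return (n_essential - n_experts / 2) / (n_experts / 2)

    def item_cvi(relevance_ratings, relevant_levels=(3, 4)) -> float:
        """Item-level CVI: share of experts rating the item relevant (3 or 4 of 4)."""
        hits = sum(1 for r in relevance_ratings if r in relevant_levels)
        return hits / len(relevance_ratings)

    # hypothetical panel of 10 experts rating one item
    print(content_validity_ratio(n_essential=9, n_experts=10))   # 0.8  -> above the 0.62 cut-off
    print(item_cvi([4, 4, 3, 4, 3, 4, 4, 2, 4, 3]))              # 0.9  -> above the 0.79 cut-off
    ```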

  12. Measuring Adolescent Social and Academic Self-Efficacy: Cross-Ethnic Validity of the SEQ-C

    ERIC Educational Resources Information Center

    Minter, Anthony; Pritzker, Suzanne

    2017-01-01

    Objective: This study examines the psychometric strength, including cross-ethnic validity, of two subscales of Muris' Self-Efficacy Questionnaire for Children: Academic Self-Efficacy (ASE) and Social Self-Efficacy (SSE). Methods: A large ethnically diverse sample of 3,358 early and late adolescents completed surveys including the ASE and SSE.…

  13. How Nonrecidivism Affects Predictive Accuracy: Evidence from a Cross-Validation of the Ontario Domestic Assault Risk Assessment (ODARA)

    ERIC Educational Resources Information Center

    Hilton, N. Zoe; Harris, Grant T.

    2009-01-01

    Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…

  14. Cross-validation of generalised body composition equations with diverse young men and women: the Training Intervention and Genetics of Exercise Response (TIGER) Study

    USDA-ARS?s Scientific Manuscript database

    Generalised skinfold equations developed in the 1970s are commonly used to estimate laboratory-measured percentage fat (BF%). The equations were developed on predominately white individuals using Siri's two-component percentage fat equation (BF%-GEN). We cross-validated the Jackson-Pollock (JP) gene...

  15. Validation of annual growth rings in freshwater mussel shells using cross dating

    Treesearch

    Andrew L. Rypel; Wendell R. Haag; Robert H. Findlay

    2009-01-01

    We examined the usefulness of dendrochronological cross-dating methods for studying long-term, interannual growth patterns in freshwater mussels, including validation of annual shell ring formation. Using 13 species from three rivers, we measured increment widths between putative annual rings on shell thin sections and then removed age-related variation by...

  16. Cross-Cultural Validation of Stages of Exercise Change Scale among Chinese College Students

    ERIC Educational Resources Information Center

    Keating, Xiaofen D.; Guan, Jianmin; Huang, Yong; Deng, Mingying; Wu, Yifeng; Qu, Shuhua

    2005-01-01

    The purpose of the study was to test the cross-cultural concurrent validity of the stages of exercise change scale (SECS) in Chinese college students. The original SECS was translated into Chinese (C-SECS). Students from four Chinese universities (N = 1843) participated in the study. The leisure-time exercise (LTE) questionnaire was used to…

  17. Short communication: Variations in major mineral contents of Mediterranean buffalo milk and application of Fourier-transform infrared spectroscopy for their prediction.

    PubMed

    Stocco, G; Cipolat-Gotet, C; Bonfatti, V; Schiavon, S; Bittante, G; Cecchinato, A

    2016-11-01

    The aims of this study were (1) to assess variability in the major mineral components of buffalo milk, (2) to estimate the effect of certain environmental sources of variation on the major minerals during lactation, and (3) to investigate the possibility of using Fourier-transform infrared (FTIR) spectroscopy as an indirect, noninvasive tool for routine prediction of the mineral content of buffalo milk. A total of 173 buffaloes reared in 5 herds were sampled once during the morning milking. Milk samples were analyzed for Ca, P, K, and Mg contents within 3 h of sample collection using inductively coupled plasma optical emission spectrometry. A Milkoscan FT2 (Foss, Hillerød, Denmark) was used to acquire milk spectra over the spectral range from 5,000 to 900 wavenumber/cm. Prediction models were built using a partial least squares approach, and cross-validation was used to assess the prediction accuracy of FTIR. Prediction models were validated using 4-fold random cross-validation, dividing the calibration-test set into 4 folds and using one of them to check the results (prediction models) and the remaining 3 to develop the calibration models. Buffalo milk minerals averaged 162, 117, 86, and 14.4 mg/dL of milk for Ca, P, K, and Mg, respectively. Herd and days in milk were the most important sources of variation in the traits investigated. Parity slightly affected only Ca content. Coefficients of determination of cross-validation between the FTIR-predicted and the measured values were 0.71, 0.70, and 0.72 for Ca, Mg, and P, respectively, whereas prediction accuracy was lower for K (0.55). Our findings reveal FTIR to be an unsuitable tool when milk mineral content needs to be predicted with high accuracy. Predictions may play a role as indicator traits in selective breeding (if the additive genetic correlation between FTIR predictions and measures of milk minerals is high enough) or in monitoring the milk of buffalo populations for dairy industry purposes. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
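
    A minimal sketch of the kind of procedure described above, partial least squares calibration evaluated with 4-fold cross-validation, is shown below using scikit-learn; the spectra and reference values are random placeholders, not FTIR data, and the number of latent variables is illustrative.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import KFold, cross_val_predict
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(173, 1060))                 # samples x spectral variables (placeholder)
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=173)   # placeholder "Ca content"

    pls = PLSRegression(n_components=10)             # illustrative number of latent variables
    cv = KFold(n_splits=4, shuffle=True, random_state=0)
    y_cv = cross_val_predict(pls, X, y, cv=cv).ravel()

    print("cross-validated R2:", round(r2_score(y, y_cv), 2))
    ```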

  18. Characterization Approaches to Place Invariant Sites on SI-Traceable Scales

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis

    2012-01-01

    The effort to understand the Earth's climate system requires a complete integration of remote sensing imager data across time and multiple countries. Such an integration necessarily requires ensuring inter-consistency between multiple sensors to create the data sets needed to understand the climate system. Past efforts at inter-consistency have forced agreement between two sensors using sources that are viewed by both sensors at nearly the same time, and thus tend to be near polar regions over snow and ice. The current work describes a method that would provide an absolute radiometric calibration of a sensor rather than an inter-consistency of a sensor relative to another. The approach also relies on defensible error budgets that eventually provide a cross-comparison of sensors without systematic errors. The basis of the technique is a model-based, SI-traceable prediction of at-sensor radiance over selected sites. The predicted radiance would be valid for arbitrary view and illumination angles and for any date of interest that is dominated by clear-sky conditions. The effort effectively works to characterize the sites as sources with known top-of-atmosphere radiance, allowing accurate intercomparison of sensor data without the need for coincident views. Data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Enhanced Thematic Mapper Plus (ETM+), and Moderate Resolution Imaging Spectroradiometer (MODIS) are used to demonstrate the difficulties of cross calibration as applied to current sensors. Special attention is given to the differences caused in the cross-comparison of sensors in radiance space as opposed to reflectance space. The radiance comparisons lead to significant differences created by the specific solar model used for each sensor. The paper also proposes methods to mitigate the largest error sources in future systems. The results from these historical intercomparisons provide the basis for a set of recommendations to ensure future SI-traceable cross calibration using future missions such as CLARREO and TRUTHS. The paper describes a proposed approach that relies on model-based, SI-traceable predictions of at-sensor radiance over selected sites; the predicted radiance would be valid for arbitrary view and illumination angles and for any date of interest that is dominated by clear-sky conditions. The basis of the method is highly accurate measurements of at-sensor radiance of sufficient quality to understand the spectral and BRDF characteristics of the site, together with sufficient historical data to develop an understanding of temporal effects from changing surface and atmospheric conditions.

  19. The bottom-up approach to integrative validity: a new perspective for program evaluation.

    PubMed

    Chen, Huey T

    2010-08-01

    The Campbellian validity model and the traditional top-down approach to validity have had a profound influence on research and evaluation. That model includes the concepts of internal and external validity and within that model, the preeminence of internal validity as demonstrated in the top-down approach. Evaluators and researchers have, however, increasingly recognized that in an evaluation, the over-emphasis on internal validity reduces that evaluation's usefulness and contributes to the gulf between academic and practical communities regarding interventions. This article examines the limitations of the Campbellian validity model and the top-down approach and provides a comprehensive, alternative model, known as the integrative validity model for program evaluation. The integrative validity model includes the concept of viable validity, which is predicated on a bottom-up approach to validity. This approach better reflects stakeholders' evaluation views and concerns, makes external validity workable, and becomes therefore a preferable alternative for evaluation of health promotion/social betterment programs. The integrative validity model and the bottom-up approach enable evaluators to meet scientific and practical requirements, facilitate in advancing external validity, and gain a new perspective on methods. The new perspective also furnishes a balanced view of credible evidence, and offers an alternative perspective for funding. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  20. LQTA-QSAR: a new 4D-QSAR methodology.

    PubMed

    Martins, João Paulo A; Barbosa, Euzébio G; Pasqualoto, Kerly F M; Ferreira, Márcia M C

    2009-06-01

    A novel 4D-QSAR approach which makes use of the molecular dynamics (MD) trajectories and topology information retrieved from the GROMACS package is presented in this study. This new methodology, named LQTA-QSAR (LQTA, Laboratório de Quimiometria Teórica e Aplicada), has a module (LQTAgrid) that calculates intermolecular interaction energies at each grid point considering probes and all aligned conformations resulting from MD simulations. These interaction energies are the independent variables or descriptors employed in a QSAR analysis. The comparison of the proposed methodology to other 4D-QSAR and CoMFA formalisms was performed using a set of forty-seven glycogen phosphorylase b inhibitors (data set 1) and a set of forty-four MAP p38 kinase inhibitors (data set 2). The QSAR models for both data sets were built using the ordered predictor selection (OPS) algorithm for variable selection. Model validation was carried out applying y-randomization and leave-N-out cross-validation in addition to the external validation. PLS models for data sets 1 and 2 provided the following statistics: q² = 0.72, r² = 0.81 for 12 selected variables and 2 latent variables, and q² = 0.82, r² = 0.90 for 10 selected variables and 5 latent variables, respectively. Visualization of the descriptors in 3D space was successfully interpreted from the chemical point of view, supporting the applicability of this new approach in rational drug design.
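
    The q² statistic quoted above is the cross-validated analogue of r². A minimal sketch of its computation for a PLS model under leave-one-out cross-validation is given below; the descriptor matrix, activities, and component count are synthetic placeholders, not the LQTA-QSAR data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(1)
    X = rng.normal(size=(47, 12))                    # 47 compounds x 12 selected descriptors (placeholder)
    y = X @ rng.normal(size=12) + rng.normal(scale=0.3, size=47)

    model = PLSRegression(n_components=2)
    y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut()).ravel()

    press = np.sum((y - y_loo) ** 2)                 # predictive residual sum of squares
    tss = np.sum((y - y.mean()) ** 2)                # total sum of squares about the mean
    print("q2 =", round(1 - press / tss, 2))
    ```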

  1. BEaST: brain extraction based on nonlocal segmentation technique.

    PubMed

    Eskildsen, Simon F; Coupé, Pierrick; Fonov, Vladimir; Manjón, José V; Leung, Kelvin K; Guizard, Nicolas; Wassef, Shafik N; Østergaard, Lasse Riis; Collins, D Louis

    2012-02-01

    Brain extraction is an important step in the analysis of brain images. The variability in brain morphology and the difference in intensity characteristics due to imaging sequences make the development of a general purpose brain extraction algorithm challenging. To address this issue, we propose a new robust method (BEaST) dedicated to produce consistent and accurate brain extraction. This method is based on nonlocal segmentation embedded in a multi-resolution framework. A library of 80 priors is semi-automatically constructed from the NIH-sponsored MRI study of normal brain development, the International Consortium for Brain Mapping, and the Alzheimer's Disease Neuroimaging Initiative databases. In testing, a mean Dice similarity coefficient of 0.9834±0.0053 was obtained when performing leave-one-out cross validation selecting only 20 priors from the library. Validation using the online Segmentation Validation Engine resulted in a top ranking position with a mean Dice coefficient of 0.9781±0.0047. Robustness of BEaST is demonstrated on all baseline ADNI data, resulting in a very low failure rate. The segmentation accuracy of the method is better than two widely used publicly available methods and recent state-of-the-art hybrid approaches. BEaST provides results comparable to a recent label fusion approach, while being 40 times faster and requiring a much smaller library of priors. Copyright © 2011 Elsevier Inc. All rights reserved.
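
    The accuracy figures above are Dice similarity coefficients between automatic and reference brain masks. A minimal Python sketch of the metric follows; the two toy 2D masks are illustrative stand-ins for 3D segmentation volumes.

    ```python
    import numpy as np

    def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks."""
        a = mask_a.astype(bool)
        b = mask_b.astype(bool)
        overlap = np.logical_and(a, b).sum()
        return 2.0 * overlap / (a.sum() + b.sum())

    # toy masks standing in for automatic and manual brain extractions
    auto = np.zeros((8, 8), dtype=bool)
    auto[2:6, 2:6] = True
    manual = np.zeros((8, 8), dtype=bool)
    manual[2:6, 3:7] = True
    print(round(dice_coefficient(auto, manual), 3))   # 0.75 for this toy overlap
    ```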

  2. Can we really use available scales for child and adolescent psychopathology across cultures? A systematic review of cross-cultural measurement invariance data.

    PubMed

    Stevanovic, Dejan; Jafari, Peyman; Knez, Rajna; Franic, Tomislav; Atilola, Olayinka; Davidovic, Nikolina; Bagheri, Zahra; Lakic, Aneta

    2017-02-01

    In this systematic review, we assessed available evidence for cross-cultural measurement invariance of assessment scales for child and adolescent psychopathology as an indicator of cross-cultural validity. A literature search was conducted using the Medline, PsychInfo, Scopus, Web of Science, and Google Scholar databases. Cross-cultural measurement invariance data was available for 26 scales. Based on the aggregation of the evidence from the studies under review, none of the evaluated scales have strong evidence for cross-cultural validity and suitability for cross-cultural comparison. A few of the studies showed a moderate level of measurement invariance for some scales (such as the Fear Survey Schedule for Children-Revised, Multidimensional Anxiety Scale for Children, Revised Child Anxiety and Depression Scale, Revised Children's Manifest Anxiety Scale, Mood and Feelings Questionnaire, and Disruptive Behavior Rating Scale), which may make them suitable in cross-cultural comparative studies. The remainder of the scales either showed weak or outright lack of measurement invariance. This review showed only limited testing for measurement invariance across cultural groups of scales for pediatric psychopathology, with evidence of cross-cultural validity for only a few scales. This study also revealed a need to improve practices of statistical analysis reporting in testing measurement invariance. Implications for future research are discussed.

  3. An investigation on the determinants of carbon emissions for OECD countries: empirical evidence from panel models robust to heterogeneity and cross-sectional dependence.

    PubMed

    Dogan, Eyup; Seker, Fahri

    2016-07-01

    This empirical study analyzes the impacts of real income, energy consumption, financial development and trade openness on CO2 emissions for the OECD countries in the Environmental Kuznets Curve (EKC) model by using panel econometric approaches that consider issues of heterogeneity and cross-sectional dependence. Results from the Pesaran CD test, the Pesaran-Yamagata's homogeneity test, the CADF and the CIPS unit root tests, the LM bootstrap cointegration test, the DSUR estimator, and the Emirmahmutoglu-Kose Granger causality test indicate that (i) the panel time-series data are heterogeneous and cross-sectionally dependent; (ii) CO2 emissions, real income, the quadratic income, energy consumption, financial development and openness are integrated of order one; (iii) the analyzed data are cointegrated; (iv) the EKC hypothesis is validated for the OECD countries; (v) increases in openness and financial development mitigate the level of emissions whereas energy consumption contributes to carbon emissions; (vi) a variety of Granger causal relationship is detected among the analyzed variables; and (vii) empirical results and policy recommendations are accurate and efficient since panel econometric models used in this study account for heterogeneity and cross-sectional dependence in their estimation procedures.

  4. On the use of temperature parameterized rate coefficients in the estimation of non-equilibrium reaction rates

    NASA Astrophysics Data System (ADS)

    Shizgal, Bernie D.; Chikhaoui, Aziz

    2006-06-01

    The present paper considers a detailed analysis of the nonequilibrium effects for a model reactive system with the Chapman-Enskog (CE) solution of the Boltzmann equation as well as an explicit time-dependent solution. The elastic cross sections employed are a hard sphere cross section and the Maxwell molecule cross section. Reactive cross sections that model reactions with and without activation energy are used. A detailed comparison is carried out with these solutions of the Boltzmann equation and the approximation introduced by Cukrowski and coworkers [J. Chem. Phys. 97 (1992) 9086; Chem. Phys. 89 (1992) 159; Physica A 188 (1992) 344; Chem. Phys. Lett. A 297 (1998) 402; Physica A 275 (2000) 134; Chem. Phys. Lett. 341 (2001) 585; Acta Phys. Polonica B 334 (2003) 3607] based on the temperature of the reactive particles. We show that the Cukrowski approximation has limited applicability for the large class of reactive systems studied in this paper. The explicit time-dependent solutions of the Boltzmann equation demonstrate that the CE approach is valid only for very slow reactions for which the corrections to the equilibrium rate coefficient are very small.

  5. RRegrs: an R package for computer-aided model selection with multiple regression models.

    PubMed

    Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L

    2015-01-01

    Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best model criteria, as they all affect the accuracy and efficiency of the produced predictive models and therefore raise model reproducibility and comparison issues. Cheminformatics and bioinformatics use predictive modelling extensively and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tested several simple and complex regression models and validation schemes, produced unified reports, and offered the option to be integrated into more extensive studies. Additionally, such a methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and to assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields. Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxides descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance as well as its adaptability in terms of parameter optimization could make RRegrs a popular framework to assist the initial exploration of predictive models, and with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for multiple regression models in R; it is a fully validated procedure with application to QSAR modelling.
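
    RRegrs itself is an R package built on caret; as a language-neutral illustration of the repeated cross-validation scheme it standardizes, here is a minimal Python sketch that compares two regression methods under repeated 10-fold cross-validation. The data set, methods, and repeat count are illustrative only and do not reproduce the package's workflow.

    ```python
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression, Lasso
    from sklearn.model_selection import RepeatedKFold, cross_val_score

    X, y = make_regression(n_samples=120, n_features=20, noise=5.0, random_state=0)
    cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)

    for name, model in [("OLS", LinearRegression()), ("Lasso", Lasso(alpha=0.1))]:
        scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
        print(f"{name}: mean R2 = {scores.mean():.3f} (sd {scores.std():.3f})")
    ```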

  6. Electroencephalography as a clinical tool for diagnosing and monitoring attention deficit hyperactivity disorder: a cross-sectional study

    PubMed Central

    Helgadóttir, Halla; Gudmundsson, Ólafur Ó; Baldursson, Gísli; Magnússon, Páll; Blin, Nicolas; Brynjólfsdóttir, Berglind; Emilsdóttir, Ásdís; Gudmundsdóttir, Gudrún B; Lorange, Málfrídur; Newman, Paula K; Jóhannesson, Gísli H; Johnsen, Kristinn

    2015-01-01

    Objectives The aim of this study was to develop and test, for the first time, a multivariate diagnostic classifier of attention deficit hyperactivity disorder (ADHD) based on EEG coherence measures and chronological age. Setting The participants were recruited in two specialised centres and three schools in Reykjavik. Participants The data are from a large cross-sectional cohort of 310 patients with ADHD and 351 controls, covering an age range from 5.8 to 14 years. ADHD was diagnosed according to the Diagnostic and Statistical Manual of Mental Disorders fourth edition (DSM-IV) criteria using the K-SADS-PL semistructured interview. Participants in the control group were reported to be free of any mental or developmental disorders by their parents and had a score of less than 1.5 SDs above the age-appropriate norm on the ADHD Rating Scale-IV. Other than moderate or severe intellectual disability, no additional exclusion criteria were applied in order that the cohort reflected the typical cross section of patients with ADHD. Results Diagnostic classifiers were developed using statistical pattern recognition for the entire age range and for specific age ranges and were tested using cross-validation and by application to a separate cohort of recordings not used in the development process. The age-specific classification approach was more accurate (76% accuracy in the independent test cohort; 81% cross-validation accuracy) than the age-independent version (76%; 73%). Chronological age was found to be an important classification feature. Conclusions The novel application of EEG-based classification methods presented here can offer significant benefit to the clinician by improving both the accuracy of initial diagnosis and ongoing monitoring of children and adolescents with ADHD. The most accurate possible diagnosis at a single point in time can be obtained by the age-specific classifiers, but the age-independent classifiers are also useful as they enable longitudinal monitoring of brain function. PMID:25596195

  7. Sino-Nasal Outcome Test-22: Translation, Cross-cultural Adaptation, and Validation in Hebrew-Speaking Patients.

    PubMed

    Shapira Galitz, Yael; Halperin, Doron; Bavnik, Yosef; Warman, Meir

    2016-05-01

    To perform the translation, cross-cultural adaptation, and validation of the Sino-Nasal Outcome Test-22 (SNOT-22) questionnaire to the Hebrew language. A single-center prospective cross-sectional study. Seventy-three chronic rhinosinusitis (CRS) patients and 73 patients without sinonasal disease filled the Hebrew version of the SNOT-22 questionnaire. Fifty-one CRS patients underwent endoscopic sinus surgery, out of which 28 filled a postoperative questionnaire. Seventy-three healthy volunteers without sinonasal disease also answered the questionnaire. Internal consistency, test-retest reproducibility, validity, and responsiveness of the questionnaire were evaluated. Questionnaire reliability was excellent, with a high internal consistency (Cronbach's alpha coefficient, 0.91-0.936) and test-retest reproducibility (Spearman's coefficient, 0.962). Mean scores for the preoperative, postoperative, and control groups were 50.44, 29.64, and 13.15, respectively (P < .0001 for CRS vs controls, P < .001 for preoperative vs postoperative), showing validity and responsiveness of the questionnaire. The Hebrew version of SNOT-22 questionnaire is a valid outcome measure for patients with CRS with or without nasal polyps. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.

  8. Towards a model-based patient selection strategy for proton therapy: External validation of photon-derived Normal Tissue Complication Probability models in a head and neck proton therapy cohort

    PubMed Central

    Blanchard, P; Wong, AJ; Gunn, GB; Garden, AS; Mohamed, ASR; Rosenthal, DI; Crutison, J; Wu, R; Zhang, X; Zhu, XR; Mohan, R; Amin, MV; Fuller, CD; Frank, SJ

    2017-01-01

    Objective To externally validate head and neck cancer (HNC) photon-derived normal tissue complication probability (NTCP) models in patients treated with proton beam therapy (PBT). Methods This prospective cohort consisted of HNC patients treated with PBT at a single institution. NTCP models were selected based on the availability of data for validation and evaluated using the leave-one-out cross-validated area under the curve (AUC) for the receiver operating characteristics curve. Results 192 patients were included. The most prevalent tumor site was oropharynx (n=86, 45%), followed by sinonasal (n=28), nasopharyngeal (n=27) or parotid (n=27) tumors. Apart from the prediction of acute mucositis (reduction of AUC of 0.17), the models overall performed well. The validation (PBT) AUC and the published AUC were respectively 0.90 versus 0.88 for feeding tube 6 months post-PBT; 0.70 versus 0.80 for physician rated dysphagia 6 months post-PBT; 0.70 versus 0.80 for dry mouth 6 months post-PBT; and 0.73 versus 0.85 for hypothyroidism 12 months post-PBT. Conclusion While the drop in NTCP model performance was expected in PBT patients, the models showed robustness and remained valid. Further work is warranted, but these results support the validity of the model-based approach for treatment selection for HNC patients. PMID:27641784
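
    A minimal sketch of the leave-one-out cross-validated AUC used for model evaluation above, assuming a simple logistic NTCP-style model; the predictors, outcomes, and model here are synthetic placeholders rather than the published NTCP models.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(192, 3))                    # e.g. dose metrics per patient (placeholder)
    p = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))) # placeholder toxicity probabilities
    y = rng.binomial(1, p)                           # placeholder binary complication outcomes

    probs = cross_val_predict(LogisticRegression(), X, y,
                              cv=LeaveOneOut(), method="predict_proba")[:, 1]
    print("LOO cross-validated AUC:", round(roc_auc_score(y, probs), 2))
    ```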

  9. Experimental results in autonomous landing approaches by dynamic machine vision

    NASA Astrophysics Data System (ADS)

    Dickmanns, Ernst D.; Werner, Stefan; Kraus, S.; Schell, R.

    1994-07-01

    The 4-D approach to dynamic machine vision, exploiting full spatio-temporal models of the process to be controlled, has been applied to on-board autonomous landing approaches of aircraft. Aside from image sequence processing, for which it was developed initially, it is also used for data fusion from a range of sensors. By prediction error feedback, an internal representation of the aircraft state relative to the runway in 3-D space and time is servo-maintained in the interpretation process, from which the required control applications are derived. The validity and efficiency of the approach have been proven both in hardware-in-the-loop simulations and in flight experiments with the twin turboprop aircraft Do128 under perturbations from crosswinds and wind gusts. The software package has been ported to 'C' and onto a new transputer image processing platform; the system has been expanded for bifocal vision with two cameras of different focal length mounted in a fixed relation to each other on a two-axis platform for viewing direction control.

  10. Analysis of flexural wave cloaks

    NASA Astrophysics Data System (ADS)

    Climente, Alfonso; Torrent, Daniel; Sánchez-Dehesa, José

    2016-12-01

    This work presents a comprehensive study of the cloak for bending waves theoretically proposed by Farhat et al. [see Phys. Rev. Lett. 103, 024301 (2009)] and later experimentally realized by Stenger et al. [see Phys. Rev. Lett. 108, 014301 (2012)]. This study uses a semi-analytical approach, the multilayer scattering method, which is based on the Kirchhoff-Love wave equation for flexural waves in thin plates. Our approach was unable to reproduce the predicted behavior of the theoretically proposed cloak. This disagreement is explained here in terms of the simplified wave equation employed in the cloak design, which employed unusual boundary conditions for the cloaking shell. However, our approach reproduces fairly well the measured displacement maps for the fabricated cloak, indicating the validity of our approach. The cloak quality has also been analyzed here using the so-called averaged visibility and the scattering cross section. The results obtained from both analyses lead us to conclude that there is room for further improvement of this type of flexural wave cloak by using better design procedures.

  11. Simulation models in population breast cancer screening: A systematic review.

    PubMed

    Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H

    2015-08-01

    The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed that incorporated model type; input parameters; modeling approach, transparency of input data sources/assumptions, sensitivity analyses, and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared to estimates from meta-analyses of randomized controlled trials (RCTs) and to acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except for one model), with internal and cross-validation of the resulting models but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR due to screening (11-24%) as compared to the estimate from RCTs of 10% MR (95% CI: -2% to 21%). Potential harms due to regular breast cancer screening have been reported only recently. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making, and the critical analysis revealed a high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models which use systematic evidence for input data to allow for a more critical evaluation of breast cancer screening. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Quantification of landfill methane using modified Intergovernmental Panel on Climate Change's waste model and error function analysis.

    PubMed

    Govindan, Siva Shangari; Agamuthu, P

    2014-10-01

    Waste management can be regarded as a cross-cutting environmental 'mega-issue'. Sound waste management practices support the provision of basic needs for general health, such as clean air, clean water and a safe supply of food. In addition, climate change mitigation efforts can be achieved through reduction of greenhouse gas emissions from waste management operations, such as landfills. Landfills generate landfill gas, especially methane, as a result of anaerobic degradation of the degradable components of municipal solid waste. Evaluating the mode of generation and collection of landfill gas has posed a challenge over time. Scientifically, landfill gas generation rates are presently estimated using numerical models. In this study, the Intergovernmental Panel on Climate Change's Waste Model is used to estimate the methane generated from a Malaysian sanitary landfill. Key parameters of the model, which are the decay rate and degradable organic carbon, are analysed using two different approaches: the bulk waste approach and the waste composition approach. The model is later validated using error function analysis, and the optimum decay rate and degradable organic carbon for both approaches are also obtained. The best-fitting values for the bulk waste approach are a decay rate of 0.08 y⁻¹ and a degradable organic carbon value of 0.12; for the waste composition approach, the decay rate was found to be 0.09 y⁻¹ and the degradable organic carbon value 0.08. From this validation exercise, the estimated error was reduced by 81% and 69% for the bulk waste and waste composition approach, respectively. In conclusion, this type of modelling could constitute a sensible starting point for landfills to introduce careful planning for efficient gas recovery in individual landfills. © The Author(s) 2014.
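
    To illustrate the structure of the calculation, the sketch below implements a heavily simplified first-order-decay estimate of annual methane generation. It is not the IPCC Waste Model spreadsheet itself; the methane correction factor, DOCf, fraction of methane, and waste quantities are illustrative placeholders built around the decay rate and degradable organic carbon values reported above.

    ```python
    import numpy as np

    def fod_methane(waste_by_year, k=0.08, doc=0.12, doc_f=0.5, mcf=1.0, f=0.5, years=30):
        """Very simplified first-order-decay methane generation (tonnes CH4 per year).

        waste_by_year: tonnes of waste landfilled in each deposition year.
        l0 = MCF * DOC * DOCf * F * 16/12 is the generation potential per tonne of waste.
        """
        l0 = mcf * doc * doc_f * f * 16.0 / 12.0
        ch4 = np.zeros(years)
        for x, w in enumerate(waste_by_year):
            t = np.arange(years)
            active = t >= x                          # cohort starts decaying in its deposition year
            ch4[active] += w * l0 * k * np.exp(-k * (t[active] - x))
        return ch4

    # hypothetical landfill receiving 50,000 t/yr for 10 years
    profile = fod_methane([50_000] * 10)
    print(profile[:5].round(1))                      # first five years of generation
    ```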

  13. The HEPEX Seasonal Streamflow Forecast Intercomparison Project

    NASA Astrophysics Data System (ADS)

    Wood, A. W.; Schepen, A.; Bennett, J.; Mendoza, P. A.; Ramos, M. H.; Wetterhall, F.; Pechlivanidis, I.

    2016-12-01

    The Hydrologic Ensemble Prediction Experiment (HEPEX; www.hepex.org) has launched an international seasonal streamflow forecasting intercomparison project (SSFIP) with the goal of broadening community knowledge about the strengths and weaknesses of various operational approaches being developed around the world. While some of these approaches have existed for decades (e.g. Ensemble Streamflow Prediction - ESP - in the United States and elsewhere), recent years have seen the proliferation of new operational and experimental streamflow forecasting approaches. These have largely been developed independently in each country, thus it is difficult to assess whether the approaches employed in some centers offer more promise for development than others. This motivates us to establish a forecasting testbed to facilitate a diagnostic evaluation of a range of different streamflow forecasting approaches and their components over a common set of catchments, using a common set of validation methods. Rather than prescribing a set of scientific questions from the outset, we are letting the hindcast results and notable differences in methodologies on a watershed-specific basis motivate more targeted analyses and sub-experiments that may provide useful insights. The initial pilot of the testbed involved two approaches - CSIRO's Bayesian joint probability (BJP) and NCAR's sequential regression - for two catchments, each designated by one of the teams (the Murray River, Australia, and Hungry Horse reservoir drainage area, USA). Additional catchments/approaches are in the process of being added to the testbed. To support this CSIRO and NCAR have developed data and analysis tools, data standards and protocols to formalize the experiment. These include requirements for cross-validation, verification, reference climatologies, and common predictands. This presentation describes the SSFIP experiments, pilot basin results and scientific findings to date.

  14. Validation and cross-cultural pilot testing of compliance with standard precautions scale: self-administered instrument for clinical nurses.

    PubMed

    Lam, Simon C

    2014-05-01

    To perform detailed psychometric testing of the compliance with standard precautions scale (CSPS) in measuring compliance with standard precautions of clinical nurses and to conduct cross-cultural pilot testing and assess the relevance of the CSPS on an international platform. A cross-sectional and correlational design with repeated measures. Nursing students from a local registered nurse training university, nurses from different hospitals in Hong Kong, and experts in an international conference. The psychometric properties of the CSPS were evaluated via internal consistency, 2-week and 3-month test-retest reliability, concurrent validation, and construct validation. The cross-cultural pilot testing and relevance check was examined by experts on infection control from various developed and developing regions. Among 453 participants, 193 were nursing students, 165 were enrolled nurses, and 95 were registered nurses. The results showed that the CSPS had satisfactory reliability (Cronbach α = 0.73; intraclass correlation coefficient, 0.79 for 2-week test-retest and 0.74 for 3-month test-retest) and validity (optimum correlation with criterion measure; r = 0.76, P < .001; satisfactory results on known-group method and hypothesis testing). A total of 19 experts from 16 countries assured that most of the CSPS findings were relevant and globally applicable. The CSPS demonstrated satisfactory results on the basis of the standard international criteria on psychometric testing, which ascertained the reliability and validity of this instrument in measuring the compliance of clinical nurses with standard precautions. The cross-cultural pilot testing further reinforced the instrument's relevance and applicability in most developed and developing regions.

  15. Validation of Yoon's Critical Thinking Disposition Instrument.

    PubMed

    Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin

    2015-12-01

    The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD. Specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD using 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated and then group invariance test using multigroup confirmatory factor analysis was performed to confirm the measurement compatibility of multigroups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities. Copyright © 2015. Published by Elsevier B.V.

  16. Validation of MCNP6 Version 1.0 with the ENDF/B-VII.1 Cross Section Library for Uranium Metal, Oxide, and Solution Systems on the High Performance Computing Platform Moonlight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Bryan Scott; MacQuigg, Michael Robert; Wysong, Andrew Russell

    In this document, the code MCNP is validated with ENDF/B-VII.1 cross section data under the purview of ANSI/ANS-8.24-2007, for use with uranium systems. MCNP is a computer code based on Monte Carlo transport methods. While MCNP has wide-ranging capability in nuclear transport simulation, this validation is limited to the functionality related to neutron transport and the calculation of criticality parameters such as keff.

  17. A Cross-Cultural Test of Sex Bias in the Predictive Validity of Scholastic Aptitude Examinations: Some Israeli Findings.

    ERIC Educational Resources Information Center

    Zeidner, Moshe

    1987-01-01

    This study examined the cross-cultural validity of the sex bias contention with respect to standardized aptitude testing, used for academic prediction purposes in Israel. Analyses were based on the grade point average and scores of 1778 Jewish and 1017 Arab students who were administered standardized college entrance test batteries. (Author/LMO)

  18. Assessing Autistic Traits in a Taiwan Preschool Population: Cross-Cultural Validation of the Social Responsiveness Scale (SRS)

    ERIC Educational Resources Information Center

    Wang, Jessica; Lee, Li-Ching; Chen, Ying-Sheue; Hsu, Ju-Wei

    2012-01-01

    The cross-cultural validity of the Mandarin-adaptation of the social responsiveness scale (SRS) was examined in a sample of N = 307 participants in Taiwan, 140 typically developing and 167 with clinically-diagnosed developmental disorders. This scale is an autism assessment tool that provides a quantitative rather than categorical measure of…

  19. Validation of cross-sectional time series and multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents using doubly labeled water

    USDA-ARS?s Scientific Manuscript database

    Accurate, nonintrusive, and inexpensive techniques are needed to measure energy expenditure (EE) in free-living populations. Our primary aim in this study was to validate cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on observable participant cha...

  20. A New Symptom Model for Autism Cross-Validated in an Independent Sample

    ERIC Educational Resources Information Center

    Boomsma, A.; Van Lang, N. D. J.; De Jonge, M. V.; De Bildt, A. A.; Van Engeland, H.; Minderaa, R. B.

    2008-01-01

    Background: Results from several studies indicated that a symptom model other than the DSM triad might better describe symptom domains of autism. The present study focused on a) investigating the stability of a new symptom model for autism by cross-validating it in an independent sample and b) examining the invariance of the model regarding three…

  1. Translation, cross-cultural adaptation and validation of an HIV/AIDS knowledge and attitudinal instrument.

    PubMed

    Zometa, Carlos S; Dedrick, Robert; Knox, Michael D; Westhoff, Wayne; Siri, Rodrigo Simán; Debaldo, Ann

    2007-06-01

    An instrument developed in the United States by the Centers for Disease Control and Prevention to assess HIV/AIDS knowledge and four attitudinal dimensions (Peer Pressure, Abstinence, Drug Use, and Threat of HIV Infection) and an instrument developed by Basen-Engquist et al. (1999) to measure abstinence and condom use were translated, cross-culturally adapted, and validated for use with Spanish-speaking high school students in El Salvador. A back-translation of the English version was cross-culturally adapted using two different review panels and pilot-tested with Salvadorian students. An expert panel established content validity, and confirmatory factor analysis provided support for construct validity. Results indicated that the methodology was successful in cross-culturally adapting the instrument developed by the Centers for Disease Control and Prevention and the instrument developed by Basen-Engquist et al. The psychometric properties of the knowledge section were acceptable and there was partial support for the four-factor attitudinal model underlying the CDC instrument and the two-factor model underlying the Basen-Engquist et al. instrument. Additional studies with Spanish-speaking populations (either in the United States or Latin America) are needed to evaluate the generalizability of the present results.

  2. A comprehensive approach to psychometric assessment of instruments used in dementia educational interventions for health professionals: a cross-sectional study.

    PubMed

    Wang, Yao; Xiao, Lily Dongxia; He, Guo-Ping

    2015-02-01

    Suboptimal care for people with dementia in hospital settings has been reported and is attributed to the lack of knowledge and inadequate attitudes in dementia care among health professionals. Educational interventions have been widely used to improve care outcomes; however, Chinese-language instruments used in dementia educational interventions for health professionals are lacking. The aims of this study were to select, translate and evaluate instruments used in dementia educational interventions for Chinese health professionals in acute-care hospitals. A cross-sectional study design was used. A modified stratified random sampling approach was used to recruit 442 participants from different levels of hospitals in Changsha, China. Dementia care competence was used as a framework for the selection and evaluation of the Alzheimer's Disease Knowledge Scale and the Dementia Care Attitudes Scale for health professionals in the study. These two scales were translated into Chinese using a forward and back translation method. Content validity, test-retest reliability and internal consistency were assessed. Construct validity was tested using exploratory factor analysis. Known-group validity was established by comparing scores of the Alzheimer's Disease Knowledge Scale and the Dementia Care Attitudes Scale in two sub-groups. A person-centred care scale was utilised as a gold standard to establish concurrent validity of these two scales. Results demonstrated acceptable content validity, internal consistency, test-retest reliability and concurrent validity. Exploratory factor analysis presented a single-factor structure of the Chinese Alzheimer's Disease Knowledge Scale and a two-factor structure of the Chinese Dementia Care Attitudes Scale, supporting the conceptual dimensions of the original scales. The Chinese Alzheimer's Disease Knowledge Scale and Chinese Dementia Care Attitudes Scale demonstrated known-group validity, evidenced by significantly higher scores in the sub-group with longer work experience compared with the sub-group with less work experience. The use of dementia care competence as a framework to inform the selection and evaluation of instruments used in dementia educational interventions for health professionals has wide applicability in other areas. The results support the Chinese Alzheimer's Disease Knowledge Scale and the Chinese Dementia Care Attitudes Scale as reliable and valid instruments for health professionals in acute-care settings. Copyright © 2014 Elsevier Ltd. All rights reserved.
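
    The known-group comparison described above (higher scores expected in the more experienced sub-group) can be illustrated with a minimal sketch; the group sizes, score distributions, and cut-off for "longer work experience" below are simulated placeholders, not the study's data.

      # Minimal sketch of a known-group validity check: compare scale scores
      # between two predefined subgroups with an independent-samples t-test.
      # All values below are simulated placeholders, not the study's results.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Hypothetical total scores on a knowledge scale for two experience groups
      longer_experience = rng.normal(loc=22, scale=3, size=60)
      shorter_experience = rng.normal(loc=20, scale=3, size=60)

      t_stat, p_value = stats.ttest_ind(longer_experience, shorter_experience)
      print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
      # Known-group validity is supported if the more experienced group scores
      # significantly higher, as the study reports.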

  3. Analysis of corrections to the eikonal approximation

    NASA Astrophysics Data System (ADS)

    Hebborn, C.; Capel, P.

    2017-11-01

    Various corrections to the eikonal approximation are studied for two- and three-body nuclear collisions, with the goal of extending the range of validity of this approximation down to beam energies of 10 MeV/nucleon. Wallace's correction does not much improve the elastic-scattering cross sections obtained with the usual eikonal approximation. By contrast, a semiclassical approximation that replaces the impact parameter with a complex distance of closest approach, computed with the projectile-target optical potential, efficiently corrects the eikonal approximation. This opens the possibility of analyzing data measured down to 10 MeV/nucleon within eikonal-like reaction models.
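
    For orientation, the following is a minimal sketch of the plain (uncorrected) eikonal elastic amplitude that such corrections build on, assuming a simple complex Woods-Saxon optical potential; the potential parameters, beam energy, and reduced mass are illustrative placeholders, and the corrections discussed in the paper (Wallace's correction, the complex distance of closest approach) are not implemented here.

      # Sketch of the standard eikonal elastic amplitude for a complex optical
      # potential: chi(b) = -(1/(hbar v)) * Int V(sqrt(b^2+z^2)) dz and
      # f(theta) = -i k * Int b J0(qb) (exp(i chi(b)) - 1) db.
      import numpy as np
      from scipy.special import j0
      from scipy.integrate import simpson

      hbar_c = 197.327                        # MeV fm
      mu_c2 = 938.0                           # reduced mass (MeV), nucleon on heavy target
      E = 50.0                                # illustrative beam energy (MeV)
      k = np.sqrt(2.0 * mu_c2 * E) / hbar_c   # wave number (fm^-1), nonrelativistic
      v = hbar_c * k / mu_c2                  # projectile velocity (units of c)

      def optical_potential(r):
          """Illustrative complex Woods-Saxon optical potential (MeV)."""
          V0, W0, R0, a = -50.0, -20.0, 4.5, 0.65
          return (V0 + 1j * W0) / (1.0 + np.exp((r - R0) / a))

      def eikonal_phase(b):
          z = np.linspace(-40.0, 40.0, 2001)
          return -simpson(optical_potential(np.sqrt(b**2 + z**2)), x=z) / (hbar_c * v)

      def elastic_amplitude(theta):
          b = np.linspace(1e-3, 30.0, 600)
          chi = np.array([eikonal_phase(bi) for bi in b])
          q = 2.0 * k * np.sin(theta / 2.0)
          return -1j * k * simpson(b * j0(q * b) * (np.exp(1j * chi) - 1.0), x=b)

      theta = np.radians(10.0)
      print("dsigma/dOmega ~", abs(elastic_amplitude(theta))**2, "fm^2/sr")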

  4. The size effects upon shock plastic compression of nanocrystals

    NASA Astrophysics Data System (ADS)

    Malygin, G. A.; Klyavin, O. V.

    2017-10-01

    For the first time, a theoretical analysis of size effects upon the shock plastic compression of nanocrystals is carried out in the context of a dislocation kinetic approach, based on the equations and relationships of dislocation kinetics. The yield point of the crystals τy is established as a quantitative function of their cross-section size D and the shock strain rate ε̇, scaling as τy ∝ ε̇^(2/3)D. This dependence is valid in the case of elastic stress relaxation through the emission of dislocations from single-pole Frank-Read sources near the crystal surface.

  5. A multi-frequency iterative imaging method for discontinuous inverse medium problem

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Feng, Lixin

    2018-06-01

    The inverse medium problem with a discontinuous refractive index is a challenging class of inverse problem. We employ primal-dual theory and fast solution of integral equations to propose a new iterative imaging method. The regularization parameter is selected by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented that proceeds from low to high frequency. We also discuss an initial-guess selection strategy based on semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.
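
    As a rough illustration of how generalized cross-validation (GCV) picks a regularization parameter, the sketch below applies the standard GCV score to a synthetic Tikhonov-regularized linear problem; the operator, data, and noise level are placeholders, not the scattering problem of the paper.

      # Minimal sketch of generalized cross-validation for Tikhonov regularization
      # of K x = y: choose lambda minimizing
      # GCV(lam) = n * ||(I - A(lam)) y||^2 / trace(I - A(lam))^2,
      # where A(lam) = K (K^T K + lam I)^(-1) K^T is the influence matrix.
      import numpy as np

      rng = np.random.default_rng(1)
      n, m = 80, 60
      K = rng.normal(size=(n, m)) @ np.diag(1.0 / np.arange(1, m + 1))  # ill-conditioned
      x_true = rng.normal(size=m)
      y = K @ x_true + 0.01 * rng.normal(size=n)

      def gcv_score(lam):
          A = K @ np.linalg.solve(K.T @ K + lam * np.eye(m), K.T)
          residual = (np.eye(n) - A) @ y
          return (n * residual @ residual) / np.trace(np.eye(n) - A) ** 2

      lams = np.logspace(-8, 0, 50)
      scores = [gcv_score(lam) for lam in lams]
      best_lam = lams[int(np.argmin(scores))]
      print("GCV-selected regularization parameter:", best_lam)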

  6. FIRE: an SPSS program for variable selection in multiple linear regression analysis via the relative importance of predictors.

    PubMed

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2011-03-01

    We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
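
    The general workflow can be illustrated with a short Python sketch (not the FIRE program itself, which is SPSS syntax): split the data, rank predictors on the training half by a simple relative-importance measure, and assess the retained model on the holdout. The importance measure and retained-predictor count below are illustrative assumptions.

      # Illustrative sketch of the split -> select -> assess workflow.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 6))                               # hypothetical predictors
      y = 2 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=200)      # hypothetical criterion

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

      # Rank predictors by absolute standardized coefficient (one simple
      # importance measure; the published program uses more refined indices).
      model = LinearRegression().fit(X_train, y_train)
      importance = np.abs(model.coef_ * X_train.std(axis=0))
      keep = np.argsort(importance)[::-1][:2]                     # retain the top predictors

      final = LinearRegression().fit(X_train[:, keep], y_train)
      print("retained predictors:", keep,
            "holdout R^2:", r2_score(y_test, final.predict(X_test[:, keep])))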

  7. Quantitative structure-activity relationships by neural networks and inductive logic programming. II. The inhibition of dihydrofolate reductase by triazines

    NASA Astrophysics Data System (ADS)

    Hirst, Jonathan D.; King, Ross D.; Sternberg, Michael J. E.

    1994-08-01

    One of the largest available data sets for developing a quantitative structure-activity relationship (QSAR) — the inhibition of dihydrofolate reductase (DHFR) by 2,4-diamino-6,6-dimethyl-5-phenyl-dihydrotriazine derivatives — has been used for a sixfold cross-validation trial of neural networks, inductive logic programming (ILP) and linear regression. No statistically significant difference was found between the predictive capabilities of the methods. However, the representation of molecules by attributes, which is integral to the ILP approach, provides understandable rules about drug-receptor interactions.
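
    A sixfold cross-validation comparison of modeling methods can be sketched as below; the descriptor matrix, activities, and model settings are random placeholders rather than the DHFR/triazine data, and inductive logic programming is omitted.

      # Minimal sketch of a sixfold cross-validation trial comparing a linear
      # model with a small neural network on the same folds.
      import numpy as np
      from sklearn.model_selection import KFold, cross_val_score
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(3)
      X = rng.normal(size=(120, 10))                              # hypothetical descriptors
      y = X @ rng.normal(size=10) + 0.3 * rng.normal(size=120)    # hypothetical activities

      cv = KFold(n_splits=6, shuffle=True, random_state=0)
      for name, model in [("linear regression", LinearRegression()),
                          ("neural network", MLPRegressor(hidden_layer_sizes=(8,),
                                                          max_iter=2000, random_state=0))]:
          scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
          print(f"{name}: mean cross-validated R^2 = {scores.mean():.3f}")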

  8. Three-dimensional quantitative structure-activity relationship study on antioxidant capacity of curcumin analogues

    NASA Astrophysics Data System (ADS)

    Chen, Bohong; Zhu, Zhibo; Chen, Min; Dong, Wenqi; Li, Zhen

    2014-03-01

    A comparative molecular similarity indices analysis (CoMSIA) was performed on a set of 27 curcumin-like diarylpentanoid analogues with radical scavenging activities. A significant cross-validated correlation coefficient Q2 (0.784) and SEP (0.042) were obtained for CoMSIA, indicating the statistical significance of the correlation. Further, we adopted a rational approach to the selection of substituents at various positions in our scaffold and identified the favored and disfavored regions for enhanced antioxidative activity. The results have been used as a guide to design compounds that, potentially, have better activity against oxidative damage.
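
    The cross-validated Q2 reported above is conventionally computed as 1 - PRESS/TSS from leave-one-out predictions of a PLS model; the sketch below shows that calculation on simulated data (the field columns, activities, and number of PLS components are placeholders, not the CoMSIA model).

      # Minimal sketch of leave-one-out cross-validated Q2 = 1 - PRESS/TSS.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import LeaveOneOut

      rng = np.random.default_rng(4)
      X = rng.normal(size=(27, 50))                                 # hypothetical field columns
      y = X[:, :3] @ np.array([1.0, -0.5, 0.8]) + 0.2 * rng.normal(size=27)

      press = 0.0
      for train, test in LeaveOneOut().split(X):
          model = PLSRegression(n_components=3).fit(X[train], y[train])
          press += ((y[test] - model.predict(X[test]).ravel()) ** 2).sum()

      q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
      print(f"leave-one-out Q2 = {q2:.3f}")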

  9. Change of electric dipole moment in charge transfer transitions of ferrocene oligomers studied by ultrafast two-photon absorption

    NASA Astrophysics Data System (ADS)

    Mikhaylov, Alexander; Arias, Eduardo; Moggio, Ivana; Ziolo, Ronald; Uudsemaa, Merle; Trummal, Aleksander; Cooper, Thomas; Rebane, Aleksander

    2017-02-01

    Changes of the permanent electric dipole moment in the lower-energy charge transfer transitions for a series of symmetrical and non-symmetrical ferrocene-phenyleneethynylene oligomers were studied by measuring the corresponding femtosecond two-photon absorption cross section spectra, and were determined to be in the range Δμ = 3 - 10 D. Quantum-chemical calculations of Δμ for the non-symmetrical oligomers show good quantitative agreement with the experimental results, thus validating two-photon absorption spectroscopy as a viable experimental approach for studying the electrostatic properties of organometallics and other charge transfer systems.

  10. Spanish translation, cross-cultural adaptation, and validation of the Questionnaire for Diabetes-Related Foot Disease (Q-DFD)

    PubMed Central

    Castillo-Tandazo, Wilson; Flores-Fortty, Adolfo; Feraud, Lourdes; Tettamanti, Daniel

    2013-01-01

    Purpose To translate, cross-culturally adapt, and validate the Questionnaire for Diabetes-Related Foot Disease (Q-DFD), originally created and validated in Australia, for its use in Spanish-speaking patients with diabetes mellitus. Patients and methods The translation and cross-cultural adaptation were based on international guidelines. The Spanish version of the survey was applied to a community-based (sample A) and a hospital clinic-based sample (samples B and C). Samples A and B were used to determine criterion and construct validity comparing the survey findings with clinical evaluation and medical records, respectively; while sample C was used to determine intra- and inter-rater reliability. Results After completing the rigorous translation process, only four items were considered problematic and required a new translation. In total, 127 patients were included in the validation study: 76 to determine criterion and construct validity and 41 to establish intra- and inter-rater reliability. For an overall diagnosis of diabetes-related foot disease, a substantial level of agreement was obtained when we compared the Q-DFD with the clinical assessment (kappa 0.77, sensitivity 80.4%, specificity 91.5%, positive likelihood ratio [LR+] 9.46, negative likelihood ratio [LR−] 0.21); while an almost perfect level of agreement was obtained when it was compared with medical records (kappa 0.88, sensitivity 87%, specificity 97%, LR+ 29.0, LR− 0.13). Survey reliability showed substantial levels of agreement, with kappa scores of 0.63 and 0.73 for intra- and inter-rater reliability, respectively. Conclusion The translated and cross-culturally adapted Q-DFD showed good psychometric properties (validity, reproducibility, and reliability) that allow its use in Spanish-speaking diabetic populations. PMID:24039434
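
    The agreement statistics quoted above (Cohen's kappa, sensitivity, specificity, LR+ and LR-) are all derived from a 2x2 table of questionnaire results against a reference standard; the counts in the sketch below are made up for illustration, not the study's data.

      # Minimal sketch of diagnostic agreement statistics from a 2x2 table.
      # Hypothetical counts: rows = questionnaire (+/-), columns = reference (+/-).
      tp, fp, fn, tn = 30, 4, 6, 40

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      lr_pos = sensitivity / (1 - specificity)
      lr_neg = (1 - sensitivity) / specificity

      n = tp + fp + fn + tn
      p_observed = (tp + tn) / n
      p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
      kappa = (p_observed - p_expected) / (1 - p_expected)

      print(f"kappa={kappa:.2f} sens={sensitivity:.1%} spec={specificity:.1%} "
            f"LR+={lr_pos:.2f} LR-={lr_neg:.2f}")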

  11. Psychometric validation of a condom self-efficacy scale in Korean.

    PubMed

    Cha, EunSeok; Kim, Kevin H; Burke, Lora E

    2008-01-01

    When an instrument is translated for use in cross-cultural research, it needs to account for cultural factors without distorting the psychometric properties of the instrument. The aim was to validate the psychometric properties of the condom self-efficacy scale (CSE), originally developed for American adolescents and young adults, after translating the scale into Korean (CSE-K), to determine its suitability for cross-cultural research among Korean college students. A cross-sectional, correlational design was used with an exploratory survey methodology based on self-report questionnaires. A convenience sample of 351 students, aged 18 to 25 years, was recruited at a university in Seoul, Korea. The participants completed the CSE-K and the intention of condom use scales after they were translated from English to Korean using a combined translation technique. A demographic and sex history questionnaire, which included an item to assess actual condom use, was also administered. Mean, variance, reliability, criterion validity, and factorial validity (using confirmatory factor analysis) were assessed for the CSE-K. Norms for the CSE-K were similar, but not identical, to norms for the English version. The means of all three subscales were lower for the CSE-K than for the original CSE; however, the obtained variance of the CSE-K was roughly similar to that of the original CSE. The Cronbach's alpha coefficient for the total scale was higher for the CSE-K (.91) than for either the original CSE (.85) or the Thai version (.85). Criterion validity and construct validity of the CSE-K were confirmed. The CSE-K was a reliable and valid scale for measuring condom self-efficacy among Korean college students. The findings suggest that the CSE is an appropriate instrument for cross-cultural research on sexual behavior in adolescents and young adults.
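
    The Cronbach's alpha coefficient reported above can be computed directly from an item-response matrix; the sketch below uses simulated responses and a hypothetical item count, not the CSE-K data.

      # Minimal sketch of Cronbach's alpha:
      # alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
      import numpy as np

      rng = np.random.default_rng(5)
      ability = rng.normal(size=(351, 1))                       # hypothetical latent trait
      items = ability + 0.8 * rng.normal(size=(351, 14))        # 14 hypothetical items

      k = items.shape[1]
      item_variances = items.var(axis=0, ddof=1)
      total_variance = items.sum(axis=1).var(ddof=1)
      alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
      print(f"Cronbach's alpha = {alpha:.2f}")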

  12. Translation, Cross-cultural Adaptation and Psychometric Validation of the Korean-Language Cardiac Rehabilitation Barriers Scale (CRBS-K)

    PubMed Central

    2017-01-01

    Objective To perform a translation and cross-cultural adaptation of the Cardiac Rehabilitation Barriers Scale (CRBS) for use in Korea, followed by psychometric validation. The CRBS was developed to assess patients' perception of the degree to which patient, provider and health system-level barriers affect their cardiac rehabilitation (CR) participation. Methods The CRBS consists of 21 items (barriers to adherence) rated on a 5-point Likert scale. The first phase was to translate and cross-culturally adapt the CRBS to the Korean language. After back-translation, both versions were reviewed by a committee. The face validity was assessed through semi-structured interviews in a sample of Korean patients (n=53) with a history of acute myocardial infarction who did not participate in CR. The second phase was to assess the construct and criterion validity of the Korean translation as well as internal reliability, through administration of the translated version in 104 patients, principal component analysis with varimax rotation and cross-referencing against CR use, respectively. Results The length, readability, and clarity of the questionnaire were rated well, demonstrating face validity. Analysis revealed a six-factor solution, demonstrating construct validity. Cronbach's alpha was greater than 0.65. Barriers rated highest included not knowing about CR and not being contacted by a program. The mean CRBS score was significantly higher among non-attendees (2.71±0.26) than CR attendees (2.51±0.18) (p<0.01). Conclusion The Korean version of the CRBS has demonstrated face, content and criterion validity, suggesting it may be useful for assessing barriers to CR utilization in Korea.
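
    A principal component analysis with varimax rotation, as used above for construct validity, can be sketched as follows; the 104 x 21 Likert response matrix is simulated and the rotation is a generic Kaiser varimax implementation, so the output is not the study's six-factor solution.

      # Minimal sketch of PCA followed by varimax rotation of the loading matrix.
      import numpy as np
      from sklearn.decomposition import PCA

      def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
          """Kaiser's varimax rotation of a loading matrix."""
          p, k = loadings.shape
          rotation = np.eye(k)
          objective = 0.0
          for _ in range(max_iter):
              rotated = loadings @ rotation
              u, s, vt = np.linalg.svd(loadings.T @ (rotated ** 3 - (gamma / p) *
                                       rotated @ np.diag((rotated ** 2).sum(axis=0))))
              rotation = u @ vt
              if s.sum() < objective * (1 + tol):
                  break
              objective = s.sum()
          return loadings @ rotation

      rng = np.random.default_rng(6)
      responses = rng.integers(1, 6, size=(104, 21)).astype(float)   # hypothetical Likert data
      z = (responses - responses.mean(axis=0)) / responses.std(axis=0)

      pca = PCA(n_components=6).fit(z)                               # six components retained
      loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
      print(np.round(varimax(loadings), 2))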

  13. Cross-Cultural Applicability of the Montreal Cognitive Assessment (MoCA): A Systematic Review.

    PubMed

    O'Driscoll, Ciarán; Shaikh, Madiha

    2017-01-01

    The Montreal Cognitive Assessment (MoCA) is widely used to screen for mild cognitive impairment (MCI). While many versions are available, the cross-cultural validity of the assessment has not been explored sufficiently. We aimed to interrogate the validity of the MoCA in a cross-cultural context, in differentiating MCI from normal controls (NC), and to identify cutoffs and adjustments for age and education where possible. This review sourced a wide range of studies, including case-control studies. In addition, we report findings for differentiating dementias from NC and MCI from dementias; however, these were not considered an appropriate use of the MoCA. Because heterogeneity was assumed across studies, a meta-analysis was not conducted. Quality ratings, forest plots of validated studies (sensitivity and specificity) with covariates (suggested cutoffs, age, education and country), and a summary receiver operating characteristic curve are presented. The results showed a wide range of suggested cutoffs for MCI cross-culturally, with sensitivity and specificity varying from low to high. Poor methodological rigor appears to have affected the reported accuracy and validity of the MoCA. The review highlights the need for cross-cultural considerations when using the MoCA and for recognizing it as a screening rather than a diagnostic tool. Appropriate cutoffs and point adjustments for education are suggested.

  14. Cross-cultural validation of the revised temperament and character inventory in the Bulgarian language.

    PubMed

    Tilov, Boris; Dimitrova, Donka; Stoykova, Maria; Tornjova, Bianka; Foreva, Gergana; Stoyanov, Drozdstoj

    2012-12-01

    Health-care professions have long been considered prone to work-related stress, yet recent research in Bulgaria indicates alarmingly high levels of burnout. Cloninger's inventory is used to analyse and evaluate the correlation between personality characteristics and the degree of burnout syndrome manifestation among at-risk categories of health-care professionals. The primary goal of this study was to test the conceptual validity and cross-cultural applicability of the revised TCI (TCI-R), developed in the United States, in a culturally, socially and economically diverse setting. Linguistic validation, test-retest studies, and statistical and expert analyses were performed to assess the cross-cultural applicability of the revised Cloninger temperament and character inventory in Bulgarian, as well as its reliability, internal consistency and construct validity. The overall internal consistency of the TCI-R and its scales, together with the interscale and test-retest correlations, indicates that the translated version of the questionnaire is acceptable and cross-culturally applicable for the purposes of studying organizational stress and burnout risk in health-care professionals. In general, the cross-cultural adaptation process, even when carried out rigorously, does not always lead to the best target version; this suggests it would be useful to develop new scales specific to each culture and, at the same time, to consider trans-cultural adaptation. © 2012 Blackwell Publishing Ltd.

  15. College Students' Perceptions of Professor/Instructor Bullying: Questionnaire Development and Psychometric Properties.

    PubMed

    Marraccini, Marisa E; Weyandt, Lisa L; Rossi, Joseph S

    2015-01-01

    This study developed and examined the psychometric properties of a newly formed measure designed to assess professor/instructor bullying, as well as teacher bullying occurring prior to college. Additionally, prevalence of instructor bullying and characteristics related to victims of instructor bullying were examined. Participants were 337 college students recruited in 2012 from a northeastern university. An online questionnaire was administered to college students. A split-half, cross-validation approach was employed for measurement development. The measure demonstrated strong criterion validity and internal consistency. Approximately half of students reported witnessing professor/instructor bullying and 18% reported being bullied by a professor/instructor. Report of teacher bullying occurring prior to college was related to professor/instructor bullying in college, and sex was a moderating variable. College students perceive instructor bullying as occurring but may not know how to properly address it. Prevention efforts should be made by university administrators, faculty, and staff.
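
    The split-half, cross-validation approach mentioned above amounts to developing the measure on one random half of respondents and checking it on the other; the sketch below illustrates that idea with simulated survey items, a simple item-total selection rule, and a reliability check on the confirmation half, all of which are assumptions for illustration rather than the study's procedure.

      # Minimal sketch of split-half cross-validation for measure development.
      import numpy as np
      from sklearn.model_selection import train_test_split

      def cronbach_alpha(items):
          k = items.shape[1]
          return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                                  / items.sum(axis=1).var(ddof=1))

      rng = np.random.default_rng(7)
      responses = rng.integers(1, 6, size=(337, 20)).astype(float)   # hypothetical survey items

      develop, confirm = train_test_split(responses, test_size=0.5, random_state=0)

      # Development half: keep items that correlate with the total score.
      total = develop.sum(axis=1)
      item_total_r = np.array([np.corrcoef(develop[:, j], total)[0, 1]
                               for j in range(develop.shape[1])])
      retained = np.where(item_total_r > 0.15)[0]

      # Confirmation half: re-check internal consistency of the retained items.
      print("retained items:", retained)
      print("alpha on confirmation half:", round(cronbach_alpha(confirm[:, retained]), 2))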

  16. Automated determination of fibrillar structures by simultaneous model building and fiber diffraction refinement.

    PubMed

    Potrzebowski, Wojciech; André, Ingemar

    2015-07-01

    For highly oriented fibrillar molecules, three-dimensional structures can often be determined from X-ray fiber diffraction data. However, because of limited information content, structure determination and validation can be challenging. We demonstrate that automated structure determination of protein fibers can be achieved by guiding the building of macromolecular models with fiber diffraction data. We illustrate the power of our approach by determining the structures of six bacteriophage viruses de novo using fiber diffraction data alone and together with solid-state NMR data. Furthermore, we demonstrate the feasibility of molecular replacement from monomeric and fibrillar templates by solving the structure of a plant virus using homology modeling and protein-protein docking. The generated models explain the experimental data to the same degree as deposited reference structures but with improved structural quality. We also developed a cross-validation method for model selection. The results highlight the power of fiber diffraction data as structural constraints.

  17. Initial Validation of Robotic Operations for In-Space Assembly of a Large Solar Electric Propulsion Transport Vehicle

    NASA Technical Reports Server (NTRS)

    Komendera, Erik E.; Dorsey, John T.

    2017-01-01

    Developing a capability for the assembly of large space structures has the potential to increase the capabilities and performance of future space missions and spacecraft while reducing their cost. One such application is a megawatt-class solar electric propulsion (SEP) tug, representing a critical transportation capability for NASA lunar, Mars, and solar system exploration missions. A series of robotic assembly experiments was recently completed at Langley Research Center (LaRC) that demonstrates most of the assembly steps for the SEP tug concept. The assembly experiments used a core set of robotic capabilities: long-reach manipulation and dexterous manipulation. This paper describes cross-cutting capabilities and technologies for in-space assembly (ISA), applies the ISA approach to a SEP tug, describes the design and development of two assembly demonstration concepts, and summarizes results of two sets of assembly experiments that validate the SEP tug assembly steps.

  18. Target selection for a hypervelocity asteroid intercept vehicle flight validation mission

    NASA Astrophysics Data System (ADS)

    Wagner, Sam; Wie, Bong; Barbee, Brent W.

    2015-02-01

    Asteroids and comets have collided with the Earth in the past and will do so again in the future. Throughout Earth's history these collisions have played a significant role in shaping Earth's biological and geological histories. The planetary defense community has been examining a variety of options for mitigating the impact threat of asteroids and comets that approach or cross Earth's orbit, known as near-Earth objects (NEOs). This paper discusses the preliminary study results of selecting small (100-m class) NEO targets and the mission analysis and design trade-offs for validating the effectiveness of a Hypervelocity Asteroid Intercept Vehicle (HAIV) concept, currently being investigated for a NIAC (NASA Innovative Advanced Concepts) Phase 2 study. In particular, this paper focuses on the mission analysis and design for single-spacecraft direct impact trajectories, as well as several mission types that enable a secondary rendezvous spacecraft to observe the HAIV impact and evaluate its effectiveness.

  19. Measurement Scales of Suicidal Ideation and Attitudes: A Systematic Review Article

    PubMed Central

    Ghasemi, Parvin; Shaghaghi, Abdolreza; Allahverdipour, Hamid

    2015-01-01

    Background: The main aim of this study was to accumulate research evidence that introduces validated scales to measure suicidal attitudes and ideation and to provide an empirical framework for adopting a relevant assessment tool in studies on suicide and suicidal behaviors. Methods: Medical Subject Headings (MeSH) terms were used to search Ovid Medline, PROQUEST, Wiley online library, Science Direct and PubMed for articles published in English that reported application of a scale to measure suicidal attitudes and ideation from January 1974 onward. Results: Fourteen suicidal attitude scales and 15 suicidal ideation scales were identified in this systematic review. No gold-standard approach was recognized for studying suicide-related attitudes and ideation. Conclusion: Special focus on generally agreed dimensions of suicidal ideation and attitudes, and cross-cultural validation of the introduced scales so that they are applicable in different ethnic and socially diverse populations, could be a promising area of research for scholars. PMID:26634193

  20. Validation in the Absence of Observed Events

    DOE PAGES

    Lathrop, John; Ezell, Barry

    2015-07-22

    Here our paper addresses the problem of validating models in the absence of observed events, in the area of Weapons of Mass Destruction terrorism risk assessment. We address that problem with a broadened definition of “Validation,” based on “backing up” to the reason why modelers and decision makers seek validation, and from that basis re-define validation as testing how well the model can advise decision makers in terrorism risk management decisions. We develop that into two conditions: Validation must be based on cues available in the observable world; and it must focus on what can be done to affect that observable world, i.e. risk management. That in turn leads to two foci: 1.) the risk generating process, 2.) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best use of available data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests -- Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three key validation tests from the DOD literature: Is the model a correct representation of the simuland? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful?
