Sample records for validation set generation

  1. Automatic Generation of Validated Specific Epitope Sets.

    PubMed

    Carrasco Pro, Sebastian; Sidney, John; Paul, Sinu; Lindestam Arlehamn, Cecilia; Weiskopf, Daniela; Peters, Bjoern; Sette, Alessandro

    2015-01-01

    Accurate measurement of B and T cell responses is a valuable tool to study autoimmunity, allergies, immunity to pathogens, and host-pathogen interactions, and to assist in the design and evaluation of T cell vaccines and immunotherapies. In this context, it is desirable to have a method for selecting validated reference sets of epitopes to allow detection of T and B cells. However, the ever-growing information contained in the Immune Epitope Database (IEDB) and the differences in quality and subjects studied between epitope assays make this task complicated. In this study, we develop a novel method to automatically select reference epitope sets according to a categorization system employed by the IEDB. From the sets generated, three epitope sets (EBV, mycobacteria and dengue) were experimentally validated by detection of T cell reactivity ex vivo in human donors. Furthermore, a web application that will potentially be implemented in the IEDB was created to allow users to generate customized epitope sets.

  2. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  3. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  4. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  5. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare Department of Health and Human Services ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  6. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  7. Situating Standard Setting within Argument-Based Validity

    ERIC Educational Resources Information Center

    Papageorgiou, Spiros; Tannenbaum, Richard J.

    2016-01-01

    Although there has been substantial work on argument-based approaches to validation as well as standard-setting methodologies, it might not always be clear how standard setting fits into argument-based validity. The purpose of this article is to address this lack in the literature, with a specific focus on topics related to argument-based…

  8. Validation of two (parametric vs non-parametric) daily weather generators

    NASA Astrophysics Data System (ADS)

    Dubrovsky, M.; Skalak, P.

    2015-12-01

    As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most popular downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. To name their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run at various time steps and for multiple weather variables (the generators reproduce the correlations among variables), (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a first-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumptions on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of duration of hot/cold/dry/wet spells); (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series
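
    A minimal sketch of the parametric structure described above (a first-order Markov chain for precipitation occurrence, Gamma-distributed wet-day amounts, and an AR(1) process for temperature). All parameter values are illustrative and are not those of M&Rfi:

```python
import numpy as np

rng = np.random.default_rng(42)

def generate_daily_weather(n_days, p_wd=0.3, p_ww=0.6,
                           gamma_shape=0.8, gamma_scale=6.0,
                           t_mean=10.0, t_ar1=0.7, t_sigma=2.0):
    """Toy parametric daily weather generator (illustrative parameters).

    p_wd: P(wet today | dry yesterday); p_ww: P(wet today | wet yesterday).
    Wet-day precipitation ~ Gamma; temperature follows an AR(1) process.
    """
    wet = np.zeros(n_days, dtype=bool)
    precip = np.zeros(n_days)
    temp = np.full(n_days, t_mean)
    for d in range(1, n_days):
        wet[d] = rng.random() < (p_ww if wet[d - 1] else p_wd)
        if wet[d]:
            precip[d] = rng.gamma(gamma_shape, gamma_scale)
        # AR(1) anomaly around the mean temperature
        temp[d] = t_mean + t_ar1 * (temp[d - 1] - t_mean) + rng.normal(0, t_sigma)
    return wet, precip, temp

def max_spell(mask):
    """Longest run of consecutive True values (e.g., dry days)."""
    best = cur = 0
    for m in mask:
        cur = cur + 1 if m else 0
        best = max(best, cur)
    return best

wet, precip, temp = generate_daily_weather(365 * 30)
print("wet-day fraction:", round(wet.mean(), 3))
print("max dry spell (days):", max_spell(~wet))
```

    Validating such a generator amounts to comparing spell and extreme statistics like these between synthetic and observed series.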

  9. Parametric vs. non-parametric daily weather generator: validation and comparison

    NASA Astrophysics Data System (ADS)

    Dubrovsky, Martin

    2016-04-01

    As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most popular downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. To name their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run at various time steps and for multiple weather variables (the generators reproduce the correlations among variables), (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a first-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumptions on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of duration of hot/cold/dry/wet spells); (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series

  10. Generator Set Environmental and Stability Testing

    DTIC Science & Technology

    2015-03-01

    Generator Set Environmental and Stability Testing, Interim Report TFLRF No. 460, by Gregory A. Hansen, Edwin A. … Contract number: W56HZV-09-C-0100.

  11. An Ethical Issue Scale for Community Pharmacy Setting (EISP): Development and Validation.

    PubMed

    Crnjanski, Tatjana; Krajnovic, Dusanka; Tadic, Ivana; Stojkov, Svetlana; Savic, Mirko

    2016-04-01

    Many problems that arise when providing pharmacy services may contain some ethical components, and the aims of this study were to develop and validate a scale that could assess the difficulty of ethical issues, as well as the frequency of their occurrence in the everyday practice of community pharmacists. Development and validation of the scale was conducted in three phases: (1) generating items for the initial survey instrument after qualitative analysis; (2) defining the design and format of the instrument; (3) validation of the instrument. The constructed Ethical Issue Scale for the community pharmacy setting has two parts containing the same 16 items, assessing difficulty and frequency respectively. The results of the 171 completely filled-out scales were analyzed (response rate 74.89%). The Cronbach's α value of the part of the instrument that examines the difficulty of ethical situations was 0.83, and that of the part that examines the frequency of ethical situations was 0.84. Test-retest reliability for both parts of the instrument was satisfactory, with all intraclass correlation coefficient (ICC) values above 0.6 (for the part that examines severity, ICC = 0.809; for the part that examines frequency, ICC = 0.929). The 16-item scale, as a self-assessment tool, demonstrated a high degree of content, criterion, and construct validity and test-retest reliability. The results support its use as a research tool to assess the difficulty and frequency of ethical issues in the community pharmacy setting. The validated scale needs to be further employed on a larger sample of pharmacists.
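
    As a reference for the reliability statistic quoted above: Cronbach's α for a k-item scale is α = k/(k−1) · (1 − Σ var(item)/var(total)). A minimal sketch on synthetic responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha; scores: rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Synthetic 16-item Likert-style responses driven by one latent trait
latent = rng.normal(size=(171, 1))
items = np.clip(np.round(3 + latent + rng.normal(0, 1, (171, 16))), 1, 5)
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```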

  12. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  13. Multiple Versus Single Set Validation of Multivariate Models to Avoid Mistakes.

    PubMed

    Harrington, Peter de Boves

    2018-01-02

    Validation of multivariate models is of current importance for a wide range of chemical applications. Although important, it is neglected. The common practice is to use a single external validation set for evaluation. This approach is deficient and may mislead investigators with results that are specific to the single validation set of data. In addition, no statistics are available regarding the precision of a derived figure of merit (FOM). A statistical approach using bootstrapped Latin partitions is advocated. This validation method makes efficient use of the data because each object is used once for validation. It was reviewed a decade earlier, but primarily for the optimization of chemometric models; this review presents the reasons it should be used for generalized statistical validation. Average FOMs with confidence intervals are reported, and powerful matched-sample statistics may be applied for comparing models and methods. Examples demonstrate the problems with single validation sets.
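
    A minimal sketch of the contrast being drawn, assuming "Latin partitions" means repeated disjoint partitioning in which every object is predicted exactly once per repetition; the averaged figure of merit then comes with an empirical confidence interval instead of a single number (synthetic data, simplified relative to the published method):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=1)

def repeated_partition_r2(X, y, n_parts=4, n_repeats=20):
    """Each repeat shuffles objects into n_parts disjoint folds, so every
    object is used exactly once for validation; returns per-repeat R^2."""
    foms = []
    for _ in range(n_repeats):
        idx = rng.permutation(len(y))
        y_pred = np.empty(len(y))
        for part in np.array_split(idx, n_parts):
            train = np.setdiff1d(idx, part)
            y_pred[part] = Ridge().fit(X[train], y[train]).predict(X[part])
        foms.append(r2_score(y, y_pred))
    return np.array(foms)

foms = repeated_partition_r2(X, y)
lo, hi = np.percentile(foms, [2.5, 97.5])
print(f"mean R^2 = {foms.mean():.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```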

  14. Using affective knowledge to generate and validate a set of emotion-related, action words.

    PubMed

    Portch, Emma; Havelka, Jelena; Brown, Charity; Giner-Sorolla, Roger

    2015-01-01

    Emotion concepts are built through situated experience. Abstract word meaning is grounded in this affective knowledge, giving words the potential to evoke emotional feelings and reactions (e.g., Vigliocco et al., 2009). In the present work we explore whether words differ in the extent to which they evoke 'specific' emotional knowledge. Using a categorical approach, in which an affective 'context' is created, it is possible to assess whether words proportionally activate knowledge relevant to different emotional states (e.g., 'sadness', 'anger', Stevenson, Mikels & James, 2007a). We argue that this method may be particularly effective when assessing the emotional meaning of action words (e.g., Schacht & Sommer, 2009). In study 1 we use a constrained feature generation task to derive a set of action words that participants associated with six basic emotional states (see full list in Appendix S1). Generation frequencies were taken to indicate the likelihood that the word would evoke emotional knowledge relevant to the state to which it had been paired. In study 2 a rating task was used to assess the strength of association between the six most frequently generated, or 'typical', action words and corresponding emotion labels. Participants were presented with a series of sentences, in which action words (typical and atypical) and labels were paired, e.g., "If you are feeling 'sad' how likely would you be to act in the following way?" … 'cry.' Findings suggest that typical associations were robust. Participants always gave higher ratings to typical vs. atypical action word and label pairings, even when (a) rating direction was manipulated (the label or verb appeared first in the sentence), and (b) the typical behaviours were to be performed by the rater themselves, or others. Our findings suggest that emotion-related action words vary in the extent to which they evoke knowledge relevant for different emotional states. When measuring affective grounding, it may then be

  15. Utility of NIST Whole-Genome Reference Materials for the Technical Validation of a Multigene Next-Generation Sequencing Test.

    PubMed

    Shum, Bennett O V; Henner, Ilya; Belluoccio, Daniele; Hinchcliffe, Marcus J

    2017-07-01

    The sensitivity and specificity of next-generation sequencing laboratory-developed tests (LDTs) are typically determined by an analyte-specific approach. Analyte-specific validations use disease-specific controls to assess an LDT's ability to detect known pathogenic variants. Alternatively, a methods-based approach can be used for LDT technical validations. Methods-focused validations do not use disease-specific controls but use benchmark reference DNA that contains known variants (benign, variants of unknown significance, and pathogenic) to assess the variant calling accuracy of a next-generation sequencing workflow. Recently, four whole-genome reference materials (RMs) from the National Institute of Standards and Technology (NIST) were released to standardize methods-based validations of next-generation sequencing panels across laboratories. We provide a practical method for using NIST RMs to validate multigene panels. We analyzed the utility of RMs in validating a novel newborn screening test that targets 70 genes, called NEO1. Despite the NIST RM variant truth set originating from multiple sequencing platforms, replicates, and library types, we discovered a 5.2% false-negative variant detection rate in the RM truth set genes that were assessed in our validation. We developed a strategy using complementary non-RM controls to demonstrate 99.6% sensitivity of the NEO1 test in detecting variants. Our findings have implications for laboratories or proficiency testing organizations using whole-genome NIST RMs for testing.
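
    A minimal sketch of the set comparison underlying such a methods-based validation: variant calls matched against a truth set by (chromosome, position, ref, alt), with sensitivity and the false-negative rate read off the overlap. All variant entries below are illustrative, not NEO1 or NIST data:

```python
# Variants keyed by (chromosome, position, ref, alt); all entries illustrative.
truth_set = {
    ("chr1", 155235252, "G", "A"),
    ("chr2", 166848646, "C", "T"),
    ("chr11", 2167875, "T", "C"),
    ("chr12", 103234252, "A", "G"),
}
called_set = {
    ("chr1", 155235252, "G", "A"),
    ("chr11", 2167875, "T", "C"),
    ("chr12", 103234252, "A", "G"),
    ("chr7", 117559590, "ATCT", "A"),  # call without truth-set support
}

true_positives = truth_set & called_set
false_negatives = truth_set - called_set
false_positives = called_set - truth_set

sensitivity = len(true_positives) / len(truth_set)
fn_rate = len(false_negatives) / len(truth_set)
print(f"sensitivity = {sensitivity:.1%}, false-negative rate = {fn_rate:.1%}")
print(f"calls lacking truth support (candidate FPs): {len(false_positives)}")
```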

  16. Integration of SimSET photon history generator in GATE for efficient Monte Carlo simulations of pinhole SPECT.

    PubMed

    Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J S; Tsui, Benjamin M W

    2008-07-01

    The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: the simulation system for emission tomography (SimSET) and the GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator/detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to approximately 10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data was also observed. In conclusion, the authors have successfully integrated the SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques.

  17. Automatic control system generation for robot design validation

    NASA Technical Reports Server (NTRS)

    Bacon, James A. (Inventor); English, James D. (Inventor)

    2012-01-01

    The specification and drawings present a new method, system, software product, and apparatus for generating a robotic validation system for a robot design. The robotic validation system for the robot design of a robotic system is automatically generated by converting the robot design into a generic robotic description using a predetermined format, then generating a control system from the generic robotic description, and finally updating robot design parameters of the robotic system with an analysis tool using both the generic robot description and the control system.

  18. Using affective knowledge to generate and validate a set of emotion-related, action words

    PubMed Central

    Havelka, Jelena; Brown, Charity; Giner-Sorolla, Roger

    2015-01-01

    Emotion concepts are built through situated experience. Abstract word meaning is grounded in this affective knowledge, giving words the potential to evoke emotional feelings and reactions (e.g., Vigliocco et al., 2009). In the present work we explore whether words differ in the extent to which they evoke ‘specific’ emotional knowledge. Using a categorical approach, in which an affective ‘context’ is created, it is possible to assess whether words proportionally activate knowledge relevant to different emotional states (e.g., ‘sadness’, ‘anger’, Stevenson, Mikels & James, 2007a). We argue that this method may be particularly effective when assessing the emotional meaning of action words (e.g., Schacht & Sommer, 2009). In study 1 we use a constrained feature generation task to derive a set of action words that participants associated with six basic emotional states (see full list in Appendix S1). Generation frequencies were taken to indicate the likelihood that the word would evoke emotional knowledge relevant to the state to which it had been paired. In study 2 a rating task was used to assess the strength of association between the six most frequently generated, or ‘typical’, action words and corresponding emotion labels. Participants were presented with a series of sentences, in which action words (typical and atypical) and labels were paired, e.g., “If you are feeling ‘sad’ how likely would you be to act in the following way?” … ‘cry.’ Findings suggest that typical associations were robust. Participants always gave higher ratings to typical vs. atypical action word and label pairings, even when (a) rating direction was manipulated (the label or verb appeared first in the sentence), and (b) the typical behaviours were to be performed by the rater themselves, or others. Our findings suggest that emotion-related action words vary in the extent to which they evoke knowledge relevant for different emotional states. When measuring

  19. Reliability and Validity of 10 Different Standard Setting Procedures.

    ERIC Educational Resources Information Center

    Halpin, Glennelle; Halpin, Gerald

    Research indicating that different cut-off points result from the use of different standard-setting techniques leaves decision makers with a disturbing dilemma: Which standard-setting method is best? This investigation of the reliability and validity of 10 different standard-setting approaches was designed to provide information that might help…

  20. Use of the recognition heuristic depends on the domain's recognition validity, not on the recognition validity of selected sets of objects.

    PubMed

    Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E

    2017-07-01

    According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, success, and thus usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity), or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.
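
    A toy simulation of the two quantities contrasted above, assuming recognition validity is the probability that, in a pair where exactly one object is recognized, the recognized object has the higher criterion value. Small selected sets then scatter around the domain's value:

```python
import numpy as np

rng = np.random.default_rng(7)

# Domain: criterion values, with recognition correlated with the criterion
n = 1000
criterion = rng.normal(size=n)
recognized = rng.random(n) < 1 / (1 + np.exp(-1.5 * criterion))

def recognition_validity(criterion, recognized):
    """P(recognized object has the higher criterion), over all pairs in
    which exactly one object is recognized."""
    rec, unrec = criterion[recognized], criterion[~recognized]
    return (rec[:, None] > unrec[None, :]).mean()

print("domain validity:", round(recognition_validity(criterion, recognized), 3))
for _ in range(3):  # small selected sets scatter around the domain value
    idx = rng.choice(n, size=30, replace=False)
    print("set validity:   ",
          round(recognition_validity(criterion[idx], recognized[idx]), 3))
```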

  21. Maximum unbiased validation (MUV) data sets for virtual screening based on PubChem bioactivity data.

    PubMed

    Rohrer, Sebastian G; Baumann, Knut

    2009-02-01

    Refined nearest neighbor analysis was recently introduced for the analysis of virtual screening benchmark data sets. It constitutes a technique from the field of spatial statistics and provides a mathematical framework for the nonparametric analysis of mapped point patterns. Here, refined nearest neighbor analysis is used to design benchmark data sets for virtual screening based on PubChem bioactivity data. A workflow is devised that purges unselective hits from data sets of compounds active against pharmaceutically relevant targets. Topological optimization using experimental design strategies, monitored by refined nearest neighbor analysis functions, is applied to generate corresponding data sets of actives and decoys that are unbiased with regard to analogue bias and artificial enrichment. These data sets provide a tool for Maximum Unbiased Validation (MUV) of virtual screening methods. The data sets and a software package implementing the MUV design workflow are freely available at http://www.pharmchem.tu-bs.de/lehre/baumann/MUV.html.
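
    A minimal sketch of the nearest-neighbour functions this kind of analysis rests on, assuming actives and decoys are points in a descriptor space: G(t) is the cumulative distribution of active-to-active nearest-neighbour distances, F(t) that of decoy-to-active distances, and a large gap between them at small t signals clumped, self-similar actives (analogue bias). Descriptor values are synthetic; the published workflow adds experimental-design optimization on top of statistics like these:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
actives = rng.normal(0.0, 1.0, size=(50, 10))   # illustrative descriptor vectors
decoys = rng.normal(0.0, 1.2, size=(500, 10))

tree = cKDTree(actives)
# G(t): distance of each active to its nearest other active
g_dist = tree.query(actives, k=2)[0][:, 1]      # k=2: first hit is the point itself
# F(t): distance of each decoy to its nearest active
f_dist = tree.query(decoys, k=1)[0]

ts = np.linspace(0, np.percentile(f_dist, 95), 20)
G = np.array([(g_dist <= t).mean() for t in ts])
F = np.array([(f_dist <= t).mean() for t in ts])
print("max G(t) - F(t):", round((G - F).max(), 3))  # large gap = analogue bias
```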

  22. NASA Ocean Altimeter Pathfinder Project. Report 2; Data Set Validation

    NASA Technical Reports Server (NTRS)

    Koblinsky, C. J.; Ray, Richard D.; Beckley, Brian D.; Bremmer, Anita; Tsaoussi, Lucia S.; Wang, Yan-Ming

    1999-01-01

    The NOAA/NASA Pathfinder program was created by the Earth Observing System (EOS) Program Office to determine how existing satellite-based data sets can be processed and used to study global change. The data sets are designed to be long time-series data processed with stable calibration and community consensus algorithms to better assist the research community. The Ocean Altimeter Pathfinder Project involves the reprocessing of all altimeter observations with a consistent set of improved algorithms, based on the results from TOPEX/POSEIDON (T/P), into easy-to-use data sets for the oceanographic community for climate research. Details are currently presented in two technical reports: Report #1, Data Processing Handbook, and Report #2, Data Set Validation. This report describes the validation of the data sets against a global network of high-quality tide gauge measurements and provides an estimate of the error budget. The first report describes the processing schemes used to produce the geodetically consistent data set comprised of SEASAT, GEOSAT, ERS-1, TOPEX/POSEIDON, and ERS-2 satellite observations.

  23. Simulation of the transient processes of load rejection under different accident conditions in a hydroelectric generating set

    NASA Astrophysics Data System (ADS)

    Guo, W. C.; Yang, J. D.; Chen, J. P.; Peng, Z. Y.; Zhang, Y.; Chen, C. C.

    2016-11-01

    The load rejection test is one of the essential tests carried out before a hydroelectric generating set is formally put into operation. The test aims at inspecting the rationality of the design of the water diversion and power generation system of the hydropower station, the reliability of the equipment of the generating set, and the dynamic characteristics of the hydro-turbine governing system. Proceeding from different accident conditions of the hydroelectric generating set, this paper presents the transient processes of load rejection corresponding to different accident conditions and elaborates the characteristics of different types of load rejection. A numerical simulation method for the different types of load rejection is then established, and an engineering project is calculated to verify the validity of the method. Finally, based on the numerical simulation results, the relationships among the different types of load rejection and their functions in the design of the hydropower station and the operation of the load rejection test are pointed out. The results indicate the following: load rejection caused by an accident within the hydroelectric generating set is realized by the emergency distributing valve, and it is the basis of the optimization of the closing law of the guide vanes and the calculation of regulation and guarantee. Load rejection caused by an accident outside the hydroelectric generating set is realized by the governor. It is the most efficient measure to inspect the dynamic characteristics of the hydro-turbine governing system, and its closure rate of the guide vanes, set in the governor, depends on the optimization result for the former type of load rejection.

  24. Generating quality word sense disambiguation test sets based on MeSH indexing.

    PubMed

    Fan, Jung-Wei; Friedman, Carol

    2009-11-14

    Word sense disambiguation (WSD) determines the correct meaning of a word that has more than one meaning, and is a critical step in biomedical natural language processing, as interpretation of information in text can be correct only if the meanings of their component terms are correctly identified first. Quality evaluation sets are important to WSD because they can be used as representative samples for developing automatic programs and as referees for comparing different WSD programs. To help create quality test sets for WSD, we developed a MeSH-based automatic sense-tagging method that preferentially annotates terms being topical of the text. Preliminary results were promising and revealed important issues to be addressed in biomedical WSD research. We also suggest that, by cross-validating with 2 or 3 annotators, the method should be able to efficiently generate quality WSD test sets. Online supplement is available at: http://www.dbmi.columbia.edu/~juf7002/AMIA09.

  25. Stochastic Hourly Weather Generator HOWGH: Validation and its Use in Pest Modelling under Present and Future Climates

    NASA Astrophysics Data System (ADS)

    Dubrovsky, M.; Hirschi, M.; Spirig, C.

    2014-12-01

    To quantify the impact of climate change on a specific pest (or any weather-dependent process) at a specific site, we may use a site-calibrated pest (or other) model and compare its outputs obtained with site-specific weather data representing present vs. perturbed climates. The input weather data may be produced by a stochastic weather generator. Apart from the quality of the pest model, the reliability of the results obtained in such an experiment depends on the ability of the generator to represent the statistical structure of real-world weather series, and on the sensitivity of the pest model to possible imperfections of the generator. This contribution deals with the multivariate HOWGH weather generator, which is based on a combination of parametric and non-parametric statistical methods. Here, HOWGH is used to generate synthetic hourly series of three weather variables (solar radiation, temperature and precipitation) required by the dynamic pest model SOPRA to simulate the development of codling moth. The contribution presents results of the direct and indirect validation of HOWGH. In the direct validation, the synthetic series generated by HOWGH (various settings of its underlying model are assumed) are validated in terms of multiple climatic characteristics, focusing on subdaily wet/dry and hot/cold spells. In the indirect validation, we assess the generator in terms of characteristics derived from the outputs of the SOPRA model fed by the observed vs. synthetic series. The weather generator may be used to produce weather series representing present and future climates. In the latter case, the parameters of the generator may be modified by climate change scenarios based on global or regional climate models. To demonstrate this feature, the results of codling moth simulations for a future climate will be shown. Acknowledgements: The weather generator is developed and validated within the frame of projects WG4VALUE (project LD12029 sponsored by the Ministry

  26. Assessing the validity of commercial and municipal food environment data sets in Vancouver, Canada.

    PubMed

    Daepp, Madeleine Ig; Black, Jennifer

    2017-10-01

    The present study assessed systematic bias and the effects of data set error on the validity of food environment measures in two municipal and two commercial secondary data sets. Sensitivity, positive predictive value (PPV) and concordance were calculated by comparing two municipal and two commercial secondary data sets with ground-truthed data collected within 800 m buffers surrounding twenty-six schools. Logistic regression examined associations of sensitivity and PPV with commercial density and neighbourhood socio-economic deprivation. Kendall's τ estimated correlations between density and proximity of food outlets near schools constructed with secondary data sets v. ground-truthed data. Setting: Vancouver, Canada. Subjects: Food retailers located within 800 m of twenty-six schools. Results: All data sets scored relatively poorly across validity measures, although, overall, municipal data sets had higher levels of validity than did commercial data sets. Food outlets were more likely to be missing from municipal health inspections lists and commercial data sets in neighbourhoods with higher commercial density. Still, both proximity and density measures constructed from all secondary data sets were highly correlated (Kendall's τ>0·70) with measures constructed from ground-truthed data. Despite relatively low levels of validity in all secondary data sets examined, food environment measures constructed from secondary data sets remained highly correlated with ground-truthed data. Findings suggest that secondary data sets can be used to measure the food environment, although estimates should be treated with caution in areas with high commercial density.
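
    A minimal sketch of the validity metrics used in this kind of ground-truthing: sensitivity (share of real outlets a secondary list captures), positive predictive value (share of listed outlets that exist on the ground), and Kendall's τ for per-buffer density counts. Outlet identifiers and counts are illustrative:

```python
from scipy.stats import kendalltau

# Illustrative outlet identifiers within one school buffer
ground_truth = {f"outlet_{i}" for i in range(1, 21)}   # 20 outlets found on the ground
secondary = {f"outlet_{i}" for i in range(4, 26)}      # outlets in the secondary list

tp = len(ground_truth & secondary)
sensitivity = tp / len(ground_truth)
ppv = tp / len(secondary)
print(f"sensitivity = {sensitivity:.2f}, PPV = {ppv:.2f}")

# Outlet density per school buffer: ground truth vs secondary data set
gt_density = [12, 30, 7, 45, 22, 16, 3, 28]
sd_density = [10, 27, 9, 40, 25, 14, 2, 30]
tau, p_value = kendalltau(gt_density, sd_density)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```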

  27. Validation of a global scale to assess the quality of interprofessional teamwork in mental health settings.

    PubMed

    Tomizawa, Ryoko; Yamano, Mayumi; Osako, Mitue; Hirabayashi, Naotugu; Oshima, Nobuo; Sigeta, Masahiro; Reeves, Scott

    2017-12-01

    Few scales currently exist to assess the quality of interprofessional teamwork through team members' perceptions of working together in mental health settings. The purpose of this study was to revise and validate an interprofessional scale to assess the quality of teamwork in inpatient psychiatric units and to use it multi-nationally. A literature review was undertaken to identify evaluative teamwork tools and develop an additional 12 items to ensure a broad global focus. Focus group discussions considered adaptation to different care systems using subjective judgements from 11 participants in a pre-test of items. Data quality, construct validity, reproducibility, and internal consistency were investigated in the survey using an international comparative design. Exploratory factor analysis yielded five factors with 21 items: 'patient/community centred care', 'collaborative communication', 'interprofessional conflict', 'role clarification', and 'environment'. High overall internal consistency, reproducibility, adequate face validity, and reasonable construct validity were shown in the USA and Japan. The revised Collaborative Practice Assessment Tool (CPAT) is a valid measure to assess the quality of interprofessional teamwork in psychiatry and identifies the best strategies to improve team performance. Furthermore, the revised scale will generate more rigorous evidence for collaborative practice in psychiatry internationally.

  28. The development and validation of the Closed-set Mandarin Sentence (CMS) test.

    PubMed

    Tao, Duo-Duo; Fu, Qian-Jie; Galvin, John J; Yu, Ya-Feng

    2017-09-01

    Matrix-styled sentence tests offer a closed-set paradigm that may be useful when evaluating speech intelligibility. Ideally, sentence test materials should reflect the distribution of phonemes within the target language. We developed and validated the Closed-set Mandarin Sentence (CMS) test to assess Mandarin speech intelligibility in noise. CMS test materials were selected to be familiar words and to represent the natural distribution of vowels, consonants, and lexical tones found in Mandarin Chinese. Ten key words in each of five categories (Name, Verb, Number, Color, and Fruit) were produced by a native Mandarin talker, resulting in a total of 50 words that could be combined to produce 100,000 unique sentences. Normative data were collected in 10 normal-hearing, adult Mandarin-speaking Chinese listeners using a closed-set test paradigm. Two test runs were conducted for each subject, and 20 sentences per run were randomly generated while ensuring that each word was presented only twice in each run. First, the levels of the words in each category were adjusted to produce equal intelligibility in noise. Test-retest reliability for word-in-sentence recognition was excellent according to Cronbach's alpha (0.952). After the category level adjustments, speech reception thresholds (SRTs) for sentences in noise, defined as the signal-to-noise ratio (SNR) that produced 50% correct whole sentence recognition, were adaptively measured by adjusting the SNR according to the correctness of response. The mean SRT was -7.9 (SE=0.41) and -8.1 (SE=0.34) dB for runs 1 and 2, respectively. The mean standard deviation across runs was 0.93 dB, and paired t-tests showed no significant difference between runs 1 and 2 (p=0.74) despite random sentences being generated for each run and each subject. The results suggest that the CMS provides a large stimulus set with which to repeatedly and reliably measure Mandarin-speaking listeners' speech understanding in noise using a closed-set paradigm.
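
    A minimal sketch of an adaptive SNR procedure of the kind described: a 1-down/1-up staircase on whole-sentence correctness converges near the 50% point of the psychometric function. The step size, starting SNR, and simulated listener are illustrative, not the CMS protocol:

```python
import numpy as np

rng = np.random.default_rng(11)
TRUE_SRT = -8.0  # dB SNR at 50% correct for the simulated listener

def p_correct(snr_db, slope=0.8):
    """Illustrative psychometric function for whole-sentence recognition."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - TRUE_SRT)))

snr, step = 0.0, 2.0
reversals, last_direction = [], 0
while len(reversals) < 8:
    correct = rng.random() < p_correct(snr)
    direction = -1 if correct else +1   # harder after correct, easier after wrong
    if last_direction and direction != last_direction:
        reversals.append(snr)
    last_direction = direction
    snr += direction * step

srt_estimate = np.mean(reversals[-6:])  # average the late reversals
print(f"estimated SRT = {srt_estimate:.1f} dB SNR (true value {TRUE_SRT} dB)")
```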

  29. DC Motor control using motor-generator set with controlled generator field

    DOEpatents

    Belsterling, Charles A.; Stone, John

    1982-01-01

    A d.c. generator is connected in series opposed to the polarity of a d.c. power source supplying a d.c. drive motor. The generator is part of a motor-generator set, the motor of which is supplied from the power source connected to the motor. A generator field control means varies the field produced by at least one of the generator windings in order to change the effective voltage output. When the generator voltage is exactly equal to the d.c. voltage supply, no voltage is applied across the drive motor. As the field of the generator is reduced, the drive motor is supplied greater voltage until the full voltage of the d.c. power source is supplied when the generator has zero field applied. Additional voltage may be applied across the drive motor by reversing and increasing the reversed field on the generator. The drive motor may be reversed in direction from standstill by increasing the generator field so that a reverse voltage is applied across the d.c. motor.
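
    A small worked example of the series-opposed arrangement, assuming the generator EMF is proportional to its field excitation (an idealization that ignores saturation): the motor sees V_motor = V_source − V_gen, so full field gives standstill, zero field gives the full supply voltage, a reversed field boosts it, and over-excitation reverses the motor.

```python
def motor_voltage(v_source, field_fraction):
    """Net voltage across the drive motor in a series-opposed connection.

    field_fraction: generator field as a fraction of the excitation whose
    EMF equals the supply voltage; negative = reversed field. Assumes EMF
    proportional to field (illustrative; real machines saturate).
    """
    v_gen = field_fraction * v_source
    return v_source - v_gen

for f in (1.5, 1.0, 0.5, 0.0, -0.25):
    print(f"field = {f:+.2f} -> motor voltage = {motor_voltage(600, f):+.0f} V")
# +1.50 -> -300 V (reversed rotation), +1.00 -> 0 V (standstill),
#  0.00 -> +600 V (full supply),      -0.25 -> +750 V (boost)
```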

  30. A new test set for validating predictions of protein-ligand interaction.

    PubMed

    Nissink, J Willem M; Murray, Chris; Hartshorn, Mike; Verdonk, Marcel L; Cole, Jason C; Taylor, Robin

    2002-12-01

    We present a large test set of protein-ligand complexes for the purpose of validating algorithms that rely on the prediction of protein-ligand interactions. The set consists of 305 complexes with protonation states assigned by manual inspection. The following checks have been carried out to identify unsuitable entries in this set: (1) assessing the involvement of crystallographically related protein units in ligand binding; (2) identification of bad clashes between protein side chains and ligand; and (3) assessment of structural errors, and/or inconsistency of ligand placement with crystal structure electron density. In addition, the set has been pruned to assure diversity in terms of protein-ligand structures, and subsets are supplied for different protein-structure resolution ranges. A classification of the set by protein type is available. As an illustration, validation results are shown for GOLD and SuperStar. GOLD is a program that performs flexible protein-ligand docking, and SuperStar is used for the prediction of favorable interaction sites in proteins. The new CCDC/Astex test set is freely available to the scientific community (http://www.ccdc.cam.ac.uk).
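
    A minimal sketch of check (2) above, flagging steric clashes by thresholding protein-ligand heavy-atom distances. The coordinates and the single cutoff are illustrative; a real pipeline would parse PDB files and use element-specific contact radii:

```python
import numpy as np

# Illustrative heavy-atom coordinates in angstroms
protein_atoms = np.array([[0.0, 0.0, 0.0],
                          [1.5, 0.2, 0.1],
                          [3.8, 1.1, 0.4]])
ligand_atoms = np.array([[0.9, 0.1, 0.0],   # sits too close to the protein
                         [6.0, 2.0, 1.0]])

CLASH_CUTOFF = 2.2  # angstroms; illustrative single heavy-atom cutoff

# Pairwise distances: protein atoms x ligand atoms
diff = protein_atoms[:, None, :] - ligand_atoms[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

for p_idx, l_idx in np.argwhere(dist < CLASH_CUTOFF):
    print(f"clash: protein atom {p_idx} vs ligand atom {l_idx}, "
          f"d = {dist[p_idx, l_idx]:.2f} A")
```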

  31. Longitudinal construct validity of the minimum data set health status index.

    PubMed

    Jones, Aaron; Feeny, David; Costa, Andrew P

    2018-05-24

    The Minimum Data Set Health Status Index (MDS-HSI) is a generic, preference-based health-related quality of life (HRQOL) measure derived by mapping items from the Resident Assessment Instrument - Minimum Data Set (RAI-MDS) assessment onto the Health Utilities Index Mark 2 classification system. While the validity of the MDS-HSI has been examined in cross-sectional settings, the longitudinal validity has not been explored. The objective of this study was to investigate the longitudinal construct validity of the MDS-HSI in a home care population. This study utilized a retrospective cohort of home care patients in the Hamilton-Niagara-Haldimand-Brant health region of Ontario, Canada with at least two RAI-MDS Home Care assessments between January 2010 and December 2014. Convergent validity was assessed by calculating Spearman rank correlations between the change in MDS-HSI and changes in six validated indices of health domains that can be calculated from the RAI-MDS assessment. Known-groups validity was investigated by fitting multivariable linear regression models to estimate the mean change in MDS-HSI associated with clinically important changes in the six health domain indices and 15 disease symptoms from the RAI-MDS Home Care assessment, controlling for age and sex. The cohort contained 25,182 patients with two RAI-MDS Home Care assessments. Spearman correlations between the MDS-HSI change and changes in the health domain indices were all statistically significant and in the hypothesized small to moderate range [0.1 < ρ < 0.5]. Clinically important changes in all of the health domain indices and 13 of the 15 disease symptoms were significantly associated with clinically important changes in the MDS-HSI. The findings of this study support the longitudinal construct validity of the MDS-HSI in home care populations. In addition to evaluating changes in HRQOL among home care patients in clinical research, economic evaluation, and health technology assessment

  32. The virtual people set: developing computer-generated stimuli for the assessment of pedophilic sexual interest.

    PubMed

    Dombert, Beate; Mokros, Andreas; Brückner, Eva; Schlegl, Verena; Antfolk, Jan; Bäckström, Anna; Zappalà, Angelo; Osterheider, Michael; Santtila, Pekka

    2013-12-01

    The implicit assessment of pedophilic sexual interest through viewing-time methods necessitates visual stimuli. There are grave ethical and legal concerns against using pictures of real children, however. The present report is a summary of findings on a new set of 108 computer-generated stimuli. The images vary in terms of gender (female/male), explicitness (naked/clothed), and physical maturity (prepubescent, pubescent, and adult) of the persons depicted. A series of three studies tested the internal and external validity of the picture set. Studies 1 and 2 yielded good-to-high estimates of observer agreement with regard to stimulus maturity levels by two methods (categorization and paired comparison). Study 3 extended these findings with regard to judgments made by convicted child sexual offenders.

  33. Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set

    NASA Astrophysics Data System (ADS)

    Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.

    2017-05-01

    A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
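
    A minimal 1-D illustration of the model class named above: the nonlinear shallow water equations advanced with a Lax-Friedrichs finite-volume step and a crude wet/dry floor, run on a dam-break toy problem. This is a sketch of the equations, not TUNA-RP or an OAR-PMEL-135 benchmark:

```python
import numpy as np

g, H_MIN = 9.81, 1e-6            # gravity (m/s^2); wet/dry threshold (m)
nx, dx = 200, 0.5
h = np.where(np.arange(nx) < nx // 2, 2.0, H_MIN)  # dam break: wet | dry
hu = np.zeros(nx)

def flux(h, hu):
    u = np.where(h > H_MIN, hu / np.maximum(h, H_MIN), 0.0)
    return np.array([hu, hu * u + 0.5 * g * h * h])

t, t_end = 0.0, 5.0
while t < t_end:
    u = np.where(h > H_MIN, hu / np.maximum(h, H_MIN), 0.0)
    dt = 0.4 * dx / (np.abs(u) + np.sqrt(g * h)).max()   # CFL condition
    F = flux(h, hu)
    h_new, hu_new = h.copy(), hu.copy()
    # Lax-Friedrichs update on interior cells
    h_new[1:-1] = 0.5 * (h[:-2] + h[2:]) - dt / (2 * dx) * (F[0, 2:] - F[0, :-2])
    hu_new[1:-1] = 0.5 * (hu[:-2] + hu[2:]) - dt / (2 * dx) * (F[1, 2:] - F[1, :-2])
    h = np.maximum(h_new, H_MIN)
    hu = np.where(h > H_MIN, hu_new, 0.0)   # dry cells carry no momentum
    t += dt

print("wave front reached cell", int(np.flatnonzero(h > 0.01).max()))
```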

  34. Understanding Dynamic Model Validation of a Wind Turbine Generator and a Wind Power Plant: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, Eduard; Zhang, Ying Chen; Gevorgian, Vahan

    Regional reliability organizations require power plants to validate the dynamic models that represent them to ensure that power systems studies are performed to the best representation of the components installed. In the process of validating a wind power plant (WPP), one must be cognizant of the parameter settings of the wind turbine generators (WTGs) and the operational settings of the WPP. Validating the dynamic model of a WPP is required to be performed periodically. This is because the control parameters of the WTGs and the other supporting components within a WPP may be modified to comply with new grid codes or upgrades to the WTG controller with new capabilities developed by the turbine manufacturers or requested by the plant owners or operators. The diversity within a WPP affects the way we represent it in a model. Diversity within a WPP may be found in the way the WTGs are controlled, the wind resource, the layout of the WPP (electrical diversity), and the type of WTGs used. Each group of WTGs constitutes a significant portion of the output power of the WPP, and their unique and salient behaviors should be represented individually. The objective of this paper is to illustrate the process of dynamic model validations of WTGs and WPPs, the available data recorded that must be screened before it is used for the dynamic validations, and the assumptions made in the dynamic models of the WTG and WPP that must be understood. Without understanding the correct process, the validations may lead to the wrong representations of the WTG and WPP modeled.

  35. Development and validation of an Argentine set of facial expressions of emotion.

    PubMed

    Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro

    2017-02-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.

  36. Generation new MP3 data set after compression

    NASA Astrophysics Data System (ADS)

    Atoum, Mohammed Salem; Almahameed, Mohammad

    2016-02-01

    The success of audio steganography techniques lies in ensuring the imperceptibility of the embedded secret message in the stego file and in withstanding any form of intentional or unintentional degradation of the secret message (robustness). Crucial to this is the use of a digital audio file such as an MP3 file, which comes at different compression rates; research studies have shown that performing steganography in the MP3 format after compression is the most suitable approach. Unfortunately, until now researchers have not been able to test and implement their algorithms because no standard data set of MP3 files after compression has been generated. This paper therefore focuses on generating a standard data set with different compression ratios and different genres to help researchers implement their algorithms.
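
    A minimal sketch of how such a data set could be assembled, assuming ffmpeg is installed and a folder of source WAV files exists. The -codec:a libmp3lame and -b:a bitrate flags are standard ffmpeg options; the directory layout and the bitrate list are illustrative choices:

```python
import subprocess
from pathlib import Path

BITRATES = ["64k", "128k", "192k", "320k"]   # illustrative compression rates
SRC_DIR, OUT_DIR = Path("wav_sources"), Path("mp3_dataset")

for wav in sorted(SRC_DIR.glob("*.wav")):    # one source folder per genre also works
    for rate in BITRATES:
        out = OUT_DIR / rate / (wav.stem + ".mp3")
        out.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(wav),
             "-codec:a", "libmp3lame", "-b:a", rate, str(out)],
            check=True,
        )
```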

  37. Rational selection of training and test sets for the development of validated QSAR models

    NASA Astrophysics Data System (ADS)

    Golbraikh, Alexander; Shen, Min; Xiao, Zhiyan; Xiao, Yun-De; Lee, Kuo-Hsiung; Tropsha, Alexander

    2003-02-01

    Quantitative Structure-Activity Relationship (QSAR) models are used increasingly to screen chemical databases and/or virtual chemical libraries for potentially bioactive molecules. These developments emphasize the importance of rigorous model validation to ensure that the models have acceptable predictive power. Using the k nearest neighbors (kNN) variable selection QSAR method for the analysis of several datasets, we have demonstrated recently that the widely accepted leave-one-out (LOO) cross-validated R² (q²) is an inadequate characteristic to assess the predictive ability of the models [Golbraikh, A., Tropsha, A. Beware of q²! J. Mol. Graphics Mod. 20, 269-276 (2002)]. Herein, we provide additional evidence that there exists no correlation between the values of q² for the training set and the accuracy of prediction (R²) for the test set, and argue that this observation is a general property of any QSAR model developed with LOO cross-validation. We suggest that external validation using rationally selected training and test sets provides a means to establish a reliable QSAR model. We propose several approaches to the division of experimental datasets into training and test sets and apply them in QSAR studies of 48 functionalized amino acid anticonvulsants and a series of 157 epipodophyllotoxin derivatives with antitumor activity. We formulate a set of general criteria for the evaluation of the predictive power of QSAR models.
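
    A minimal demonstration of the point, assuming q² is the leave-one-out cross-validated R² on the training set and R² is scored on an external test set (synthetic data; the two need not agree):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict, train_test_split

X, y = make_regression(n_samples=80, n_features=30, n_informative=5,
                       noise=25.0, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=5)
model = LinearRegression()

# q^2: leave-one-out cross-validated R^2 on the training set
y_loo = cross_val_predict(model, X_train, y_train, cv=LeaveOneOut())
q2 = r2_score(y_train, y_loo)

# R^2: predictive accuracy on the external test set
r2_ext = r2_score(y_test, model.fit(X_train, y_train).predict(X_test))

print(f"training q^2 (LOO) = {q2:.3f}")
print(f"external test R^2  = {r2_ext:.3f}")   # can differ substantially from q^2
```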

  38. Integrating Validity Theory with Use of Measurement Instruments in Clinical Settings

    PubMed Central

    Kelly, P Adam; O'Malley, Kimberly J; Kallen, Michael A; Ford, Marvella E

    2005-01-01

    Objective: To present validity concepts in a conceptual framework useful for research in clinical settings. Principal Findings: We present a three-level decision rubric for validating measurement instruments, to guide health services researchers step-by-step in gathering and evaluating validity evidence within their specific situation. We address construct precision, the capacity of an instrument to measure constructs it purports to measure and differentiate from other, unrelated constructs; quantification precision, the reliability of the instrument; and translation precision, the ability to generalize scores from an instrument across subjects from the same or similar populations. We illustrate with specific examples, such as an approach to validating a measurement instrument for veterans when prior evidence of instrument validity for this population does not exist. Conclusions: Validity should be viewed as a property of the interpretations and uses of scores from an instrument, not of the instrument itself: how scores are used and the consequences of this use are integral to validity. Our advice is to liken validation to building a court case, including discovering evidence, weighing the evidence, and recognizing when the evidence is weak and more evidence is needed. PMID:16178998

  39. Benchmarking Multilayer-HySEA model for landslide-generated tsunami. NTHMP validation process.

    NASA Astrophysics Data System (ADS)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven benchmarks. The Multilayer-HySEA model including non-hydrostatic effects has been used to perform all the benchmarking problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  40. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
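
    A minimal sketch of the hit/miss analysis that POD work builds on: fit a logistic curve of detection probability against flaw size and read off a quantile such as a90, the flaw size detected with 90% probability. The data are synthetic, and the patented directed-DOE procedure itself (class-width optimization, case numbers) is not shown:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
# Synthetic hit/miss inspection data: larger flaws are detected more often
flaw_size = rng.uniform(0.5, 5.0, 300)                    # mm
p_true = 1 / (1 + np.exp(-3.0 * (flaw_size - 2.0)))
hit = (rng.random(300) < p_true).astype(int)

model = LogisticRegression().fit(flaw_size.reshape(-1, 1), hit)

# a90: flaw size with 90% probability of detection, from the fitted curve
b0, b1 = model.intercept_[0], model.coef_[0, 0]
a90 = (np.log(0.9 / 0.1) - b0) / b1
print(f"fitted a90 = {a90:.2f} mm")

for size in np.linspace(0.5, 5.0, 10):
    pod = model.predict_proba([[size]])[0, 1]
    print(f"size {size:.1f} mm -> POD {pod:.2f}")
```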

  41. Content validity of the DSM-IV borderline and narcissistic personality disorder criteria sets.

    PubMed

    Blais, M A; Hilsenroth, M J; Castlebury, F D

    1997-01-01

    This study sought to empirically evaluate the content validity of the newly revised DSM-IV narcissistic personality disorder (NPD) and borderline personality disorder (BPD) criteria sets. Using the essential features of each disorder as construct definitions, factor analysis was used to determine how adequately the criteria sets covered the constructs. In addition, this empirical investigation sought to: 1) help define the dimensions underlying these polythetic disorders; 2) identify core features of each diagnosis; and 3) highlight the characteristics that may be most useful in diagnosing these two disorders. Ninety-one outpatients meeting DSM-IV criteria for a personality disorder (PD) were identified through a retrospective analysis of chart information. Records of these 91 patients were independently rated on all of the BPD and NPD symptom criteria for the DSM-IV. Acceptable interrater reliability (kappa estimates) was obtained both for the presence or absence of a PD and for the symptom criteria for BPD and NPD. The factor analysis, performed separately for each disorder, identified a three-factor solution for both the DSM-IV BPD and NPD criteria sets. The results of this study provide strong support for the content validity of the NPD criteria set and moderate support for the content validity of the BPD criteria set. Three domains were found to comprise the BPD criteria set, with the essential features of interpersonal and identity instability forming one domain, and impulsivity and affective instability each identified as separate domains. Factor analysis of the NPD criteria set found three factors basically corresponding to the essential features of grandiosity, lack of empathy, and need for admiration. Therefore, the NPD criteria set adequately covers the essential or defining features of the disorder.
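
    A minimal sketch of the analysis pattern described: exploratory factor analysis of binary criterion ratings, here on synthetic data with a planted three-factor structure standing in for the 91 patients' ratings. (Real analyses of binary diagnostic criteria often work from tetrachoric correlations; plain FactorAnalysis is a simplification.)

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_patients, n_criteria, n_factors = 91, 9, 3

# Plant three latent dimensions, with three criteria loading on each
latent = rng.normal(size=(n_patients, n_factors))
loadings = np.zeros((n_factors, n_criteria))
for f in range(n_factors):
    loadings[f, 3 * f:3 * f + 3] = 1.0
scores = latent @ loadings + rng.normal(0, 0.6, (n_patients, n_criteria))
ratings = (scores > 0).astype(float)        # binary present/absent ratings

fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(ratings)
for f, row in enumerate(fa.components_):
    top = np.argsort(-np.abs(row))[:3]      # criteria loading most strongly
    print(f"factor {f + 1}: strongest criteria {sorted(int(i) + 1 for i in top)}")
```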

  42. The Development of Valid Subtypes for Depression in Primary Care Settings

    PubMed Central

    Karasz, Alison

    2009-01-01

    A persistent theme in the debate on the classification of depressive disorders is the distinction between biological and environmental depressions. Despite decades of research, there remains little consensus on how to distinguish between depressive subtypes. This preliminary study describes a method that could be useful, if implemented on a larger scale, in the development of valid subtypes of depression in primary care settings, using explanatory models of depressive illness. Seventeen depressed Hispanic patients at an inner city general practice participated in explanatory model interviews. Participants generated illness narratives, which included details about symptoms, cause, course, impact, health seeking, and anticipated outcome. Two distinct subtypes emerged from the analysis. The internal model subtype was characterized by internal attributions, specifically the notion of an “injured self.” The external model subtype conceptualized depression as a reaction to life situations. Each subtype was associated with a distinct constellation of clinical features and health seeking experiences. Future directions for research using explanatory models to establish depressive subtypes are explored. PMID:18414123

  3. Review and evaluation of performance measures for survival prediction models in external validation settings.

    PubMed

    Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z

    2017-04-18

When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure, which tended to increase as censoring increased. We recommend that Uno's concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston's D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings, and we recommend reporting it routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive accuracy curves. In addition, we recommend investigating the characteristics
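
    As a concrete illustration of estimating one such measure on external data, the sketch below computes Harrell's concordance on a held-out set with the lifelines package (Uno's concordance, recommended above under moderate censoring, is available elsewhere, e.g. scikit-survival's concordance_index_ipcw); the data and model are simulated, not from the paper:

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter
      from lifelines.utils import concordance_index

      rng = np.random.default_rng(1)
      n = 500
      x = rng.normal(size=n)
      time = rng.exponential(np.exp(-x))    # hazard increases with x
      event = rng.random(n) < 0.7           # event indicator; ~30% flagged censored for illustration
      df = pd.DataFrame({"x": x, "time": time, "event": event})

      train, valid = df.iloc[:300], df.iloc[300:]
      cph = CoxPHFitter().fit(train, duration_col="time", event_col="event")

      # Higher partial hazard means shorter expected survival, so negate the score:
      risk = cph.predict_partial_hazard(valid)
      c = concordance_index(valid["time"], -risk, valid["event"])
      print(f"Harrell's C on validation data: {c:.3f}")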

  4. Validating a large geophysical data set: Experiences with satellite-derived cloud parameters

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph; Haskins, Robert D.; Knighton, James E.; Pursch, Andrew; Granger-Gallegos, Stephanie

    1992-01-01

    We are validating the global cloud parameters derived from the satellite-borne HIRS2 and MSU atmospheric sounding instrument measurements, and are using the analysis of these data as one prototype for studying large geophysical data sets in general. The HIRS2/MSU data set contains a total of 40 physical parameters, filling 25 MB/day; raw HIRS2/MSU data are available for a period exceeding 10 years. Validation involves developing a quantitative sense for the physical meaning of the derived parameters over the range of environmental conditions sampled. This is accomplished by comparing the spatial and temporal distributions of the derived quantities with similar measurements made using other techniques, and with model results. The data handling needed for this work is possible only with the help of a suite of interactive graphical and numerical analysis tools. Level 3 (gridded) data is the common form in which large data sets of this type are distributed for scientific analysis. We find that Level 3 data is inadequate for the data comparisons required for validation. Level 2 data (individual measurements in geophysical units) is needed. A sampling problem arises when individual measurements, which are not uniformly distributed in space or time, are used for the comparisons. Standard 'interpolation' methods involve fitting the measurements for each data set to surfaces, which are then compared. We are experimenting with formal criteria for selecting geographical regions, based upon the spatial frequency and variability of measurements, that allow us to quantify the uncertainty due to sampling. As part of this project, we are also dealing with ways to keep track of constraints placed on the output by assumptions made in the computer code. The need to work with Level 2 data introduces a number of other data handling issues, such as accessing data files across machine types, meeting large data storage requirements, accessing other validated data sets, processing speed

  5. 64. FORWARD EMERGENCY DIESEL GENERATOR SET STARBOARD LOOKING TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    64. FORWARD EMERGENCY DIESEL GENERATOR SET - STARBOARD LOOKING TO PORT SHOWING BOTTOM HALF OF FAIRBANKS MORSE 36D81/8 TEN CYLINDER DIESEL ENGINE SERIAL #951230 AND GENERAL ELECTRIC 1,000KW GENERATOR KVA 1250, RPM 720, SERIAL #6920274. - U.S.S. HORNET, Puget Sound Naval Shipyard, Sinclair Inlet, Bremerton, Kitsap County, WA

  6. The ToMenovela – A Photograph-Based Stimulus Set for the Study of Social Cognition with High Ecological Validity

    PubMed Central

    Herbort, Maike C.; Iseev, Jenny; Stolz, Christopher; Roeser, Benedict; Großkopf, Nora; Wüstenberg, Torsten; Hellweg, Rainer; Walter, Henrik; Dziobek, Isabel; Schott, Björn H.

    2016-01-01

We present the ToMenovela, a stimulus set that has been developed to provide a set of normatively rated socio-emotional stimuli showing varying numbers of characters in emotionally laden interactions for experimental investigations of (i) cognitive and (ii) affective Theory of Mind (ToM), (iii) emotional reactivity, and (iv) complex emotion judgment with respect to Ekman's basic emotions (happiness, anger, disgust, fear, sadness, surprise, Ekman and Friesen, 1975). Stimuli were generated with a focus on ecological validity and consist of 190 scenes depicting daily-life situations. Two or more of eight main characters with distinct biographies and personalities are depicted in each scene picture. To obtain an initial evaluation of the stimulus set and to pave the way for future studies in clinical populations, normative data on each stimulus of the set were obtained from a sample of 61 neurologically and psychiatrically healthy participants (31 female, 30 male; mean age 26.74 ± 5.84), including a visual analog scale rating of Ekman's basic emotions (happiness, anger, disgust, fear, sadness, surprise) and free-text descriptions of the content of each scene. The ToMenovela is being developed to provide standardized material of social scenes that are available to researchers in the study of social cognition. It should facilitate experimental control while keeping ecological validity high. PMID:27994562

  7. Measurement framework for product service system performance of generator set distributors

    NASA Astrophysics Data System (ADS)

    Sofianti, Tanika D.

    2017-11-01

In the B2B market for generator sets (gensets), distributors assist manufacturers in selling products, because manufacturers have limited resources for adding the service elements needed to enhance the competitiveness of the generator sets. Genset distributors therefore often sell products together with supporting services. An industrial distributor develops services to meet the needs of the customer, supporting the machines and equipment produced by the manufacturer; the services delivered by distributors can enhance the value customers obtain from the equipment. Services are provided to customers in the bidding process, the ordering of equipment from the manufacturer, equipment delivery, installation, and the after-sales stage. This paper proposes a framework to measure the Product Service System (PSS) performance of generator set distributors in delivering their products and services to their customers. The research methodology adopts the perspectives of both providers and customers and takes into account tangible and intangible products. This research leads to ideas for improving the current Product Service System of a genset distributor; further work is needed on more detailed measures and on implementing the measurement tools.

  8. Towards natural language question generation for the validation of ontologies and mappings.

    PubMed

    Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos

    2016-08-08

    The increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point-of-view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reducing the number of questions and validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.
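
    A toy sketch of the core idea, generating yes/no questions from subsumption axioms with rdflib; the question template, label fallback, and file path are assumptions for illustration, not the authors' system:

      from rdflib import Graph, RDFS, URIRef

      def label(g, node):
          """Prefer an rdfs:label; fall back to the local name of the URI."""
          for _, _, lab in g.triples((node, RDFS.label, None)):
              return str(lab)
          return str(node).rsplit("/", 1)[-1].rsplit("#", 1)[-1]

      def subsumption_questions(g):
          for child, _, parent in g.triples((None, RDFS.subClassOf, None)):
              if isinstance(child, URIRef) and isinstance(parent, URIRef):
                  yield f"Is every '{label(g, child)}' a kind of '{label(g, parent)}'?"

      g = Graph()
      g.parse("ontology.owl", format="xml")   # hypothetical path; RDF/XML assumed
      for q in subsumption_questions(g):
          print(q)

    Expert answers of "no" would then flag the corresponding axiom (or an impacted mapping) for correction.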

  9. Is the Maternal Q-Set a Valid Measure of Preschool Child Attachment Behavior?

    ERIC Educational Resources Information Center

    Moss, Ellen; Bureau, Jean-Francois; Cyr, Chantal; Dubois-Comtois, Karine

    2006-01-01

    The objective of this study is to examine preschool-age correlates of the maternal version of the Attachment Q-Set (AQS) (Waters & Deane, 1985) in order to provide validity data. Concurrent associations between the Attachment Q-Set and measures of separation-reunion attachment classifications (Cassidy & Marvin, 1992), quality of mother-child…

  10. Older adult mistreatment risk screening: contribution to the validation of a screening tool in a domestic setting.

    PubMed

    Lindenbach, Jeannette M; Larocque, Sylvie; Lavoie, Anne-Marise; Garceau, Marie-Luce

    2012-06-01

    ABSTRACTThe hidden nature of older adult mistreatment renders its detection in the domestic setting particularly challenging. A validated screening instrument that can provide a systematic assessment of risk factors can facilitate this detection. One such instrument, the "expanded Indicators of Abuse" tool, has been previously validated in the Hebrew language in a hospital setting. The present study has contributed to the validation of the "e-IOA" in an English-speaking community setting in Ontario, Canada. It consisted of two phases: (a) a content validity review and adaptation of the instrument by experts throughout Ontario, and (b) an inter-rater reliability assessment by home visiting nurses. The adaptation, the "Mistreatment of Older Adult Risk Factors" tool, offers a comprehensive tool for screening in the home setting. This instrument is significant to professional practice as practitioners working with older adults will be better equipped to assess for risk of mistreatment.

  11. Selection, application, and validation of a set of molecular descriptors for nuclear receptor ligands.

    PubMed

    Stewart, Eugene L; Brown, Peter J; Bentley, James A; Willson, Timothy M

    2004-08-01

A methodology for the selection and validation of nuclear receptor ligand chemical descriptors is described. After descriptors for a targeted chemical space were selected, a virtual screening methodology utilizing this space was formulated for the identification of potential NR ligands from our corporate collection. Using simple descriptors and our virtual screening method, we are able to quickly identify potential NR ligands from a large collection of compounds. As validation of the virtual screening procedure, an 8,000-membered NR targeted set and a 24,000-membered diverse control set of compounds were selected from our in-house general screening collection and screened in parallel across a number of orphan NR FRET assays. For the two assays that provided at least one hit per set by the established minimum pEC(50) for activity, the results showed a 2-fold increase in the hit-rate of the targeted compound set over the diverse set.
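
    The sketch below illustrates descriptor-based virtual screening of the kind described, using RDKit; the chosen descriptors and box thresholds are illustrative assumptions, not the published targeted chemical space:

      from rdkit import Chem
      from rdkit.Chem import Descriptors

      def in_targeted_space(smiles, mw=(250, 600), clogp=(2, 8), tpsa=(0, 120)):
          """Keep compounds whose simple descriptors fall inside a hypothetical box."""
          mol = Chem.MolFromSmiles(smiles)
          if mol is None:
              return False
          return (mw[0] <= Descriptors.MolWt(mol) <= mw[1]
                  and clogp[0] <= Descriptors.MolLogP(mol) <= clogp[1]
                  and tpsa[0] <= Descriptors.TPSA(mol) <= tpsa[1])

      library = ["CCO",                                   # toy SMILES strings
                 "c1ccccc1CCCCCCCCCC(=O)O",
                 "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
      hits = [s for s in library if in_targeted_space(s)]
      print(hits)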

  12. Generated spiral bevel gears: Optimal machine-tool settings and tooth contact analysis

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Tsung, W. J.; Coy, J. J.; Heine, C.

    1985-01-01

    Geometry and kinematic errors were studied for Gleason generated spiral bevel gears. A new method was devised for choosing optimal machine settings. These settings provide zero kinematic errors and an improved bearing contact. The kinematic errors are a major source of noise and vibration in spiral bevel gears. The improved bearing contact gives improved conditions for lubrication. A computer program for tooth contact analysis was developed, and thereby the new generation process was confirmed. The new process is governed by the requirement that during the generation process there is directional constancy of the common normal of the contacting surfaces for generator and generated surfaces of pinion and gear.

  13. A Model-Based Method for Content Validation of Automatically Generated Test Items

    ERIC Educational Resources Information Center

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  14. The Consumer Motivation Scale: A detailed review of item generation, exploration, confirmation, and validation procedures.

    PubMed

    Barbopoulos, I; Johansson, L-O

    2017-08-01

    This data article offers a detailed description of analyses pertaining to the development of the Consumer Motivation Scale (CMS), from item generation and the extraction of factors, to confirmation of the factor structure and validation of the emergent dimensions. The established goal structure - consisting of the sub-goals Value for Money, Quality, Safety, Stimulation, Comfort, Ethics, and Social Acceptance - is shown to be related to a variety of consumption behaviors in different contexts and for different products, and should thereby prove useful in standard marketing research, as well as in the development of tailored marketing strategies, and the segmentation of consumer groups, settings, brands, and products.

  15. Reuse Requirements for Generating Long Term Climate Data Sets

    NASA Astrophysics Data System (ADS)

    Fleig, A. J.

    2007-12-01

Creating long term climate data sets from remotely sensed data requires a specialized form of code reuse. To detect long term trends in a geophysical parameter, such as global ozone amount or mean sea surface temperature, it is essential to be able to differentiate between real changes in the measurement and artifacts related to changes in processing algorithms or instrument characteristics. The ability to rerun the exact algorithm used to produce a given data set many years after the data was originally made is essential to create consistent long term data sets. It is possible to quickly develop a basic algorithm that will convert a perfect instrument measurement into a geophysical parameter value for a well specified set of conditions. However the devil is in the details and it takes a massive effort to develop and verify a processing system to generate high quality global climate data over all necessary conditions. As an example, from 1976 until now, over a hundred man years and eight complete reprocessings have been spent on deriving thirty years of total ozone data from multiple backscattered ultraviolet instruments. To obtain a global data set it is necessary to make numerous assumptions and to handle many special conditions (e.g., "What happens at high solar zenith angles with scattered clouds for snow-covered terrain at high altitudes?"). It is easier to determine the precision of a remotely sensed data set than to determine its absolute accuracy. Fortunately, if the entire data set is made with a single instrument and a constant algorithm, the ability to detect long term trends is primarily determined by the precision of the measurement system rather than its absolute accuracy. However no instrument runs forever and new processing algorithms are developed over time. Introducing the resulting changes can impact the estimate of product precision and reduce the ability to estimate long term trends. Given an extended period of time when both the initial measurement

  16. Validating EHR documents: automatic schematron generation using archetypes.

    PubMed

    Pfeiffer, Klaus; Duftschmid, Georg; Rinner, Christoph

    2014-01-01

    The goal of this study was to examine whether Schematron schemas can be generated from archetypes. The openEHR Java reference API was used to transform an archetype into an object model, which was then extended with context elements. The model was processed and the constraints were transformed into corresponding Schematron assertions. A prototype of the generator for the reference model HL7 v3 CDA R2 was developed and successfully tested. Preconditions for its reusability with other reference models were set. Our results indicate that an automated generation of Schematron schemas is possible with some limitations.
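
    To illustrate the downstream use of such generated schemas, the sketch below validates an XML instance against a hand-written Schematron assertion with lxml; the schema is a stand-in for one generated from an archetype and is not taken from the study:

      from lxml import etree
      from lxml.isoschematron import Schematron

      schematron_xml = etree.XML(b"""
      <schema xmlns="http://purl.oclc.org/dsdl/schematron">
        <pattern>
          <rule context="observation">
            <assert test="number(value) &gt;= 0 and number(value) &lt;= 300">
              systolic blood pressure must lie between 0 and 300
            </assert>
          </rule>
        </pattern>
      </schema>
      """)
      schema = Schematron(schematron_xml)

      doc = etree.XML(b"<observation><value>145</value></observation>")
      print(schema.validate(doc))   # True: the assertion holds for this document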

  17. Optimization of the generator settings for endobiliary radiofrequency ablation.

    PubMed

    Barret, Maximilien; Leblanc, Sarah; Vienne, Ariane; Rouquette, Alexandre; Beuvon, Frederic; Chaussade, Stanislas; Prat, Frederic

    2015-11-10

    To determine the optimal generator settings for endobiliary radiofrequency ablation. Endobiliary radiofrequency ablation was performed in live swine on the ampulla of Vater, the common bile duct and in the hepatic parenchyma. Radiofrequency ablation time, "effect", and power were allowed to vary. The animals were sacrificed two hours after the procedure. Histopathological assessment of the depth of the thermal lesions was performed. Twenty-five radiofrequency bursts were applied in three swine. In the ampulla of Vater (n = 3), necrosis of the duodenal wall was observed starting with an effect set at 8, power output set at 10 W, and a 30 s shot duration, whereas superficial mucosal damage of up to 350 μm in depth was recorded for an effect set at 8, power output set at 6 W and a 30 s shot duration. In the common bile duct (n = 4), a 1070 μm, safe and efficient ablation was obtained for an effect set at 8, a power output of 8 W, and an ablation time of 30 s. Within the hepatic parenchyma (n = 18), the depth of tissue damage varied from 1620 μm (effect = 8, power = 10 W, ablation time = 15 s) to 4480 μm (effect = 8, power = 8 W, ablation time = 90 s). The duration of the catheter application appeared to be the most important parameter influencing the depth of the thermal injury during endobiliary radiofrequency ablation. In healthy swine, the currently recommended settings of the generator may induce severe, supratherapeutic tissue damage in the biliary tree, especially in the high-risk area of the ampulla of Vater.

  18. Optimization of the generator settings for endobiliary radiofrequency ablation

    PubMed Central

    Barret, Maximilien; Leblanc, Sarah; Vienne, Ariane; Rouquette, Alexandre; Beuvon, Frederic; Chaussade, Stanislas; Prat, Frederic

    2015-01-01

    AIM: To determine the optimal generator settings for endobiliary radiofrequency ablation. METHODS: Endobiliary radiofrequency ablation was performed in live swine on the ampulla of Vater, the common bile duct and in the hepatic parenchyma. Radiofrequency ablation time, “effect”, and power were allowed to vary. The animals were sacrificed two hours after the procedure. Histopathological assessment of the depth of the thermal lesions was performed. RESULTS: Twenty-five radiofrequency bursts were applied in three swine. In the ampulla of Vater (n = 3), necrosis of the duodenal wall was observed starting with an effect set at 8, power output set at 10 W, and a 30 s shot duration, whereas superficial mucosal damage of up to 350 μm in depth was recorded for an effect set at 8, power output set at 6 W and a 30 s shot duration. In the common bile duct (n = 4), a 1070 μm, safe and efficient ablation was obtained for an effect set at 8, a power output of 8 W, and an ablation time of 30 s. Within the hepatic parenchyma (n = 18), the depth of tissue damage varied from 1620 μm (effect = 8, power = 10 W, ablation time = 15 s) to 4480 μm (effect = 8, power = 8 W, ablation time = 90 s). CONCLUSION: The duration of the catheter application appeared to be the most important parameter influencing the depth of the thermal injury during endobiliary radiofrequency ablation. In healthy swine, the currently recommended settings of the generator may induce severe, supratherapeutic tissue damage in the biliary tree, especially in the high-risk area of the ampulla of Vater. PMID:26566429

  19. The experimental studies of operating modes of a diesel-generator set at variable speed

    NASA Astrophysics Data System (ADS)

    Obukhov, S. G.; Plotnikov, I. A.; Surkov, M. A.; Sumarokova, L. P.

    2017-02-01

    A diesel generator set working at variable speed to save fuel is studied. The results of experimental studies of the operating modes of an autonomous diesel generator set are presented. Areas for regulating operating modes are determined. It is demonstrated that the transfer of the diesel generator set to variable speed of the diesel engine makes it possible to improve the energy efficiency of the autonomous generator source, as well as the environmental and ergonomic performance of the equipment as compared with general industrial analogues.

  20. The Use of Generating Sets with LNG Gas Engines in "Shore to Ship" Systems

    NASA Astrophysics Data System (ADS)

    Tarnapowicz, Dariusz; German-Galkin, Sergiej

    2016-09-01

The main sources of air pollution in ports are ships, on which electrical energy is produced by autonomous diesel-generator sets. The most effective way to reduce harmful exhaust emissions from ships is to shut down the marine generating sets and provide shore-side electricity in a "Shore to Ship" system. The main implementation problem in supplying ships with power from land is matching the voltage parameters of the onshore network with those of the marine network. Currently, the recommended solution is to supply ships from the onshore electricity network with the use of power electronic converters. This article presents an analysis of the "Shore to Ship" system with the use of generating sets with LNG gas engines. It shows topologies with LNG-generator sets and the environmental benefits, advantages, and disadvantages of such a solution.

  1. Generation of the global cloud free data set of MODIS

    NASA Astrophysics Data System (ADS)

    Oguro, Y.; Tsuchiya, K.

To extract temporal change of the land cover from remotely sensed data from space, the generation of a reliable cloud-free data set is the first priority item. With the objectives of generating accurate global basic data and finding the effects of spectral and spatial resolution differences and observation time, an attempt is made to generate a reliable global cloud-free data set of Terra and Aqua MODIS utilizing personal computers. Out of 36 bands, seven bands with spectral features similar to those of Landsat TM, i.e., Bands 1 through 7, are selected; these bands cover the most important spectra for deriving land-cover features. The procedure of the data set generation is as follows: (1) download the global Terra and Aqua MODIS daytime data (MOD02 Level-1B Calibrated Geolocation Data Set) at 250 meter (Bands 1 and 2) and 500 meter (Bands 3 through 7) resolution from the NASA web site; (2) separate the data into several BSQ (Band SeQuential) image files and several text files of geolocation information of pixels; (3) since the geolocation information is given for pixels at intervals of several kilometers, resample the data at 1/2- and 1/4-degree intervals of latitude and longitude, so that the resampled pixels are distributed in the latitude-longitude plane at 1/4-degree (high resolution) and 1/2-degree (low resolution) intervals; (4) compose a global data set for one day; (5) compute NDVI for each pixel; (6) compare the NDVI values of successive days and keep the larger NDVI, at the same time keeping the values of each band from the day with the larger NDVI.
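
    Steps 5 and 6 amount to per-pixel maximum-NDVI compositing, which the following numpy sketch illustrates on toy arrays (only two bands are carried along here, whereas the actual procedure keeps all seven):

      import numpy as np

      def ndvi(red, nir):
          return (nir - red) / (nir + red + 1e-9)   # small eps avoids 0/0

      def composite(days_red, days_nir):
          """days_red, days_nir: arrays of shape (n_days, H, W)."""
          scores = ndvi(days_red, days_nir)          # (n_days, H, W)
          best = np.argmax(scores, axis=0)           # winning day index per pixel
          ii, jj = np.indices(best.shape)
          return days_red[best, ii, jj], days_nir[best, ii, jj], scores.max(axis=0)

      rng = np.random.default_rng(2)
      red = rng.uniform(0.0, 0.3, (8, 4, 4))    # 8 days of a 4x4 tile
      nir = rng.uniform(0.1, 0.6, (8, 4, 4))
      r, n, v = composite(red, nir)
      print(v)   # the composited (maximum) NDVI per pixel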

  2. Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner

    NASA Astrophysics Data System (ADS)

    Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.

    2015-02-01

    Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.

  3. Optimization and validation of moving average quality control procedures using bias detection curves and moving average validation charts.

    PubMed

    van Rossum, Huub H; Kemperman, Hans

    2017-02-01

    To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combination of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various bias. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of bias detection properties of multiple MA. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
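
    The following simulation reproduces the flavour of one point on a bias detection curve: a fixed bias is injected into simulated sodium results, and the number of results needed before a truncated moving average crosses its control limits is recorded over many runs. All settings are illustrative assumptions, not the paper's optimized values:

      import numpy as np

      def results_to_detection(results, bias, window, lo, hi, trunc):
          """Return the number of results until the truncated MA leaves (lo, hi)."""
          stream = np.clip(results + bias, *trunc)      # apply truncation limits
          for i in range(window, len(stream) + 1):
              ma = stream[i - window:i].mean()
              if ma < lo or ma > hi:
                  return i
          return None                                   # bias not detected in this run

      rng = np.random.default_rng(3)
      baseline = rng.normal(140, 3, size=(1000, 500))   # e.g. sodium results, mmol/L
      counts = [results_to_detection(run, bias=4.0, window=25,
                                     lo=138.5, hi=141.5, trunc=(120, 160))
                for run in baseline]
      detected = [c for c in counts if c is not None]
      print(np.median(detected))   # median results needed: one bias detection curve point

    Repeating this over a grid of biases traces a full bias detection curve, and the min/median/max over runs give the corresponding MA validation chart entries.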

  4. 65. FORWARD EMERGENCY DIESEL GENERATOR SET AFT LOOKING FORWARD ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    65. FORWARD EMERGENCY DIESEL GENERATOR SET - AFT LOOKING FORWARD SHOWING TOP HALF OF FAIRBANKS MORSE 36D81/8 TEN CYLINDER DIESEL ENGINE SERIAL #951230 AND EXHAUST SYSTEM. - U.S.S. HORNET, Puget Sound Naval Shipyard, Sinclair Inlet, Bremerton, Kitsap County, WA

  5. Validated numerical simulation model of a dielectric elastomer generator

    NASA Astrophysics Data System (ADS)

    Foerster, Florentine; Moessinger, Holger; Schlaak, Helmut F.

    2013-04-01

Dielectric elastomer generators (DEG) produce electrical energy by converting mechanical into electrical energy. Efficient operation requires homogeneous deformation of each single layer. However, internal and external influences such as supports or the shape of a DEG make the deformation inhomogeneous and hence reduce the amount of generated electrical energy. Optimization of the deformation behavior leads to improved efficiency of the DEG and consequently to higher energy gain. In this work a numerical simulation model of a multilayer dielectric elastomer generator is developed using the FEM software ANSYS. The analyzed multilayer DEG consists of 49 active dielectric layers with layer thicknesses of 50 μm. The elastomer is silicone (PDMS) while the compliant electrodes are made of graphite powder. In the simulation the real material parameters of the PDMS and the graphite electrodes need to be included. Therefore, the mechanical and electrical material parameters of the PDMS are determined by experimental investigations of test samples while the electrode parameters are determined by numerical simulations of test samples. The numerical simulation of the DEG is carried out as coupled electro-mechanical simulation for the constant voltage energy harvesting cycle. Finally, the derived numerical simulation model is validated by comparison with analytical calculations and further simulated DEG configurations. The comparison of the results shows good agreement with regard to the deformation of the DEG. Based on the validated model it is now possible to optimize the DEG layout for improved deformation behavior with further simulations.

  6. Building the foundation to generate a fundamental care standardised data set.

    PubMed

    Jeffs, Lianne; Muntlin Athlin, Asa; Needleman, Jack; Jackson, Debra; Kitson, Alison

    2018-06-01

This paper provides an overview of the current state of performance measurement, key trends and a methodological approach to leverage in efforts to generate a standardised data set for fundamental care. Considerable transformation is occurring in health care globally with organisations focusing on achieving the quadruple aim of improving the experience of care, the health of populations, and the experience of providing care while reducing per capita costs of health care. In response, healthcare organisations are employing performance measurement and quality improvement methods to achieve the quadruple aim. Despite the plethora of measures available to health managers, there is no standardised data set and virtually no indicators reflecting how patients actually experience the delivery of fundamental care, such as nutrition, hydration, mobility, respect, education and psychosocial support. Given the linkages of fundamental care to safety and quality metrics, efforts to build the evidence base and knowledge that captures the impact of enacting fundamental care across the healthcare continuum and lifespan should include generating a routinely collected data set of relevant measures. Standardised data sets enable comparability of data across clinical populations, healthcare sectors, geographic locations and time and provide data about care to support clinical, administrative and health policy decision-making. © 2018 John Wiley & Sons Ltd.

  7. Standard Setting Methods for Pass/Fail Decisions on High-Stakes Objective Structured Clinical Examinations: A Validity Study.

    PubMed

    Yousuf, Naveed; Violato, Claudio; Zuberi, Rukhsana W

    2015-01-01

CONSTRUCT: Authentic standard setting methods will demonstrate high convergent validity evidence of their outcomes, that is, cutoff scores and pass/fail decisions, with most other methods when compared with each other. The objective structured clinical examination (OSCE) was established for valid, reliable, and objective assessment of clinical skills in health professions education. Various standard setting methods have been proposed to identify objective, reliable, and valid cutoff scores on OSCEs. These methods may identify different cutoff scores for the same examinations. Identification of valid and reliable cutoff scores for OSCEs remains an important issue and a challenge. Thirty OSCE stations administered at least twice in the years 2010-2012 to 393 medical students in Years 2 and 3 at Aga Khan University are included. Psychometric properties of the scores are determined. Cutoff scores and pass/fail decisions of Wijnen, Cohen, Mean-1.5SD, Mean-1SD, Angoff, borderline group and borderline regression (BL-R) methods are compared with each other and with three variants of cluster analysis using repeated measures analysis of variance and Cohen's kappa. The mean psychometric indices on the 30 OSCE stations are reliability coefficient = 0.76 (SD = 0.12); standard error of measurement = 5.66 (SD = 1.38); coefficient of determination = 0.47 (SD = 0.19), and intergrade discrimination = 7.19 (SD = 1.89). The BL-R and Wijnen methods show the highest convergent validity evidence among the methods compared on the defined criteria. Angoff and Mean-1.5SD demonstrated the least convergent validity evidence. The three cluster variants showed substantial convergent validity with the borderline methods. Although there was a high level of convergent validity of the Wijnen method, it lacks the theoretical strength to be used for competency-based assessments. The BL-R method is found to show the highest convergent validity evidence for OSCEs with the other standard setting methods used in the present study.
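
    For readers unfamiliar with the BL-R method discussed above, this minimal sketch shows the underlying computation: regress station checklist scores on examiners' global grades and read the cutoff off the fitted line at the borderline grade. The data and grade coding are invented for illustration:

      import numpy as np

      grades = np.array([1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 5, 5])      # 2 = "borderline"
      scores = np.array([35, 40, 48, 52, 50, 60, 63, 70, 74, 82, 85, 88.])

      slope, intercept = np.polyfit(grades, scores, 1)   # least-squares line
      cutoff = intercept + slope * 2                     # predicted score at borderline grade
      print(f"BL-R cutoff: {cutoff:.1f}%")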

  8. Validation of the Amsterdam Dynamic Facial Expression Set--Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions.

    PubMed

    Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark

    2016-01-01

    Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author.

  9. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    PubMed

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development-The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions-angry, fearful, sad, happy, surprised, and disgusted-and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  10. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set.

    PubMed

    Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang

    2017-04-26

This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels, which were located neighboring the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to the untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits using the optimal parameters, tradeoff coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.
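
    A hedged sketch of the parameter search over (C, s) using a target/outlier validation set: scikit-learn's OneClassSVM with an RBF kernel (parameters nu and gamma) is used here as a close stand-in for SVDD, and the pixel data are simulated rather than taken from the paper:

      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(4)
      train = rng.normal(0, 1, (200, 6))           # target-class training pixels (6 bands)
      val_target = rng.normal(0, 1, (50, 6))       # validation: target pixels
      val_outlier = rng.normal(0.8, 1, (50, 6))    # validation: spectrally neighbouring outliers

      best = None
      for nu in [0.01, 0.05, 0.1]:                 # plays the role of the tradeoff C
          for gamma in [0.05, 0.1, 0.5, 1.0]:      # plays the role of the kernel width s
              model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(train)
              acc = np.mean(np.r_[model.predict(val_target) == 1,
                                  model.predict(val_outlier) == -1])
              if best is None or acc > best[0]:
                  best = (acc, nu, gamma)
      print(best)   # validation accuracy and the selected parameter pair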

  11. Dyspnoea-12: a translation and linguistic validation study in a Swedish setting

    PubMed Central

    Ekström, Magnus

    2017-01-01

Background Dyspnoea consists of multiple dimensions including the intensity, unpleasantness, sensory qualities and emotional responses which may differ between patient groups, settings and in relation to treatment. The Dyspnoea-12 is a validated and convenient instrument for multidimensional measurement in English. We aimed to take forward a Swedish version of the Dyspnoea-12. Methods The linguistic validation of the Dyspnoea-12 was performed (Mapi Language Services, Lyon, France). The standardised procedure involved forward and backward translations by three independent certified translators and revisions after feedback from an in-country linguistic consultant, the developer, and three native physicians. The understanding and convenience of the translated version was evaluated using qualitative in-depth interviews with five patients with dyspnoea. Results A Swedish version of the Dyspnoea-12 was elaborated and evaluated carefully according to international guidelines. The Swedish version, ‘Dyspné-12’, has the same layout as the original version, including 12 items distributed on seven physical and five affective items. The Dyspnoea-12 is copyrighted by the developer but can be used free of charge, with permission, for non-industry-funded research. Conclusion A Swedish version of the Dyspnoea-12 is now available for clinical validation and multidimensional measurement across diseases and settings with the aim of improved evaluation and management of dyspnoea. PMID:28592574

  12. The stroke impairment assessment set: its internal consistency and predictive validity.

    PubMed

    Tsuji, T; Liu, M; Sonoda, S; Domen, K; Chino, N

    2000-07-01

    To study the scale quality and predictive validity of the Stroke Impairment Assessment Set (SIAS) developed for stroke outcome research. Rasch analysis of the SIAS; stepwise multiple regression analysis to predict discharge functional independence measure (FIM) raw scores from demographic data, the SIAS scores, and the admission FIM scores; cross-validation of the prediction rule. Tertiary rehabilitation center in Japan. One hundred ninety stroke inpatients for the study of the scale quality and the predictive validity; a second sample of 116 stroke inpatients for the cross-validation study. Mean square fit statistics to study the degree of fit to the unidimensional model; logits to express item difficulties; discharge FIM scores for the study of predictive validity. The degree of misfit was acceptable except for the shoulder range of motion (ROM), pain, visuospatial function, and speech items; and the SIAS items could be arranged on a common unidimensional scale. The difficulty patterns were identical at admission and at discharge except for the deep tendon reflexes, ROM, and pain items. They were also similar for the right- and left-sided brain lesion groups except for the speech and visuospatial items. For the prediction of the discharge FIM scores, the independent variables selected were age, the SIAS total scores, and the admission FIM scores; and the adjusted R2 was .64 (p < .0001). Stability of the predictive equation was confirmed in the cross-validation sample (R2 = .68, p < .001). The unidimensionality of the SIAS was confirmed, and the SIAS total scores proved useful for stroke outcome prediction.

  13. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults

    PubMed Central

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants. PMID:25610415

  14. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    NASA Astrophysics Data System (ADS)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
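
    A simplified sketch of the two ingredients named above, the column-generation master problem and the single-pattern pricing subproblem, for one stock length (the 1DMSSCSP repeats this over multiple stock sizes); the data, tolerances, dual-sign convention, and final round-up are illustrative assumptions:

      import numpy as np
      from scipy.optimize import linprog

      L = 100                                   # stock length
      lengths = np.array([45, 36, 31, 14])      # item lengths
      demand = np.array([97, 610, 395, 211])    # item demands

      def knapsack(values, lengths, capacity):
          """Unbounded integer knapsack by DP; returns best value and item counts."""
          best = np.zeros(capacity + 1)
          choice = -np.ones(capacity + 1, dtype=int)
          for c in range(1, capacity + 1):
              for i, (v, l) in enumerate(zip(values, lengths)):
                  if l <= c and best[c - l] + v > best[c]:
                      best[c], choice[c] = best[c - l] + v, i
          counts = np.zeros(len(lengths), dtype=int)
          c = capacity
          while c > 0 and choice[c] >= 0:
              counts[choice[c]] += 1
              c -= lengths[choice[c]]
          return best[capacity], counts

      patterns = [np.eye(len(lengths), dtype=int)[i] * (L // l)
                  for i, l in enumerate(lengths)]          # trivial starting columns
      while True:
          A = np.column_stack(patterns)
          res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                        bounds=(0, None), method="highs")  # master LP: cover demand
          duals = -res.ineqlin.marginals                   # shadow prices of the demands
          value, new_pattern = knapsack(duals, lengths, L) # pricing subproblem
          if value <= 1 + 1e-9:
              break                                        # no pattern with negative reduced cost
          patterns.append(new_pattern)
      print(int(np.ceil(res.x).sum()), "stock pieces (rounded-up LP solution)")

    The exact second-stage step in the paper instead solves an integer program restricted to the generated pattern set; the ceiling here is only a crude stand-in for that step.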

  15. Arbitrarily exhaustive hypergraph generation of 4-, 6-, 8-, 16-, and 32-dimensional quantum contextual sets

    NASA Astrophysics Data System (ADS)

    Pavičić, Mladen

    2017-06-01

Quantum contextuality turns out to be a necessary resource for universal quantum computation and important in the field of quantum information processing. It is therefore of interest both for theoretical considerations and for experimental implementation to find new types and instances of contextual sets and develop methods of their optimal generation. We present an arbitrarily exhaustive hypergraph-based generation of the most explored contextual sets [Kochen-Specker (KS) ones] in 4, 6, 8, 16, and 32 dimensions. We consider and analyze 12 KS classes and obtain numerous properties of these classes, which we then compare with the results previously obtained in the literature. We generate several thousand additional types and instances of KS sets, including all KS sets in three of the classes and the upper part of a fourth set. We make use of the McKay-Megill-Pavičić (MMP) hypergraph language, algorithms, and programs to generate KS sets strictly following their definition from the Kochen-Specker theorem. This approach proves to be particularly advantageous over the parity-proof-based ones (which prevail in the literature) since it turns out that only a very few KS sets have a parity proof (in six KS classes <0.01% and in one of them 0%). The MMP hypergraph formalism enables a translation of an exponentially complex task of solving systems of nonlinear equations, describing KS vector orthogonalities, into a statistically linearly complex task of evaluating vertex states of hypergraph edges, thus exponentially speeding up the generation of KS sets and enabling us to generate billions of novel instances of them. The MMP hypergraph notation also enables us to graphically represent KS sets and to visually discern their features.
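
    The "evaluating vertex states of hypergraph edges" idea can be illustrated with a toy check: a hypergraph of bases is a KS set exactly when no 0/1 assignment to its vertices gives every edge exactly one vertex valued 1. The naive backtracking sketch below is a stand-in for the MMP programs, and the example hypergraph is a toy, not one of the paper's KS classes:

      def ks_colorable(edges, n_vertices):
          """True if some 0/1 assignment puts exactly one 1 in every edge."""
          assignment = [None] * n_vertices

          def ok(edge):
              vals = [assignment[v] for v in edge]
              ones = vals.count(1)
              if ones > 1:
                  return False
              return ones == 1 or None in vals      # still completable

          def backtrack(v):
              if v == n_vertices:
                  return all(sum(assignment[u] for u in e) == 1 for e in edges)
              for val in (0, 1):
                  assignment[v] = val
                  if all(ok(e) for e in edges if v in e) and backtrack(v + 1):
                      return True
                  assignment[v] = None
              return False

          return backtrack(0)

      edges = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]     # toy 3-vertex edges
      print(ks_colorable(edges, 6))   # True: this toy hypergraph is NOT a KS set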

  16. Validation, Optimization and Simulation of a Solar Thermoelectric Generator Model

    NASA Astrophysics Data System (ADS)

    Madkhali, Hadi Ali; Hamil, Ali; Lee, HoSung

    2017-12-01

This study explores thermoelectrics as a viable option for small-scale solar thermal applications. Thermoelectric technology is based on the Seebeck effect, which states that a voltage is induced when a temperature gradient is applied to the junctions of two differing materials. This research proposes to analyze, validate, simulate, and optimize a prototype solar thermoelectric generator (STEG) model in order to increase efficiency. The intent is to further develop STEGs as a viable and productive energy source that limits pollution and reduces the cost of energy production. An empirical study (Kraemer et al. in Nat Mater 10:532, 2011) on the solar thermoelectric generator reported a high efficiency of 4.6%. The system had a vacuum glass enclosure, a flat-panel absorber, a thermoelectric generator, and water circulation for the cold side. The theoretical and numerical approach of the current study validated the experimental results from Kraemer's study to a high degree. The numerical simulation process utilizes a two-stage approach in ANSYS software for Fluent and Thermal-Electric Systems. The solar load model technique uses solar radiation under AM 1.5G conditions in Fluent. This analytical model applies Dr. Ho Sung Lee's theory of optimal design to improve the performance of the STEG system by using dimensionless parameters. Applying this theory, using two cover glasses and radiation shields, the STEG model can achieve a maximum efficiency of 7%.
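
    As a back-of-envelope illustration of the Seebeck relation underlying STEGs, the open-circuit voltage of a module scales as the number of couples times the couple Seebeck coefficient times the temperature difference; the numbers below are generic Bi2Te3-like values, not the paper's:

      # V_oc = n * (S_p - S_n) * dT, with illustrative (assumed) values
      n_couples = 127
      S_p, S_n = 200e-6, -200e-6   # Seebeck coefficients, V/K
      dT = 150                      # temperature difference across the module, K
      V_oc = n_couples * (S_p - S_n) * dT
      print(f"{V_oc:.2f} V open-circuit")   # ~7.6 V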

  17. A new generation of crystallographic validation tools for the protein data bank.

    PubMed

    Read, Randy J; Adams, Paul D; Arendall, W Bryan; Brunger, Axel T; Emsley, Paul; Joosten, Robbie P; Kleywegt, Gerard J; Krissinel, Eugene B; Lütteke, Thomas; Otwinowski, Zbyszek; Perrakis, Anastassis; Richardson, Jane S; Sheffler, William H; Smith, Janet L; Tickle, Ian J; Vriend, Gert; Zwart, Peter H

    2011-10-12

    This report presents the conclusions of the X-ray Validation Task Force of the worldwide Protein Data Bank (PDB). The PDB has expanded massively since current criteria for validation of deposited structures were adopted, allowing a much more sophisticated understanding of all the components of macromolecular crystals. The size of the PDB creates new opportunities to validate structures by comparison with the existing database, and the now-mandatory deposition of structure factors creates new opportunities to validate the underlying diffraction data. These developments highlighted the need for a new assessment of validation criteria. The Task Force recommends that a small set of validation data be presented in an easily understood format, relative to both the full PDB and the applicable resolution class, with greater detail available to interested users. Most importantly, we recommend that referees and editors judging the quality of structural experiments have access to a concise summary of well-established quality indicators. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. A New Generation of Crystallographic Validation Tools for the Protein Data Bank

    PubMed Central

    Read, Randy J.; Adams, Paul D.; Arendall, W. Bryan; Brunger, Axel T.; Emsley, Paul; Joosten, Robbie P.; Kleywegt, Gerard J.; Krissinel, Eugene B.; Lütteke, Thomas; Otwinowski, Zbyszek; Perrakis, Anastassis; Richardson, Jane S.; Sheffler, William H.; Smith, Janet L.; Tickle, Ian J.; Vriend, Gert; Zwart, Peter H.

    2011-01-01

    Summary This report presents the conclusions of the X-ray Validation Task Force of the worldwide Protein Data Bank (PDB). The PDB has expanded massively since current criteria for validation of deposited structures were adopted, allowing a much more sophisticated understanding of all the components of macromolecular crystals. The size of the PDB creates new opportunities to validate structures by comparison with the existing database, and the now-mandatory deposition of structure factors creates new opportunities to validate the underlying diffraction data. These developments highlighted the need for a new assessment of validation criteria. The Task Force recommends that a small set of validation data be presented in an easily understood format, relative to both the full PDB and the applicable resolution class, with greater detail available to interested users. Most importantly, we recommend that referees and editors judging the quality of structural experiments have access to a concise summary of well-established quality indicators. PMID:22000512

  19. Validation of the Amsterdam Dynamic Facial Expression Set – Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions

    PubMed Central

    Wingenbach, Tanja S. H.

    2016-01-01

    Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author

  20. Dyspnoea-12: a translation and linguistic validation study in a Swedish setting.

    PubMed

    Sundh, Josefin; Ekström, Magnus

    2017-06-06

Dyspnoea consists of multiple dimensions including the intensity, unpleasantness, sensory qualities and emotional responses which may differ between patient groups, settings and in relation to treatment. The Dyspnoea-12 is a validated and convenient instrument for multidimensional measurement in English. We aimed to take forward a Swedish version of the Dyspnoea-12. The linguistic validation of the Dyspnoea-12 was performed (Mapi Language Services, Lyon, France). The standardised procedure involved forward and backward translations by three independent certified translators and revisions after feedback from an in-country linguistic consultant, the developer, and three native physicians. The understanding and convenience of the translated version was evaluated using qualitative in-depth interviews with five patients with dyspnoea. A Swedish version of the Dyspnoea-12 was elaborated and evaluated carefully according to international guidelines. The Swedish version, 'Dyspné-12', has the same layout as the original version, including 12 items distributed on seven physical and five affective items. The Dyspnoea-12 is copyrighted by the developer but can be used free of charge, with permission, for non-industry-funded research. A Swedish version of the Dyspnoea-12 is now available for clinical validation and multidimensional measurement across diseases and settings with the aim of improved evaluation and management of dyspnoea. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  1. Affordances in the home environment for motor development: Validity and reliability for the use in daycare setting.

    PubMed

    Müller, Alessandra Bombarda; Valentini, Nadia Cristina; Bandeira, Paulo Felipe Ribeiro

    2017-05-01

    The range of stimuli provided by physical space, toys and care practices contributes to the motor, cognitive and social development of children. However, assessing the quality of child education environments is a challenge, and can be considered a health promotion initiative. This study investigated the criterion, content and construct validity, and the reliability, of the Affordances in the Home Environment for Motor Development - Infant Scale (AHEMD-IS), version 3-18 months, for use in daycare settings. Content validation was conducted with seven motor development and health care experts, and face validity with 20 specialists in health and education. The results indicate the suitability of the adapted AHEMD-IS, evidencing its validity for the daycare setting as a potential tool to assess the opportunities that the collective context offers for child development. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Generative Text Sets: Tools for Negotiating Critically Inclusive Early Childhood Teacher Education Pedagogical Practices

    ERIC Educational Resources Information Center

    Souto-Manning, Mariana

    2017-01-01

    Through a case study, this article sheds light onto generative text sets as tools for developing and enacting critically inclusive early childhood teacher education pedagogies. In doing so, it positions teaching and learning processes as sociocultural, historical, and political acts as it inquires into the use of generative text sets in one early…

  3. International validation of quality indicators for evaluating priority setting in low income countries: process and key lessons.

    PubMed

    Kapiriri, Lydia

    2017-06-19

    While there have been efforts to develop frameworks to guide healthcare priority setting, there has been limited focus on evaluation frameworks. Moreover, while the few existing frameworks identify quality indicators for successful priority setting, they do not provide users with strategies to verify these indicators. Kapiriri and Martin (Health Care Anal 18:129-147, 2010) developed a framework for evaluating priority setting in low- and middle-income countries that provides both parameters for successful priority setting and proposed means of their verification. This paper presents results from a validation of the framework prior to its use in real-life contexts. The validation involved 53 policy makers and priority-setting researchers at the global, national and sub-national levels (in Uganda). They were requested to indicate the relative importance of the proposed parameters as well as the feasibility of obtaining the related information. We also pilot tested the proposed means of verification. Almost all the respondents rated all the parameters, including the contextual factors, as 'very important'. However, some respondents at the global level rated 'presence of incentives to comply', 'reduced disagreements', 'increased public understanding', 'improved institutional accountability' and 'meeting the ministry of health objectives' as less important, which could be a reflection of their level of decision making. All the proposed means of verification were assessed as feasible, with the exception of meeting observations, which would require an insider. These results were consistent with those obtained from the pilot testing. These findings are relevant to policy makers and researchers involved in priority setting in low- and middle-income countries. To the best of our knowledge, this is one of the few initiatives that has involved potential users of a framework (at the global level and in a low-income country) in its validation. The favorable validation

  4. Good validity of the international spinal cord injury quality of life basic data set.

    PubMed

    Post, M W M; Adriaansen, J J E; Charlifue, S; Biering-Sørensen, F; van Asbeck, F W A

    2016-04-01

    Cross-sectional validation study. To examine the construct and concurrent validity of the International Spinal Cord Injury (SCI) Quality of Life (QoL) Basic Data Set. Dutch community. People 28-65 years of age, who obtained their SCI between 18 and 35 years of age, were at least 10 years post SCI and were wheelchair users in daily life. MEASURE(S): The International SCI QoL Basic Data Set consists of three single items on satisfaction with life as a whole, physical health and psychological health (0=complete dissatisfaction; 10=complete satisfaction). Reference measures were the Mental Health Inventory-5 and three items of the World Health Organization Quality of Life measure. Data of 261 participants were available. Mean time after SCI was 24.1 years (s.d. 9.1); 90.4% had a traumatic SCI, 81.5% a motor complete SCI and 40% had tetraplegia. Mean age was 47.9 years (s.d. 8.8) and 73.2% were male. Mean scores were 6.9 (s.d. 1.9) for general QoL, 5.8 (s.d. 2.2) for physical health and 7.1 (s.d. 1.9) for psychological health. No floor or ceiling effects were found. Strong inter-correlations (0.48-0.71) were found between the items, and Cronbach's alpha of the scale was good (0.81). Correlations with the reference measures showed the strongest correlations between the WHOQOL general satisfaction item and general QoL (0.64), the WHOQOL health and daily activities items and physical health (0.69 and 0.60) and the Mental Health Inventory-5 and psychological health (0.70). This first validity study of the International SCI QoL Basic Data Set shows that it appears valid for persons with SCI.
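
    Cronbach's alpha, the internal-consistency index reported above (0.81), can be computed directly from respondent-by-item scores. A minimal sketch (illustrative; the toy scores are assumptions, not the study's data):

        import numpy as np

        def cronbach_alpha(items):
            """items: respondents x items array of scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            sum_item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - sum_item_var / total_var)

        # Toy example: 5 respondents rating 3 items on a 0-10 scale
        scores = [[7, 6, 8], [5, 5, 6], [9, 8, 9], [4, 5, 5], [7, 7, 8]]
        print(cronbach_alpha(scores))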

  5. Can One Satellite Data Set Validate Another? Validation of Envisat SCIAMACHY Data by Comparisons with NOAA-16 SBUV/2 and ERS-2 GOME

    NASA Technical Reports Server (NTRS)

    Hilsenrath, E.; Bojkov, B. R.; Labow, G.; Weber, M.; Burrows, J.

    2004-01-01

    Validation of satellite data remains a high priority for the construction of climate data sets. Traditionally, ground-based measurements have provided the primary comparison data for validation. For some atmospheric parameters, such as ozone, a thoroughly validated satellite data record can be used to validate a new instrument's data product in addition to using ground-based data. Comparing validated data with new satellite data has several advantages: availability of much more data, which will improve precision; larger geographical coverage; and footprints that are closer in size, which removes uncertainty due to different observed atmospheric volumes. To demonstrate the applicability and some limitations of this technique, observations from the newly launched SCIAMACHY instrument were compared with the NOAA-16 SBUV/2 and ERS-2 GOME instruments. The SBUV/2 data had already undergone validation by comparison with the total ozone ground network. Overall, the SCIAMACHY data were found to be low by 3% with respect to the satellite data and 1% low with respect to ground station data. There appear to be seasonal and/or solar zenith angle dependences in the comparisons with SBUV/2, where differences increase at higher solar zenith angles. It is known that the accuracies of both satellite and ground-based total ozone algorithms decrease at high solar zenith angles. There is a strong need for more accurate measurements from space and from the ground under these conditions. At the present time SCIAMACHY data are limited, and a longer data set with more coverage in both hemispheres is needed to unravel the cause of these differences.

  6. Validation of the NIMH-ChEFS adolescent face stimulus set in an adolescent, parent, and health professional sample

    PubMed Central

    COFFMAN, MARIKA C.; TRUBANOVA, ANDREA; RICHEY, J. ANTHONY; WHITE, SUSAN W.; KIM-SPOON, JUNGMEEN; OLLENDICK, THOMAS H.; PINE, DANIEL S.

    2016-01-01

    Attention to faces is a fundamental psychological process in humans, with atypical attention to faces noted across several clinical disorders. Although many clinical disorders onset in adolescence, there is a lack of well-validated stimulus sets containing adolescent faces available for experimental use. Further, the images comprising most available sets are not controlled for high- and low-level visual properties. Here, we present a cross-site validation of the National Institute of Mental Health Child Emotional Faces Picture Set (NIMH-ChEFS), comprising 257 photographs of adolescent faces displaying angry, fearful, happy, sad, and neutral expressions. All of the direct facial images from the NIMH-ChEFS set were adjusted in terms of location of facial features and standardized for luminance, size, and smoothness. Although overall agreement between raters in this study and the original development-site raters was high (89.52%), this differed by group such that agreement was lower for adolescents relative to mental health professionals in the current study. These results suggest that future research using this face set or others of adolescent/child faces should base comparisons on similarly-aged validation data. PMID:26359940

  7. GSA-PCA: gene set generation by principal component analysis of the Laplacian matrix of a metabolic network

    PubMed Central

    2012-01-01

    Background Gene Set Analysis (GSA) has proven to be a useful approach to microarray analysis. However, most of the method development for GSA has focused on the statistical tests to be used rather than on the generation of sets that will be tested. Existing methods of set generation are often overly simplistic. The creation of sets from individual pathways (in isolation) is a poor reflection of the complexity of the underlying metabolic network. We have developed a novel approach to set generation via the use of Principal Component Analysis of the Laplacian matrix of a metabolic network. We have analysed a relatively simple data set to show the difference in results between our method and the current state-of-the-art pathway-based sets. Results The sets generated with this method are semi-exhaustive and capture much of the topological complexity of the metabolic network. The semi-exhaustive nature of this method has also allowed us to design a hypergeometric enrichment test to determine which genes are likely responsible for set significance. We show that our method finds significant aspects of biology that would be missed (i.e. false negatives) and addresses the false positive rates found with the use of simple pathway-based sets. Conclusions The set generation step for GSA is often neglected but is a crucial part of the analysis as it defines the full context for the analysis. As such, set generation methods should be robust and yield as complete a representation of the extant biological knowledge as possible. The method reported here achieves this goal and is demonstrably superior to previous set analysis methods. PMID:22876834
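
    The set-generation step described above rests on an eigendecomposition of the graph Laplacian of the metabolic network, with genes that load strongly on the same principal component grouped into a candidate set. A minimal sketch of that idea (illustrative only; the adjacency matrix and membership threshold are assumptions, not the authors' pipeline):

        import numpy as np

        # Toy adjacency matrix of a metabolic network (genes linked by
        # shared reactions); symmetric, no self-loops
        A = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0],
                      [0, 0, 1, 0, 1, 1],
                      [0, 0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 1, 0]], dtype=float)
        L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

        # Eigendecomposition of the symmetric Laplacian (PCA of the network)
        eigvals, eigvecs = np.linalg.eigh(L)

        # One candidate gene set per non-trivial component: genes whose
        # loadings exceed an assumed threshold
        threshold = 0.3
        gene_sets = [set(np.flatnonzero(np.abs(eigvecs[:, k]) > threshold))
                     for k in range(1, 4)]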

  8. An efficient algorithm for generating diverse microstructure sets and delineating properties closures

    DOE PAGES

    Johnson, Oliver K.; Kurniawan, Christian

    2018-02-03

    Properties closures delineate the theoretical objective space for materials design problems, allowing designers to make informed trade-offs between competing constraints and target properties. In this paper, we present a new algorithm called hierarchical simplex sampling (HSS) that approximates properties closures more efficiently and faithfully than traditional optimization based approaches. By construction, HSS generates samples of microstructure statistics that span the corresponding microstructure hull. As a result, we also find that HSS can be coupled with synthetic polycrystal generation software to generate diverse sets of microstructures for subsequent mesoscale simulations. Finally, by more broadly sampling the space of possible microstructures, it is anticipated that such diverse microstructure sets will expand our understanding of the influence of microstructure on macroscale effective properties and inform the construction of higher-fidelity mesoscale structure-property models.
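
    The paper's HSS algorithm is not reproduced here, but its key primitive — drawing samples that uniformly span a simplex of microstructure statistics — can be illustrated with Dirichlet-distributed barycentric weights over the hull's vertices (a sketch under that assumption, not the authors' implementation):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy "hull": rows are statistics vectors of extreme microstructures
        vertices = np.array([[1.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0],
                             [0.0, 0.0, 1.0]])

        # Dirichlet(1, ..., 1) weights are uniform over the simplex, so the
        # convex combinations below spread samples across the whole hull
        weights = rng.dirichlet(np.ones(len(vertices)), size=1000)
        samples = weights @ vertices   # 1000 sampled microstructure statistics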

  9. An efficient algorithm for generating diverse microstructure sets and delineating properties closures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Oliver K.; Kurniawan, Christian

    Properties closures delineate the theoretical objective space for materials design problems, allowing designers to make informed trade-offs between competing constraints and target properties. In this paper, we present a new algorithm called hierarchical simplex sampling (HSS) that approximates properties closures more efficiently and faithfully than traditional optimization based approaches. By construction, HSS generates samples of microstructure statistics that span the corresponding microstructure hull. As a result, we also find that HSS can be coupled with synthetic polycrystal generation software to generate diverse sets of microstructures for subsequent mesoscale simulations. Finally, by more broadly sampling the space of possible microstructures, it is anticipated that such diverse microstructure sets will expand our understanding of the influence of microstructure on macroscale effective properties and inform the construction of higher-fidelity mesoscale structure-property models.

  10. A Validated Open-Source Multisolver Fourth-Generation Composite Femur Model.

    PubMed

    MacLeod, Alisdair R; Rose, Hannah; Gill, Harinderjit S

    2016-12-01

    Synthetic biomechanical test specimens are frequently used for preclinical evaluation of implant performance, often in combination with numerical modeling, such as finite-element (FE) analysis. Commercial and freely available FE packages are widely used, with three FE packages in particular gaining popularity: abaqus (Dassault Systèmes, Johnston, RI), ansys (ANSYS, Inc., Canonsburg, PA), and febio (University of Utah, Salt Lake City, UT). To the best of our knowledge, no study has yet made a comparison of these three commonly used solvers. Additionally, despite the femur being the most extensively studied bone in the body, no freely available validated model exists. The primary aim of the study was to compare mesh convergence and strain prediction between the three solvers (abaqus, ansys, and febio) and to provide validated open-source models of a fourth-generation composite femur for use with all three FE packages. Second, we evaluated the geometric variability around the femoral neck region of the composite femurs. Experimental testing was conducted using fourth-generation Sawbones® composite femurs instrumented with strain gauges at four locations. A generic FE model and four specimen-specific FE models were created from CT scans. The study found that the three solvers produced excellent agreement, with strain predictions being within an average of 3.0% for all the solvers (r2 > 0.99) and 1.4% for the two commercial codes. The average of the root mean squared error against the experimental results was 134.5% (r2 = 0.29) for the generic model and 13.8% (r2 = 0.96) for the specimen-specific models. It was found that composite femurs had variations in cortical thickness around the neck of the femur of up to 48.4%. For the first time, an experimentally validated finite-element model of the femur is presented for use in three solvers. This model is freely available online along with all the supporting validation data.

  11. Capitalizing on Citizen Science Data for Validating Models and Generating Hypotheses Describing Meteorological Drivers of Mosquito-Borne Disease Risk

    NASA Astrophysics Data System (ADS)

    Boger, R. A.; Low, R.; Paull, S.; Anyamba, A.; Soebiyanto, R. P.

    2017-12-01

    Temperature and precipitation are important drivers of mosquito population dynamics, and a growing set of models have been proposed to characterize these relationships. Validation of these models, and development of broader theories across mosquito species and regions could nonetheless be improved by comparing observations from a global dataset of mosquito larvae with satellite-based measurements of meteorological variables. Citizen science data can be particularly useful for two such aspects of research into the meteorological drivers of mosquito populations: i) Broad-scale validation of mosquito distribution models and ii) Generation of quantitative hypotheses regarding changes to mosquito abundance and phenology across scales. The recently released GLOBE Observer Mosquito Habitat Mapper (GO-MHM) app engages citizen scientists in identifying vector taxa, mapping breeding sites and decommissioning non-natural habitats, and provides a potentially useful new tool for validating mosquito ubiquity projections based on the analysis of remotely sensed environmental data. Our early work with GO-MHM data focuses on two objectives: validating citizen science reports of Aedes aegypti distribution through comparison with accepted scientific data sources, and exploring the relationship between extreme temperature and precipitation events and subsequent observations of mosquito larvae. Ultimately the goal is to develop testable hypotheses regarding the shape and character of this relationship between mosquito species and regions.

  12. Validation in the clinical process: four settings for objectification of the subjectivity of understanding.

    PubMed

    Beland, H

    1994-12-01

    Clinical material is presented for discussion with the aim of exemplifying the author's conceptions of validation in a number of sessions and in psychoanalytic research and of making them verifiable, susceptible to consensus and/or falsifiable. Since Freud's postscript to the Dora case, the first clinical validation in the history of psychoanalysis, validation has been group-related and society-related, that is to say, it combines the evidence of subjectivity with the consensus of the research community (the scientific community). Validation verifies the conformity of the unconscious transference meaning with the analyst's understanding. The deciding criterion is the patient's reaction to the interpretation. In terms of the theory of science, validation in the clinical process corresponds to experimental testing of truth in the sphere of inanimate nature. Four settings of validation can be distinguished: the analyst's self-supervision during the process of understanding, which goes from incomprehension to comprehension (container-contained, PS-->D, selected fact); the patient's reaction to the interpretation (insight) and the analyst's assessment of the reaction; supervision and second thoughts; and discussion in groups and publications leading to consensus. It is a peculiarity of psychoanalytic research that in the event of positive validation the three criteria of truth (evidence, consensus and utility) coincide.

  13. Teaching the Assessment of Normality Using Large Easily-Generated Real Data Sets

    ERIC Educational Resources Information Center

    Kulp, Christopher W.; Sprechini, Gene D.

    2016-01-01

    A classroom activity is presented, which can be used in teaching students statistics with an easily generated, large, real world data set. The activity consists of analyzing a video recording of an object. The colour data of the recorded object can then be used as a data set to explore variation in the data using graphs including histograms,…

  14. Validation of the NIMH-ChEFS adolescent face stimulus set in an adolescent, parent, and health professional sample.

    PubMed

    Coffman, Marika C; Trubanova, Andrea; Richey, J Anthony; White, Susan W; Kim-Spoon, Jungmeen; Ollendick, Thomas H; Pine, Daniel S

    2015-12-01

    Attention to faces is a fundamental psychological process in humans, with atypical attention to faces noted across several clinical disorders. Although many clinical disorders onset in adolescence, there is a lack of well-validated stimulus sets containing adolescent faces available for experimental use. Further, the images comprising most available sets are not controlled for high- and low-level visual properties. Here, we present a cross-site validation of the National Institute of Mental Health Child Emotional Faces Picture Set (NIMH-ChEFS), comprising 257 photographs of adolescent faces displaying angry, fearful, happy, sad, and neutral expressions. All of the direct facial images from the NIMH-ChEFS set were adjusted in terms of location of facial features and standardized for luminance, size, and smoothness. Although overall agreement between raters in this study and the original development-site raters was high (89.52%), this differed by group such that agreement was lower for adolescents relative to mental health professionals in the current study. These results suggest that future research using this face set or others of adolescent/child faces should base comparisons on similarly-aged validation data. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Content validity of the Comprehensive ICF Core Set for multiple sclerosis from the perspective of speech and language therapists.

    PubMed

    Renom, Marta; Conrad, Andrea; Bascuñana, Helena; Cieza, Alarcos; Galán, Ingrid; Kesselring, Jürg; Coenen, Michaela

    2014-11-01

    The Comprehensive International Classification of Functioning, Disability and Health (ICF) Core Set for Multiple Sclerosis (MS) is a comprehensive framework to structure the information obtained in multidisciplinary clinical settings according to the biopsychosocial perspective of the ICF and to guide the treatment and rehabilitation process accordingly. It is now undergoing validation from the user perspective for which it was developed in the first place. To validate the content of the Comprehensive ICF Core Set for MS from the perspective of speech and language therapists (SLTs) involved in the treatment of persons with MS (PwMS). Within a three-round e-mail-based Delphi study, 34 SLTs were asked about PwMS' problems, resources and aspects of the environment treated by SLTs. Responses were linked to ICF categories. Identified ICF categories were compared with those included in the Comprehensive ICF Core Set for MS to examine its content validity. The 34 SLTs named 524 problems and resources, as well as aspects of the environment. Statements were linked to 129 ICF categories (60 Body-functions categories, two Body-structures categories, 42 Activities-&-participation categories, and 25 Environmental-factors categories). SLTs confirmed 46 categories in the Comprehensive ICF Core Set. Twenty-one ICF categories were identified as not yet included. This study contributes to the content validity of the Comprehensive ICF Core Set for MS from the perspective of SLTs. Study participants agreed on a few not-yet-included categories that should be further discussed for inclusion in a revised version of the Comprehensive ICF Core Set to strengthen the SLTs' perspective in PwMS' neurorehabilitation. © 2014 Royal College of Speech and Language Therapists.

  16. Efficient simulation of voxelized phantom in GATE with embedded SimSET multiple photon history generator.

    PubMed

    Lin, Hsin-Hon; Chuang, Keh-Shih; Lin, Yi-Hsing; Ni, Yu-Ching; Wu, Jay; Jan, Meei-Ling

    2014-10-21

    GEANT4 Application for Tomographic Emission (GATE) is a powerful Monte Carlo simulator that combines the advantages of the general-purpose GEANT4 simulation code and the specific software tool implementations dedicated to emission tomography. However, the detailed physical modelling of GEANT4 is highly computationally demanding, especially when tracking particles through voxelized phantoms. To circumvent the relatively slow simulation of voxelized phantoms in GATE, another efficient Monte Carlo code can be used to simulate photon interactions and transport inside a voxelized phantom. The simulation system for emission tomography (SimSET), a dedicated Monte Carlo code for PET/SPECT systems, is well known for its efficiency in simulating voxel-based objects. An efficient Monte Carlo workflow integrating GATE and SimSET for simulating pinhole SPECT has been proposed to improve voxelized phantom simulation. Although the workflow achieves a desirable increase in speed, it sacrifices the ability to simulate decaying radioactive sources such as non-pure positron emitters or multiple emission isotopes with complex decay schemes, and it lacks the modelling of time-dependent processes due to the inherent limitations of the SimSET photon history generator (PHG). Moreover, a large volume of disk storage is needed to store the huge temporary photon history file produced by SimSET that must be transported to GATE. In this work, we developed a multiple photon emission history generator (MPHG) based on SimSET/PHG to support a majority of the medically important positron emitters. We incorporated the new generator codes inside GATE to improve the simulation efficiency of voxelized phantoms in GATE, while eliminating the need for the temporary photon history file. The validation of this new code, based on a MicroPET R4 system, was conducted for (124)I and (18)F with mouse-like and rat-like phantoms. Comparison of GATE/MPHG with GATE/GEANT4 indicated there is a slight difference in energy
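
    At its core, a photon history generator samples decay positions from a voxelized activity map and emits photons in random directions for subsequent tracking. A minimal sketch of that sampling step (illustrative only; not the MPHG code):

        import numpy as np

        rng = np.random.default_rng(42)

        # Voxelized activity map (toy 3-D phantom); values ~ decays per voxel
        activity = np.zeros((16, 16, 16))
        activity[6:10, 6:10, 6:10] = 1.0               # hot cube in the centre

        # Sample decay voxels with probability proportional to activity
        p = activity.ravel() / activity.sum()
        idx = rng.choice(activity.size, size=100000, p=p)
        z, y, x = np.unravel_index(idx, activity.shape)

        # Isotropic emission directions: uniform on the unit sphere
        cos_t = rng.uniform(-1.0, 1.0, idx.size)
        phi = rng.uniform(0.0, 2.0 * np.pi, idx.size)
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        directions = np.stack([sin_t * np.cos(phi),
                               sin_t * np.sin(phi),
                               cos_t], axis=1)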

  17. Validation of a dew-point generator for pressures up to 6 MPa using nitrogen and air

    NASA Astrophysics Data System (ADS)

    Bosma, R.; Mutter, D.; Peruzzi, A.

    2012-08-01

    A new primary humidity standard was developed at VSL that, in addition to ordinary operation with air and nitrogen at atmospheric pressure, can be operated with other carrier gases such as natural gas at pressures up to 6 MPa and SF6 at pressures up to 1 MPa. The temperature range of the standard is from -80 °C to +20 °C. In this paper, we report the validation of the new primary dew-point generator in the temperature range -41 °C to +5 °C and the pressure range 0.1 MPa to 6 MPa using nitrogen and air. For the validation the flow through the dew-point generator was varied up to 10 l min-1 (at 23 °C and 1013 hPa) and the dew point of the gas entering the generator was varied up to 15 °C above the dew point exiting the generator. The validation results showed that the new generator, over the tested temperature and pressure range, can be used with a standard uncertainty of 0.02 °C frost/dew point. The measurements used for the validation at -41 °C and -20 °C with nitrogen and at +5 °C with air were also used to calculate the enhancement factor at pressures up to 6 MPa. For +5 °C the differences between the measured and literature values were compatible with the respective uncertainties. For -41 °C and -20 °C they were compatible only up to 3 MPa. At 6 MPa a discrepancy was observed.
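
    For orientation, the water-vapour enhancement factor mentioned above is conventionally defined in humidity metrology (a standard definition, stated here for context rather than quoted from the paper) as

        \[
        f(p, T) = \frac{x_v \, p}{e_s(T)}
        \]

    where x_v is the mole fraction of water vapour in the saturated gas at total pressure p and e_s(T) is the saturation vapour pressure of pure water (or ice) at temperature T; f approaches 1 as the pressure drops and the carrier gas behaves ideally.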

  18. Validation of the Care-Related Quality of Life Instrument in different study settings: findings from The Older Persons and Informal Caregivers Survey Minimum DataSet (TOPICS-MDS).

    PubMed

    Lutomski, J E; van Exel, N J A; Kempen, G I J M; Moll van Charante, E P; den Elzen, W P J; Jansen, A P D; Krabbe, P F M; Steunenberg, B; Steyerberg, E W; Olde Rikkert, M G M; Melis, R J F

    2015-05-01

    Validity is a contextual aspect of a scale, which may differ across sample populations and study protocols. The objective of our study was to validate the Care-Related Quality of Life Instrument (CarerQol) across two study design features: sampling framework (general population vs. different care settings) and survey mode (interview vs. written questionnaire). Data were extracted from The Older Persons and Informal Caregivers Minimum DataSet (TOPICS-MDS, www.topics-mds.eu), a pooled public-access data set with information on >3,000 informal caregivers throughout the Netherlands. Meta-correlations and linear mixed models relating the CarerQol's seven dimensions (CarerQol-7D) to caregivers' level of happiness (CarerQol-VAS) and self-rated burden (SRB) were estimated. The CarerQol-7D dimensions were correlated to the CarerQol-VAS and SRB in the pooled data set and the subgroups. The strength of correlations between CarerQol-7D dimensions and SRB was weaker among caregivers who were interviewed versus those who completed a written questionnaire. The directionality of associations between the CarerQol-VAS, SRB and the CarerQol-7D dimensions in the multivariate model supported the construct validity of the CarerQol in the pooled population. Significant interaction terms were observed in several dimensions of the CarerQol-7D across sampling frame and survey mode, suggesting meaningful differences in reporting levels. Although good scientific practice emphasises the importance of re-evaluating instrument properties in individual research studies, our findings support the validity and applicability of the CarerQol instrument in a variety of settings. Due to minor differential reporting, pooling CarerQol data collected using mixed administration modes should be interpreted with caution; for TOPICS-MDS, meta-analytic techniques may be warranted.

  19. Validation of Metagenomic Next-Generation Sequencing Tests for Universal Pathogen Detection.

    PubMed

    Schlaberg, Robert; Chiu, Charles Y; Miller, Steve; Procop, Gary W; Weinstock, George

    2017-06-01

    - Metagenomic sequencing can be used for detection of any pathogen using unbiased, shotgun next-generation sequencing (NGS), without the need for sequence-specific amplification. Proof-of-concept has been demonstrated in infectious disease outbreaks of unknown cause and in patients with suspected infections but negative results for conventional tests. Metagenomic NGS tests hold great promise to improve infectious disease diagnostics, especially in immunocompromised and critically ill patients. - To discuss challenges and provide example solutions for validating metagenomic pathogen detection tests in clinical laboratories. A summary of current regulatory requirements, largely based on prior guidance for NGS testing in constitutional genetics and oncology, is provided. - Examples from 2 separate validation studies are provided for steps from assay design, and validation of wet bench and bioinformatics protocols, to quality control and assurance. - Although laboratory and data analysis workflows are still complex, metagenomic NGS tests for infectious diseases are increasingly being validated in clinical laboratories. Many parallels exist to NGS tests in other fields. Nevertheless, specimen preparation, rapidly evolving data analysis algorithms, and incomplete reference sequence databases are idiosyncratic to the field of microbiology and often overlooked.

  20. The Outcome and Assessment Information Set (OASIS): A Review of Validity and Reliability

    PubMed Central

    O’CONNOR, MELISSA; DAVITT, JOAN K.

    2015-01-01

    The Outcome and Assessment Information Set (OASIS) is the patient-specific, standardized assessment used in Medicare home health care to plan care, determine reimbursement, and measure quality. Since its inception in 1999, there has been debate over the reliability and validity of the OASIS as a research tool and outcome measure. A systematic literature review of English-language articles identified 12 studies published in the last 10 years examining the validity and reliability of the OASIS. Empirical findings indicate the validity and reliability of the OASIS range from low to moderate but vary depending on the item studied. Limitations in the existing research include: nonrepresentative samples; inconsistencies in methods used, items tested, measurement, and statistical procedures; and the changes to the OASIS itself over time. The inconsistencies suggest that these results are tentative at best; additional research is needed to confirm the value of the OASIS for measuring patient outcomes, research, and quality improvement. PMID:23216513

  1. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    PubMed

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure-activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
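
    Of the rational splitting methods named above, Kennard-Stone is the simplest to state: seed the training set with the two most mutually distant compounds, then repeatedly add the candidate whose nearest already-selected neighbour is farthest away. A minimal sketch (illustrative; the descriptor matrix X is simulated):

        import numpy as np
        from scipy.spatial.distance import cdist

        def kennard_stone(X, n_train):
            """Indices of n_train samples chosen by the Kennard-Stone rule."""
            d = cdist(X, X)
            i, j = np.unravel_index(np.argmax(d), d.shape)  # two most distant points
            selected = [int(i), int(j)]
            remaining = [k for k in range(len(X)) if k not in selected]
            while len(selected) < n_train:
                # Each candidate's distance to its nearest selected point
                nearest = d[np.ix_(remaining, selected)].min(axis=1)
                selected.append(remaining.pop(int(np.argmax(nearest))))
            return selected

        X = np.random.default_rng(1).normal(size=(100, 5))   # toy descriptors
        train_idx = kennard_stone(X, 80)
        test_idx = sorted(set(range(100)) - set(train_idx))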

  2. Validation of Virtual Learning Team Competencies for Individual Students in a Distance Education Setting

    ERIC Educational Resources Information Center

    Topchyan, Ruzanna; Zhang, Jie

    2014-01-01

    The purpose of this study was twofold. First, the study aimed to validate the scale of the Virtual Team Competency Inventory in distance education, which had initially been designed for a corporate setting. Second, the methodological advantages of Exploratory Structural Equation Modeling (ESEM) framework over Confirmatory Factor Analysis (CFA)…

  3. Building and validating a prediction model for paediatric type 1 diabetes risk using next generation targeted sequencing of class II HLA genes.

    PubMed

    Zhao, Lue Ping; Carlsson, Annelie; Larsson, Helena Elding; Forsander, Gun; Ivarsson, Sten A; Kockum, Ingrid; Ludvigsson, Johnny; Marcus, Claude; Persson, Martina; Samuelsson, Ulf; Örtqvist, Eva; Pyo, Chul-Woo; Bolouri, Hamid; Zhao, Michael; Nelson, Wyatt C; Geraghty, Daniel E; Lernmark, Åke

    2017-11-01

    It is of interest to predict possible lifetime risk of type 1 diabetes (T1D) in young children for recruiting high-risk subjects into longitudinal studies of effective prevention strategies. Utilizing a case-control study in Sweden, we applied a recently developed next-generation targeted sequencing technology to genotype class II genes and used an object-oriented regression to build and validate a prediction model for T1D. In the training set, estimated risk scores were significantly different between patients and controls (P = 8.12 × 10^-92), and the area under the curve (AUC) from the receiver operating characteristic (ROC) analysis was 0.917. Using the validation data set, we validated the result with an AUC of 0.886. Combining both training and validation data resulted in a predictive model with an AUC of 0.903. Further, we performed a "biological validation" by correlating risk scores with 6 islet autoantibodies, and found that the risk score was significantly correlated with IA-2A (Z-score = 3.628, P < 0.001). When applying this prediction model to the Swedish population, where the lifetime T1D risk ranges from 0.5% to 2%, we anticipate identifying approximately 20,000 high-risk subjects after testing all newborns; this would capture approximately 80% of all patients expected to develop T1D in their lifetime. Through both empirical and biological validation, we have established a prediction model for estimating lifetime T1D risk using class II HLA. This prediction model should prove useful for future investigations to identify high-risk subjects for prevention research in high-risk populations. Copyright © 2017 John Wiley & Sons, Ltd.
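
    The AUC values reported above summarize how well the risk score ranks patients over controls; computing AUC from scores and labels is direct with scikit-learn (sketch on simulated data, not the study's):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        # Simulated risk scores: cases tend to score higher than controls
        scores = np.concatenate([rng.normal(1.0, 1.0, 500),    # cases
                                 rng.normal(-1.0, 1.0, 500)])  # controls
        labels = np.concatenate([np.ones(500), np.zeros(500)])
        print(roc_auc_score(labels, scores))   # about 0.92 for this separation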

  4. MODIS Land Data Products: Generation, Quality Assurance and Validation

    NASA Technical Reports Server (NTRS)

    Masuoka, Edward; Wolfe, Robert; Morisette, Jeffery; Sinno, Scott; Teague, Michael; Saleous, Nazmi; Devadiga, Sadashiva; Justice, Christopher; Nickeson, Jaime

    2008-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on board NASA's Earth Observing System (EOS) Terra and Aqua satellites are key instruments for providing data on global land, atmosphere, and ocean dynamics. Derived MODIS land, atmosphere and ocean products are central to NASA's mission to monitor and understand the Earth system. NASA has developed and generated on a systematic basis a suite of MODIS products, starting with the first Terra MODIS data sensed February 22, 2000 and continuing with the first MODIS-Aqua data sensed July 2, 2002. The MODIS Land products are divided into three product suites: radiation budget products, ecosystem products, and land cover characterization products. The production and distribution of the MODIS Land products are described, from initial software delivery by the MODIS Land Science Team, to operational product generation and quality assurance, delivery to EOS archival and distribution centers, and product accuracy assessment and validation. Progress and lessons learned since the first MODIS data were acquired in early 2000 are described.

  5. GeneratorSE: A Sizing Tool for Variable-Speed Wind Turbine Generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Dykes, Katherine L

    This report documents a set of analytical models employed by the optimization algorithms within the GeneratorSE framework. The initial values and boundary conditions employed for the generation of the various designs, and initial estimates for basic design dimensions, masses, and efficiency for the four different models of generators, are presented and compared with empirical data collected from previous studies and some existing commercial turbines. These models include designs applicable to variable-speed, high-torque applications featuring direct-drive synchronous generators and low-torque applications featuring induction generators. In all four models presented, the main focus of optimization is electromagnetic design, with the exception of permanent-magnet and wire-wound synchronous generators, wherein the structural design is also optimized. Thermal design is accommodated in GeneratorSE as a secondary attribute by keeping the winding current densities within acceptable limits. A preliminary validation of electromagnetic design was carried out by comparing the optimized magnetic loading against that predicted by numerical simulation in FEMM4.2, a finite-element software package for analyzing electromagnetic and thermal physics problems for electrical machines. For direct-drive synchronous generators, the analytical models for the structural design are validated by static structural analysis in ANSYS.

  6. How to test validity in orthodontic research: a mixed dentition analysis example.

    PubMed

    Donatelli, Richard E; Lee, Shin-Jae

    2015-02-01

    The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
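
    Leave-one-out cross-validation, which the study above found to give the smallest validation errors, fits the model n times, each time holding out a single case and predicting it. A minimal sketch with a regression model (illustrative, scikit-learn-based; not the authors' mixed dentition code):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 2))                  # e.g. predictor tooth widths
        y = X @ np.array([1.5, 0.8]) + rng.normal(scale=0.3, size=40)

        errors = []
        for train, test in LeaveOneOut().split(X):
            model = LinearRegression().fit(X[train], y[train])
            errors.append((y[test] - model.predict(X[test]))[0])
        rmse = np.sqrt(np.mean(np.square(errors)))    # leave-one-out error estimate
        print(rmse)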

  7. On the validity of the basis set superposition error and complete basis set limit extrapolations for the binding energy of the formic acid dimer

    NASA Astrophysics Data System (ADS)

    Miliordos, Evangelos; Xantheas, Sotiris S.

    2015-03-01

    We report the variation of the binding energy of the Formic Acid Dimer with the size of the basis set at the Coupled Cluster with iterative Singles, Doubles and perturbatively connected Triple replacements [CCSD(T)] level of theory, estimate the Complete Basis Set (CBS) limit, and examine the validity of the Basis Set Superposition Error (BSSE)-correction for this quantity that was previously challenged by Kalescky, Kraka, and Cremer (KKC) [J. Chem. Phys. 140, 084315 (2014)]. Our results indicate that the BSSE correction, including terms that account for the substantial geometry change of the monomers due to the formation of two strong hydrogen bonds in the dimer, is indeed valid for obtaining accurate estimates for the binding energy of this system as it exhibits the expected decrease with increasing basis set size. We attribute the discrepancy between our current results and those of KKC to their use of a valence basis set in conjunction with the correlation of all electrons (i.e., including the 1s of C and O). We further show that the use of a core-valence set in conjunction with all electron correlation converges faster to the CBS limit as the BSSE correction is less than half that of the valence electron/valence basis set case. The uncorrected and BSSE-corrected binding energies were found to produce the same (within 0.1 kcal/mol) CBS limits. We obtain CCSD(T)/CBS best estimates of De = -16.1 ± 0.1 kcal/mol and D0 = -14.3 ± 0.1 kcal/mol, the latter in excellent agreement with the experimental value of -14.22 ± 0.12 kcal/mol.
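
    For context, the counterpoise (CP) correction at issue evaluates each monomer in the full dimer basis so that basis-set incompleteness does not artificially deepen the binding. In its standard Boys-Bernardi form, extended with the monomer-deformation terms the abstract refers to (textbook expression, not quoted from the paper):

        \[
        \Delta E^{\mathrm{CP}} = E_{AB}^{AB}(AB) - E_{A}^{AB}(AB) - E_{B}^{AB}(AB)
          + \left[ E_{A}^{A}(AB) - E_{A}^{A}(A) \right]
          + \left[ E_{B}^{B}(AB) - E_{B}^{B}(B) \right]
        \]

    Here E_X^Y(Z) denotes the energy of system X computed in basis Y at geometry Z; the bracketed terms account for the geometry change of the monomers on forming the two hydrogen bonds.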

  8. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    NASA Astrophysics Data System (ADS)

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions in the Synapse system for inductive inference of context free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm in the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for the positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. Synthesis of grammars by the serial search is faster than by minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that found by minimum set search, and the system can find no appropriate grammar for some CFLs by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.
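
    Rule generation in this family of systems sits on top of bottom-up parsing of Chomsky-Normal-Form rules in the CYK style; the sketch below shows a standard CYK recognizer of the kind such a bridging step extends (a textbook illustration, not Synapse itself):

        def cyk(tokens, terminal_rules, binary_rules, start="S"):
            """CYK recognizer for CNF rules. terminal_rules: terminal -> LHS
            symbols; binary_rules: (B, C) -> LHS symbols for rules A -> B C."""
            n = len(tokens)
            table = [[set() for _ in range(n)] for _ in range(n)]
            for i, tok in enumerate(tokens):
                table[i][i] = set(terminal_rules.get(tok, []))
            for span in range(2, n + 1):
                for i in range(n - span + 1):
                    j = i + span - 1
                    for k in range(i, j):     # split point between the halves
                        for B in table[i][k]:
                            for C in table[k + 1][j]:
                                table[i][j] |= set(binary_rules.get((B, C), []))
            return start in table[0][n - 1]

        # Toy CNF grammar for a^n b^n: S -> A B | A X, X -> S B, A -> a, B -> b
        terms = {"a": ["A"], "b": ["B"]}
        bins = {("A", "B"): ["S"], ("A", "X"): ["S"], ("S", "B"): ["X"]}
        print(cyk(list("aabb"), terms, bins))   # True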

  9. Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing

    NASA Astrophysics Data System (ADS)

    Williams, McKay D.

    Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce

  10. Performance of genomic prediction within and across generations in maritime pine.

    PubMed

    Bartholomé, Jérôme; Van Heerwaarden, Joost; Isik, Fikret; Boury, Christophe; Vidal, Marjorie; Plomion, Christophe; Bouffier, Laurent

    2016-08-11

    Genomic selection (GS) is a promising approach for decreasing breeding cycle length in forest trees. Assessment of progeny performance and of the prediction accuracy of GS models over generations is therefore a key issue. A reference population of maritime pine (Pinus pinaster) with an estimated effective inbreeding population size (status number) of 25 was first selected with simulated data. This reference population (n = 818) covered three generations (G0, G1 and G2) and was genotyped with 4436 single-nucleotide polymorphism (SNP) markers. We evaluated the effects on prediction accuracy of both the relatedness between the calibration and validation sets and validation on the basis of progeny performance. Pedigree-based (best linear unbiased prediction, ABLUP) and marker-based (genomic BLUP and Bayesian LASSO) models were used to predict breeding values for three different traits: circumference, height and stem straightness. On average, the ABLUP model outperformed genomic prediction models, with a maximum difference in prediction accuracies of 0.12, depending on the trait and the validation method. A mean difference in prediction accuracy of 0.17 was found between validation methods differing in terms of relatedness. Including the progenitors in the calibration set reduced this difference in prediction accuracy to 0.03. When only genotypes from the G0 and G1 generations were used in the calibration set and genotypes from G2 were used in the validation set (progeny validation), prediction accuracies ranged from 0.70 to 0.85. This study suggests that the training of prediction models on parental populations can predict the genetic merit of the progeny with high accuracy: an encouraging result for the implementation of GS in the maritime pine breeding program.
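
    Marker-based prediction of the GBLUP kind reduces, in its simplest form, to ridge regression of phenotypes on SNP genotypes, with accuracy assessed by predicting a held-out generation. A toy sketch on simulated data (not the study's models or data):

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        n, m = 400, 1000                                     # trees, SNP markers
        X = rng.binomial(2, 0.3, size=(n, m)).astype(float)  # 0/1/2 genotypes
        beta = rng.normal(scale=0.05, size=m)                # true marker effects
        y = X @ beta + rng.normal(scale=1.0, size=n)         # phenotype

        train, valid = slice(0, 300), slice(300, 400)        # e.g. G0/G1 vs G2
        model = Ridge(alpha=100.0).fit(X[train], y[train])   # ridge ~ GBLUP
        accuracy = np.corrcoef(model.predict(X[valid]), y[valid])[0, 1]
        print(accuracy)   # correlation of predicted and observed values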

  11. The impact of crowd noise on officiating in muay thai: achieving external validity in an experimental setting.

    PubMed

    Myers, Tony; Balmer, Nigel

    2012-01-01

    Numerous factors have been proposed to explain the home advantage in sport. Several authors have suggested that a partisan home crowd enhances home advantage and that this is at least in part a consequence of their influence on officiating. However, while experimental studies examining this phenomenon have high levels of internal validity (since only the "crowd noise" intervention is allowed to vary), they suffer from a lack of external validity, with decision-making in a laboratory setting typically bearing little resemblance to decision-making in live sports settings. Conversely, observational and quasi-experimental studies with high levels of external validity suffer from low levels of internal validity as countless factors besides crowd noise vary. The present study provides a unique opportunity to address these criticisms, by conducting a controlled experiment on the impact of crowd noise on officiating in a live tournament setting. Seventeen qualified judges officiated on thirty Thai boxing bouts in a live international tournament setting featuring "home" and "away" boxers. In each bout, judges were randomized into a "noise" (live sound) or "no crowd noise" (noise-canceling headphones and white noise) condition, resulting in 59 judgments in the "no crowd noise" and 61 in the "crowd noise" condition. The results provide the first experimental evidence of the impact of live crowd noise on officials in sport. A cross-classified statistical model indicated that crowd noise had a statistically significant impact, equating to just over half a point per bout (in the context of five round bouts with the "10-point must" scoring system shared with professional boxing). The practical significance of the findings, their implications for officiating and for the future conduct of crowd noise studies are discussed.

  12. The Impact of Crowd Noise on Officiating in Muay Thai: Achieving External Validity in an Experimental Setting

    PubMed Central

    Myers, Tony; Balmer, Nigel

    2012-01-01

    Numerous factors have been proposed to explain the home advantage in sport. Several authors have suggested that a partisan home crowd enhances home advantage and that this is at least in part a consequence of their influence on officiating. However, while experimental studies examining this phenomenon have high levels of internal validity (since only the “crowd noise” intervention is allowed to vary), they suffer from a lack of external validity, with decision-making in a laboratory setting typically bearing little resemblance to decision-making in live sports settings. Conversely, observational and quasi-experimental studies with high levels of external validity suffer from low levels of internal validity as countless factors besides crowd noise vary. The present study provides a unique opportunity to address these criticisms, by conducting a controlled experiment on the impact of crowd noise on officiating in a live tournament setting. Seventeen qualified judges officiated on thirty Thai boxing bouts in a live international tournament setting featuring “home” and “away” boxers. In each bout, judges were randomized into a “noise” (live sound) or “no crowd noise” (noise-canceling headphones and white noise) condition, resulting in 59 judgments in the “no crowd noise” and 61 in the “crowd noise” condition. The results provide the first experimental evidence of the impact of live crowd noise on officials in sport. A cross-classified statistical model indicated that crowd noise had a statistically significant impact, equating to just over half a point per bout (in the context of five round bouts with the “10-point must” scoring system shared with professional boxing). The practical significance of the findings, their implications for officiating and for the future conduct of crowd noise studies are discussed. PMID:23049520

  13. Setting and validating the pass/fail score for the NBDHE.

    PubMed

    Tsai, Tsung-Hsun; Dixon, Barbara Leatherman

    2013-04-01

    This report describes the overall process used for setting the pass/fail score for the National Board Dental Hygiene Examination (NBDHE). The Objective Standard Setting (OSS) method was used for setting the pass/fail score for the NBDHE. The OSS method requires a panel of experts to determine the criterion items, the proportion of these items that minimally competent candidates would answer correctly, the percentage of mastery and the confidence level of the error band. A panel of 11 experts was selected by the Joint Commission on National Dental Examinations (Joint Commission). Panel members represented geographic distribution across the U.S. and had the following characteristics: full-time dental hygiene practitioners with experience in areas of preventive, periodontal, geriatric and special needs care, and full-time dental hygiene educators with experience in areas of scientific basis for dental hygiene practice, provision of clinical dental hygiene services and community health/research principles. Utilizing the expert panel's judgments, the pass/fail score was set and the score scale was then established using the Rasch measurement model. Statistical and psychometric analysis shows that the actual failure rate and the OSS failure rate are reasonably consistent (2.4% vs. 2.8%). The analysis also showed that the lowest error of measurement (an index of precision) and the highest reliability (0.97) are achieved at the pass/fail score point. The pass/fail score is a valid guide for making decisions about candidates for dental hygiene licensure. This new standard was reviewed and approved by the Joint Commission and was implemented beginning in 2011.

  14. Optimizing the assessment of pediatric injury severity in low-resource settings: Consensus generation through a modified Delphi analysis.

    PubMed

    St-Louis, Etienne; Deckelbaum, Dan Leon; Baird, Robert; Razek, Tarek

    2017-06-01

    Although a plethora of pediatric injury severity scoring systems is available, many of them present important challenges and limitations in the low-resource setting. Our aim is to generate consensus among a group of experts regarding the optimal parameters, outcomes, and methods of estimating injury severity for pediatric trauma patients in low-resource settings. A systematic review of the literature was conducted to identify and compare existing injury scores used in pediatric patients. Qualitative data were extracted from the systematic review, including scoring parameters, settings and outcomes. In order to establish consensus regarding which of these elements are most adapted to pediatric patients in low-resource settings, they were subjected to a modified Delphi survey for external validation. The Delphi process is a structured communication technique that relies on a panel of experts to develop a systematic, interactive consensus method. We invited a group of 38 experts, including adult and pediatric surgeons, emergency physicians and anesthesiologists, and trauma team leaders from a level 1 trauma center in Montreal, Canada, and a pediatric referral trauma hospital in Santiago, Chile, to participate in two successive rounds of our survey. Consensus was reached regarding various features of an ideal pediatric trauma score. Specifically, our experts agreed that a pediatric trauma scoring tool should differ from its adult counterpart, that it can be derived from point-of-care data available at first assessment, that blood pressure is an important variable to include in a predictive model for pediatric trauma outcomes, that blood pressure is a late but specific marker of shock in pediatric patients, that pulse rate is a more sensitive marker of hemodynamic instability than blood pressure, that an assessment of airway status should be included as a predictive variable for pediatric trauma outcomes, and that the AVPU classification of neurologic status is simple and reliable in the

  15. Urine specimen validity test for drug abuse testing in workplace and court settings.

    PubMed

    Lin, Shin-Yu; Lee, Hei-Hwa; Lee, Jong-Feng; Chen, Bai-Hsiun

    2018-01-01

    In recent decades, urine drug testing in the workplace has become common in many countries in the world. There have been several studies concerning the use of the urine specimen validity test (SVT) for drug abuse testing administered in the workplace. However, very little data exist concerning the urine SVT on drug abuse tests from court specimens, including dilute, substituted, adulterated, and invalid tests. We investigated 21,696 urine drug test samples submitted from workplace and court settings in southern Taiwan over 5 years for SVT. All immunoassay screen-positive urine specimen drug tests were confirmed by gas chromatography/mass spectrometry. We found that the mean 5-year prevalence of tampering (dilute, substituted, or invalid tests) in urine specimens from the workplace and court settings was 1.09% and 3.81%, respectively. The mean 5-year percentages of dilute, substituted, and invalid urine specimens from the workplace were 89.2%, 6.8%, and 4.1%, respectively. The mean 5-year percentages of dilute, substituted, and invalid urine specimens from the court were 94.8%, 1.4%, and 3.8%, respectively. No adulterated cases were found among the workplace or court samples. The most common drug identified from the workplace specimens was amphetamine, followed by opiates. The most common drug identified from the court specimens was ketamine, followed by amphetamine. We suggest that all urine specimens taken for drug testing from both the workplace and court settings need to be tested for validity. Copyright © 2017. Published by Elsevier B.V.

  16. Installation, Operation, and Operator's Maintenance of Diesel-Engine-Driven Generator Sets.

    ERIC Educational Resources Information Center

    Marine Corps Inst., Washington, DC.

    This student guide, one of a series of correspondence training courses designed to improve the job performance of members of the Marine Corps, contains three study units dealing with the skills needed by individuals responsible for the installation, operation, and maintenance of diesel engine-driven generator sets. The first two units cover…

  17. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    PubMed

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
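
    The correction-algorithm idea is essentially a calibration regression: device readings are mapped onto chemical reference values, and the fitted map is applied to new samples. A minimal sketch with invented fat readings in g/dL and a plain least-squares line standing in for whatever functional form the authors actually used:

      import numpy as np

      # Paired fat measurements (g/dL) for the same milk samples: raw NIR
      # readings vs. a chemical reference method. Values are invented; the
      # functional form of the published algorithm may differ.
      nir_fat = np.array([3.1, 4.0, 2.6, 3.8, 4.4, 2.9])
      ref_fat = np.array([3.4, 4.5, 2.8, 4.2, 4.9, 3.1])

      # Least-squares linear correction: ref ~ a * nir + b
      a, b = np.polyfit(nir_fat, ref_fat, 1)

      def correct(reading):
          """Apply the fitted correction to a raw NIR reading."""
          return a * reading + b

      # Validating on an independent sample set means comparing corrected
      # readings against reference values (e.g., mean bias, limits of agreement).
      print(correct(np.array([3.5, 2.7])))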

  18. Finite Control Set Model Predictive Control for Multiple Distributed Generators Microgrids

    NASA Astrophysics Data System (ADS)

    Babqi, Abdulrahman Jamal

    This dissertation proposes two control strategies for AC microgrids that consist of multiple distributed generators (DGs). The control strategies are valid for both grid-connected and islanded modes of operation. In general, a microgrid can operate as a stand-alone system (i.e., islanded mode) or while it is connected to the utility grid (i.e., grid-connected mode). To enhance the performance of a microgrid, a sophisticated control scheme should be employed. The control strategies of microgrids can be divided into primary and secondary controls. The primary control regulates the output active and reactive powers of each DG in grid-connected mode as well as the output voltage and frequency of each DG in islanded mode. The secondary control is responsible for regulating the microgrid voltage and frequency in the islanded mode. Moreover, it provides power sharing schemes among the DGs. In other words, the secondary control specifies the set points (i.e., reference values) for the primary controllers. In this dissertation, Finite Control Set Model Predictive Control (FCS-MPC) was proposed for controlling microgrids. FCS-MPC was used as the primary controller to regulate the output power of each DG (in the grid-connected mode) or the voltage of the point of DG coupling (in the islanded mode of operation). In the grid-connected mode, Direct Power Model Predictive Control (DPMPC) was implemented to manage the power flow between each DG and the utility grid. In the islanded mode, Voltage Model Predictive Control (VMPC), as the primary control, and droop control, as the secondary control, were employed to control the output voltage of each DG and the system frequency. The controller was equipped with a supplementary current limiting technique in order to limit the output current of each DG during abnormal incidents. The control approach also enabled smooth transition between the two modes. The performance of the control strategy was investigated and verified using PSCAD/EMTDC software
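
    FCS-MPC exploits the fact that a power converter admits only a finite set of switching states: at every sampling instant the controller predicts, for each admissible state, the next value of the controlled variable from a discretized plant model and applies the state with the lowest cost. The sketch below shows the bare loop for a single-phase RL load tracking a current reference; the model, parameter values, and cost function are illustrative stand-ins, not the dissertation's DPMPC or VMPC formulations.

      import numpy as np

      # Single-phase RL load fed by an H-bridge whose finite control set is
      # three voltage levels; all parameter values are illustrative.
      R, L, Vdc, Ts = 1.0, 5e-3, 400.0, 50e-6
      control_set = np.array([-Vdc, 0.0, Vdc])

      def predict(i_now, v):
          # Forward-Euler discretization of L di/dt = v - R i
          return i_now + (Ts / L) * (v - R * i_now)

      i, i_ref = 0.0, 10.0
      for _ in range(200):
          # Evaluate the cost of every admissible switching state, apply the best.
          cost = np.abs(i_ref - predict(i, control_set))
          v_opt = control_set[np.argmin(cost)]
          i = predict(i, v_opt)

      print(f"current after 200 steps: {i:.2f} A (reference {i_ref} A)")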

  19. The Set of Fear Inducing Pictures (SFIP): Development and validation in fearful and nonfearful individuals.

    PubMed

    Michałowski, Jarosław M; Droździel, Dawid; Matuszewski, Jacek; Koziejowski, Wojtek; Jednoróg, Katarzyna; Marchewka, Artur

    2017-08-01

    Emotionally charged pictorial materials are frequently used in phobia research, but no existing standardized picture database is dedicated to the study of different phobias. The present work describes the results of two independent studies through which we sought to develop and validate this type of database: a Set of Fear Inducing Pictures (SFIP). In Study 1, 270 fear-relevant and 130 neutral stimuli were rated for fear, arousal, and valence by four groups of participants: small-animal (N = 34), blood/injection (N = 26), social-fearful (N = 35), and nonfearful participants (N = 22). The results from Study 1 were employed to develop the final version of the SFIP, which includes fear-relevant images of social exposure (N = 40), blood/injection (N = 80), spiders/bugs (N = 80), and angry faces (N = 30), as well as 726 neutral photographs. In Study 2, we aimed to validate the SFIP in a sample of spider, blood/injection, social-fearful, and control individuals (N = 66). The fear-relevant images were rated as being more unpleasant and led to greater fear and arousal in fearful than in nonfearful individuals. The fear images differentiated between the three fear groups in the expected directions. Overall, the present findings provide evidence for the high validity of the SFIP and confirm that the set may be successfully used in phobia research.

  20. Development and Validation of Targeted Next-Generation Sequencing Panels for Detection of Germline Variants in Inherited Diseases.

    PubMed

    Santani, Avni; Murrell, Jill; Funke, Birgit; Yu, Zhenming; Hegde, Madhuri; Mao, Rong; Ferreira-Gonzalez, Andrea; Voelkerding, Karl V; Weck, Karen E

    2017-06-01

    The number of targeted next-generation sequencing (NGS) panels for genetic diseases offered by clinical laboratories is rapidly increasing. Before an NGS-based test is implemented in a clinical laboratory, appropriate validation studies are needed to determine the performance characteristics of the test. This report provides examples of assay design and validation of targeted NGS gene panels for the detection of germline variants associated with inherited disorders. The approaches used by 2 clinical laboratories for the development and validation of targeted NGS gene panels are described, and important design and validation considerations are examined. Clinical laboratories must validate performance specifications of each test prior to implementation. Test design specifications and validation data are provided, outlining important steps in validation of targeted NGS panels by clinical diagnostic laboratories.

  1. Evaluating Gene Set Enrichment Analysis Via a Hybrid Data Model

    PubMed Central

    Hua, Jianping; Bittner, Michael L.; Dougherty, Edward R.

    2014-01-01

    Gene set enrichment analysis (GSA) methods have been widely adopted by biological labs to analyze data and generate hypotheses for validation. Most of the existing comparison studies focus on whether the existing GSA methods can produce accurate P-values; however, practitioners are often more concerned with the correct gene-set ranking generated by the methods. The ranking performance is closely related to two critical goals associated with GSA methods: the ability to reveal biological themes and ensuring reproducibility, especially for small-sample studies. We have conducted a comprehensive simulation study focusing on the ranking performance of seven representative GSA methods. We overcome the limitation on the availability of real data sets by creating hybrid data models from existing large data sets. To build the data model, we pick a master gene from the data set to form the ground truth and artificially generate the phenotype labels. Multiple hybrid data models can be constructed from one data set and multiple data sets of smaller sizes can be generated by resampling the original data set. This approach enables us to generate a large batch of data sets to check the ranking performance of GSA methods. Our simulation study reveals that for the proposed data model, the Q2 type GSA methods have in general better performance than other GSA methods and the global test has the most robust results. The properties of a data set play a critical role in the performance. For the data sets with highly connected genes, all GSA methods suffer significantly in performance. PMID:24558298
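
    The hybrid data model can be summarized concretely: choose a master gene from a real expression matrix, derive the phenotype labels artificially from its expression to fix the ground truth, then resample smaller data sets for benchmarking. A minimal sketch with simulated numbers standing in for a real data set:

      import numpy as np

      rng = np.random.default_rng(0)

      # Stand-in expression matrix (genes x samples); a real study would start
      # from one of the large public data sets.
      n_genes, n_samples = 1000, 120
      expr = rng.normal(size=(n_genes, n_samples))

      # Pick a master gene; its expression defines the hidden ground truth and
      # the artificially generated phenotype labels.
      master = 42
      labels = (expr[master] > np.median(expr[master])).astype(int)

      # Resample smaller data sets to benchmark the ranking stability of a
      # gene set analysis method across many replicates.
      def resample(size):
          idx = rng.choice(n_samples, size=size, replace=True)
          return expr[:, idx], labels[idx]

      small_expr, small_labels = resample(30)
      print(small_expr.shape, small_labels.mean())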

  2. Spanish Translation and Cross-Language Validation of a Sleep Habits Questionnaire for Use in Clinical and Research Settings

    PubMed Central

    Baldwin, Carol M.; Choi, Myunghan; McClain, Darya Bonds; Celaya, Alma; Quan, Stuart F.

    2012-01-01

    Study Objectives: To translate, back-translate and cross-language validate (English/Spanish) the Sleep Heart Health Study Sleep Habits Questionnaire for use with Spanish-speakers in clinical and research settings. Methods: Following rigorous translation and back-translation, this cross-sectional cross-language validation study recruited bilingual participants from academic, clinic, and community-based settings (N = 50; 52% women; mean age 38.8 ± 12 years; 90% of Mexican heritage). Participants completed English and Spanish versions of the Sleep Habits Questionnaire, the Epworth Sleepiness Scale, and the Acculturation Rating Scale for Mexican Americans II one week apart in randomized order. Psychometric properties were assessed, including internal consistency, convergent validity, scale equivalence, language version intercorrelations, and exploratory factor analysis using PASW (Version 18) software. Grade level readability of the sleep measure was evaluated. Results: All sleep categories (duration, snoring, apnea, insomnia symptoms, other sleep symptoms, sleep disruptors, restless legs syndrome) showed Cronbach α, Spearman-Brown coefficients and intercorrelations ≥ 0.700, suggesting robust internal consistency, correlation, and agreement between language versions. The Epworth correlated significantly with snoring, apnea, sleep symptoms, restless legs, and sleep disruptors on both versions, supporting convergent validity. Items loaded on 4 factors that accounted for 68% and 67% of the variance on the English and Spanish versions, respectively. Conclusions: The Spanish-language Sleep Habits Questionnaire demonstrates conceptual and content equivalency. It has appropriate measurement properties and should be useful for assessing sleep health in community-based clinics and intervention studies among Spanish-speaking Mexican Americans. Both language versions showed readability at the fifth grade level. Further testing is needed with larger samples. Citation: Baldwin CM

  3. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
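
    The influence-curve idea for the AUC can be made concrete with the classic placement-value decomposition (the same structural components used by DeLong's estimator): each observation contributes the fraction of opposite-class observations it is correctly ranked against, and the variance of those contributions yields a variance estimate for the empirical AUC. A sketch of that single-sample case is below, on invented data; the method of this paper extends the same idea to cross-validated AUC by combining influence curves across validation folds.

      import numpy as np

      def auc_with_ic_variance(y, scores):
          """Empirical AUC plus an influence-curve (DeLong-style) variance."""
          y, s = np.asarray(y), np.asarray(scores)
          pos, neg = s[y == 1], s[y == 0]
          m, n = len(pos), len(neg)
          # Placement values: each observation's share of correctly ranked
          # opposite-class comparisons (ties count one half).
          v10 = np.array([(np.sum(p > neg) + 0.5 * np.sum(p == neg)) / n
                          for p in pos])
          v01 = np.array([(np.sum(pos > q) + 0.5 * np.sum(pos == q)) / m
                          for q in neg])
          auc = v10.mean()
          var = v10.var(ddof=1) / m + v01.var(ddof=1) / n
          return auc, var

      y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
      s = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9])
      auc, var = auc_with_ic_variance(y, s)
      print(f"AUC = {auc:.3f}, SE = {var ** 0.5:.3f}")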

  4. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737

  5. Validation of Fall Risk Assessment Specific to the Inpatient Rehabilitation Facility Setting.

    PubMed

    Thomas, Dan; Pavic, Andrea; Bisaccia, Erin; Grotts, Jonathan

    2016-09-01

    To evaluate and compare the Morse Fall Scale (MFS) and the Casa Colina Fall Risk Assessment Scale (CCFRAS) for identification of patients at risk for falling in an acute inpatient rehabilitation facility. The primary objective of this study was to perform a retrospective validation study of the CCFRAS, specifically for use in the inpatient rehabilitation facility (IRF) setting. Retrospective validation study. The study was approved under expedited review by the local Institutional Review Board. Data were collected on all patients admitted to Cottage Rehabilitation Hospital (CRH), a 38-bed acute inpatient rehabilitation hospital, from March 2012 to August 2013. Patients were excluded from the study if they had a length of stay less than 3 days or age less than 18. The area under the receiver operating characteristic curve (AUC) and the diagnostic odds ratio were used to examine the differences between the MFS and CCFRAS. AUC between fall scales was compared using the DeLong test. There were 931 patients included in the study with 62 (6.7%) patient falls. The average age of the population was 68.8 with 503 males (51.2%). The AUC was 0.595 and 0.713 for the MFS and CCFRAS, respectively (p = 0.006). The diagnostic odds ratio was 2.0 for the MFS and 3.6 for the CCFRAS, using the recommended cutoffs of 45 for the MFS and 80 for the CCFRAS. The CCFRAS appears to be a better tool in detecting fallers vs. nonfallers specific to the IRF setting. The assessment and identification of patients at high risk for falling is important to implement specific precautions and care for these patients to reduce their risk of falling. The CCFRAS is more clinically relevant in identifying patients at high risk for falling in the IRF setting compared to other fall risk assessments. Implementation of this scale may lead to a reduction in fall rate and injuries from falls as it more appropriately identifies patients at high risk for falling. © 2015 Association of Rehabilitation Nurses.
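
    For readers unfamiliar with the diagnostic odds ratio reported here: at a given cutoff it is (TP/FN)/(FP/TN), i.e., the odds of a high-risk score among fallers divided by the odds among non-fallers. A toy computation with invented scores and the CCFRAS-style cutoff of 80:

      import numpy as np

      # Invented risk scores and fall outcomes; the cutoff of 80 mirrors the
      # recommended CCFRAS threshold.
      scores = np.array([85, 40, 90, 70, 95, 30, 88, 60])
      fell = np.array([1, 0, 1, 1, 1, 0, 0, 0])
      high = scores >= 80

      tp = np.sum(high & (fell == 1))
      fp = np.sum(high & (fell == 0))
      fn = np.sum(~high & (fell == 1))
      tn = np.sum(~high & (fell == 0))

      # Diagnostic odds ratio: odds of a high-risk score among fallers divided
      # by the odds among non-fallers.
      dor = (tp / fn) / (fp / tn)
      print(f"TP={tp} FP={fp} FN={fn} TN={tn}, DOR={dor:.1f}")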

  6. Empirical evaluation demonstrated importance of validating biomarkers for early detection of cancer in screening settings to limit the number of false-positive findings.

    PubMed

    Chen, Hongda; Knebel, Phillip; Brenner, Hermann

    2016-07-01

    The search for biomarkers for early detection of cancer is a very active area of research, but most studies are done in clinical rather than screening settings. We aimed to empirically evaluate the role of study setting for early detection marker identification and validation. A panel of 92 candidate cancer protein markers was measured in 35 clinically identified colorectal cancer patients and 35 colorectal cancer patients identified at screening colonoscopy. For each case group, we selected 38 controls without colorectal neoplasms at screening colonoscopy. Single-, two- and three-marker combinations discriminating cases and controls were identified in each setting and subsequently validated in the alternative setting. In all scenarios, a higher number of predictive biomarkers was initially detected in the clinical setting, but a substantially lower proportion of identified biomarkers could subsequently be confirmed in the screening setting. Confirmation rates were 50.0%, 84.5%, and 74.2% for one-, two-, and three-marker algorithms identified in the screening setting and were 42.9%, 18.6%, and 25.7% for algorithms identified in the clinical setting. Validation of early detection markers of cancer in a true screening setting is important to limit the number of false-positive findings. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Validity and validation of expert (Q)SAR systems.

    PubMed

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal) principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in predictivity of ≥64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.

  8. Set-up and validation of a Delft-FEWS based coastal hazard forecasting system

    NASA Astrophysics Data System (ADS)

    Valchev, Nikolay; Eftimova, Petya; Andreeva, Nataliya

    2017-04-01

    European coasts are increasingly threatened by hazards related to low-probability and high-impact hydro-meteorological events. Uncertainties in hazard prediction and capabilities to cope with their impact lie in both future storm pattern and increasing coastal development. Therefore, adaptation to future conditions requires a re-evaluation of coastal disaster risk reduction (DRR) strategies and introduction of a more efficient mix of prevention, mitigation and preparedness measures. The latter presumes that development of tools which can manage the complex process of merging data and models and generate products on the current and expected hydro- and morpho-dynamic states of the coasts, such as a forecasting system for flooding and erosion hazards at vulnerable coastal locations (hotspots), is of vital importance. Output of such a system can be of utmost value for coastal stakeholders and the entire coastal community. In response to these challenges, Delft-FEWS provides a state-of-the-art framework for implementation of such a system with vast capabilities to trigger the early warning process. In addition, this framework is highly customizable to the specific requirements of any individual coastal hotspot. Since its release, many Delft-FEWS based forecasting systems related to inland flooding have been developed; however, only a limited number of coastal applications have been implemented. In this paper, the set-up of a Delft-FEWS based forecasting system for Varna Bay (Bulgaria) and a coastal hotspot, which includes a sandy beach and port infrastructure, is presented. It is implemented in the frame of the RISC-KIT project (Resilience-Increasing Strategies for Coasts - toolKIT). The system output generated in hindcast mode is validated against available observations of surge levels, wave and morphodynamic parameters for a sequence of three short-duration and relatively weak storm events that occurred during February 4-12, 2015. Generally, the models' performance is considered as very good and

  9. New generation pharmacogenomic tools: a SNP linkage disequilibrium Map, validated SNP assay resource, and high-throughput instrumentation system for large-scale genetic studies.

    PubMed

    De La Vega, Francisco M; Dailey, David; Ziegle, Janet; Williams, Julie; Madden, Dawn; Gilbert, Dennis A

    2002-06-01

    Since public and private efforts announced the first draft of the human genome last year, researchers have reported great numbers of single nucleotide polymorphisms (SNPs). We believe that the availability of well-mapped, quality SNP markers constitutes the gateway to a revolution in genetics and personalized medicine that will lead to better diagnosis and treatment of common complex disorders. A new generation of tools and public SNP resources for pharmacogenomic and genetic studies--specifically for candidate-gene, candidate-region, and whole-genome association studies--will form part of the new scientific landscape. This will only be possible through the greater accessibility of SNP resources and superior high-throughput instrumentation-assay systems that enable affordable, highly productive large-scale genetic studies. We are contributing to this effort by developing a high-quality linkage disequilibrium SNP marker map and an accompanying set of ready-to-use, validated SNP assays across every gene in the human genome. This effort incorporates both the public sequence and SNP data sources, and Celera Genomics' human genome assembly and enormous resource of physically mapped SNPs (approximately 4,000,000 unique records). This article discusses our approach and methodology for designing the map, choosing quality SNPs, designing and validating these assays, and obtaining population frequency of the polymorphisms. We also discuss an advanced, high-performance SNP assay chemistry--a new generation of the TaqMan probe-based, 5' nuclease assay--and a high-throughput instrumentation-software system for large-scale genotyping. We provide the new SNP map and validation information, validated SNP assays and reagents, and instrumentation systems as a novel resource for genetic discoveries.

  10. Correcting for Optimistic Prediction in Small Data Sets

    PubMed Central

    Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.

    2014-01-01

    The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
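
    Leave-pair-out cross-validation, the new method evaluated here, scores every positive-negative pair by refitting the model with that pair held out and checking whether the held-out positive is ranked above the held-out negative; the pairwise win fraction is the optimism-adjusted C statistic. A small sketch on simulated data (logistic regression is an arbitrary choice of model, not the study's screening models):

      import numpy as np
      from itertools import product
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      X = rng.normal(size=(40, 3))
      y = (X[:, 0] + rng.normal(scale=1.5, size=40) > 0).astype(int)

      pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
      pairs = list(product(pos, neg))

      wins = 0.0
      for i, j in pairs:
          # Hold out one positive-negative pair, refit, then check whether the
          # held-out positive is ranked above the held-out negative.
          mask = np.ones(y.size, dtype=bool)
          mask[[i, j]] = False
          model = LogisticRegression().fit(X[mask], y[mask])
          p_i, p_j = model.predict_proba(X[[i, j]])[:, 1]
          wins += 1.0 if p_i > p_j else 0.5 if p_i == p_j else 0.0

      print("leave-pair-out C statistic:", wins / len(pairs))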

  11. Development and Validation of Decision Forest Model for Estrogen Receptor Binding Prediction of Chemicals Using Large Data Sets.

    PubMed

    Ng, Hui Wen; Doughty, Stephen W; Luo, Heng; Ye, Hao; Ge, Weigong; Tong, Weida; Hong, Huixiao

    2015-12-21

    Some chemicals in the environment possess the potential to interact with the endocrine system in the human body. Multiple receptors are involved in the endocrine system; estrogen receptor α (ERα) plays very important roles in endocrine activity and is the most studied receptor. Understanding and predicting estrogenic activity of chemicals facilitates the evaluation of their endocrine activity. Hence, we have developed a decision forest classification model to predict chemical binding to ERα using a large training data set of 3308 chemicals obtained from the U.S. Food and Drug Administration's Estrogenic Activity Database. We tested the model using cross validations and external data sets of 1641 chemicals obtained from the U.S. Environmental Protection Agency's ToxCast project. The model showed good performance in both internal (92% accuracy) and external validations (∼ 70-89% relative balanced accuracies), where the latter involved the validations of the model across different ER pathway-related assays in ToxCast. The important features that contribute to the prediction ability of the model were identified through informative descriptor analysis and were related to current knowledge of ER binding. Prediction confidence analysis revealed that the model had both high prediction confidence and accuracy for most predicted chemicals. The results demonstrated that the model constructed based on the large training data set is more accurate and robust for predicting ER binding of chemicals than the published models that have been developed using much smaller data sets. The model could be useful for the evaluation of ERα-mediated endocrine activity potential of environmental chemicals.

  12. The FORBIO Climate data set for climate analyses

    NASA Astrophysics Data System (ADS)

    Delvaux, C.; Journée, M.; Bertrand, C.

    2015-06-01

    In the framework of the interdisciplinary FORBIO Climate research project, the Royal Meteorological Institute of Belgium is in charge of providing high resolution gridded past climate data (i.e. temperature and precipitation). This climate data set will be linked to the measurements on seedlings, saplings and mature trees to assess the effects of climate variation on tree performance. This paper explains how the gridded daily temperature (minimum and maximum) data set was generated from a consistent station network between 1980 and 2013. After station selection, data quality control procedures were developed and applied to the station records to ensure that only valid measurements will be involved in the gridding process. Thereafter, the set of unevenly distributed validated temperature data was interpolated on a 4 km × 4 km regular grid over Belgium. The performance of different interpolation methods has been assessed. The method of kriging with external drift using correlation between temperature and altitude gave the most relevant results.
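
    Kriging with external drift combines a deterministic trend in a covariate (here altitude) with spatial interpolation of the stochastic residual. The sketch below implements the closely related regression-kriging variant: a linear temperature-altitude regression followed by ordinary kriging of the residuals under an assumed exponential variogram. Station locations, variogram parameters, and the lapse-rate coefficient are all synthetic, not the Belgian network's values.

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic "stations": coordinates in km, altitude in m, daily Tmax in
      # deg C built from a 6.5 deg C/km lapse rate plus noise (all invented).
      xy = rng.uniform(0, 100, size=(25, 2))
      alt = rng.uniform(0, 600, size=25)
      t = 22.0 - 0.0065 * alt + rng.normal(scale=0.5, size=25)

      # Step 1: the external-drift part, a linear regression on altitude.
      A = np.column_stack([np.ones_like(alt), alt])
      beta, *_ = np.linalg.lstsq(A, t, rcond=None)
      resid = t - A @ beta

      # Step 2: ordinary kriging of the residuals (exponential variogram).
      def gamma(h, sill=0.3, vrange=30.0):
          return sill * (1.0 - np.exp(-h / vrange))

      n = len(t)
      d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
      K = np.zeros((n + 1, n + 1))
      K[:n, :n] = gamma(d)
      K[n, :n] = K[:n, n] = 1.0     # unbiasedness constraint

      def predict(x0, alt0):
          rhs = np.append(gamma(np.linalg.norm(xy - x0, axis=1)), 1.0)
          w = np.linalg.solve(K, rhs)[:n]
          return beta[0] + beta[1] * alt0 + w @ resid

      print(predict(np.array([50.0, 50.0]), 300.0))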

  13. Reliability and Validity of Instruments for Assessing Perinatal Depression in African Settings: Systematic Review and Meta-Analysis

    PubMed Central

    Tsai, Alexander C.; Scott, Jennifer A.; Hung, Kristin J.; Zhu, Jennifer Q.; Matthews, Lynn T.; Psaros, Christina; Tomlinson, Mark

    2013-01-01

    Background A major barrier to improving perinatal mental health in Africa is the lack of locally validated tools for identifying probable cases of perinatal depression or for measuring changes in depression symptom severity. We systematically reviewed the evidence on the reliability and validity of instruments to assess perinatal depression in African settings. Methods and Findings Of 1,027 records identified through searching 7 electronic databases, we reviewed 126 full-text reports. We included 25 unique studies, which were disseminated in 26 journal articles and 1 doctoral dissertation. These enrolled 12,544 women living in nine different North and sub-Saharan African countries. Only three studies (12%) used instruments developed specifically for use in a given cultural setting. Most studies provided evidence of criterion-related validity (20 [80%]) or reliability (15 [60%]), while fewer studies provided evidence of construct validity, content validity, or internal structure. The Edinburgh postnatal depression scale (EPDS), assessed in 16 studies (64%), was the most frequently used instrument in our sample. Ten studies estimated the internal consistency of the EPDS (median estimated coefficient alpha, 0.84; interquartile range, 0.71-0.87). For the 14 studies that estimated sensitivity and specificity for the EPDS, we constructed 2 x 2 tables for each cut-off score. Using a bivariate random-effects model, we estimated a pooled sensitivity of 0.94 (95% confidence interval [CI], 0.68-0.99) and a pooled specificity of 0.77 (95% CI, 0.59-0.88) at a cut-off score of ≥9, with higher cut-off scores yielding greater specificity at the cost of lower sensitivity. Conclusions The EPDS can reliably and validly measure perinatal depression symptom severity or screen for probable postnatal depression in African countries, but more validation studies on other instruments are needed. In addition, more qualitative research is needed to adequately characterize local

  14. Validity Issues in Standard-Setting Studies

    ERIC Educational Resources Information Center

    Pant, Hans A.; Rupp, Andre A.; Tiffin-Richards, Simon P.; Koller, Olaf

    2009-01-01

    Standard-setting procedures are a key component within many large-scale educational assessment systems. They are consensual approaches in which committees of experts set cut-scores on continuous proficiency scales, which facilitate communication of proficiency distributions of students to a wide variety of stakeholders. This communicative function…

  15. SPAGETTA, a Gridded Weather Generator: Calibration, Validation and its Use for Future Climate

    NASA Astrophysics Data System (ADS)

    Dubrovsky, Martin; Rotach, Mathias W.; Huth, Radan

    2017-04-01

    Spagetta is a new (started in 2016) stochastic multi-site multi-variate weather generator (WG). It can produce realistic synthetic daily (or monthly, or annual) weather series representing both present and future climate conditions at multiple sites (grids or stations irregularly distributed in space). The generator, whose model is based on Wilks' (1999) multi-site extension of the parametric (Richardson-type) single-site M&Rfi generator, may be run in two modes. In the first mode, it is run as a classical generator, which is calibrated in the first step using weather data from multiple sites; only then may it produce arbitrarily long synthetic time series mimicking the spatial and temporal structure of the calibration weather data. To generate weather series representing the future climate, the WG parameters are modified according to a climate change scenario, typically derived from GCM or RCM simulations. In the second mode, the user provides only basic information (not necessarily realistic) on the temporal and spatial auto-correlation structure of the surface weather variables and their mean annual cycle; the generator itself derives the parameters of the underlying autoregressive model, which produces the multi-site weather series. In the latter mode of operation, the user is allowed to prescribe a spatially varying trend, which is superimposed on the values produced by the generator; this feature has been implemented for use in developing the methodology for assessing significance of trends in multi-site weather series (for more details see another EGU-2017 contribution: Huth and Dubrovsky, 2017, Evaluating collective significance of climatic trends: A comparison of methods on synthetic data; EGU2017-4993). This contribution will focus on the first (classical) mode. The poster will present (a) the model of the generator, (b) results of the validation tests made in terms of spatial hot/cold/dry/wet spells, and (c) results of the pilot
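
    The core of a Wilks-type multi-site generator is an autoregressive process whose innovations carry the spatial correlation: each site follows a lag-1 model, and the daily shocks are drawn from a multivariate normal with a cross-site correlation matrix (imposed below via a Cholesky factor). A stripped-down sketch for standardized anomalies, with invented persistence and correlation values rather than Spagetta's calibrated parameters:

      import numpy as np

      rng = np.random.default_rng(3)

      # Illustrative configuration: 5 sites, common lag-1 autocorrelation, and
      # noise correlation decaying with inter-site separation. None of these
      # values are calibrated; a real run would estimate them from data.
      n_sites, n_days = 5, 365
      phi = np.full(n_sites, 0.7)
      sep = np.abs(np.subtract.outer(np.arange(n_sites), np.arange(n_sites)))
      chol = np.linalg.cholesky(np.exp(-sep / 3.0))

      z = np.zeros((n_days, n_sites))
      for day in range(1, n_days):
          eps = chol @ rng.standard_normal(n_sites)   # spatially correlated shocks
          z[day] = phi * z[day - 1] + np.sqrt(1 - phi**2) * eps

      # z holds standardized anomalies; a full generator maps them through each
      # site's seasonal means/variances (plus a precipitation occurrence model).
      print(np.corrcoef(z[:, 0], z[:, 1])[0, 1])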

  16. Adjusting Limit Setting in Play Therapy with First-Generation Mexican-American Children

    ERIC Educational Resources Information Center

    Perez, Roxanna; Ramirez, Sylvia Z.; Kranz, Peter L.

    2007-01-01

    This paper focuses on limit setting in play therapy with first-generation Mexican-American children in two important therapeutic environments that include the traditional indoor playroom and a proposed outdoor play area. The paper is based on a review of the literature and the authors' clinical experiences with this population. They concluded…

  17. Rapid high-silica magma generation in basalt-dominated rift settings

    NASA Astrophysics Data System (ADS)

    Berg, Sylvia E.; Troll, Valentin R.; Burchardt, Steffi; Deegan, Frances M.; Riishuus, Morten S.; Whitehouse, Martin J.; Harris, Chris; Freda, Carmela; Ellis, Ben S.; Krumbholz, Michael; Gústafsson, Ludvik E.

    2015-04-01

    The processes that drive large-scale silicic magmatism in basalt-dominated provinces have been widely debated for decades, with Iceland being at the centre of this discussion [1-5]. Iceland hosts large accumulations of silicic rocks in a largely basaltic oceanic setting that is considered by some workers to resemble the situation documented for the Hadean [6-7]. We have investigated the time scales and processes of silicic volcanism in the largest complete pulse of Neogene rift-related silicic magmatism preserved in Iceland (>450 km3), which is a potential analogue of initial continent nucleation in early Earth. Borgarfjörður Eystri in NE-Iceland hosts silicic rocks in excess of 20 vol.%, which exceeds the ≤12 vol.% usual for Iceland [3,8]. New SIMS zircon ages document that the dominantly explosive silicic pulse was generated within a ≤2 Myr window (13.5 ± 0.2 to 12.2 ± 0.3 Ma), and sub-mantle zircon δ18O values (1.2 to 4.5 ± 0.2‰, n=337) indicate ≤33% assimilation of low-δ18O hydrothermally-altered crust (δ18O=0‰), with intense crustal melting at 12.5 Ma, followed by rapid termination of silicic magma production once crustal fertility declined [9]. This silicic outburst was likely caused by extensive rift flank volcanism due to a rift relocation and a flare of the Iceland plume [4,10] that triggered large-scale crustal melting and generated mixed-origin silicic melts. High-silica melt production from a basaltic parent was replicated in a set of new partial melting experiments of regional hydrated basalts, conducted at 800-900°C and 150 MPa, that produced silicic melt pockets up to 77 wt.% SiO2. Moreover, Ti-in-zircon thermometry from Borgarfjörður Eystri gives a zircon crystallisation temperature of ~713°C (Ti range from 2.4 to 22.1 ppm, average=7.7 ppm, n=142), which is lower than recorded elsewhere in Iceland [11], but closely overlaps with the zircon crystallisation temperatures documented for Hadean zircon populations [11-13], hinting at

  18. Next-generation text-mining mediated generation of chemical response-specific gene sets for interpretation of gene expression data.

    PubMed

    Hettne, Kristina M; Boorsma, André; van Dartel, Dorien A M; Goeman, Jelle J; de Jong, Esther; Piersma, Aldert H; Stierum, Rob H; Kleinjans, Jos C; Kors, Jan A

    2013-01-29

    Availability of chemical response-specific lists of genes (gene sets) for pharmacological and/or toxic effect prediction for compounds is limited. We hypothesize that more gene sets can be created by next-generation text mining (next-gen TM), and that these can be used with gene set analysis (GSA) methods for chemical treatment identification, for pharmacological mechanism elucidation, and for comparing compound toxicity profiles. We created 30,211 chemical response-specific gene sets for human and mouse by next-gen TM, and derived 1,189 (human) and 588 (mouse) gene sets from the Comparative Toxicogenomics Database (CTD). We tested for significant differential expression (SDE) (false discovery rate-corrected p-values < 0.05) of the next-gen TM-derived gene sets and the CTD-derived gene sets in gene expression (GE) data sets of five chemicals (from experimental models). We tested for SDE of gene sets for six fibrates in a peroxisome proliferator-activated receptor alpha (PPARA) knock-out GE dataset and compared to results from the Connectivity Map. We tested for SDE of 319 next-gen TM-derived gene sets for environmental toxicants in three GE data sets of triazoles, and tested for SDE of 442 gene sets associated with embryonic structures. We compared the gene sets to triazole effects seen in the Whole Embryo Culture (WEC), and used principal component analysis (PCA) to discriminate triazoles from other chemicals. Next-gen TM-derived gene sets matching the chemical treatment were significantly altered in three GE data sets, and the corresponding CTD-derived gene sets were significantly altered in five GE data sets. Six next-gen TM-derived and four CTD-derived fibrate gene sets were significantly altered in the PPARA knock-out GE dataset. None of the fibrate signatures in cMap scored as significant against the PPARA GE signature. Thirty-three environmental toxicant gene sets were significantly altered in the triazole GE data sets. Twenty-one of these toxicants had a similar toxicity
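
    As a concrete illustration of the gene set analysis step, the sketch below runs one simple competitive GSA flavor: compute a per-gene t statistic between treated and control samples, then ask via a rank test whether the genes in a (here hypothetical) chemical-response set have larger absolute statistics than the remaining genes. This shows the general GSA idea only, not the specific global-test machinery used in the paper.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)

      # Illustrative data: 1000 genes, 6 control and 6 treated samples.
      expr = rng.normal(size=(1000, 12))
      labels = np.array([0] * 6 + [1] * 6)
      gene_set = np.arange(50)      # a hypothetical chemical-response gene set
      expr[gene_set, 6:] += 0.8     # inject a treatment effect for the demo

      # Per-gene t statistics, then a competitive test: are set genes more
      # differentially expressed than genes outside the set?
      t, _ = stats.ttest_ind(expr[:, labels == 1], expr[:, labels == 0], axis=1)
      in_set = np.zeros(t.size, dtype=bool)
      in_set[gene_set] = True
      _, p = stats.mannwhitneyu(np.abs(t[in_set]), np.abs(t[~in_set]),
                                alternative="greater")
      print(f"competitive GSA p-value: {p:.2e}")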

  19. Validation of Social Cognition Rating Tools in Indian Setting (SOCRATIS): A new test-battery to assess social cognition.

    PubMed

    Mehta, Urvakhsh M; Thirthalli, Jagadisha; Naveen Kumar, C; Mahadevaiah, Mahesh; Rao, Kiran; Subbakrishna, Doddaballapura K; Gangadhar, Bangalore N; Keshavan, Matcheri S

    2011-09-01

    Social cognition is a cognitive domain that is under substantial cultural influence. There are no culturally appropriate standardized tools in India to comprehensively test social cognition. This study describes validation of tools for three social cognition constructs: theory of mind, social perception and attributional bias. Theory of mind tests included adaptations of (a) two first-order tasks [the Sally-Anne and Smarties tasks], (b) two second-order tasks [the Ice cream van and Missing cookies stories], (c) two metaphor-irony tasks and (d) the faux pas recognition test. The Internal, Personal, and Situational Attributions Questionnaire (IPSAQ) and the Social Cue Recognition Test were adapted to assess attributional bias and social perception, respectively. These tests were first modified to suit the Indian cultural context without changing the constructs to be tested. A panel of experts then rated the tests on Likert scales as to (1) whether the modified tasks tested the same construct as in the original and (2) whether they were culturally appropriate. The modified tests were then administered to groups of actively symptomatic and remitted schizophrenia patients as well as healthy comparison subjects. All tests of the Social Cognition Rating Tools in Indian Setting had good content validity and known-groups validity. In addition, the social cue recognition test in the Indian setting had good internal consistency and concurrent validity. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Predicting death from kala-azar: construction, development, and validation of a score set and accompanying software.

    PubMed

    Costa, Dorcas Lamounier; Rocha, Regina Lunardi; Chaves, Eldo de Brito Ferreira; Batista, Vivianny Gonçalves de Vasconcelos; Costa, Henrique Lamounier; Costa, Carlos Henrique Nery

    2016-01-01

    Early identification of patients at higher risk of progressing to severe disease and death is crucial for implementing therapeutic and preventive measures; this could reduce the morbidity and mortality from kala-azar. We describe a score set composed of four scales in addition to software for quick assessment of the probability of death from kala-azar at the point of care. Data from 883 patients diagnosed between September 2005 and August 2008 were used to derive the score set, and data from 1,031 patients diagnosed between September 2008 and November 2013 were used to validate the models. Stepwise logistic regression analyses were used to derive the optimal multivariate prediction models. Model performance was assessed by its discriminatory accuracy. A computational specialist system (Kala-Cal®) was developed to speed up the calculation of the probability of death based on clinical scores. The clinical prediction score showed high discrimination (area under the curve [AUC] 0.90) for distinguishing death from survival for children ≤2 years old. Performance improved after adding laboratory variables (AUC 0.93). The clinical score showed equivalent discrimination (AUC 0.89) for older children and adults, which also improved after including laboratory data (AUC 0.92). The score set also showed a high, although lower, discrimination when applied to the validation cohort. This score set and Kala-Cal® software may help identify individuals with the greatest probability of death. The associated software may speed up the calculation of the probability of death based on clinical scores and assist physicians in decision-making.
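
    The "probability of death from clinical scores" computation that such software automates is, at bottom, a logistic model evaluated at the bedside. The sketch below shows the shape of that calculation with entirely invented coefficients and predictors; it does not reproduce the published score set's actual variables or weights.

      import math

      # Invented coefficients and predictors purely to show the shape of the
      # computation; these are not the published score-set weights.
      coef = {"intercept": -4.0, "jaundice": 1.2, "bleeding": 1.5, "dyspnea": 0.9}

      def death_probability(jaundice, bleeding, dyspnea):
          logit = (coef["intercept"] + coef["jaundice"] * jaundice
                   + coef["bleeding"] * bleeding + coef["dyspnea"] * dyspnea)
          return 1.0 / (1.0 + math.exp(-logit))

      # A patient with bleeding and dyspnea but no jaundice:
      print(f"estimated probability of death: {death_probability(0, 1, 1):.1%}")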

  1. Validity in work-based assessment: expanding our horizons.

    PubMed

    Govaerts, Marjan; van der Vleuten, Cees P M

    2013-12-01

    Although work-based assessments (WBA) may come closest to assessing habitual performance, their use for summative purposes is not undisputed. Most criticism of WBA stems from approaches to validity consistent with the quantitative psychometric framework. However, there is increasing research evidence that indicates that the assumptions underlying the predictive, deterministic framework of psychometrics may no longer hold. In this discussion paper we argue that meaningfulness and appropriateness of current validity evidence can be called into question and that we need alternative strategies to assessment and validity inquiry that build on current theories of learning and performance in complex and dynamic workplace settings. Drawing from research in various professional fields we outline key issues within the mechanisms of learning, competence and performance in the context of complex social environments and illustrate their relevance to WBA. In reviewing recent socio-cultural learning theory and research on performance and performance interpretations in work settings, we demonstrate that learning, competence (as inferred from performance) as well as performance interpretations are to be seen as inherently contextualised, and can only be understood 'in situ'. Assessment in the context of work settings may, therefore, be more usefully viewed as a socially situated interpretive act. We propose constructivist-interpretivist approaches towards WBA in order to capture and understand contextualised learning and performance in work settings. Theoretical assumptions underlying interpretivist assessment approaches call for a validity theory that provides the theoretical framework and conceptual tools to guide the validation process in the qualitative assessment inquiry. Basic principles of rigour specific to qualitative research have been established, and they can and should be used to determine validity in interpretivist assessment approaches. If used properly, these

  2. Evaluating the validity of multiple imputation for missing physiological data in the national trauma data bank.

    PubMed

    Moore, Lynne; Hanley, James A; Lavoie, André; Turgeon, Alexis

    2009-05-01

    The National Trauma Data Bank (NTDB) is plagued by the problem of missing physiological data. The Glasgow Coma Scale score, Respiratory Rate and Systolic Blood Pressure are an essential part of risk adjustment strategies for trauma system evaluation and clinical research. Missing data on these variables may compromise the feasibility and the validity of trauma group comparisons. To evaluate the validity of Multiple Imputation (MI) for completing missing physiological data in the National Trauma Data Bank (NTDB), by assessing the impact of MI on 1) frequency distributions, 2) associations with mortality, and 3) risk adjustment. Analyses were based on 170,956 NTDB observations with complete physiological data (observed data set). Missing physiological data were artificially imposed on this data set and then imputed using MI (MI data set). To assess the impact of MI on risk adjustment, 100 pairs of hospitals were randomly selected with replacement and compared using adjusted Odds Ratios (OR) of mortality. OR generated by the observed data set were then compared to those generated by the MI data set. Frequency distributions and associations with mortality were preserved following MI. The median absolute difference between adjusted OR of mortality generated by the observed data set and by the MI data set was 3.6% (inter-quartile range: 2.4%-6.1%). This study suggests that, provided it is implemented with care, MI of missing physiological data in the NTDB leads to valid frequency distributions, preserves associations with mortality, and does not compromise risk adjustment in inter-hospital comparisons of mortality.
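
    The study design, imposing artificial missingness on complete records, imputing, and checking that distributions and associations survive, is straightforward to mimic. A minimal sketch using scikit-learn's IterativeImputer on synthetic stand-ins for the three physiological variables; note this is a single stochastic imputation pass, whereas proper multiple imputation would repeat the draw (e.g., with sample_posterior=True) and pool the results.

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(5)

      # Synthetic stand-ins for GCS, respiratory rate, and systolic blood
      # pressure; marginal distributions are invented for illustration.
      n = 5000
      data = np.column_stack([
          rng.integers(3, 16, n).astype(float),
          rng.normal(18, 4, n),
          rng.normal(120, 20, n),
      ])

      # Artificially impose 20% missingness on the complete records,
      # mirroring the study's design of masking observed values.
      masked = data.copy()
      masked[rng.random(data.shape) < 0.20] = np.nan

      imputed = IterativeImputer(random_state=0).fit_transform(masked)

      # Validity check: imputation should preserve frequency distributions.
      print(data.mean(axis=0).round(1), imputed.mean(axis=0).round(1))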

  3. Development of a tool to measure person-centered maternity care in developing settings: validation in a rural and urban Kenyan population.

    PubMed

    Afulani, Patience A; Diamond-Smith, Nadia; Golub, Ginger; Sudhinaraset, May

    2017-09-22

    Person-centered reproductive health care is recognized as critical to improving reproductive health outcomes. Yet, little research exists on how to operationalize it. We extend the literature in this area by developing and validating a tool to measure person-centered maternity care. We describe the process of developing the tool and present the results of psychometric analyses to assess its validity and reliability in a rural and urban setting in Kenya. We followed standard procedures for scale development. First, we reviewed the literature to define our construct and identify domains, and developed items to measure each domain. Next, we conducted expert reviews to assess content validity; and cognitive interviews with potential respondents to assess clarity, appropriateness, and relevance of the questions. The questions were then refined and administered in surveys; and survey results used to assess construct and criterion validity and reliability. The exploratory factor analysis yielded one dominant factor in both the rural and urban settings. Three factors with eigenvalues greater than one were identified for the rural sample and four factors identified for the urban sample. Thirty of the 38 items administered in the survey were retained based on the factor loadings and correlation between the items. Twenty-five items load very well onto a single factor in both the rural and urban sample, with five items loading well in either the rural or urban sample, but not in both samples. These 30 items also load on three sub-scales that we created to measure dignified and respectful care, communication and autonomy, and supportive care. The Cronbach alpha for the main scale is greater than 0.8 in both samples, and those for the sub-scales are between 0.6 and 0.8. The main scale and sub-scales are correlated with global measures of satisfaction with maternity services, suggesting criterion validity. We present a 30-item scale with three sub-scales to measure person
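
    Since the reliability evidence here rests on internal consistency, a compact reference implementation of Cronbach's alpha may be useful; the Likert responses below are invented, not the Kenyan survey data.

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      # Invented 5-point Likert responses (rows: respondents, columns: items).
      responses = np.array([[4, 5, 4, 4],
                            [2, 2, 3, 2],
                            [5, 4, 5, 5],
                            [3, 3, 2, 3],
                            [4, 4, 4, 5]])
      print(f"alpha = {cronbach_alpha(responses):.2f}")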

  4. BESST (Bochum Emotional Stimulus Set)--a pilot validation study of a stimulus set containing emotional bodies and faces from frontal and averted views.

    PubMed

    Thoma, Patrizia; Soria Bauser, Denise; Suchan, Boris

    2013-08-30

    This article introduces the freely available Bochum Emotional Stimulus Set (BESST), which contains pictures of bodies and faces depicting either a neutral expression or one of the six basic emotions (happiness, sadness, fear, anger, disgust, and surprise), presented from two different perspectives (0° frontal view vs. camera averted by 45° to the left). The set comprises 565 frontal view and 564 averted view pictures of real-life bodies with masked facial expressions and 560 frontal and 560 averted view faces which were synthetically created using the FaceGen 3.5 Modeller. All stimuli were validated in terms of categorization accuracy and the perceived naturalness of the expression. Additionally, each facial stimulus was morphed into three age versions (20/40/60 years). The results show high recognition of the intended facial expressions, even under speeded forced-choice conditions, as corresponds to common experimental settings. The average naturalness ratings for the stimuli range between medium and high. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  5. Electronegativity Equalization Method: Parameterization and Validation for Large Sets of Organic, Organohalogen and Organometal Molecules

    PubMed Central

    Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav

    2007-01-01

    The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for quality of the parameters. We have improved EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically: C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that were not parameterized yet, specifically for Br, I, Fe and Zn. We have also performed crossover validation of all obtained parameters using all training sets that included relevant elements and confirmed that calculated parameters provide accurate charges.
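
    EEM reduces charge calculation to one linear solve: the effective electronegativities chi_i = A_i + B_i*q_i + kappa * sum_{j != i} q_j / R_ij are equalized to a common value subject to the total-charge constraint. The sketch below sets up and solves that system for a toy triatomic; the A, B, and kappa values are invented, whereas real EEM parameters come from the ab initio calibration the paper describes.

      import numpy as np

      # Invented per-atom parameters and geometry; real EEM parameters (A, B,
      # kappa) come from calibration against ab initio charges.
      A = np.array([5.0, 7.5, 7.5])        # electronegativity-like parameters
      B = np.array([9.0, 13.0, 13.0])      # hardness-like parameters
      kappa = 0.5
      coords = np.array([[0.00, 0.00, 0.0],
                         [0.96, 0.00, 0.0],
                         [-0.24, 0.93, 0.0]])  # a bent triatomic, in angstroms
      Q = 0.0                               # total molecular charge

      n = len(A)
      R = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
      off_diag = kappa / np.where(R == 0.0, 1.0, R)   # guard the diagonal
      M = np.zeros((n + 1, n + 1))
      M[:n, :n] = np.where(np.eye(n, dtype=bool), B, off_diag)
      M[:n, n] = -1.0     # unknown equalized electronegativity
      M[n, :n] = 1.0      # charges must sum to Q

      solution = np.linalg.solve(M, np.append(-A, Q))
      charges, chi_bar = solution[:n], solution[n]
      print(charges.round(3), round(chi_bar, 3))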

  6. Validation of the Comprehensive ICF Core Set for obstructive pulmonary diseases from the patient's perspective.

    PubMed

    Marques, Alda; Jácome, Cristina; Gonçalves, Ana; Silva, Sara; Lucas, Carla; Cruz, Joana; Gabriel, Raquel

    2014-06-01

    This study aimed to validate the Comprehensive International Classification of Functioning, Disability and Health (ICF) Core Set for obstructive pulmonary diseases (OPDs) from the perspective of patients with chronic obstructive pulmonary disease. A cross-sectional qualitative study was carried out with outpatients with chronic obstructive pulmonary disease using focus groups with an ICF-based approach. Qualitative data were analysed using the meaning condensation procedure by two researchers with expertise in the ICF. Thirty-two participants (37.5% women; 63.8 ± 11.3 years old) were included in six focus groups. A total of 61 (86%) ICF categories of the Comprehensive ICF Core Set for OPD were confirmed. Thirty-nine additional second-level categories not included in the Core Set were identified: 15 from the body functions component, four from the body structures, nine from the activities and participation and 11 from the environmental factors. The majority of the categories included in the Comprehensive ICF Core Set for OPD were confirmed from the patients' perspective. However, additional categories, not included in the Core Set, were also reported. The categories included in the Core Set that were not confirmed, as well as the additional categories, need to be investigated further to develop an instrument tailored to patients' needs. This will promote patient-centred assessments and rehabilitation interventions.

  7. Generation and validation of PAX7 reporter lines from human iPS cells using CRISPR/Cas9 technology.

    PubMed

    Wu, Jianbo; Hunt, Samuel D; Xue, Haipeng; Liu, Ying; Darabi, Radbod

    2016-03-01

    Directed differentiation of iPS cells toward various tissue progenitors has been the focus of recent research. Therefore, generation of tissue-specific reporter iPS cell lines provides better understanding of developmental stages in iPS cells. This technical report describes an efficient strategy for generation and validation of knock-in reporter lines in human iPS cells using the Cas9-nickase system. Here, we have generated a knock-in human iPS cell line for the early myogenic lineage specification gene of PAX7. By introduction of site-specific double-stranded breaks (DSB) in the genomic locus of PAX7 using CRISPR/Cas9 nickase pairs, a 2A-GFP reporter with selection markers has been incorporated before the stop codon of the PAX7 gene at the last exon. After positive and negative selection, single cell-derived human iPS clones have been isolated and sequenced for in-frame positioning of the reporter construct. Finally, by using a nuclease-dead Cas9 activator (dCas9-VP160) system, the promoter region of PAX7 has been targeted for transient gene induction to validate the GFP reporter activity. This was confirmed by flow cytometry analysis and immunostaining for PAX7 and GFP. This technical report provides a practical guideline for generation and validation of knock-in reporters using CRISPR/Cas9 system. Published by Elsevier B.V.

  8. Development and validation of an early childhood development scale for use in low-resourced settings.

    PubMed

    McCoy, Dana Charles; Sudfeld, Christopher R; Bellinger, David C; Muhihi, Alfa; Ashery, Geofrey; Weary, Taylor E; Fawzi, Wafaie; Fink, Günther

    2017-02-09

    Low-cost, cross-culturally comparable measures of the motor, cognitive, and socioemotional skills of children under 3 years remain scarce. In the present paper, we aim to develop a new caregiver-reported early childhood development (ECD) scale designed to be implemented as part of household surveys in low-resourced settings. We evaluate the acceptability, test-retest reliability, internal consistency, and discriminant validity of the new ECD items, subscales, and full scale in a sample of 2481 18- to 36-month-old children from peri-urban and rural Tanzania. We also compare total and subscale scores with performance on the Bayley Scales of Infant Development (BSID-III) in a subsample of 1036 children. Qualitative interviews with 10 mothers and 10 field workers were used to contextualize the quantitative data. Adequate levels of acceptability and internal consistency were found for the new scale and its motor, cognitive, and socioemotional subscales. Correlations between the new scale and the BSID-III were high (r > .50) for the motor and cognitive subscales, but low (r < .20) for the socioemotional subscale. The new scale discriminated between children's skills based on age, stunting status, caregiver-reported disability, and adult stimulation. Test-retest reliability scores were variable among the subset of items tested. Results of this study provide empirical support from a low-income country setting for the acceptability, reliability, and validity of a new caregiver-reported ECD scale. Additional research is needed to test these and other caregiver-reported items with children in the full 0- to 3-year range across multiple cultural and linguistic settings.
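    Internal consistency in scale-validation studies of this kind is commonly summarized with Cronbach's alpha. The following is a generic Python sketch of that statistic with an invented score matrix; it is not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented example: 5 children rated on 4 caregiver-reported items
scores = [[1, 2, 2, 1], [3, 3, 2, 3], [2, 2, 1, 2], [3, 3, 3, 3], [1, 1, 2, 1]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```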

  9. Solar power generation system for reducing leakage current

    NASA Astrophysics Data System (ADS)

    Wu, Jinn-Chang; Jou, Hurng-Liahng; Hung, Chih-Yi

    2018-04-01

    This paper proposes a transformer-less multi-level solar power generation system composed of a solar cell array, a boost power converter, an isolation switch set and a full-bridge inverter. A unipolar pulse-width modulation (PWM) strategy is used in the full-bridge inverter to attenuate the output ripple current. Circuit isolation is accomplished by integrating the isolation switch set between the solar cell array and the utility, to suppress the leakage current. The isolation switch set also sets the DC bus voltage of the full-bridge inverter by connecting it to either the solar cell array or the output of the boost power converter. Accordingly, the proposed system generates a five-level output voltage, and part of the solar cell array's power is converted to AC power using only the full-bridge inverter, so the power efficiency is increased. A prototype is developed to validate the performance of the proposed transformer-less multi-level solar power generation system.
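    For readers unfamiliar with unipolar PWM, the sketch below simulates the basic scheme for a full-bridge inverter: one leg compares the sinusoidal reference against a triangular carrier, the other leg compares the inverted reference, and the bridge voltage takes three levels with ripple at twice the carrier frequency. The frequencies, DC bus voltage and modulation index are arbitrary illustrative values, and the isolation switch set that produces the five-level behaviour described above is not modelled.

```python
import numpy as np

f_grid, f_carrier, v_dc = 50.0, 5e3, 400.0      # illustrative values only
t = np.arange(0.0, 0.04, 1e-6)                  # two grid cycles at 1 us steps
ref = 0.9 * np.sin(2 * np.pi * f_grid * t)      # modulation index 0.9 (assumed)

# Triangular carrier in [-1, 1] with period 1/f_carrier
x = t * f_carrier
carrier = 2.0 * np.abs(2.0 * (x - np.floor(x + 0.5))) - 1.0

leg_a = (ref >= carrier).astype(float)          # leg A compares +ref to carrier
leg_b = (-ref >= carrier).astype(float)         # leg B compares -ref to carrier
v_bridge = v_dc * (leg_a - leg_b)               # takes -Vdc, 0, +Vdc
```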

  10. Generating Vegetation Leaf Area Index Earth System Data Record from Multiple Sensors. Part 2: Implementation, Analysis and Validation

    NASA Technical Reports Server (NTRS)

    Ganguly, Sangram; Samanta, Arindam; Schull, Mitchell A.; Shabanov, Nikolay V.; Milesi, Cristina; Nemani, Ramakrishna R.; Knyazikhin, Yuri; Myneni, Ranga B.

    2008-01-01

    The evaluation of a new global monthly leaf area index (LAI) data set for the period July 1981 to December 2006, derived from AVHRR Normalized Difference Vegetation Index (NDVI) data, is described. The physically based algorithm is detailed in the first paper of this two-part series. Here, the implementation, production and evaluation of the data set are described. The data set is evaluated both by direct comparisons to ground data and indirectly through inter-comparisons with similar data sets. This indirect validation showed satisfactory agreement with existing LAI products, importantly MODIS, at a range of spatial scales, and significant correlations with key climate variables in areas where temperature and precipitation limit plant growth. The data set successfully reproduced well-documented spatio-temporal trends and inter-annual variations in vegetation activity in the northern latitudes and semi-arid tropics. Comparison with plot-scale field measurements over homogeneous vegetation patches indicated a 7% underestimation when all major vegetation types are taken into account. The error in mean values obtained from distributions of AVHRR LAI and high-resolution field LAI maps for different biomes is within 0.5 LAI for six out of the ten selected sites. These validation exercises, though limited by the amount of field data and thus less than comprehensive, indicated satisfactory agreement between the LAI product and field measurements. Overall, the inter-comparison with short-term LAI data sets, the evaluation of long-term trends against known variations in climate variables, and the validation with field measurements together build confidence in the utility of this new 26-year LAI record for long-term vegetation monitoring and modeling studies.

  11. Next-generation text-mining mediated generation of chemical response-specific gene sets for interpretation of gene expression data

    PubMed Central

    2013-01-01

    Background Availability of chemical response-specific lists of genes (gene sets) for pharmacological and/or toxic effect prediction for compounds is limited. We hypothesize that more gene sets can be created by next-generation text mining (next-gen TM), and that these can be used with gene set analysis (GSA) methods for chemical treatment identification, for pharmacological mechanism elucidation, and for comparing compound toxicity profiles. Methods We created 30,211 chemical response-specific gene sets for human and mouse by next-gen TM, and derived 1,189 (human) and 588 (mouse) gene sets from the Comparative Toxicogenomics Database (CTD). We tested for significant differential expression (SDE) (false discovery rate (FDR)-corrected p-values < 0.05) of the next-gen TM-derived gene sets and the CTD-derived gene sets in gene expression (GE) data sets of five chemicals (from experimental models). We tested for SDE of gene sets for six fibrates in a peroxisome proliferator-activated receptor alpha (PPARA) knock-out GE dataset and compared the results to those from the Connectivity Map. We tested for SDE of 319 next-gen TM-derived gene sets for environmental toxicants in three GE data sets of triazoles, and tested for SDE of 442 gene sets associated with embryonic structures. We compared the gene sets to triazole effects seen in the Whole Embryo Culture (WEC), and used principal component analysis (PCA) to discriminate triazoles from other chemicals. Results Next-gen TM-derived gene sets matching the chemical treatment were significantly altered in three GE data sets, and the corresponding CTD-derived gene sets were significantly altered in five GE data sets. Six next-gen TM-derived and four CTD-derived fibrate gene sets were significantly altered in the PPARA knock-out GE dataset. None of the fibrate signatures in cMap scored as significant against the PPARA GE signature. Thirty-three environmental toxicant gene sets were significantly altered in the triazole GE data sets. 21 of these toxicants
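    Screening thousands of gene sets for SDE makes false discovery rate control central to this design. As a generic illustration (not the authors' pipeline), the Benjamini-Hochberg step-up procedure over gene set p-values can be sketched as follows; the p-values are invented.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of tests kept at FDR level alpha (BH step-up)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    significant = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])   # largest rank meeting its threshold
        significant[order[:k + 1]] = True
    return significant

# Invented p-values for five gene sets
print(benjamini_hochberg([0.001, 0.04, 0.03, 0.6, 0.0005]))
```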

  12. Student generated learning objectives: extent of congruence with faculty set objectives and factors influencing their generation.

    PubMed

    Abdul Ghaffar Al-Shaibani, Tarik A; Sachs-Robertson, Annette; Al Shazali, Hafiz O; Sequeira, Reginald P; Hamdy, Hosam; Al-Roomi, Khaldoon

    2003-07-01

    A problem-based learning strategy is used for curriculum planning and implementation at the Arabian Gulf University, Bahrain. Problems are constructed in such a way that faculty-set objectives are expected to be identified by students during tutorials. Students in small groups, along with a tutor functioning as a facilitator, identify learning issues and define their learning objectives. We compared the objectives identified by student groups with the faculty-set objectives to determine the extent of congruence, and identified factors that influenced students' ability to identify faculty-set objectives. Male and female students were segregated and randomly grouped. A faculty tutor was allocated to each group. This study was based on 13 problems given to entry-level medical students. The pooled objectives of these problems were classified into four categories: structural, functional, clinical and psychosocial. Univariate analysis of variance was used for comparison, and p < 0.05 was considered significant. On average, students generated 54.2% of the faculty-set objectives per problem. Students identified psychosocial learning objectives more readily than structural ones. Female students identified more psychosocial objectives, whereas male students identified more structural objectives. Tutor characteristics, such as medical/non-medical background and years of teaching, were correlated with the categories of learning issues identified. Students identify part of the faculty-set learning objectives during tutorials with a faculty tutor acting as a facilitator. Students' gender influences the types of learning issues identified. The content expertise of tutors does not influence students' identification of learning needs.

  13. Relationship between Leadership Style, Leadership Outcomes, Gender, and Generation of Administrators in the Illinois Public School Setting

    ERIC Educational Resources Information Center

    Ryder, Denise R.

    2016-01-01

    As the Millennial generation (individuals born 1982-2004) of leaders emerge in the educational setting, it is important to consider how these individuals may lead or work alongside a more veteran generation of teachers and administrators. While most of the evidence on leadership focuses on the characteristics of Baby Boomer and Generation X…

  14. Development and construct validation of the Client-Centredness of Goal Setting (C-COGS) scale.

    PubMed

    Doig, Emmah; Prescott, Sarah; Fleming, Jennifer; Cornwell, Petrea; Kuipers, Pim

    2015-07-01

    Client-centred philosophy is integral to occupational therapy practice and client-centred goal planning is considered fundamental to rehabilitation. Evaluation of whether goal-planning practices are client-centred requires an understanding of the client's perspective about goal-planning processes and practices. The Client-Centredness of Goal Setting (C-COGS) was developed for use by practitioners who seek to be more client-centred and who require a scale to guide and evaluate individually orientated practice, especially with adults with cognitive impairment related to acquired brain injury. To describe development of the C-COGS scale and examine its construct validity. The C-COGS was administered to 42 participants with acquired brain injury after multidisciplinary goal planning. C-COGS scores were correlated with the Canadian Occupational Performance Measure (COPM) importance scores, and measures of therapeutic alliance, motivation, and global functioning to establish construct validity. The C-COGS scale has three subscales evaluating goal alignment, goal planning participation, and client-centredness of goals. The C-COGS subscale items demonstrated moderately significant correlations with scales measuring similar constructs. Findings provide preliminary evidence to support the construct validity of the C-COGS scale, which is intended to be used to evaluate and reflect on client-centred goal planning in clinical practice, and to highlight factors contributing to best practice rehabilitation.

  15. The Validity of a New Structured Assessment of Gastrointestinal Symptoms Scale (SAGIS) for Evaluating Symptoms in the Clinical Setting.

    PubMed

    Koloski, N A; Jones, M; Hammer, J; von Wulffen, M; Shah, A; Hoelz, H; Kutyla, M; Burger, D; Martin, N; Gurusamy, S R; Talley, N J; Holtmann, G

    2017-08-01

    The clinical assessments of patients with gastrointestinal symptoms can be time-consuming, and the symptoms captured during the consultation may be influenced by a variety of patient and non-patient factors. To facilitate standardized symptom assessment in the routine clinical setting, we developed the Structured Assessment of Gastrointestinal Symptom (SAGIS) instrument to precisely characterize symptoms in a routine clinical setting. We aimed to validate SAGIS, including its reliability, construct and discriminant validity, and utility in the clinical setting. Development of the SAGIS consisted of initial interviews with patients referred for the diagnostic work-up of digestive symptoms, in which the relevant complaints were identified. The final instrument consisted of 22 items as well as questions on extraintestinal symptoms and was given to 1120 consecutive patients attending a gastroenterology clinic, randomly split into derivation (n = 596) and validation (n = 551) datasets. Discriminant validity along with test-retest reliability was assessed. The time taken to perform a clinical assessment with and without the SAGIS was recorded, along with doctor satisfaction with this tool. Exploratory factor analysis conducted on the derivation sample suggested five symptom constructs, labeled abdominal pain/discomfort (seven items), gastroesophageal reflux disease/regurgitation symptoms (four items), nausea/vomiting (three items), diarrhea/incontinence (five items), and difficult defecation and constipation (two items). Confirmatory factor analysis conducted on the validation sample supported the initially developed five-factor measurement model (p < 0.0001, χ²/df = 4.6, CFI = 0.90, TLI = 0.88, RMSEA = 0.08). All symptom groups demonstrated differentiation between disease groups. The SAGIS was shown to be reliable over time and resulted in a 38% reduction of the time required for clinical assessment. The SAGIS instrument has excellent

  16. Prediction of Metastasis Using Second Harmonic Generation

    DTIC Science & Technology

    2016-07-01

    extracellular matrix through which metastasizing cells must travel. We and others have demonstrated that tumor collagen structure, as measured with the...algorithm using separate training and validation sets, etc. Keywords: metastasis, overtreatment, extracellular matrix, collagen, second harmonic...optical process called second harmonic generation (SHG), influences tumor metastasis. This suggests that collagen structure may provide prognostic

  17. Computer-generated reminders and quality of pediatric HIV care in a resource-limited setting.

    PubMed

    Were, Martin C; Nyandiko, Winstone M; Huang, Kristin T L; Slaven, James E; Shen, Changyu; Tierney, William M; Vreeman, Rachel C

    2013-03-01

    To evaluate the impact of clinician-targeted computer-generated reminders on compliance with HIV care guidelines in a resource-limited setting. We conducted this randomized, controlled trial in an HIV referral clinic in Kenya caring for HIV-infected and HIV-exposed children (<14 years of age). For children randomly assigned to the intervention group, printed patient summaries containing computer-generated patient-specific reminders for overdue care recommendations were provided to the clinician at the time of the child's clinic visit. For children in the control group, clinicians received the summaries, but no computer-generated reminders. We compared differences between the intervention and control groups in completion of overdue tasks, including HIV testing, laboratory monitoring, initiating antiretroviral therapy, and making referrals. During the 5-month study period, 1611 patients (49% female, 70% HIV-infected) were eligible to receive at least 1 computer-generated reminder (i.e., had an overdue clinical task). We observed a fourfold increase in the completion of overdue clinical tasks when reminders were made available to providers over the course of the study (68% intervention vs 18% control, P < .001). Orders also occurred earlier for the intervention group (77 days, SD 2.4 days) compared with the control group (104 days, SD 1.2 days) (P < .001). Response rates to reminders varied significantly by type of reminder and between clinicians. Clinician-targeted, computer-generated clinical reminders are associated with a significant increase in completion of overdue clinical tasks for HIV-infected and exposed children in a resource-limited setting.

  18. The Geriatric ICF Core Set reflecting health-related problems in community-living older adults aged 75 years and older without dementia: development and validation.

    PubMed

    Spoorenberg, Sophie L W; Reijneveld, Sijmen A; Middel, Berrie; Uittenbroek, Ronald J; Kremer, Hubertus P H; Wynia, Klaske

    2015-01-01

    The aim of the present study was to develop a valid Geriatric ICF Core Set reflecting relevant health-related problems of community-living older adults without dementia. A Delphi study was performed in order to reach consensus (≥70% agreement) on second-level categories from the International Classification of Functioning, Disability and Health (ICF). The Delphi panel comprised 41 older adults, medical and non-medical experts. Content validity of the set was tested in a cross-sectional study including 267 older adults identified as frail or having complex care needs. Consensus was reached for 30 ICF categories in the Delphi study (fourteen Body functions, ten Activities and Participation and six Environmental Factors categories). Content validity of the set was high: the prevalence of all the problems was >10%, except for d530 Toileting. The most frequently reported problems were b710 Mobility of joint functions (70%), b152 Emotional functions (65%) and b455 Exercise tolerance functions (62%). No categories had missing values. The final Geriatric ICF Core Set is a comprehensive and valid set of 29 ICF categories, reflecting the most relevant health-related problems among community-living older adults without dementia. This Core Set may contribute to optimal care provision and support of the older population. Implications for Rehabilitation The Geriatric ICF Core Set may provide a practical tool for gaining an understanding of the relevant health-related problems of community-living older adults without dementia. The Geriatric ICF Core Set may be used in primary care practice as an assessment tool in order to tailor care and support to the needs of older adults. The Geriatric ICF Core Set may be suitable for use in multidisciplinary teams in integrated care settings, since it is based on a broad range of problems in functioning. Professionals should pay special attention to health problems related to mobility and emotional functioning since these are the most

  19. Validating hierarchical verbal autopsy expert algorithms in a large data set with known causes of death.

    PubMed

    Kalter, Henry D; Perin, Jamie; Black, Robert E

    2016-06-01

    Physician assessment historically has been the most common method of analyzing verbal autopsy (VA) data. Recently, the World Health Organization endorsed two automated methods, Tariff 2.0 and InterVA-4, which promise greater objectivity and lower cost. A disadvantage of the Tariff method is that it requires a training data set from a prior validation study, while InterVA relies on clinically specified conditional probabilities. We undertook to validate the hierarchical expert algorithm analysis of VA data, an automated, intuitive, deterministic method that does not require a training data set. Using Population Health Metrics Research Consortium study hospital source data, we compared the primary causes of 1629 neonatal and 1456 1-59 month-old child deaths from VA expert algorithms arranged in a hierarchy to their reference standard causes. The expert algorithms were held constant, while five prior and one new "compromise" neonatal hierarchy, and three former child hierarchies were tested. For each comparison, the reference standard data were resampled 1000 times within the range of cause-specific mortality fractions (CSMF) for one of three approximated community scenarios in the 2013 WHO global causes of death, plus one random mortality cause proportions scenario. We utilized CSMF accuracy to assess overall population-level validity, and the absolute difference between VA and reference standard CSMFs to examine particular causes. Chance-corrected concordance (CCC) and Cohen's kappa were used to evaluate individual-level cause assignment. Overall CSMF accuracy for the best-performing expert algorithm hierarchy was 0.80 (range 0.57-0.96) for neonatal deaths and 0.76 (0.50-0.97) for child deaths. Performance for particular causes of death varied, with fairly flat estimated CSMF over a range of reference values for several causes. Performance at the individual diagnosis level was also less favorable than that for overall CSMF (neonatal: best CCC = 0.23, range 0
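    CSMF accuracy, the population-level metric used here, is conventionally defined as 1 - sum_j |CSMF_true,j - CSMF_pred,j| / (2 (1 - min_j CSMF_true,j)). A small Python sketch with invented fractions:

```python
import numpy as np

def csmf_accuracy(true_csmf, pred_csmf):
    """CSMF accuracy: 1 at perfect agreement, 0 for the worst assignment."""
    t = np.asarray(true_csmf, dtype=float)
    p = np.asarray(pred_csmf, dtype=float)
    return 1.0 - np.abs(t - p).sum() / (2.0 * (1.0 - t.min()))

# Invented three-cause example: reference vs. algorithm-assigned fractions
print(csmf_accuracy([0.5, 0.3, 0.2], [0.4, 0.35, 0.25]))   # -> 0.875
```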

  20. A Supervised Learning Process to Validate Online Disease Reports for Use in Predictive Models.

    PubMed

    Patching, Helena M M; Hudson, Laurence M; Cooke, Warrick; Garcia, Andres J; Hay, Simon I; Roberts, Mark; Moyes, Catherine L

    2015-12-01

    Pathogen distribution models that predict spatial variation in disease occurrence require data from a large number of geographic locations to generate disease risk maps. Traditionally, this process has used data from public health reporting systems; however, using online reports of new infections could speed up the process dramatically. Data from both public health systems and online sources must be validated before they can be used, but no mechanisms exist to validate data from online media reports. We have developed a supervised learning process to validate geolocated disease outbreak data in a timely manner. The process uses three input features: the data source and two metrics derived from the location of each disease occurrence. The location of disease occurrence provides information on the probability of disease occurrence at that location, based on environmental and socioeconomic factors, and the distance within or outside the current known disease extent. The process also uses validation scores, generated by disease experts who review a subset of the data, to build a training data set. The aim of the supervised learning process is to generate validation scores that can be used as weights going into the pathogen distribution model. After analyzing the three input features and testing the performance of alternative processes, we selected a cascade of ensembles comprising logistic regressors. Parameter values for the training data subset size, number of predictors, and number of layers in the cascade were tested before the process was deployed. The final configuration was tested using data for two contrasting diseases (dengue and cholera), and 66%-79% of data points were assigned a validation score. The remaining data points are scored by the experts, and the results both inform the training data set for the next set of predictors and feed into the pathogen distribution model. The new supervised learning process has been implemented within our live site and is
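    The phrase "cascade of ensembles comprising logistic regressors" admits several architectures; the sketch below shows one plausible reading in which each layer is a bootstrap ensemble of logistic regressors whose averaged probability is appended as a feature for the next layer. The data, layer count and ensemble size are all invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features per report: data source, environmental-suitability
# probability at the location, and distance to the known disease extent.
X = rng.random((500, 3))
y = (X[:, 1] + 0.2 * rng.standard_normal(500) > 0.5).astype(int)  # expert scores

def fit_cascade(X, y, n_layers=3, n_members=10, seed=0):
    """Each layer: a bootstrap ensemble of logistic regressors whose averaged
    probability is appended as an input feature for the next layer."""
    rng = np.random.default_rng(seed)
    layers, feats = [], X
    for _ in range(n_layers):
        members = []
        for _ in range(n_members):
            idx = rng.integers(0, len(y), size=len(y))
            members.append(LogisticRegression(max_iter=1000).fit(feats[idx], y[idx]))
        avg = np.mean([m.predict_proba(feats)[:, 1] for m in members], axis=0)
        feats = np.column_stack([feats, avg])
        layers.append(members)
    return layers, feats[:, -1]          # final column = validation scores

layers, scores = fit_cascade(X, y)
```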

  1. European validation of The Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis from the perspective of patients with osteoarthritis of the knee or hip.

    PubMed

    Weigl, Martin; Wild, Heike

    2017-09-15

    To validate the International Classification of Functioning, Disability and Health Comprehensive Core Set for Osteoarthritis from the patient perspective in Europe. This multicenter cross-sectional study involved 375 patients with knee or hip osteoarthritis. Trained health professionals completed the Comprehensive Core Set, and patients completed the Short-Form 36 questionnaire. Content validity was evaluated by calculating prevalences of impairments in body function and structures, limitations in activities and participation and environmental factors, which were either barriers or facilitators. Convergent construct validity was evaluated by correlating the International Classification of Functioning, Disability and Health categories with the Short-Form 36 Physical Component Score and the SF-36 Mental Component Score in a subgroup of 259 patients. The prevalences of all body function, body structure and activities and participation categories were >40%, >32% and >20%, respectively, and all environmental factors were relevant for >16% of patients. Few categories showed relevant differences between knee and hip osteoarthritis. All body function categories and all but two activities and participation categories showed significant correlations with the Physical Component Score. Body functions from the ICF chapter Mental Functions showed higher correlations with the Mental Component Score than with the Physical Component Score. This study supports the validity of the International Classification of Functioning, Disability and Health Comprehensive Core Set for Osteoarthritis. Implications for Rehabilitation Comprehensive International Classification of Functioning, Disability and Health Core Sets were developed as practical tools for application in multidisciplinary assessments. The validity of the Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis in this study supports its application in European patients with

  2. Spanish translation and cross-language validation of a sleep habits questionnaire for use in clinical and research settings.

    PubMed

    Baldwin, Carol M; Choi, Myunghan; McClain, Darya Bonds; Celaya, Alma; Quan, Stuart F

    2012-04-15

    To translate, back-translate and cross-language validate (English/Spanish) the Sleep Heart Health Study Sleep Habits Questionnaire for use with Spanish-speakers in clinical and research settings. Following rigorous translation and back-translation, this cross-sectional cross-language validation study recruited bilingual participants from academic, clinic, and community-based settings (N = 50; 52% women; mean age 38.8 ± 12 years; 90% of Mexican heritage). Participants completed English and Spanish versions of the Sleep Habits Questionnaire, the Epworth Sleepiness Scale, and the Acculturation Rating Scale for Mexican Americans II one week apart in randomized order. Psychometric properties were assessed, including internal consistency, convergent validity, scale equivalence, language version intercorrelations, and exploratory factor analysis, using PASW (Version 18) software. Grade-level readability of the sleep measure was evaluated. All sleep categories (duration, snoring, apnea, insomnia symptoms, other sleep symptoms, sleep disruptors, restless legs syndrome) showed Cronbach α, Spearman-Brown coefficients and intercorrelations ≥ 0.700, suggesting robust internal consistency, correlation, and agreement between language versions. The Epworth correlated significantly with snoring, apnea, sleep symptoms, restless legs, and sleep disruptors on both versions, supporting convergent validity. Items loaded on 4 factors that accounted for 68% and 67% of the variance on the English and Spanish versions, respectively. The Spanish-language Sleep Habits Questionnaire demonstrates conceptual and content equivalency. It has appropriate measurement properties and should be useful for assessing sleep health in community-based clinics and intervention studies among Spanish-speaking Mexican Americans. Both language versions showed readability at the fifth grade level. Further testing is needed with larger samples.

  3. Validation of a next-generation sequencing assay for clinical molecular oncology.

    PubMed

    Cottrell, Catherine E; Al-Kateb, Hussam; Bredemeyer, Andrew J; Duncavage, Eric J; Spencer, David H; Abel, Haley J; Lockwood, Christina M; Hagemann, Ian S; O'Guin, Stephanie M; Burcea, Lauren C; Sawyer, Christopher S; Oschwald, Dayna M; Stratman, Jennifer L; Sher, Dorie A; Johnson, Mark R; Brown, Justin T; Cliften, Paul F; George, Bijoy; McIntosh, Leslie D; Shrivastava, Savita; Nguyen, Tudung T; Payton, Jacqueline E; Watson, Mark A; Crosby, Seth D; Head, Richard D; Mitra, Robi D; Nagarajan, Rakesh; Kulkarni, Shashikant; Seibert, Karen; Virgin, Herbert W; Milbrandt, Jeffrey; Pfeifer, John D

    2014-01-01

    Currently, oncology testing includes molecular studies and cytogenetic analysis to detect genetic aberrations of clinical significance. Next-generation sequencing (NGS) allows rapid analysis of multiple genes for clinically actionable somatic variants. The WUCaMP assay uses targeted capture for NGS analysis of 25 cancer-associated genes to detect mutations at actionable loci. We present clinical validation of the assay and a detailed framework for design and validation of similar clinical assays. Deep sequencing of 78 tumor specimens (≥ 1000× average unique coverage across the capture region) achieved high sensitivity for detecting somatic variants at low allele fraction (AF). Validation revealed sensitivities and specificities of 100% for detection of single-nucleotide variants (SNVs) within coding regions, compared with SNP array sequence data (95% CI = 83.4-100.0 for sensitivity and 94.2-100.0 for specificity) or whole-genome sequencing (95% CI = 89.1-100.0 for sensitivity and 99.9-100.0 for specificity) of HapMap samples. Sensitivity for detecting variants at an observed 10% AF was 100% (95% CI = 93.2-100.0) in HapMap mixes. Analysis of 15 masked specimens harboring clinically reported variants yielded concordant calls for 13/13 variants at AF of ≥ 15%. The WUCaMP assay is a robust and sensitive method to detect somatic variants of clinical significance in molecular oncology laboratories, with reduced time and cost of genetic analysis allowing for strategic patient management. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  4. A critical remark on the applicability of E-OBS European gridded temperature data set for validating control climate simulations

    NASA Astrophysics Data System (ADS)

    Kyselý, Jan; Plavcová, Eva

    2010-12-01

    The study compares daily maximum (Tmax) and minimum (Tmin) temperatures in two data sets interpolated from irregularly spaced meteorological stations to a regular grid: the European gridded data set (E-OBS), produced from a relatively sparse network of stations available in the European Climate Assessment and Dataset (ECA&D) project, and a data set gridded onto the same grid from a high-density network of stations in the Czech Republic (GriSt). We show that large differences exist between the two gridded data sets, particularly for Tmin. The errors tend to be larger in the tails of the distributions. In winter, temperatures below the 10% quantile of Tmin, which is still far from the very tail of the distribution, are too warm by almost 2°C in E-OBS on average. A large bias is also found for the diurnal temperature range. Comparison with simple average series from stations in two regions reveals that differences between GriSt and the station averages are minor relative to differences between E-OBS and either of the two data sets. The large deviations between the two gridded data sets affect conclusions concerning the validation of temperature characteristics in regional climate model (RCM) simulations. The bias of the E-OBS data set, and the limitations on its applicability for evaluating RCMs, stem primarily from (1) insufficient density of information from the station observations used for the interpolation, including the fact that the available stations may not be representative of a wider area, and (2) inconsistency between the radii of the areal average values in high-resolution RCMs and E-OBS. Further increases in the amount and quality of station data available within ECA&D and used in the E-OBS data set are essential for more reliable validation of climate models against recent climate on a continental scale.

  5. A Spatio-Temporal Approach for Global Validation and Analysis of MODIS Aerosol Products

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Chu, D. Allen; Mattoo, Shana; Kaufman, Yoram J.; Remer, Lorraine A.; Tanre, Didier; Slutsker, Ilya; Holben, Brent N.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    With the launch of the MODIS sensor on the Terra spacecraft, new data sets of the global distribution and properties of aerosol are being retrieved, and need to be validated and analyzed. A system has been put in place to generate spatial statistics (mean, standard deviation, direction and rate of spatial variation, and spatial correlation coefficient) of the MODIS aerosol parameters over more than 100 validation sites spread around the globe. Corresponding statistics are also computed from temporal subsets of AERONET-derived aerosol data. The means and standard deviations of identical parameters from MODIS and AERONET are compared. Although their means compare favorably, their standard deviations reveal some influence of surface effects on the MODIS aerosol retrievals over land, especially at low aerosol loading. The direction and rate of spatial variation from MODIS are used to study the spatial distribution of aerosols at various locations, either individually or comparatively. This paper introduces the methodology for generating and analyzing the data sets used by the two MODIS aerosol validation papers in this issue.
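    The spatial statistics named here (mean, standard deviation, direction and rate of spatial variation) are straightforward to compute on a gridded subset. The sketch below uses an invented 5 x 5 box of aerosol optical depth values and plain numpy; it illustrates the statistics, not the actual MODIS validation system.

```python
import numpy as np

# Invented 5 x 5 box of MODIS aerosol optical depth retrievals around a site
aod = np.array([[0.20, 0.21, 0.22, 0.24, 0.25],
                [0.21, 0.22, 0.23, 0.25, 0.26],
                [0.22, 0.23, 0.24, 0.26, 0.27],
                [0.23, 0.24, 0.25, 0.27, 0.28],
                [0.24, 0.25, 0.26, 0.28, 0.29]])

mean, sd = aod.mean(), aod.std(ddof=1)
gy, gx = np.gradient(aod)                          # per-cell finite differences
rate = np.hypot(gx, gy).mean()                     # mean rate of spatial variation
direction = np.degrees(np.arctan2(gy.mean(), gx.mean()))  # mean direction
print(mean, sd, rate, direction)
```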

  6. Using digital photography in a clinical setting: a valid, accurate, and applicable method to assess food intake.

    PubMed

    Winzer, Eva; Luger, Maria; Schindler, Karin

    2018-06-01

    Regular monitoring of food intake is rarely integrated into clinical routine. Therefore, the aim was to examine the validity, accuracy, and applicability of an appropriate, quick and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method with a picture after the meal and the pre-postMeal method with a picture before and after the meal, and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, and additionally food waste, macronutrient composition, and energy content, were examined. Tests of validity revealed stronger correlations for the photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) compared to the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and also smaller overestimation and underestimation. This method accurately and precisely estimated portion sizes in all food items. Furthermore, the total food waste was 22% for lunch over the study period. The highest food waste was observed in salads and the lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable for monitoring food intake in a clinical setting, enabling quantitative and qualitative dietary assessment. Thus, nutritional care might be initiated earlier. This method might also be advantageous for the quantitative and qualitative evaluation of food waste, with a resultant reduction in costs.

  7. ALHAT System Validation

    NASA Technical Reports Server (NTRS)

    Brady, Tye; Bailey, Erik; Crain, Timothy; Paschall, Stephen

    2011-01-01

    NASA has embarked on a multiyear technology development effort to develop a safe and precise lunar landing capability. The Autonomous Landing and Hazard Avoidance Technology (ALHAT) Project is investigating a range of landing hazard detection methods while developing a hazard avoidance capability to best field-test the proper set of relevant autonomous GNC technologies. Ultimately, the advancement of these technologies through the ALHAT Project will provide an ALHAT System capable of enabling next-generation lunar lander vehicles to land precisely and safely anywhere on the globe, regardless of lighting conditions. This paper provides an overview of the ALHAT System and describes recent validation experiments that have advanced the highly capable GNC architecture.

  8. Reliability and validity of a novel tool to comprehensively assess food and beverage marketing in recreational sport settings.

    PubMed

    Prowse, Rachel J L; Naylor, Patti-Jean; Olstad, Dana Lee; Carson, Valerie; Mâsse, Louise C; Storey, Kate; Kirk, Sara F L; Raine, Kim D

    2018-05-31

    Current methods for evaluating food marketing to children often study a single marketing channel or approach. As the World Health Organization urges the removal of unhealthy food marketing in children's settings, methods that comprehensively explore the exposure and power of food marketing within a setting from multiple marketing channels and approaches are needed. The purpose of this study was to test the inter-rater reliability and the validity of a novel settings-based food marketing audit tool. The Food and beverage Marketing Assessment Tool for Settings (FoodMATS) was developed and its psychometric properties evaluated in five public recreation and sport facilities (sites) and subsequently used in 51 sites across Canada for a cross-sectional analysis of food marketing. Raters recorded the count of food marketing occasions, presence of child-targeted and sports-related marketing techniques, and the physical size of marketing occasions. Marketing occasions were classified by healthfulness. Inter-rater reliability was tested using Cohen's kappa (κ) and intra-class correlations (ICC). FoodMATS scores for each site were calculated using an algorithm that represented the theoretical impact of the marketing environment on food preferences, purchases, and consumption. Higher FoodMATS scores represented sites with higher exposure to, and more powerful (unhealthy, child-targeted, sports-related, large) food marketing. Validity of the scoring algorithm was tested through (1) Pearson's correlations between FoodMATS scores and facility sponsorship dollars, and (2) sequential multiple regression for predicting "Least Healthy" food sales from FoodMATS scores. Inter-rater reliability was very good to excellent (κ = 0.88-1.00, p < 0.001; ICC = 0.97, p < 0.001). There was a strong positive correlation between FoodMATS scores and food sponsorship dollars, after controlling for facility size (r = 0.86, p < 0.001). The FoodMATS score explained 14% of the
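    Inter-rater agreement of the kind reported for the FoodMATS is typically computed with Cohen's kappa. A minimal sketch with invented ratings, using scikit-learn's implementation rather than the authors' software:

```python
from sklearn.metrics import cohen_kappa_score

# Invented healthfulness classifications of five marketing occasions
rater_1 = ["healthy", "least_healthy", "least_healthy", "healthy", "mixed"]
rater_2 = ["healthy", "least_healthy", "mixed", "healthy", "mixed"]
print(f"kappa = {cohen_kappa_score(rater_1, rater_2):.2f}")
```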

  9. Clinical Validation of Copy Number Variant Detection from Targeted Next-Generation Sequencing Panels.

    PubMed

    Kerkhof, Jennifer; Schenkel, Laila C; Reilly, Jack; McRobbie, Sheri; Aref-Eshghi, Erfan; Stuart, Alan; Rupar, C Anthony; Adams, Paul; Hegele, Robert A; Lin, Hanxin; Rodenhiser, David; Knoll, Joan; Ainsworth, Peter J; Sadikovic, Bekim

    2017-11-01

    Next-generation sequencing (NGS) technology has rapidly replaced Sanger sequencing in the assessment of sequence variations in clinical genetics laboratories. One major limitation of current NGS approaches is their limited ability to detect copy number variations (CNVs) larger than approximately 50 bp. Because these represent a major mutational burden in many genetic disorders, parallel CNV assessment using alternate supplemental methods, along with the NGS analysis, is normally required, resulting in increased labor, costs, and turnaround times. The objective of this study was to clinically validate a novel CNV detection algorithm using targeted clinical NGS gene panel data. We applied this approach in a retrospective cohort of 391 samples and a prospective cohort of 2375 samples and found 100% sensitivity (95% CI, 89%-100%) for 37 unique events and a high degree of specificity in detecting CNVs across nine distinct targeted NGS gene panels. This NGS CNV pipeline enables stand-alone first-tier assessment of CNVs and sequence variants in a clinical laboratory setting, dispensing with the need for parallel CNV analysis using classic techniques, such as microarray, long-range PCR, or multiplex ligation-dependent probe amplification. This NGS CNV pipeline can also be applied to the assessment of complex genomic regions, including pseudogenic DNA sequences, such as the PMS2CL gene, and to mitochondrial genome heteroplasmy detection. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
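    The validated algorithm itself is not given in the abstract, but depth-based CNV detection from targeted panels generally median-normalizes per-target coverage against a panel of normals and thresholds the log2 ratio. A generic sketch with invented depths and arbitrary (though commonly used) thresholds:

```python
import numpy as np

def cnv_log2_ratios(sample_depth, normal_depths, gain=0.4, loss=-0.6):
    """Median-normalize per-target depth against a panel of normals and
    call gains/losses from the log2 ratio (thresholds are arbitrary)."""
    sample = sample_depth / np.median(sample_depth)
    normals = normal_depths / np.median(normal_depths, axis=1, keepdims=True)
    expected = np.median(normals, axis=0)          # typical per-target depth
    log2r = np.log2(sample / expected)
    calls = np.where(log2r > gain, "gain", np.where(log2r < loss, "loss", "neutral"))
    return log2r, calls

depths = np.array([100.0, 120.0, 45.0, 110.0, 230.0])   # one sample, 5 targets
normals = np.array([[105.0, 118.0, 98.0, 102.0, 111.0],
                    [95.0, 125.0, 101.0, 99.0, 108.0],
                    [110.0, 115.0, 97.0, 105.0, 112.0]])
print(cnv_log2_ratios(depths, normals))
```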

  10. Online frequency estimation with applications to engine and generator sets

    NASA Astrophysics Data System (ADS)

    Manngård, Mikael; Böling, Jari M.

    2017-07-01

    Frequency and spectral analysis based on the discrete Fourier transform is a fundamental task in signal processing and machine diagnostics. This paper presents computationally efficient methods for real-time estimation of stationary and time-varying frequency components in signals. A brief survey of the sliding time window discrete Fourier transform and the Goertzel filter is presented, and two filter banks, consisting of (i) sliding time window Goertzel filters and (ii) infinite impulse response narrow bandpass filters, are proposed for estimating instantaneous frequencies. The proposed methods show excellent results both in simulation studies and in a case study using angular speed measurements from the crankshaft of a marine diesel engine-generator set.
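    The Goertzel filter evaluates a single DFT bin with a two-multiply recurrence, which is why it suits online monitoring of a few known frequency components of an engine-generator set. A minimal batch version in Python; the sampling rate, bin index and test signal are illustrative:

```python
import numpy as np

def goertzel_power(x, k):
    """Squared magnitude of DFT bin k of x via the Goertzel recurrence."""
    n = len(x)
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

# 50 Hz tone sampled at 1 kHz, N = 200 samples -> bin k = 10
fs, n = 1000.0, 200
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50.0 * t)
print(goertzel_power(x, k=10))   # ~ (n/2)**2 = 10000 for a unit-amplitude tone
```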

  11. Establishing the Reliability and Validity of a Computerized Assessment of Children's Working Memory for Use in Group Settings

    ERIC Educational Resources Information Center

    St Clair-Thompson, Helen

    2014-01-01

    The aim of the present study was to investigate the reliability and validity of a brief standardized assessment of children's working memory; "Lucid Recall." Although there are many established assessments of working memory, "Lucid Recall" is fully automated and can therefore be administered in a group setting. It is therefore…

  12. Reliability and Validity of Survey Instruments to Measure Work-Related Fatigue in the Emergency Medical Services Setting: A Systematic Review

    DOT National Transportation Integrated Search

    2018-01-11

    Background: This study sought to systematically search the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. Methods: A systematic review study design wa...

  13. Design and content validation of a set of SMS to promote seeking of specialized mental health care within the Allillanchu Project.

    PubMed

    Toyama, M; Diez-Canseco, F; Busse, P; Del Mastro, I; Miranda, J J

    2018-01-01

    The aim of this study was to design and develop a set of short message service (SMS) messages to promote the seeking of specialized mental health care within the framework of the Allillanchu Project. The design phase consisted of 39 interviews with potential recipients of the SMS about their use of cellphones and their perceptions of and motivations towards seeking mental health care. After the data collection, the research team developed a set of seven SMS messages for validation. The content validation phase consisted of 24 interviews. The participants answered questions regarding their understanding of the SMS contents and rated their appeal. The seven SMS messages subjected to content validation were tailored to the recipient using their name. The reminder message included the working hours of the psychology service at the patient's health center. The motivational messages addressed perceived barriers and benefits in seeking mental health services. The average appeal score of the seven SMS messages was 9.0 (SD = 0.4) out of 10 points. Participants did not suggest significant changes to the wording of the messages. Five SMS messages were chosen for use. This approach is likely to be applicable to other, similar low-resource settings, and the methodology used can be adapted to develop SMS messages for other chronic conditions.

  14. SiBIC: a web server for generating gene set networks based on biclusters obtained by maximal frequent itemset mining.

    PubMed

    Takahashi, Kei-ichiro; Takigawa, Ichigaku; Mamitsuka, Hiroshi

    2013-01-01

    Detecting biclusters from expression data is useful, since biclusters are sets of genes coexpressed under only part of all given experimental conditions. We present a software tool called SiBIC which, from a given expression dataset, first exhaustively enumerates biclusters, then merges them into largely independent biclusters, and finally uses these to generate gene set networks, in which the gene set assigned to a node contains coexpressed genes. We evaluated each step of this procedure: 1) the biological and statistical significance of the generated biclusters, 2) the biological quality of the merged biclusters, and 3) the biological significance of the gene set networks. We emphasize that gene set networks, in which nodes are not genes but gene sets, can be more compact than usual gene networks, making them more comprehensible. SiBIC is available at http://utrecht.kuicr.kyoto-u.ac.jp:8080/miami/faces/index.jsp.
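    To make the itemset view concrete: once the expression matrix is binarized, conditions act as items and genes as transactions, so a maximal frequent itemset is a largest set of conditions shared by at least a minimum number of genes, i.e., a bicluster seed. The brute-force toy sketch below illustrates the idea only; SiBIC's actual enumeration and merging are more sophisticated.

```python
from itertools import combinations

# Toy binarized expression data: for each gene, the set of conditions in
# which it is (say) over-expressed.
matrix = {"geneA": {1, 2, 3}, "geneB": {1, 2, 3},
          "geneC": {2, 3},    "geneD": {1, 4}}
conditions = sorted(set().union(*matrix.values()))
min_genes = 2                       # minimum support for a bicluster seed

frequent = {}
for size in range(1, len(conditions) + 1):
    for cond_set in combinations(conditions, size):
        genes = [g for g, cs in matrix.items() if set(cond_set) <= cs]
        if len(genes) >= min_genes:
            frequent[frozenset(cond_set)] = genes

# Maximal frequent itemsets: not contained in any larger frequent itemset
maximal = {cs: g for cs, g in frequent.items()
           if not any(cs < other for other in frequent)}
for cs, genes in maximal.items():
    print(sorted(cs), genes)        # each pair is a bicluster seed
```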

  15. Validation of the Comprehensive ICF Core Set for Vocational Rehabilitation From the Perspective of Physical Therapists: International Delphi Survey.

    PubMed

    Kaech Moll, Veronika M; Escorpizo, Reuben; Portmann Bergamaschi, Ruth; Finger, Monika E

    2016-08-01

    The Comprehensive ICF Core Set for vocational rehabilitation (VR) is a list of essential categories of functioning based on the World Health Organization (WHO) International Classification of Functioning, Disability and Health (ICF), which describes a standard for interdisciplinary assessment, documentation, and communication in VR. The aim of this study was to examine the content validity of the Comprehensive ICF Core Set for VR from the perspective of physical therapists. A 3-round email survey was performed using the Delphi method. A convenience sample of international physical therapists working in VR, each with at least 2 years of work experience, was asked to identify aspects they consider relevant when evaluating or treating clients in VR. Responses were linked to ICF categories and compared with the Comprehensive ICF Core Set for VR. Sixty-two physical therapists from all 6 WHO world regions responded with 3,917 statements that were subsequently linked to 338 ICF categories. Fifteen (17%) of the 90 categories in the Comprehensive ICF Core Set for VR were confirmed by the physical therapists in the sample. Twenty-two additional ICF categories were identified that were not included in the Comprehensive ICF Core Set for VR. Vocational rehabilitation in physical therapy is not well defined in every country, which might have contributed to the small sample size; therefore, the results cannot be generalized to all physical therapists practicing in VR. The content validity of the ICF Core Set for VR is insufficient from a solely physical therapist perspective. The results of this study could be used to define a physical therapy-specific set of ICF categories to develop and guide physical therapist clinical practice in VR. © 2016 American Physical Therapy Association.

  16. Improved Diagnostic Accuracy of Alzheimer's Disease by Combining Regional Cortical Thickness and Default Mode Network Functional Connectivity: Validated in the Alzheimer's Disease Neuroimaging Initiative Set.

    PubMed

    Park, Ji Eun; Park, Bumwoo; Kim, Sang Joon; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Chai; Oh, Joo Young; Lee, Jae-Hong; Roh, Jee Hoon; Shim, Woo Hyun

    2017-01-01

    To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity, and to validate this model's diagnostic accuracy in a validation set. Data from 98 subjects were retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network were extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions was more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Combining functional information with the CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease.
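    The validation design described here (train on the study set, test once on the independent ADNI set with a support vector machine) maps onto a few lines of scikit-learn. The sketch uses synthetic stand-in features and therefore reproduces the procedure, not the reported accuracy.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic stand-ins: per subject, DMN connectivity plus CThk of the two
# identified regions; labels 1 = patient, 0 = healthy control.
X_study, y_study = rng.random((63, 3)), rng.integers(0, 2, 63)
X_valid, y_valid = rng.random((35, 3)), rng.integers(0, 2, 35)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_study, y_study)                     # fit only on the study set
print(f"validation accuracy: {clf.score(X_valid, y_valid):.1%}")
```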

  17. Improved Diagnostic Accuracy of Alzheimer's Disease by Combining Regional Cortical Thickness and Default Mode Network Functional Connectivity: Validated in the Alzheimer's Disease Neuroimaging Initiative Set

    PubMed Central

    Park, Ji Eun; Park, Bumwoo; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Chai; Oh, Joo Young; Lee, Jae-Hong; Roh, Jee Hoon; Shim, Woo Hyun

    2017-01-01

    Objective To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity, and to validate this model's diagnostic accuracy in a validation set. Materials and Methods Data from 98 subjects were retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network were extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Results Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions was more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Conclusion Combining functional information with the CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease. PMID:29089831

  18. Statistically Validated Networks in Bipartite Complex Systems

    PubMed Central

    Tumminello, Michele; Miccichè, Salvatore; Lillo, Fabrizio; Piilo, Jyrki; Mantegna, Rosario N.

    2011-01-01

    Many complex systems present an intrinsic bipartite structure where elements of one set link to elements of the second set. In these complex systems, such as the system of actors and movies, elements of one set are qualitatively different from elements of the other set. The properties of these complex systems are typically investigated by constructing and analyzing a projected network on one of the two sets (for example, the actor network or the movie network). Complex systems are often very heterogeneous in the number of relationships that the elements of one set establish with the elements of the other set, and this heterogeneity makes it very difficult to discriminate links of the projected network that merely reflect the system's heterogeneity from links relevant to unveiling the properties of the system. Here we introduce an unsupervised method to statistically validate each link of a projected network against a null hypothesis that takes into account system heterogeneity. We apply the method to a biological, an economic and a social complex system. The method we propose is able to detect network structures which are very informative about the organization and specialization of the investigated systems, and identifies those relationships between elements of the projected network that cannot be explained simply by system heterogeneity. We also show that our method applies to bipartite systems in which different relationships might have different qualitative natures, generating statistically validated networks in which such differences are preserved. PMID:21483858
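    The degree-aware null model sketched in this abstract can be made concrete: for two nodes linked to n_a and n_b of the n_total elements of the opposite set, the probability of sharing at least the observed number of neighbours by chance is a hypergeometric tail. A short sketch with invented counts (the published method additionally applies a multiple-testing correction):

```python
from scipy.stats import hypergeom

def link_pvalue(n_common, n_a, n_b, n_total):
    """P(at least n_common shared neighbours) for two nodes with degrees
    n_a and n_b in a bipartite system of n_total opposite-set elements,
    under a degree-aware hypergeometric null."""
    return hypergeom.sf(n_common - 1, n_total, n_a, n_b)

# Invented example: actors in 40 and 60 of 1000 movies, 12 movies in common
p = link_pvalue(12, 40, 60, 1000)
print(p)   # validate the projected link if p survives multiple-testing correction
```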

  19. Validity and reliability of a simple, low cost measure to quantify children’s dietary intake in afterschool settings

    PubMed Central

    Davison, Kirsten K.; Austin, S. Bryn; Giles, Catherine; Cradock, Angie L.; Lee, Rebekka M.; Gortmaker, Steven L.

    2017-01-01

    Interest in evaluating and improving children’s diets in afterschool settings has grown, necessitating the development of feasible yet valid measures for capturing children’s intake in such settings. This study’s purpose was to test the criterion validity and cost of three unobtrusive visual estimation methods compared to a plate-weighing method: direct on-site observation using a 4-category rating scale and off-site rating of digital photographs taken on-site using 4- and 10-category scales. Participants were 111 children in grades 1–6 attending four afterschool programs in Boston, MA in December 2011. Researchers observed and photographed 174 total snack meals consumed across two days at each program. Visual estimates of consumption were compared to weighed estimates (the criterion measure) using intra-class correlations. All three methods were highly correlated with the criterion measure, ranging from 0.92–0.94 for total calories consumed, 0.86–0.94 for consumption of pre-packaged beverages, 0.90–0.93 for consumption of fruits/vegetables, and 0.92–0.96 for consumption of grains. For water, which was not pre-portioned, coefficients ranged from 0.47–0.52. The photographic methods also demonstrated excellent inter-rater reliability: 0.84–0.92 for the 4-point and 0.92–0.95 for the 10-point scale. The costs of the methods for estimating intake ranged from $0.62 per observation for the on-site direct visual method to $0.95 per observation for the criterion measure. This study demonstrates that feasible, inexpensive methods can validly and reliably measure children’s dietary intake in afterschool settings. Improving precision in measures of children’s dietary intake can reduce the likelihood of spurious or null findings in future studies. PMID:25596895

  20. Combining multiple positive training sets to generate confidence scores for protein-protein interactions.

    PubMed

    Yu, Jingkai; Finley, Russell L

    2009-01-01

    High-throughput experimental and computational methods are generating a wealth of protein-protein interaction data for a variety of organisms. However, data produced by current state-of-the-art methods include many false positives, which can hinder the analyses needed to derive biological insights. One way to address this problem is to assign confidence scores that reflect the reliability and biological significance of each interaction. Most previously described scoring methods use a set of likely true positives to train a model to score all interactions in a dataset. A single positive training set, however, may be biased and not representative of true interaction space. We demonstrate a method to score protein interactions by utilizing multiple independent sets of training positives to reduce the potential bias inherent in using a single training set. We used a set of benchmark yeast protein interactions to show that our approach outperforms other scoring methods. Our approach can also score interactions across data types, which makes it more widely applicable than many previously proposed methods. We applied the method to protein interaction data from both Drosophila melanogaster and Homo sapiens. Independent evaluations show that the resulting confidence scores accurately reflect the biological significance of the interactions.
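    One simple realization of the multiple-positive-training-set idea is to train one model per independent positive set, against sampled negatives, and average the resulting probabilities into a consensus confidence score. The sketch below is a simplified stand-in for the published scheme; the features, set sizes and choice of logistic regression are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def multi_set_confidence(X_all, positive_sets, negative_pool, seed=0):
    """Train one classifier per independent positive training set (against
    sampled negatives) and average the probabilities into one score."""
    rng = np.random.default_rng(seed)
    scores = []
    for pos_idx in positive_sets:
        neg_idx = rng.choice(negative_pool, size=len(pos_idx), replace=False)
        idx = np.concatenate([pos_idx, neg_idx])
        y = np.concatenate([np.ones(len(pos_idx)), np.zeros(len(neg_idx))])
        model = LogisticRegression(max_iter=1000).fit(X_all[idx], y)
        scores.append(model.predict_proba(X_all)[:, 1])
    return np.mean(scores, axis=0)       # consensus confidence per interaction

X_all = np.random.default_rng(1).random((200, 4))   # invented interaction features
conf = multi_set_confidence(X_all, [np.arange(0, 20), np.arange(20, 40)],
                            negative_pool=np.arange(100, 200))
```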

  1. Gravity Waves Generated by Convection: A New Idealized Model Tool and Direct Validation with Satellite Observations

    NASA Astrophysics Data System (ADS)

    Alexander, M. Joan; Stephan, Claudia

    2015-04-01

    In climate models, gravity waves remain too poorly resolved to be directly modelled. Instead, simplified parameterizations are used to include gravity wave effects on model winds. A few climate models link some of the parameterized waves to convective sources, providing a mechanism for feedback between changes in convection and gravity wave-driven changes in circulation in the tropics and above high-latitude storms. These convective wave parameterizations are based on limited case studies with cloud-resolving models, but they are poorly constrained by observational validation, and tuning parameters have large uncertainties. Our new work distills results from complex, full-physics cloud-resolving model studies to essential variables for gravity wave generation. We use the Weather Research and Forecasting (WRF) model to study how precipitation, latent heating/cooling, and other cloud properties relate to the spectrum of gravity wave momentum flux above midlatitude storm systems. Results show the gravity wave spectrum is surprisingly insensitive to the representation of microphysics in WRF. This is good news for use of these models for gravity wave parameterization development since microphysical properties are a key uncertainty. We further use the full-physics cloud-resolving model as a tool to directly link observed precipitation variability to gravity wave generation. We show that waves in an idealized model forced with radar-observed precipitation can quantitatively reproduce instantaneous satellite-observed features of the gravity wave field above storms, which is a powerful validation of our understanding of waves generated by convection. The idealized model directly links observations of surface precipitation to observed waves in the stratosphere, and the simplicity of the model permits deep/large-area domains for studies of wave-mean flow interactions. This unique validated model tool permits quantitative studies of gravity wave driving of regional

  2. Examination of the MMPI-2 restructured form (MMPI-2-RF) validity scales in civil forensic settings: findings from simulation and known group samples.

    PubMed

    Wygant, Dustin B; Ben-Porath, Yossef S; Arbisi, Paul A; Berry, David T R; Freeman, David B; Heilbronner, Robert L

    2009-11-01

    The current study examined the effectiveness of the MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath and Tellegen, 2008) over-reporting indicators in civil forensic settings. The MMPI-2-RF includes three revised MMPI-2 over-reporting validity scales and a new scale to detect over-reported somatic complaints. Participants dissimulated medical and neuropsychological complaints in two simulation samples, and a known-groups sample used symptom validity tests (SVTs) as a response bias criterion. Results indicated large effect sizes for the MMPI-2-RF validity scales, including a Cohen's d of .90 for Fs in a head injury simulation sample, 2.31 for FBS-r, 2.01 for F-r, and 1.97 for Fs in a medical simulation sample, and 1.45 for FBS-r and 1.30 for F-r in identifying poor effort on SVTs. Classification results indicated good sensitivity and specificity for the scales across the samples. This study indicates that the MMPI-2-RF over-reporting validity scales are effective at detecting symptom over-reporting in civil forensic settings.

  3. Computational model for calculating the dynamical behaviour of generators caused by unbalanced magnetic pull and experimental validation

    NASA Astrophysics Data System (ADS)

    Pennacchi, Paolo

    2008-04-01

    The modelling of the unbalanced magnetic pull (UMP) in generators and the experimental validation of the proposed method are presented in this paper. The UMP is one of the most remarkable effects of electromechanical interactions in rotating machinery. As a consequence of rotor eccentricity, the imbalance of the electromagnetic forces acting between rotor and stator generates a net radial force. This phenomenon can be avoided by careful assembly and manufacture in small, stiff machines such as electrical motors. In contrast, eccentricity of the active part of the rotor with respect to the stator is unavoidable in large power-plant generators, because they operate above their first critical speed and are supported by oil-film bearings. In the first part of the paper, a method for calculating the UMP force is described. This model is more general than those available in the literature, which are limited to circular orbits. The model is based on the actual position of the rotor inside the stator, and therefore on the actual air-gap distribution, regardless of the orbit type. The closed form of the nonlinear UMP force components is presented. In the second part, the experimental validation of the proposed model is presented. The time-domain dynamical behaviour of a power-plant steam turbo-generator is considered, and it is shown that the model is able to reproduce the dynamical effects due to the excitation of the magnetic field in the generator.

  4. Clinical Validation of Targeted Next Generation Sequencing for Colon and Lung Cancers

    PubMed Central

    D’Haene, Nicky; Le Mercier, Marie; De Nève, Nancy; Blanchard, Oriane; Delaunoy, Mélanie; El Housni, Hakim; Dessars, Barbara; Heimann, Pierre; Remmelink, Myriam; Demetter, Pieter; Tejpar, Sabine; Salmon, Isabelle

    2015-01-01

    Objective Recently, Next Generation Sequencing (NGS) has begun to supplant other technologies for gene mutation testing that is now required for targeted therapies. However, transfer of NGS technology to clinical daily practice requires validation. Methods We validated the Ion Torrent AmpliSeq Colon and Lung cancer panel interrogating 1850 hotspots in 22 genes using the Ion Torrent Personal Genome Machine. First, we used commercial reference standards that carry mutations at defined allelic frequency (AF). Then, 51 colorectal adenocarcinomas (CRC) and 39 non-small cell lung carcinomas (NSCLC) were retrospectively analyzed. Results Sensitivity and accuracy for detecting variants at an AF >4% were 100% for commercial reference standards. Among the 90 cases, 89 (98.9%) were successfully sequenced. Among the 86 samples for which NGS and the reference test were both informative, 83 showed concordant results between NGS and the reference test; i.e. KRAS and BRAF for CRC and EGFR for NSCLC, with the 3 discordant cases each characterized by an AF <10%. Conclusions Overall, the AmpliSeq colon/lung cancer panel was specific and sensitive for mutation analysis of gene panels and can be incorporated into clinical daily practice. PMID:26366557

  5. Studying primate cognition in a social setting to improve validity and welfare: a literature review highlighting successful approaches.

    PubMed

    Cronin, Katherine A; Jacobson, Sarah L; Bonnie, Kristin E; Hopper, Lydia M

    2017-01-01

    Studying animal cognition in a social setting is associated with practical and statistical challenges. However, conducting cognitive research without disturbing species-typical social groups can increase ecological validity, minimize distress, and improve animal welfare. Here, we review the existing literature on cognitive research run with primates in a social setting in order to determine how widespread such testing is and highlight approaches that may guide future research planning. Using Google Scholar to search the terms "primate" "cognition" "experiment" and "social group," we conducted a systematic literature search covering 16 years (2000-2015 inclusive). We then conducted two supplemental searches within each journal that contained a publication meeting our criteria in the original search, using the terms "primate" and "playback" in one search and the terms "primate" "cognition" and "social group" in the second. The results were used to assess how frequently nonhuman primate cognition has been studied in a social setting (>3 individuals), to gain perspective on the species and topics that have been studied, and to extract successful approaches for social testing. Our search revealed 248 unique publications in 43 journals encompassing 71 species. The absolute number of publications has increased over the years, suggesting viable strategies for studying cognition in social settings. While a wide range of species were studied, they were not equally represented, with 19% of the publications reporting data for chimpanzees. Field sites were the most common environment for experiments run in social groups of primates, accounting for more than half of the results. Approaches to mitigating the practical and statistical challenges were identified. This analysis has revealed that the study of primate cognition in a social setting is increasing and taking place across a range of environments. This literature review calls attention to examples that may provide valuable

  6. Studying primate cognition in a social setting to improve validity and welfare: a literature review highlighting successful approaches

    PubMed Central

    Jacobson, Sarah L.; Bonnie, Kristin E.; Hopper, Lydia M.

    2017-01-01

    Background Studying animal cognition in a social setting is associated with practical and statistical challenges. However, conducting cognitive research without disturbing species-typical social groups can increase ecological validity, minimize distress, and improve animal welfare. Here, we review the existing literature on cognitive research run with primates in a social setting in order to determine how widespread such testing is and highlight approaches that may guide future research planning. Survey Methodology Using Google Scholar to search the terms “primate” “cognition” “experiment” and “social group,” we conducted a systematic literature search covering 16 years (2000–2015 inclusive). We then conducted two supplemental searches within each journal that contained a publication meeting our criteria in the original search, using the terms “primate” and “playback” in one search and the terms “primate” “cognition” and “social group” in the second. The results were used to assess how frequently nonhuman primate cognition has been studied in a social setting (>3 individuals), to gain perspective on the species and topics that have been studied, and to extract successful approaches for social testing. Results Our search revealed 248 unique publications in 43 journals encompassing 71 species. The absolute number of publications has increased over the years, suggesting viable strategies for studying cognition in social settings. While a wide range of species were studied, they were not equally represented, with 19% of the publications reporting data for chimpanzees. Field sites were the most common environment for experiments run in social groups of primates, accounting for more than half of the results. Approaches to mitigating the practical and statistical challenges were identified. Discussion This analysis has revealed that the study of primate cognition in a social setting is increasing and taking place across a range of

  7. Validity of verbal autopsy method to determine causes of death among adults in the urban setting of Ethiopia

    PubMed Central

    2012-01-01

    Background Verbal autopsy has been widely used to estimate causes of death in settings with inadequate vital registries, but little is known about its validity. This analysis was part of the Addis Ababa Mortality Surveillance Program to examine the validity of verbal autopsy for determining causes of death compared with hospital medical records among adults in the urban setting of Ethiopia. Methods This validation study consisted of comparison of verbal autopsy final diagnosis with hospital diagnosis taken as a “gold standard”. In public and private hospitals of Addis Ababa, 20,152 adult deaths (15 years and above) were recorded between 2007 and 2010. Within the same period, a verbal autopsy was conducted for 4,776 adult deaths, of which 1,356 occurred in Addis Ababa hospitals. Then, verbal autopsy and hospital data sets were merged using the variables: full name of the deceased, sex, address, age, and place and date of death. We calculated sensitivity, specificity and positive predictive values with 95% confidence interval. Results After merging, a total of 335 adult deaths were captured. For communicable diseases, the values of sensitivity, specificity and positive predictive values of verbal autopsy diagnosis were 79%, 78% and 68% respectively. For non-communicable diseases, sensitivity of the verbal autopsy diagnoses was 69%, specificity 78% and positive predictive value 79%. Regarding injury, sensitivity of the verbal autopsy diagnoses was 70%, specificity 98% and positive predictive value 83%. Higher sensitivity was achieved for HIV/AIDS and tuberculosis, but lower specificity with relatively more false positives. Conclusion These findings may indicate the potential of verbal autopsy to provide cost-effective information to guide policy on the double burden of communicable and non-communicable diseases among adults in Ethiopia. Thus, a well structured verbal autopsy method, followed by qualified physician reviews could be capable of providing reasonable cause
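
    The sensitivity/specificity/PPV calculations described translate directly to code; the sketch below uses a normal-approximation confidence interval and hypothetical 2x2 counts (the paper's exact CI method is not stated in the abstract).

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Proportion with a normal-approximation 95% CI (a common choice;
    the paper's exact CI method is not stated in the abstract)."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical 2x2 counts: verbal autopsy vs. hospital "gold standard".
tp, fp, fn, tn = 90, 42, 24, 179

for name, (p, lo, hi) in [("sensitivity", proportion_ci(tp, tp + fn)),
                          ("specificity", proportion_ci(tn, tn + fp)),
                          ("PPV", proportion_ci(tp, tp + fp))]:
    print(f"{name}: {p:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```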

  8. Evaluation of Measurement Instrument Criterion Validity in Finite Mixture Settings

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Li, Tenglong

    2016-01-01

    A method for evaluating the validity of multicomponent measurement instruments in heterogeneous populations is discussed. The procedure can be used for point and interval estimation of criterion validity of linear composites in populations representing mixtures of an unknown number of latent classes. The approach also permits the evaluation of…

  9. Validation of the second-generation Olympus colonoscopy simulator for skills assessment.

    PubMed

    Haycock, A V; Bassett, P; Bladen, J; Thomas-Gibson, S

    2009-11-01

    Simulators have potential value in providing objective evidence of technical skill for procedures within medicine. The aim of this study was to determine face and construct validity for the Olympus colonoscopy simulator and to establish which assessment measures map to clinical benchmarks of expertise. Thirty-four participants were recruited: 10 novices with no prior colonoscopy experience, 13 intermediate (trainee) endoscopists with fewer than 1000 previous colonoscopies, and 11 experienced endoscopists with more than 1000 previous colonoscopies. All participants completed three standardized cases on the simulator and experts gave feedback regarding the realism of the simulator. Forty metrics recorded automatically by the simulator were analyzed for their ability to distinguish between the groups. The simulator discriminated participants by experience level for 22 different parameters. Completion rates were lower for novices than for trainees and experts (37% vs. 79% and 88% respectively, P < 0.001) and both novices and trainees took significantly longer to reach all major landmarks than the experts. Several technical aspects of competency were discriminatory: pushing with an embedded tip (P = 0.03), correct use of the variable stiffness function (P = 0.004), number of sigmoid N-loops (P = 0.02), size of sigmoid N-loops (P = 0.01), and time to remove alpha loops (P = 0.004). Out of 10, experts rated the realism of movement at 6.4, force feedback at 6.6, looping at 6.6, and loop resolution at 6.8. The Olympus colonoscopy simulator has good face validity and excellent construct validity. It provides an objective assessment of colonoscopic skill on multiple measures and benchmarks have been set to allow its use as both a formative and a summative assessment tool.

  10. Developmental validation of the MiSeq FGx Forensic Genomics System for Targeted Next Generation Sequencing in Forensic DNA Casework and Database Laboratories.

    PubMed

    Jäger, Anne C; Alvarez, Michelle L; Davis, Carey P; Guzmán, Ernesto; Han, Yonmee; Way, Lisa; Walichiewicz, Paulina; Silva, David; Pham, Nguyen; Caves, Glorianna; Bruand, Jocelyne; Schlesinger, Felix; Pond, Stephanie J K; Varlaro, Joe; Stephens, Kathryn M; Holt, Cydne L

    2017-05-01

    Human DNA profiling using PCR at polymorphic short tandem repeat (STR) loci followed by capillary electrophoresis (CE) size separation and length-based allele typing has been the standard in the forensic community for over 20 years. Over the last decade, Next-Generation Sequencing (NGS) has matured rapidly, bringing modern advantages to forensic DNA analysis. The MiSeq FGx™ Forensic Genomics System, comprised of the ForenSeq™ DNA Signature Prep Kit, MiSeq FGx™ Reagent Kit, MiSeq FGx™ instrument and ForenSeq™ Universal Analysis Software, uses PCR to simultaneously amplify up to 231 forensic loci in a single multiplex reaction. Targeted loci include Amelogenin, 27 common forensic autosomal STRs, 24 Y-STRs, 7 X-STRs and three classes of single nucleotide polymorphisms (SNPs). The ForenSeq™ kit includes two primer sets: Amelogenin, 58 STRs and 94 identity-informative SNPs (iiSNPs) are amplified using DNA Primer Set A (DPMA; 153 loci); if a laboratory chooses to generate investigative leads using DNA Primer Set B, amplification is targeted to the 153 loci in DPMA plus 22 phenotype-informative SNPs (piSNPs) and 56 biogeographical-ancestry SNPs (aiSNPs). High-resolution genotypes, including detection of intra-STR sequence variants, are semi-automatically generated with the ForenSeq™ software. This system was subjected to developmental validation studies according to the 2012 Revised SWGDAM Validation Guidelines. A two-step PCR first amplifies the target forensic STR and SNP loci (PCR1); unique, sample-specific indexed adapters or "barcodes" are attached in PCR2. Approximately 1736 ForenSeq™ reactions were analyzed. Studies include DNA substrate testing (cotton swabs, FTA cards, filter paper), species studies from a range of nonhuman organisms, DNA input sensitivity studies from 1ng down to 7.8pg, two-person human DNA mixture testing with three genotype combinations, stability analysis of partially degraded DNA, and effects of five commonly encountered PCR

  11. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

    Automatic generation control (AGC) is a key technology for maintaining the real-time balance between power generation and load, and for ensuring the quality of power supply. Power grids require each power generation unit to have satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly used method of calculating these indices is based on particular data samples from AGC responses and can lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model relating the performance indices to the load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.
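
    The abstract does not specify the regression form; as a minimal sketch of the idea, a quadratic fit of a performance index against load command is shown below, with all data invented for illustration.

```python
import numpy as np

# Hypothetical data: load command vs. a measured AGC performance index;
# the paper's actual indices and regression form are not given in the abstract.
load_cmd = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0])  # MW
perf_idx = np.array([0.95, 0.91, 0.84, 0.80, 0.71, 0.66])

coeffs = np.polyfit(load_cmd, perf_idx, deg=2)  # simple nonlinear (quadratic) fit
predict = np.poly1d(coeffs)
print(f"predicted index at 90 MW: {predict(90.0):.3f}")
```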

  12. Development and validation of a socioculturally competent trust in physician scale for a developing country setting

    PubMed Central

    Gopichandran, Vijayaprasad; Wouters, Edwin; Chetlapalli, Satish Kumar

    2015-01-01

    Trust in physicians is the unwritten covenant between the patient and the physician that the physician will do what is in the best interest of the patient. This forms the undercurrent of all healthcare relationships. Several scales exist for assessment of trust in physicians in developed healthcare settings, but to our knowledge none of these have been developed in a developing country context. Objectives To develop and validate a new trust in physician scale for a developing country setting. Methods Dimensions of trust in physicians, which were identified in a previous qualitative study in the same setting, were used to develop a scale. This scale was administered among 616 adults selected from urban and rural areas of Tamil Nadu, south India, using a multistage-sampling, cross-sectional survey method. The individual items were analysed using a classical test approach as well as item response theory. Cronbach's α was calculated and the item-total correlation of each item was assessed. After testing for unidimensionality and absence of local dependence, a two-parameter logistic Samejima graded response model was fit and item characteristics assessed. Results Competence, assurance of treatment, respect for the physician and loyalty to the physician were important dimensions of trust. A total of 31 items were developed using these dimensions. Of these, 22 were selected for final analysis. The Cronbach's α was 0.928. The item-total correlations were acceptable for all 22 items. The item response analysis revealed good item characteristic curves and item information for all the items. Based on the item parameters and item information, a final 12 item scale was developed. The scale performs optimally in the low to moderate trust range. Conclusions The final 12 item trust in physician scale has good construct validity and internal consistency. PMID:25941182
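
    Cronbach's α, reported above as 0.928, is straightforward to compute from an item-score matrix; the sketch below uses hypothetical responses.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses from 6 respondents to 4 trust items.
responses = np.array([[4, 5, 4, 4], [3, 3, 2, 3], [5, 5, 5, 4],
                      [2, 3, 2, 2], [4, 4, 5, 4], [3, 2, 3, 3]])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```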

  13. Factor Structure and Validation of a Set of Readiness Measures.

    ERIC Educational Resources Information Center

    Kaufman, Maurice; Lynch, Mervin

    A study was undertaken to identify the factor structure of a battery of readiness measures and to demonstrate the concurrent and predictive validity of one instrument in that battery--the Pre-Reading Screening Procedures (PSP). Concurrent validity was determined by examining the correlation of the PSP with the Metropolitan Readiness Test (MRT),…

  14. Cross-cultural validation of Lupus Impact Tracker in five European clinical practice settings.

    PubMed

    Schneider, Matthias; Mosca, Marta; Pego-Reigosa, José-Maria; Gunnarsson, Iva; Maurel, Frédérique; Garofano, Anna; Perna, Alessandra; Porcasi, Rolando; Devilliers, Hervé

    2017-05-01

    The aim was to evaluate the cross-cultural validity of the Lupus Impact Tracker (LIT) in five European countries and to assess its acceptability and feasibility from the patient and physician perspectives. A prospective, observational, cross-sectional and multicentre validation study was conducted in clinical settings. Before the visit, patients completed LIT, Short Form 36 (SF-36) and care satisfaction questionnaires. During the visit, physicians assessed disease activity [Safety of Estrogens in Lupus Erythematosus National Assessment (SELENA)-SLEDAI], organ damage [SLICC/ACR damage index (SDI)] and flare occurrence. Cross-cultural validity was assessed using the Differential Item Functioning method. Five hundred and sixty-nine SLE patients were included by 25 specialists; 91.7% were outpatients and 89.9% female, with mean age 43.5 (13.0) years. Disease profile was as follows: 18.3% experienced flares; mean SELENA-SLEDAI score 3.4 (4.5); mean SDI score 0.8 (1.4); and SF-36 mean physical and mental component summary scores: physical component summary 42.8 (10.8) and mental component summary 43.0 (12.3). Mean LIT score was 34.2 (22.3) (median: 32.5), indicating that lupus moderately impacted patients' daily life. A cultural Differential Item Functioning of negligible magnitude was detected across countries (pseudo-R2 difference of 0.01–0.04). Differences were observed between LIT scores and Physician Global Assessment, SELENA-SLEDAI, SDI scores = 0 (P < 0.035) and absence of flares (P = 0.004). The LIT showed a strong association with SF-36 physical and social role functioning, vitality, bodily pain and mental health (P < 0.001). The LIT was well accepted by patients and physicians. It was reliable, with Cronbach α coefficients ranging from 0.89 to 0.92 among countries. The LIT is validated in the five participating European countries. The results show its reliability and cultural invariability across countries. They suggest that LIT can be used in routine

  15. Development and validation of a registry-based definition of eosinophilic esophagitis in Denmark

    PubMed Central

    Dellon, Evan S; Erichsen, Rune; Pedersen, Lars; Shaheen, Nicholas J; Baron, John A; Sørensen, Henrik T; Vyberg, Mogens

    2013-01-01

    AIM: To develop and validate a case definition of eosinophilic esophagitis (EoE) in the linked Danish health registries. METHODS: For case definition development, we queried the Danish medical registries from 2006-2007 to identify candidate cases of EoE in Northern Denmark. All International Classification of Diseases-10 (ICD-10) and prescription codes were obtained, and archived pathology slides were obtained and re-reviewed to determine case status. We used an iterative process to select inclusion/exclusion codes, refine the case definition, and optimize sensitivity and specificity. We then re-queried the registries from 2008-2009 to yield a validation set. The case definition algorithm was applied, and sensitivity and specificity were calculated. RESULTS: Of the 51 and 49 candidate cases identified in both the development and validation sets, 21 and 24 had EoE, respectively. Characteristics of EoE cases in the development set [mean age 35 years; 76% male; 86% dysphagia; 103 eosinophils per high-power field (eos/hpf)] were similar to those in the validation set (mean age 42 years; 83% male; 67% dysphagia; 77 eos/hpf). Re-review of archived slides confirmed that the pathology coding for esophageal eosinophilia was correct in greater than 90% of cases. Two registry-based case algorithms based on pathology, ICD-10, and pharmacy codes were successfully generated in the development set, one that was sensitive (90%) and one that was specific (97%). When these algorithms were applied to the validation set, they remained sensitive (88%) and specific (96%). CONCLUSION: Two registry-based definitions, one highly sensitive and one highly specific, were developed and validated for the linked Danish national health databases, making future population-based studies feasible. PMID:23382628
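
    The abstract names the ingredients of the case algorithms (pathology, ICD-10, and pharmacy codes) but not the rules themselves; the sketch below shows only the general shape of a sensitive vs. specific registry definition, with every code and threshold hypothetical.

```python
def eoe_case(record, specific=True):
    """Hypothetical registry rule combining pathology, ICD-10, and pharmacy
    codes; the published algorithms' actual codes and thresholds differ."""
    path_ok = record["eos_per_hpf"] >= 15                      # pathology criterion
    dx_ok = "K20.0" in record["icd10_codes"]                   # diagnosis code
    rx_ok = bool(record["rx_codes"] & {"PPI", "TOP_STEROID"})  # pharmacy codes
    if specific:  # high-specificity variant: require all three criteria
        return path_ok and dx_ok and rx_ok
    return path_ok and (dx_ok or rx_ok)  # high-sensitivity variant

record = {"eos_per_hpf": 80, "icd10_codes": {"K20.0"}, "rx_codes": {"PPI"}}
print(eoe_case(record), eoe_case(record, specific=False))
```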

  16. Automatic, semi-automatic and manual validation of urban drainage data.

    PubMed

    Branisavljević, N; Prodanović, D; Pavlović, D

    2010-01-01

    Advances in sensor technology and the possibility of automated long-distance data transmission have made continuous measurements the preferred way of monitoring urban drainage processes. Usually, the collected data have to be processed by an expert in order to detect and mark erroneous values, remove them, and replace them with interpolated data. In general, this first step of detecting anomalous data is called data quality assessment or data validation. Data validation consists of three parts: data preparation, validation score generation, and score interpretation. This paper presents an overall framework for a data quality improvement system suitable for automatic, semi-automatic, or manual operation. The first two steps of the validation process are explained in more detail, using several validation methods on the same set of real-case data from the Belgrade sewer system. The final part of the validation process, score interpretation, needs further investigation within the developed system.
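
    As a concrete illustration of the score-generation step, the sketch below assigns each sample a validity score in [0, 1] from a rolling z-score check; the window length, threshold, and data are illustrative assumptions, and the paper combines several such methods rather than this one alone.

```python
import numpy as np

def validation_scores(series, window=24, z_max=4.0):
    """Score each sample in [0, 1]; low scores flag likely-bad data.
    A rolling z-score check, one of several possible scoring methods;
    window and threshold are illustrative."""
    series = np.asarray(series, dtype=float)
    scores = np.ones_like(series)
    for i in range(len(series)):
        ref = series[max(0, i - window):i]
        if len(ref) >= 3 and ref.std() > 0:
            z = abs(series[i] - ref.mean()) / ref.std()
            scores[i] = max(0.0, 1.0 - z / z_max)
    return scores  # interpretation (accept/flag/reject) is a separate step

flow = np.r_[np.random.default_rng(1).normal(10, 0.5, 48), [35.0]]  # spike
print(validation_scores(flow)[-1])  # near 0: flagged for review
```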

  17. Endogenous protein "barcode" for data validation and normalization in quantitative MS analysis.

    PubMed

    Lee, Wooram; Lazar, Iulia M

    2014-07-01

    Quantitative proteomic experiments with mass spectrometry detection are typically conducted by using stable isotope labeling and label-free quantitation approaches. Proteins with housekeeping functions and stable expression levels, such as actin, tubulin, and glyceraldehyde-3-phosphate dehydrogenase, are frequently used as endogenous controls. Recent studies have shown that the expression level of such common housekeeping proteins is, in fact, dependent on various factors such as cell type, cell cycle, or disease status and can change in response to biochemical stimulation. The interference of such phenomena can, therefore, substantially compromise their use for data validation, alter the interpretation of results, and lead to erroneous conclusions. In this work, we advance the concept of a protein "barcode" for data normalization and validation in quantitative proteomic experiments. The barcode comprises a novel set of proteins that was generated from cell cycle experiments performed with MCF7, an estrogen receptor positive breast cancer cell line, and MCF10A, a nontumorigenic immortalized breast cell line. The protein set was selected from a list of ~3700 proteins identified in different cellular subfractions and cell cycle stages of MCF7/MCF10A cells, based on the stability of spectral count data generated with an LTQ ion trap mass spectrometer. A total of 11 proteins qualified as endogenous standards for the nuclear barcode and 62 for the cytoplasmic barcode. The validation of the protein sets was performed with a complementary SKBR3/Her2+ cell line.
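
    The abstract describes selection by stability of spectral counts; a minimal sketch of one plausible stability filter (coefficient of variation across conditions, with a hypothetical cutoff) is shown below. The counts and the 0.2 threshold are invented for illustration and are not the paper's criteria.

```python
import numpy as np

def stable_proteins(counts, protein_ids, cv_max=0.2):
    """counts: (n_conditions, n_proteins) spectral counts across cell cycle
    stages/fractions; keep proteins whose coefficient of variation is low."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean(axis=0)
    cv = counts.std(axis=0, ddof=1) / np.where(mean > 0, mean, np.nan)
    return [p for p, keep in zip(protein_ids, cv < cv_max) if keep]

counts = np.array([[50, 12, 300],   # hypothetical spectral counts in three
                   [52, 30, 310],   # conditions for proteins P1, P2, P3
                   [49, 5, 295]])
print(stable_proteins(counts, ["P1", "P2", "P3"]))  # -> ['P1', 'P3']
```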

  18. De-MetaST-BLAST: A Tool for the Validation of Degenerate Primer Sets and Data Mining of Publicly Available Metagenomes

    PubMed Central

    Gulvik, Christopher A.; Effler, T. Chad; Wilhelm, Steven W.; Buchan, Alison

    2012-01-01

    Development and use of primer sets to amplify nucleic acid sequences of interest is fundamental to studies spanning many life science disciplines. As such, the validation of primer sets is essential. Several computer programs have been created to aid in the initial selection of primer sequences that may or may not require multiple nucleotide combinations (i.e., degeneracies). Conversely, validation of primer specificity has remained largely unchanged for several decades, and there are currently few available programs that allow for evaluation of primers containing degenerate nucleotide bases. To address this gap, we developed the program De-MetaST, which performs an in silico amplification using user-defined nucleotide sequence dataset(s) and primer sequences that may contain degenerate bases. The program returns an output file that contains the in silico amplicons. When De-MetaST is paired with NCBI’s BLAST (De-MetaST-BLAST), the program also returns the top 10 nr NCBI database hits for each recovered in silico amplicon. While the original motivation for development of this search tool was degenerate primer validation using the wealth of nucleotide sequences available in environmental metagenome and metatranscriptome databases, this search tool has potential utility in many data mining applications. PMID:23189198
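
    The core of such an in silico amplification is expanding IUPAC degenerate bases into character classes and scanning target sequences for primer-bounded spans. The sketch below shows that idea in generic form; it is not De-MetaST's implementation, the primers and sequence are made up, and the reverse primer is assumed to be supplied already reverse-complemented.

```python
import re

# IUPAC degenerate-base expansions used to turn a primer into a regex.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "S": "[GC]", "W": "[AT]",
         "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
         "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

def in_silico_amplicons(fwd, rev_rc, sequence, max_len=2000):
    """Return spans bounded by the forward primer and the (already
    reverse-complemented) reverse primer; both may contain degeneracies."""
    pattern = ("".join(IUPAC[b] for b in fwd)
               + f"[ACGT]{{0,{max_len}}}?"
               + "".join(IUPAC[b] for b in rev_rc))
    return [m.group(0) for m in re.finditer(pattern, sequence)]

seq = "TTGACGTGGATCCAAGGTTCCATGCAAGTTCGA"
print(in_silico_amplicons("GAYGTG", "ATGCAAR", seq))
```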

  19. Effects of number of training generations on genomic prediction for various traits in a layer chicken population.

    PubMed

    Weng, Ziqing; Wolc, Anna; Shen, Xia; Fernando, Rohan L; Dekkers, Jack C M; Arango, Jesus; Settar, Petek; Fulton, Janet E; O'Sullivan, Neil P; Garrick, Dorian J

    2016-03-19

    Genomic estimated breeding values (GEBV) based on single nucleotide polymorphism (SNP) genotypes are widely used in animal improvement programs. It is typically assumed that the larger the training set, the higher the prediction accuracy of GEBV. The aim of this study was to quantify genomic prediction accuracy depending on the number of ancestral generations included in the training set, and to determine the optimal number of training generations for different traits in an elite layer breeding line. Phenotypic records for 16 traits on 17,793 birds were used. All parents and some selection candidates from nine non-overlapping generations were genotyped for 23,098 segregating SNPs. An animal model with pedigree relationships (PBLUP) and the BayesB genomic prediction model were applied to predict EBV or GEBV at each validation generation (progeny of the most recent training generation) based on varying numbers of immediately preceding ancestral generations. Prediction accuracy of EBV or GEBV was assessed as the correlation between EBV and phenotypes adjusted for fixed effects, divided by the square root of trait heritability. The optimal number of training generations that resulted in the greatest prediction accuracy of GEBV was determined for each trait. The relationship between optimal number of training generations and heritability was investigated. On average, accuracies were higher with the BayesB model than with PBLUP. Prediction accuracies of GEBV increased as the number of closely-related ancestral generations included in the training set increased, but reached an asymptote or slightly decreased when distant ancestral generations were used in the training set. The optimal number of training generations was 4 or more for high heritability traits but less than that for low heritability traits. For less heritable traits, limiting the training datasets to individuals closely related to the validation population resulted in the best
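
    The accuracy definition in the abstract translates directly into code; below is a small sketch with simulated (hypothetical) breeding values and phenotypes, where h2 is the trait heritability used in the denominator.

```python
import numpy as np

def prediction_accuracy(gebv, pheno_adj, h2):
    """Accuracy as defined in the abstract: corr(GEBV, adjusted phenotype)
    divided by the square root of the trait heritability."""
    r = np.corrcoef(gebv, pheno_adj)[0, 1]
    return r / np.sqrt(h2)

rng = np.random.default_rng(2)
tbv = rng.normal(size=500)                     # hypothetical true breeding values
gebv = tbv + rng.normal(scale=0.8, size=500)   # noisy genomic predictions
y_adj = tbv + rng.normal(scale=1.2, size=500)  # phenotypes adjusted for fixed effects
print(f"accuracy = {prediction_accuracy(gebv, y_adj, h2=0.41):.2f}")
```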

  20. A technique for global monitoring of net solar irradiance at the ocean surface. II - Validation

    NASA Technical Reports Server (NTRS)

    Chertock, Beth; Frouin, Robert; Gautier, Catherine

    1992-01-01

    The generation and validation of the first satellite-based long-term record of surface solar irradiance over the global oceans are addressed. The record is generated using Nimbus-7 earth radiation budget (ERB) wide-field-of-view planetary-albedo data as input to a numerical algorithm designed and implemented based on radiative transfer theory. The mean monthly values of net surface solar irradiance are computed on a 9-deg latitude-longitude spatial grid for November 1978-October 1985. The new data set is validated in comparisons with short-term, regional, high-resolution, satellite-based records. The ERB-based values of net surface solar irradiance are compared with corresponding values based on radiance measurements taken by the Visible-Infrared Spin Scan Radiometer aboard GOES series satellites. Errors in the new data set are estimated to lie between 10 and 20 W/sq m on monthly time scales.

  1. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting.

    PubMed

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio-visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.

  2. EOS Terra Validation Program

    NASA Technical Reports Server (NTRS)

    Starr, David

    1999-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include ASTER, CERES, MISR, MODIS and MOPITT. In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra mission will be described with emphasis on derived geophysical parameters of most relevance to the atmospheric radiation community. Detailed information about the EOS Terra validation Program can be found on the EOS Validation program

  3. Validity of the Perceived Health Competence Scale in a UK primary care setting.

    PubMed

    Dempster, Martin; Donnelly, Michael

    2008-01-01

    The Perceived Health Competence Scale (PHCS) is a measure of self-efficacy regarding general health-related behaviour. This brief paper examines the psychometric properties of the PHCS in a UK context. Questionnaires containing the PHCS, the SF-36 and questions about perceived health needs were posted to 486 patients randomly selected from a GP practice list. Complete questionnaires were returned by 320 patients. Analyses of these responses provide strong evidence for the validity of the PHCS in this setting. Consequently, we conclude that the PHCS is a useful addition to measures of global self-efficacy and measures of self-efficacy regarding specific behaviours in the toolkit of health psychologists. This range of self-efficacy assessment tools will ensure that psychologists can match the level of specificity of the measure of expectancy beliefs to the level of specificity of the outcome of interest.

  4. The International Classification of Functioning (ICF) core set for breast cancer from the perspective of women with the condition.

    PubMed

    Cooney, Marese; Galvin, Rose; Connolly, Elizabeth; Stokes, Emma

    2013-05-01

    The ICF Core Set for breast cancer was generated by international experts for women who have had surgery and radiation, but it has not yet been validated. The objective of the study was to validate the ICF Core Set from the perspective of women with breast cancer. A qualitative focus group methodology was used. The sessions were transcribed verbatim. Meaning units were identified by two independent researchers. The agreed list was subsequently linked to ICF categories by two independent researchers according to pre-defined linking rules. Data saturation determined the number of focus groups conducted. Quality of the data analyses was assured by multiple coding and peer review. Thirty-four women participated in seven focus groups. A total of 1621 meaning units were identified, which were linked to 74 of the existing 80 Core Set categories. Additional ICF categories not currently included in the Core Set were identified by the women. The validity of the Core Set was largely supported. However, some categories currently not covered by the ICF Core Set for Breast Cancer will need to be considered for inclusion if the Core Set is to reflect all women who have had treatment for breast cancer.

  5. Requirements for a loophole-free photonic Bell test using imperfect setting generators

    NASA Astrophysics Data System (ADS)

    Kofler, Johannes; Giustina, Marissa; Larsson, Jan-Åke; Mitchell, Morgan W.

    2016-03-01

    Experimental violations of Bell inequalities are in general vulnerable to so-called loopholes. In this work, we analyze the characteristics of a loophole-free Bell test with photons, closing simultaneously the locality, freedom-of-choice, fair-sampling (i.e., detection), coincidence-time, and memory loopholes. We pay special attention to the effect of excess predictability in the setting choices due to nonideal random-number generators. We discuss necessary adaptations of the Clauser-Horne and Eberhard inequality when using such imperfect devices and—using Hoeffding's inequality and Doob's optional stopping theorem—the statistical analysis in such Bell tests.
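
    For reference, the Clauser-Horne inequality that the paper adapts is commonly written in terms of coincidence probabilities P(a_i, b_j) and singles probabilities P(a_i), P(b_j) for two settings per side (the Eberhard inequality is an equivalent count-based reformulation); the adapted forms accounting for excess setting predictability are given in the paper itself, not here:

```latex
-1 \;\le\; P(a_1,b_1) + P(a_1,b_2) + P(a_2,b_1) - P(a_2,b_2) - P(a_1) - P(b_1) \;\le\; 0
```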

  6. Generation of ELGA-compatible radiology reports from the Vienna Hospital Association's EHR system.

    PubMed

    Haider, Jasmin; Hölzl, Konrad; Toth, Herlinde; Duftschmid, Georg

    2014-01-01

    In the course of setting up the upcoming Austrian national shared EHR system ELGA, adaptors will have to be implemented for the local EHR systems of all participating healthcare providers. These adaptors must be able to transform EHR data from the internal format of the particular local EHR system to the specified format of the ELGA document types and vice versa. As part of an ongoing diploma thesis, we are currently developing a transformation application that will allow the generation of ELGA-compatible radiology reports from the local EHR system of the Vienna Hospital Association. So far, a first prototype has been developed and tested with six radiology reports. It generates technically valid ELGA radiology reports, apart from two errors reported by the ELGA online validator that appear to be bugs in the validator itself. A medical validation of the reports remains to be done.

  7. htsint: a Python library for sequencing pipelines that combines data through gene set generation.

    PubMed

    Richards, Adam J; Herrel, Anthony; Bonneaud, Camille

    2015-09-24

    Sequencing technologies provide a wealth of details in terms of genes, expression, splice variants, polymorphisms, and other features. A standard for sequencing analysis pipelines is to put genomic or transcriptomic features into a context of known functional information, but the relationships between ontology terms are often ignored. For RNA-Seq, considering genes and their genetic variants at the group level enables a convenient way to both integrate annotation data and detect small coordinated changes between experimental conditions, a known caveat of gene level analyses. We introduce the high throughput data integration tool, htsint, as an extension to the commonly used gene set enrichment frameworks. The central aim of htsint is to compile annotation information from one or more taxa in order to calculate functional distances among all genes in a specified gene space. Spectral clustering is then used to partition the genes, thereby generating functional modules. The gene space can range from a targeted list of genes, like a specific pathway, all the way to an ensemble of genomes. Given a collection of gene sets and a count matrix of transcriptomic features (e.g. expression, polymorphisms), the gene sets produced by htsint can be tested for 'enrichment' or conditional differences using one of a number of commonly available packages. The database and bundled tools to generate functional modules were designed with sequencing pipelines in mind, but the toolkit nature of htsint allows it to also be used in other areas of genomics. The software is freely available as a Python library through GitHub at https://github.com/ajrichards/htsint.
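
    The abstract's pipeline (functional distances over a gene space, then spectral clustering into modules) can be sketched generically as follows; the distance matrix here is synthetic, and the Gaussian affinity conversion is an assumption, not necessarily htsint's own choice.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(3)

# Synthetic symmetric functional-distance matrix for 12 genes
# (e.g. derived from annotation-based semantic similarity).
d = rng.uniform(0.2, 1.0, size=(12, 12))
d = (d + d.T) / 2
np.fill_diagonal(d, 0.0)
d[:6, :6] *= 0.2   # genes 0-5 are functionally close to one another
d[6:, 6:] *= 0.2   # genes 6-11 likewise

affinity = np.exp(-d ** 2 / d.std() ** 2)  # distances -> affinities (assumed kernel)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels)  # two functional modules
```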

  8. Making User-Generated Content Communities Work in Higher Education - The Importance of Setting Incentives

    NASA Astrophysics Data System (ADS)

    Vom Brocke, Jan; White, Cynthia; Walker, Ute; Vom Brocke, Christina

    The concept of User-Generated Content (UGC) offers impressive potential for innovative learning and teaching scenarios in higher education. Examples like Wikipedia and Facebook illustrate the enormous effects of multiple users world-wide contributing to a pool of shared resources, such as videos and pictures and also lexicographical descriptions. Apart from single examples, however, the systematic use of these virtual technologies in higher education still needs further exploration. Only few examples display the successful application of UGC Communities at university scenarios. We argue that a major reason for this can be seen in the fact that the organizational dimension of setting up UGC Communities has widely been neglected so far. In particular, we indicate the need for incentive setting to actively involve students and achieve specific pedagogical objectives. We base our study on organizational theories and derive strategies for incentive setting that have been applied in a practical e-Learning scenario involving students from Germany and New Zealand.

  9. Content validation of the international classification of functioning, disability and health core set for stroke from gender perspective using a qualitative approach.

    PubMed

    Glässel, A; Coenen, M; Kollerits, B; Cieza, A

    2014-06-01

    The extended ICF Core Set for stroke is an application of the International Classification of Functioning, Disability and Health (ICF) of the World Health Organisation (WHO) with the purpose to represent the typical spectrum of functioning of persons with stroke. The objective of the study is to add evidence to the content validity of the extended ICF Core Set for stroke from persons after stroke taking into account gender perspective. A qualitative study design was conducted by using individual interviews with women and men after stroke in an in- and outpatient rehabilitation setting. The sampling followed the maximum variation strategy. Sample size was determined by saturation. Concepts from qualitative data analysis were linked to ICF categories and compared to the extended ICF Core Set for stroke. Twelve women and 12 men participated in 24 individual interviews. In total, 143 out of 166 ICF categories included in the extended ICF Core Set for stroke were confirmed (women: N.=13; men: N.=17; both genders: N.=113). Thirty-eight additional categories that are not yet included in the extended ICF Core Set for stroke were raised by women and men. This study confirms that the experience of functioning and disability after stroke shows communalities and differences for women and men. The validity of the extended ICF Core Set for stroke could be mostly confirmed, since it does not only include those areas of functioning and disability relevant to both genders but also those exclusively relevant to either women or men. Further research is needed on ICF categories not yet included in the extended ICF Core Set for stroke.

  10. Losing Generations: Adolescents in High-Risk Settings.

    ERIC Educational Resources Information Center

    National Academy of Sciences - National Research Council, Washington, DC. Commission on Behavioral and Social Sciences and Education.

    By focusing on the settings and environments in which high-risk young people are living, this book fixes responsibility on society as a whole. High-risk settings do not just happen, but are the result of public policies and national choices. The Panel on High-Risk Youth of the National Research Council attempts to clarify forces tearing apart…

  11. Online cross-validation-based ensemble learning.

    PubMed

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database.
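
    A minimal sketch of the online cross-validation idea: each candidate is scored on every new batch before being updated with it, so the cumulative loss is an honest out-of-sample estimate, and the best-scoring candidate is selected. The candidates, data stream, and use of SGDRegressor are illustrative assumptions, not the authors' estimators.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(4)
candidates = {"small_lr": SGDRegressor(learning_rate="constant", eta0=0.001),
              "large_lr": SGDRegressor(learning_rate="constant", eta0=0.05)}
cv_loss = {name: 0.0 for name in candidates}

for batch in range(50):                  # streaming batches
    X = rng.normal(size=(32, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=32)
    for name, model in candidates.items():
        if batch > 0:                    # score on the new batch BEFORE updating:
            pred = model.predict(X)      # this is the online cross-validation step
            cv_loss[name] += ((pred - y) ** 2).mean()
        model.partial_fit(X, y)          # then update with the batch

best = min(cv_loss, key=cv_loss.get)     # discrete "selector" version
print(best, {k: round(v, 2) for k, v in cv_loss.items()})
```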

  12. Online Cross-Validation-Based Ensemble Learning

    PubMed Central

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2017-01-01

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. PMID:28474419

  13. A new test for the assessment of working memory in clinical settings: Validation and norming of a month ordering task.

    PubMed

    Buekenhout, Imke; Leitão, José; Gomes, Ana A

    2018-05-24

    Month ordering tasks have been used in experimental settings to obtain measures of working memory (WM) capacity in older/clinical groups based solely on their face validity. We sought to assess the appropriateness of using a month ordering task in other contexts, including clinical settings, as a psychometrically sound WM assessment. To this end, we constructed a month ordering task (ucMOT), studied its reliability (internal consistency and temporal stability), and gathered construct-related and criterion-related validity evidence for its use as a WM assessment. The ucMOT proved to be internally consistent and temporally stable, and analyses of the criterion-related validity evidence revealed that its scores predicted the efficiency of language comprehension processes known to depend crucially on WM resources, namely, processes involved in pronoun interpretation. Furthermore, all ucMOT items discriminated between younger and older age groups; the global scores were significantly correlated with scores on well-established WM tasks and presented lower correlations with instruments that evaluate different (although related) processes, namely, inhibition and processing speed. We conclude that the ucMOT possesses solid psychometric properties. Accordingly, we acquired normative data for the Portuguese population, which we present as a regression-based algorithm that yields z scores adjusted for age, gender, and years of formal education.
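
    The normative algorithm described (regression-based z scores adjusted for age, gender, and education) has the general shape sketched below; every coefficient is invented for illustration, since the published values come from the Portuguese norming sample.

```python
# Hypothetical normative regression: ucMOT score ~ age + education + gender.
# Every coefficient and the residual SD are invented for illustration; the
# published algorithm's values come from the Portuguese norming sample.
B0, B_AGE, B_EDU, B_FEMALE, RESID_SD = 14.0, -0.08, 0.25, 0.5, 2.1

def ucmot_z(score, age, years_edu, female):
    predicted = B0 + B_AGE * age + B_EDU * years_edu + B_FEMALE * female
    return (score - predicted) / RESID_SD  # demographically adjusted z score

print(round(ucmot_z(score=11, age=70, years_edu=6, female=1), 2))
```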

  14. Validation of "AW3D" Global Dsm Generated from Alos Prism

    NASA Astrophysics Data System (ADS)

    Takaku, Junichi; Tadono, Takeo; Tsutsui, Ken; Ichikawa, Mayumi

    2016-06-01

    The Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM), one of the onboard sensors carried by the Advanced Land Observing Satellite (ALOS), was designed to generate worldwide topographic data through optical stereoscopic observation. It had the exclusive ability to perform triplet stereo observations, viewing forward, nadir, and backward along the satellite track at 2.5 m ground resolution, and collected images all over the world during the satellite's mission life from 2006 through 2011. A new project to generate global elevation datasets from these image archives was started in 2014. The data are processed at an unprecedented 5 m grid spacing utilizing the original triplet stereo images at 2.5 m resolution. As processing has grown steadily to cover almost all global land areas, trends in global data quality have become apparent. This paper reports up-to-date validation results for the accuracy of the data products, as well as the status of global data coverage. The accuracies and error characteristics of the datasets are analyzed by comparison with existing global datasets such as Ice, Cloud, and land Elevation Satellite (ICESat) data, as well as ground control points (GCPs) and reference Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR).
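
    Validation against reference elevations of the kind described (ICESat, GCPs, airborne LiDAR) typically reduces to summary statistics over height differences; a generic sketch with hypothetical heights follows.

```python
import numpy as np

def dsm_error_stats(dsm_heights, ref_heights):
    """Vertical-accuracy summary for DSM validation against reference
    elevations (e.g. ICESat footprints, GCPs, or LiDAR DEM samples)."""
    err = np.asarray(dsm_heights, float) - np.asarray(ref_heights, float)
    return {"mean_error_m": err.mean(),               # bias
            "rmse_m": float(np.sqrt((err ** 2).mean())),
            "le90_m": float(np.percentile(np.abs(err), 90))}  # 90% linear error

dsm = [102.1, 250.4, 98.7, 310.2, 150.0]   # hypothetical DSM heights (m)
ref = [101.5, 251.0, 99.9, 309.0, 150.3]   # hypothetical reference heights (m)
print(dsm_error_stats(dsm, ref))
```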

  15. The Geostationary Operational Environmental Satellite (GOES) Product Generation System

    NASA Technical Reports Server (NTRS)

    Haines, S. L.; Suggs, R. J.; Jedlovec, G. J.

    2004-01-01

    The Geostationary Operational Environmental Satellite (GOES) Product Generation System (GPGS) is introduced and described. GPGS is a set of computer programs developed and maintained at the Global Hydrology and Climate Center and is designed to generate meteorological data products using visible and infrared measurements from the GOES-East Imager and Sounder instruments. The products that are produced by GPGS are skin temperature, total precipitable water, cloud top pressure, cloud albedo, surface albedo, and surface insolation. A robust cloud mask is also generated. The retrieval methodology for each product is described to include algorithm descriptions and required inputs and outputs for the programs. Validation is supplied where applicable.

  16. Adaption and validation of the Safety Attitudes Questionnaire for the Danish hospital setting

    PubMed Central

    Kristensen, Solvejg; Sabroe, Svend; Bartels, Paul; Mainz, Jan; Christensen, Karl Bang

    2015-01-01

    Purpose Measuring and developing a safe culture in health care is a focus point in creating highly reliable organizations being successful in avoiding patient safety incidents where these could normally be expected. Questionnaires can be used to capture a snapshot of an employee’s perceptions of patient safety culture. A commonly used instrument to measure safety climate is the Safety Attitudes Questionnaire (SAQ). The purpose of this study was to adapt the SAQ for use in Danish hospitals, assess its construct validity and reliability, and present benchmark data. Materials and methods The SAQ was translated and adapted for the Danish setting (SAQ-DK). The SAQ-DK was distributed to 1,263 staff members from 31 in- and outpatient units (clinical areas) across five somatic and one psychiatric hospitals through meeting administration, hand delivery, and mailing. Construct validity and reliability were tested in a cross-sectional study. Goodness-of-fit indices from confirmatory factor analysis were reported along with inter-item correlations, Cronbach’s alpha (α), and item and subscale scores. Results Participation was 73.2% (N=925) of invited health care workers. Goodness-of-fit indices from the confirmatory factor analysis showed: χ2=1496.76, P<0.001, CFI 0.901, RMSEA (90% CI) 0.053 (0.050–0.056), Probability RMSEA (p close)=0.057. Inter-scale correlations between the factors showed moderate-to-high correlations. The scale stress recognition had significant negative correlations with each of the other scales. Questionnaire reliability was high (α=0.89), and scale reliability ranged from α=0.70 to α=0.86 for the six scales. Proportions of participants with a positive attitude to each of the six SAQ scales did not differ between the somatic and psychiatric health care staff. Substantial variability at the unit level in all six scale mean scores was found within the somatic and the psychiatric samples. Conclusion SAQ-DK showed good construct validity and

  17. A computer-generated animated face stimulus set for psychophysiological research

    PubMed Central

    Naples, Adam; Nguyen-Phuc, Alyssa; Coffman, Marika; Kresse, Anna; Faja, Susan; Bernier, Raphael; McPartland, James

    2014-01-01

    Human faces are fundamentally dynamic, but experimental investigations of face perception traditionally rely on static images of faces. While naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this paper, we describe a novel set of computer-generated, dynamic face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, and the location and size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether there are 222 stimuli spanning 3 different categories of movement: (1) an affective movement (fearful face); (2) a neutral movement (close-lipped, puffed cheeks with open eyes); and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between expressions, we measured the occipital P100 event-related potential (ERP), which is known to reflect differences in early stages of visual processing, and the N170, which reflects structural encoding of faces. We found no differences between faces at the P100, indicating that the different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic faces, matched on low-level image characteristics, that is applicable to a range of research questions in social perception. PMID:25028164

  18. Norming the odd: creation, norming, and validation of a stimulus set for the study of incongruities across music and language.

    PubMed

    Featherstone, Cara R; Waterman, Mitch G; Morrison, Catriona M

    2012-03-01

    Research into similarities between music and language processing is currently experiencing a strong renewed interest. Recent methodological advances have led to neuroimaging studies presenting striking similarities between neural patterns associated with the processing of music and language--notably, in the study of participants' responses to elements that are incongruous with their musical or linguistic context. Responding to a call for greater systematicity by leading researchers in the field of music and language psychology, this article describes the creation, selection, and validation of a set of auditory stimuli in which both congruence and resolution were manipulated in equivalent ways across harmony, rhythm, semantics, and syntax. Three conditions were created by changing the contexts preceding and following musical and linguistic incongruities originally used for effect by authors and composers: Stimuli in the incongruous-resolved condition reproduced the original incongruity and resolution into the same context; stimuli in the incongruous-unresolved condition reproduced the incongruity but continued postincongruity with a new context dictated by the incongruity; and stimuli in the congruous condition presented the same element of interest, but the entire context was adapted to match it so that it was no longer incongruous. The manipulations described in this article rendered unrecognizable the original incongruities from which the stimuli were adapted, while maintaining ecological validity. The norming procedure and validation study resulted in a significant increase in perceived oddity from congruous to incongruous-resolved and from incongruous-resolved to incongruous-unresolved in all four components of music and language, making this set of stimuli a theoretically grounded and empirically validated resource for this growing area of research.

  19. Rough Evaluation Structure: Application of Rough Set Theory to Generate Simple Rules for Inconsistent Preference Relation

    NASA Astrophysics Data System (ADS)

    Gehrmann, Andreas; Nagai, Yoshimitsu; Yoshida, Osamu; Ishizu, Syohei

    Since management decision-making has become complex and the preferences of decision-makers frequently become inconsistent, multi-attribute decision-making problems have been studied. To represent inconsistent preference relations, the concept of an evaluation structure was introduced; with evaluation structures, we can generate simple rules that represent an inconsistent preference relation. Rough set theory has further been applied to preference relations, introducing the concept of approximation. One of the main aims of this paper is to introduce the concept of a rough evaluation structure for representing inconsistent preference relations. We apply rough set theory to the evaluation structure and develop a method for generating simple rules for inconsistent preference relations. In this paper, we introduce the concepts of a totally ordered information system, similarity classes of a preference relation, and upper and lower approximations of preference relations. We also show the properties of the rough evaluation structure and provide a simple example. As an application of the rough evaluation structure, we analyze a questionnaire survey of customer preferences for audio players.
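
    The lower and upper approximations at the heart of this construction are straightforward to express in code. A toy Python sketch, assuming only a partition of the universe into similarity classes (the paper's totally ordered information system is not reproduced here):

        def approximations(classes, target):
            """Rough-set lower/upper approximation of `target` (a set of objects),
            given `classes`, a partition of the universe into similarity classes."""
            lower, upper = set(), set()
            for c in classes:
                if c <= target:   # class certainly inside the target concept
                    lower |= c
                if c & target:    # class possibly inside (overlaps the target)
                    upper |= c
            return lower, upper

        classes = [{1, 2}, {3}, {4, 5}]
        print(approximations(classes, target={1, 2, 4}))
        # lower = {1, 2}; upper = {1, 2, 4, 5}; the difference is the boundary region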

  20. Validity and Interrater Reliability of the Visual Quarter-Waste Method for Assessing Food Waste in Middle School and High School Cafeteria Settings.

    PubMed

    Getts, Katherine M; Quinn, Emilee L; Johnson, Donna B; Otten, Jennifer J

    2017-11-01

    Measuring food waste (ie, plate waste) in school cafeterias is an important tool to evaluate the effectiveness of school nutrition policies and interventions aimed at increasing consumption of healthier meals. Visual assessment methods are frequently applied in plate waste studies because they are more convenient than weighing. The visual quarter-waste method has become a common tool in studies of school meal waste and consumption, but previous studies of its validity and reliability have used correlation coefficients, which measure association but not necessarily agreement. The aims of this study were to determine, using a statistic measuring interrater agreement, whether the visual quarter-waste method is valid and reliable for assessing food waste in a school cafeteria setting when compared with the gold standard of weighed plate waste. To evaluate validity, researchers used the visual quarter-waste method and weighed food waste from 748 trays at four middle schools and five high schools in one school district in Washington State during May 2014. To assess interrater reliability, researcher pairs independently assessed 59 of the same trays using the visual quarter-waste method. Both validity and reliability were assessed using a weighted κ coefficient. For validity, as compared with the measured weight, 45% of foods assessed using the visual quarter-waste method were in almost perfect agreement, 42% of foods were in substantial agreement, 10% were in moderate agreement, and 3% were in slight agreement. For interrater reliability between pairs of visual assessors, 46% of foods were in perfect agreement, 31% were in almost perfect agreement, 15% were in substantial agreement, and 8% were in moderate agreement. These results suggest that the visual quarter-waste method is a valid and reliable tool for measuring plate waste in school cafeteria settings. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
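
    Weighted kappa is available off the shelf. The sketch below uses made-up tray ratings and assumes a linear weighting scheme (the abstract does not state which weighting the authors used):

        from sklearn.metrics import cohen_kappa_score

        # Quarter-waste categories per tray: 0 = none wasted ... 4 = all wasted
        visual  = [4, 3, 2, 4, 1, 0, 3, 2, 1, 0]   # visual assessor
        weighed = [4, 3, 2, 3, 1, 0, 3, 1, 1, 0]   # categories derived from weighed waste

        print(cohen_kappa_score(visual, weighed, weights="linear"))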

  1. Validation results of satellite mock-up capturing experiment using nets

    NASA Astrophysics Data System (ADS)

    Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil

    2017-05-01

    The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of experiments under microgravity conditions in which a net was launched, capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment was performed over thirty parabolas offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launching angles using a dedicated pneumatic mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching, and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to determine the initial conditions accurately and to generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly

  2. ConfocalGN: A minimalistic confocal image generator

    NASA Astrophysics Data System (ADS)

    Dmitrieff, Serge; Nédélec, François

    Validating image analysis pipelines and training machine-learning segmentation algorithms require images with known features. Synthetic images can be used for this purpose, with the advantage that large reference sets can be produced easily. It is however essential to obtain images that are as realistic as possible in terms of noise and resolution, which is challenging in the field of microscopy. We describe ConfocalGN, a user-friendly software that can generate synthetic microscopy stacks from a ground truth (i.e. the observed object) specified as a 3D bitmap or a list of fluorophore coordinates. This software can analyze a real microscope image stack to set the noise parameters and directly generate new images of the object with noise characteristics similar to that of the sample image. With a minimal input from the user and a modular architecture, ConfocalGN is easily integrated with existing image analysis solutions.
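
    The general recipe, blurring a ground truth with a point-spread function and then adding noise whose statistics match a sample image, can be sketched generically in Python (an illustration of the idea, not ConfocalGN's actual code):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)

        truth = np.zeros((32, 64, 64))                 # ground truth stack (z, y, x)
        truth[16, 20:44, 30:34] = 1.0                  # a bright rod-like object

        blurred = gaussian_filter(truth, sigma=(2.0, 1.0, 1.0))  # crude Gaussian PSF
        signal = 400.0 * blurred + 20.0                # gain plus background level
        stack = rng.poisson(signal) + rng.normal(0.0, 3.0, signal.shape)  # shot + read noise

    In a real pipeline the gain, background, and noise widths would be estimated from the sample stack rather than fixed by hand, which is precisely the step ConfocalGN automates.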

  3. USAF bioenvironmental noise data handbook. Volume 162: MD-4MO generator set

    NASA Astrophysics Data System (ADS)

    Rau, T. H.

    1982-05-01

    The MD-4MO generator set is an electric motor-driven source of electrical power used primarily for the starting of aircraft, and for ground maintenance. This report provides measured and extrapolated data defining the bioacoustic environments produced by this unit operating outdoors on a concrete apron at a normal rated condition. Near-field data are reported for 37 locations in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference levels, perceived noise levels, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors.

  4. Development and validation of a socioculturally competent trust in physician scale for a developing country setting.

    PubMed

    Gopichandran, Vijayaprasad; Wouters, Edwin; Chetlapalli, Satish Kumar

    2015-05-03

    Trust in physicians is the unwritten covenant between the patient and the physician that the physician will do what is in the best interest of the patient. This forms the undercurrent of all healthcare relationships. Several scales exist for the assessment of trust in physicians in developed healthcare settings, but to our knowledge none of these have been developed in a developing country context. This study aimed to develop and validate a new trust in physician scale for a developing country setting. Dimensions of trust in physicians, which were identified in a previous qualitative study in the same setting, were used to develop a scale. This scale was administered to 616 adults selected from urban and rural areas of Tamil Nadu, south India, using a multistage-sampling cross-sectional survey. The individual items were analysed using a classical test approach as well as item response theory. Cronbach's α was calculated and the item-to-total correlation of each item was assessed. After testing for unidimensionality and absence of local dependence, a two-parameter logistic Samejima graded response model was fitted and item characteristics were assessed. Competence, assurance of treatment, respect for the physician, and loyalty to the physician were important dimensions of trust. A total of 31 items were developed from these dimensions. Of these, 22 were selected for the final analysis. Cronbach's α was 0.928. The item-to-total correlations were acceptable for all 22 items. The item response analysis revealed good item characteristic curves and item information for all the items. Based on the item parameters and item information, a final 12-item scale was developed. The scale performs optimally in the low-to-moderate trust range. The final 12-item trust in physician scale has good construct validity and internal consistency. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
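
    Samejima's graded response model expresses each item through cumulative two-parameter logistic curves, and category probabilities are differences of adjacent curves. A minimal Python sketch with hypothetical parameters (not the fitted scale's):

        import numpy as np

        def grm_category_probs(theta, a, thresholds):
            """Samejima graded response model: probabilities of the ordered response
            categories at ability theta, for discrimination a and increasing
            threshold parameters b_1 < ... < b_{K-1}."""
            b = np.asarray(thresholds, float)
            p_ge = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # P(response >= k)
            p_ge = np.concatenate(([1.0], p_ge, [0.0]))
            return -np.diff(p_ge)                          # P(response == k)

        print(grm_category_probs(theta=0.5, a=1.8, thresholds=[-1.0, 0.0, 1.2]))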

  5. Identification of rheumatoid arthritis and osteoarthritis patients by transcriptome-based rule set generation

    PubMed Central

    2014-01-01

    Introduction Discrimination of rheumatoid arthritis (RA) patients from patients with other inflammatory or degenerative joint diseases or healthy individuals purely on the basis of genes differentially expressed in high-throughput data has proven very difficult. Thus, the present study sought to achieve such discrimination by employing a novel unbiased approach using rule-based classifiers. Methods Three multi-center genome-wide transcriptomic data sets (Affymetrix HG-U133 A/B) from a total of 79 individuals, including 20 healthy controls (control group - CG), as well as 26 osteoarthritis (OA) and 33 RA patients, were used to infer rule-based classifiers to discriminate the disease groups. The rules were ranked with respect to Kiendl’s statistical relevance index, and the resulting rule set was optimized by pruning. The rule sets were inferred separately from data of one of three centers and applied to the two remaining centers for validation. All rules from the optimized rule sets of all centers were used to analyze their biological relevance applying the software Pathway Studio. Results The optimized rule sets for the three centers contained a total of 29, 20, and 8 rules (including 10, 8, and 4 rules for ‘RA’), respectively. The mean sensitivity for the prediction of RA based on six center-to-center tests was 96% (range 90% to 100%), that for OA 86% (range 40% to 100%). The mean specificity for RA prediction was 94% (range 80% to 100%), that for OA 96% (range 83.3% to 100%). The average overall accuracy of the three different rule-based classifiers was 91% (range 80% to 100%). Unbiased analyses by Pathway Studio of the gene sets obtained by discrimination of RA from OA and CG with rule-based classifiers resulted in the identification of the pathogenetically and/or therapeutically relevant interferon-gamma and GM-CSF pathways. Conclusion First-time application of rule-based classifiers for the discrimination of RA resulted in high performance, with means

  6. OZONE GENERATORS IN INDOOR AIR SETTINGS

    EPA Science Inventory

    The report gives information on home/office ozone generators. It discusses their current uses as amelioratives for environmental tobacco smoke, biocontaminants, volatile organic compounds, and odors and details the advantages and disadvantages of each. Ozone appears to work well ...

  7. Development and validation of a set of six adaptable prognosis prediction (SAP) models based on time-series real-world big data analysis for patients with cancer receiving chemotherapy: A multicenter case crossover study

    PubMed Central

    Kanai, Masashi; Okamoto, Kazuya; Yamamoto, Yosuke; Yoshioka, Akira; Hiramoto, Shuji; Nozaki, Akira; Nishikawa, Yoshitaka; Yamaguchi, Daisuke; Tomono, Teruko; Nakatsui, Masahiko; Baba, Mika; Morita, Tatsuya; Matsumoto, Shigemi; Kuroda, Tomohiro; Okuno, Yasushi; Muto, Manabu

    2017-01-01

    Background We aimed to develop an adaptable prognosis prediction model that could be applied at any time point during the treatment course for patients with cancer receiving chemotherapy, by applying time-series real-world big data. Methods Between April 2004 and September 2014, 4,997 patients with cancer who had received systemic chemotherapy were registered in a prospective cohort database at the Kyoto University Hospital. Of these, 2,693 patients with a death record were eligible for inclusion and were divided into training (n = 1,341) and test (n = 1,352) cohorts. In total, 3,471,521 laboratory values at 115,738 time points, representing 40 laboratory items [e.g., white blood cell counts and albumin (Alb) levels] that were monitored for 1 year before the death event, were used to construct the prognosis prediction models. All possible prediction models comprising three different items from the 40 laboratory items (40C3 = 9,880) were generated in the training cohort, and model selection was performed in the test cohort. The fitness of the selected models was externally validated in validation cohorts from three independent settings. Results A prognosis prediction model utilizing Alb, lactate dehydrogenase, and neutrophils was selected based on a strong ability to predict death events within 1–6 months, and a set of six prediction models corresponding to 1, 2, 3, 4, 5, and 6 months was developed. The area under the curve (AUC) ranged from 0.852 for the 1-month model to 0.713 for the 6-month model. External validation supported the performance of these models. Conclusion By applying time-series real-world big data, we successfully developed a set of six adaptable prognosis prediction models for patients with cancer receiving chemotherapy. PMID:28837592
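
    Exhaustive search over all three-item models is cheap to express. The Python sketch below uses synthetic data, eight items instead of 40, and logistic regression as a stand-in scorer (the abstract does not specify the model family):

        from itertools import combinations
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        items = [f"lab_{i}" for i in range(8)]        # 8 items for brevity (the study used 40)
        X = pd.DataFrame(rng.normal(size=(400, 8)), columns=items)
        y = (X["lab_0"] - X["lab_1"] + rng.normal(size=400) > 0).astype(int)
        X_tr, X_te, y_tr, y_te = X[:200], X[200:], y[:200], y[200:]

        aucs = {}
        for trio in combinations(items, 3):           # 40C3 = 9,880 in the real study
            clf = LogisticRegression(max_iter=1000).fit(X_tr[list(trio)], y_tr)
            aucs[trio] = roc_auc_score(y_te, clf.predict_proba(X_te[list(trio)])[:, 1])

        print(max(aucs, key=aucs.get))                # best three-item combination on the test split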

  8. Strong Generative Capacity and the Empirical Base of Linguistic Theory

    PubMed Central

    Ott, Dennis

    2017-01-01

    This Perspective traces the evolution of certain central notions in the theory of Generative Grammar (GG). The founding documents of the field suggested a relation between the grammar, construed as recursively enumerating an infinite set of sentences, and the idealized native speaker that was essentially equivalent to the relation between a formal language (a set of well-formed formulas) and an automaton that recognizes strings as belonging to the language or not. But this early view was later abandoned, when the focus of the field shifted to the grammar's strong generative capacity as recursive generation of hierarchically structured objects as opposed to strings. The grammar is now no longer seen as specifying a set of well-formed expressions and in fact necessarily constructs expressions of any degree of intuitive “acceptability.” The field of GG, however, has not sufficiently acknowledged the significance of this shift in perspective, as evidenced by the fact that (informal and experimentally-controlled) observations about string acceptability continue to be treated as bona fide data and generalizations for the theory of GG. The focus on strong generative capacity, it is argued, requires a new discussion of what constitutes valid empirical evidence for GG beyond observations pertaining to weak generation. PMID:28983268

  9. Workshop on Strategies for Calibration and Validation of Global Change Measurements

    NASA Technical Reports Server (NTRS)

    Guenther, Bruce; Butler, James; Ardanuy, Philip

    1997-01-01

    The Committee on Environment and Natural Resources (CENR) Task Force on Observations and Data Management hosted a Global Change Calibration/Validation Workshop on May 10-12, 1995, in Arlington, Virginia. This Workshop was convened by Robert Schiffer of NASA Headquarters in Washington, D.C., for the CENR Secretariat with a view toward assessing and documenting lessons learned in the calibration and validation of large-scale, long-term data sets in land, ocean, and atmospheric research programs. The National Aeronautics and Space Administration (NASA)/Goddard Space Flight Center (GSFC) hosted the meeting on behalf of the Committee on Earth Observation Satellites (CEOS)/Working Group on Calibration/Validation, the Global Climate Observing System (GCOS), and the U.S. CENR. A meeting of experts from the international scientific community was brought together to develop recommendations for calibration and validation of global change data sets taken from instrument series and across generations of instruments and technologies. Forty-nine scientists from nine countries participated. The U.S., Canada, United Kingdom, France, Germany, Japan, Switzerland, Russia, and Kenya were represented.

  10. Review of TRMM/GPM Rainfall Algorithm Validation

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2004-01-01

    A review is presented concerning current progress on evaluation and validation of standard Tropical Rainfall Measuring Mission (TRMM) precipitation retrieval algorithms and the prospects for implementing an improved validation research program for the next generation Global Precipitation Measurement (GPM) Mission. All standard TRMM algorithms are physical in design, and are thus based on fundamental principles of microwave radiative transfer and its interaction with semi-detailed cloud microphysical constituents. They are evaluated for consistency and degree of equivalence with one another, as well as intercompared to radar-retrieved rainfall at TRMM's four main ground validation sites. Similarities and differences are interpreted in the context of the radiative and microphysical assumptions underpinning the algorithms. Results indicate that the current accuracies of the TRMM Version 6 algorithms are approximately 15% at zonal-averaged / monthly scales with precisions of approximately 25% for full resolution / instantaneous rain rate estimates (i.e., level 2 retrievals). Strengths and weaknesses of the TRMM validation approach are summarized. Because the degree of convergence of level 2 TRMM algorithms is being used as a guide for setting validation requirements for the GPM mission, it is important that the GPM algorithm validation program be improved to ensure concomitant improvement in the standard GPM retrieval algorithms. An overview of the GPM Mission's validation plan is provided, including a description of a new type of physical validation model using an analytic 3-dimensional radiative transfer model.

  11. Microstructure Modeling of 3rd Generation Disk Alloys

    NASA Technical Reports Server (NTRS)

    Jou, Herng-Jeng

    2010-01-01

    The objective of this program is to model, validate, and predict the precipitation microstructure evolution, using PrecipiCalc (QuesTek Innovations LLC) software, for 3rd generation Ni-based gas turbine disc superalloys during processing and service, with a set of logical and consistent experiments and characterizations. Furthermore, within this program, the originally research-oriented microstructure simulation tool will be further improved and implemented to be a useful and user-friendly engineering tool. In this report, the key accomplishments achieved during the second year (2008) of the program are summarized. The activities of this year include the final selection of multicomponent thermodynamics and mobility databases, precipitate surface energy determination from a nucleation experiment, a multiscale comparison of predicted versus measured intragrain precipitation microstructure in quench samples showing good agreement, an isothermal coarsening experiment and the interaction of grain boundary and intergrain precipitates, the primary microstructure of subsolvus treatment, and finally the software implementation plan for the third year of the project. In the following year, the calibrated models and simulation tools will be validated against an independently developed experimental data set, with actual disc heat treatment process conditions. Furthermore, software integration and implementation will be developed to provide material engineers valuable information in order to optimize the processing of the 3rd generation gas turbine disc alloys.

  12. Power Plant Model Validation Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The PPMV is used to validate generator models using disturbance recordings. The PPMV tool contains a collection of power plant models and model validation studies, as well as disturbance recordings from a number of historic grid events. The user can import data from a new disturbance into the database, which converts PMU and SCADA data into GE PSLF format, and then run the tool to validate (or invalidate) the model for a specific power plant against its actual performance. The PNNL PPMV tool enables the automation of the process of power plant model validation using disturbance recordings. The tool uses PMU and SCADA measurements as input information. The tool automatically adjusts all required EPCL scripts and interacts with GE PSLF in batch mode. The main tool features include: the tool interacts with GE PSLF; the tool uses the GE PSLF Play-In Function for generator model validation; a database of projects (model validation studies); a database of historic events; a database of power plants; advanced visualization capabilities; and automatic report generation.

  13. PCC Framework for Program-Generators

    NASA Technical Reports Server (NTRS)

    Kong, Soonho; Choi, Wontae; Yi, Kwangkeun

    2009-01-01

    In this paper, we propose a proof-carrying code framework for program-generators. The enabling technique is abstract parsing, a static string analysis technique, which is used as a component for generating and validating certificates. Our framework provides an efficient solution for certifying program-generators whose safety properties are expressed in terms of the grammar representing the generated program. The fixed-point solution of the analysis is generated and attached with the program-generator on the code producer side. The consumer receives the code with a fixed-point solution and validates that the received fixed point is indeed a fixed point of the received code. This validation can be done in a single pass.
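
    The key asymmetry, that computing a fixed point is expensive while checking one takes a single pass, is easy to demonstrate on a toy dataflow problem (a reachability analysis standing in for the paper's grammar-based string analysis):

        grammar = {"S": {"A", "B"}, "A": {"B"}, "B": set()}

        def transfer(reach):
            """One analysis step: a symbol reaches its successors and whatever they reach."""
            out = {}
            for node, succs in grammar.items():
                r = set(succs)
                for s in succs:
                    r |= reach[s]
                out[node] = r
            return out

        def validate(claimed):
            """Consumer-side check: accept iff the claimed solution is a fixed point."""
            return transfer(claimed) == claimed

        certificate = {"S": {"A", "B"}, "A": {"B"}, "B": set()}
        print(validate(certificate))   # True: one application reproduces the solution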

  14. Generation and validation of homozygous fluorescent knock-in cells using CRISPR-Cas9 genome editing.

    PubMed

    Koch, Birgit; Nijmeijer, Bianca; Kueblbeck, Moritz; Cai, Yin; Walther, Nike; Ellenberg, Jan

    2018-06-01

    Gene tagging with fluorescent proteins is essential for investigations of the dynamic properties of cellular proteins. CRISPR-Cas9 technology is a powerful tool for inserting fluorescent markers into all alleles of the gene of interest (GOI) and allows functionality and physiological expression of the fusion protein. It is essential to evaluate such genome-edited cell lines carefully in order to preclude off-target effects caused by (i) incorrect insertion of the fluorescent protein, (ii) perturbation of the fusion protein by the fluorescent proteins or (iii) nonspecific genomic DNA damage by CRISPR-Cas9. In this protocol, we provide a step-by-step description of our systematic pipeline to generate and validate homozygous fluorescent knock-in cell lines. We have used the paired Cas9D10A nickase approach to efficiently insert tags into specific genomic loci via homology-directed repair (HDR) with minimal off-target effects. It is time-consuming and costly to perform whole-genome sequencing of each cell clone to check for spontaneous genetic variations occurring in mammalian cell lines. Therefore, we have developed an efficient validation pipeline of the generated cell lines consisting of junction PCR, Southern blotting analysis, Sanger sequencing, microscopy, western blotting analysis and live-cell imaging for cell-cycle dynamics. This protocol takes between 6 and 9 weeks. With this protocol, up to 70% of the targeted genes can be tagged homozygously with fluorescent proteins, thus resulting in physiological levels and phenotypically functional expression of the fusion proteins.

  15. The 11-item Medication Adherence Reasons Scale: reliability and factorial validity among patients with hypertension in Malaysian primary healthcare settings.

    PubMed

    Shima, Razatul; Farizah, Hairi; Majid, Hazreen Abdul

    2015-08-01

    The aim of this study was to assess the reliability and validity of a modified Malaysian version of the Medication Adherence Reasons Scale (MAR-Scale). In this cross-sectional study, the 15-item MAR-Scale was administered to 665 patients with hypertension who attended one of the four government primary healthcare clinics in the Hulu Langat and Klang districts of Selangor, Malaysia, between early December 2012 and end-March 2013. The construct validity was examined in two phases. Phase I consisted of translation of the MAR-Scale from English to Malay, a content validity check by an expert panel, a face validity check via a small preliminary test among patients with hypertension, and exploratory factor analysis (EFA). Phase II involved internal consistency reliability calculations and confirmatory factor analysis (CFA). EFA verified five existing factors that were previously identified (i.e. issues with medication management, multiple medications, belief in medication, medication availability, and the patient's forgetfulness and convenience), while CFA extracted four factors (medication availability issues were not extracted). The final modified MAR-Scale model, which had 11 items and a four-factor structure, provided good evidence of convergent and discriminant validities. Cronbach's alpha coefficient was > 0.7, indicating good internal consistency of the items in the construct. The results suggest that the modified MAR-Scale has good internal consistencies and construct validity. The validated modified MAR-Scale (Malaysian version) was found to be suitable for use among patients with hypertension receiving treatment in primary healthcare settings. However, the comprehensive measurement of other factors that can also lead to non-adherence requires further exploration.

  16. Flight evaluation of advanced third-generation midwave infrared sensor

    NASA Astrophysics Data System (ADS)

    Shen, Chyau N.; Donn, Matthew

    1998-08-01

    In FY-97 the Counter Drug Optical Upgrade (CDOU) demonstration program was initiated by the Program Executive Office for Counter Drug to increase the detection and classification ranges of P-3 counter drug aircraft by using advanced staring infrared sensors. The demonstration hardware is a 'pin-for-pin' replacement of the AAS-36 Infrared Detection Set (IRDS) located under the nose radome of a P-3 aircraft. The hardware consists of a 3rd generation mid-wave infrared (MWIR) sensor integrated into a three-axis-stabilized turret. The sensor, when installed on the P-3, has a hemispheric field of regard, and analysis has shown it will be capable of detecting and classifying Suspected Drug Trafficking Aircraft and Vessels at ranges several times greater than the current IRDS. This paper will discuss the CDOU system and its lab, ground, and flight evaluation results. Test targets included target templates, range targets, dedicated target boats, and targets of opportunity at the Naval Air Warfare Center Aircraft Division and at operational test sites. The objectives of these tests were to: (1) Validate the integration concept of the CDOU package into the P-3 aircraft. (2) Validate the end-to-end functionality of the system, including sensor/turret controls and recording of imagery during flight. (3) Evaluate the system sensitivity and resolution on a set of verified resolution target templates. (4) Validate the ability of the 3rd generation MWIR sensor to detect and classify targets at a significantly increased range.

  17. Development of the Human Factors Skills for Healthcare Instrument: a valid and reliable tool for assessing interprofessional learning across healthcare practice settings.

    PubMed

    Reedy, Gabriel B; Lavelle, Mary; Simpson, Thomas; Anderson, Janet E

    2017-10-01

    A central feature of clinical simulation training is human factors skills, providing staff with the social and cognitive skills to cope with demanding clinical situations. Although these skills are critical to safe patient care, assessing their learning is challenging. This study aimed to develop, pilot and evaluate a valid and reliable structured instrument to assess human factors skills, which can be used pre- and post-simulation training, and is relevant across a range of healthcare professions. Through consultation with a multi-professional expert group, we developed and piloted a 39-item survey with 272 healthcare professionals attending training courses across two large simulation centres in London, one specialising in acute care and one in mental health, both serving healthcare professionals working across acute and community settings. Following psychometric evaluation, the final 12-item instrument was evaluated with a second sample of 711 trainees. Exploratory factor analysis revealed a 12-item, one-factor solution with good internal consistency (α=0.92). The instrument had discriminant validity, with newly qualified trainees scoring significantly lower than experienced trainees (t(98)=4.88, p<0.001), and was sensitive to change following training in acute and mental health settings, across professional groups (p<0.001). Confirmatory factor analysis revealed an adequate model fit (RMSEA=0.066). The Human Factors Skills for Healthcare Instrument provides a reliable and valid method of assessing trainees' human factors skills self-efficacy across acute and mental health settings. This instrument has the potential to improve the assessment and evaluation of human factors skills learning in both uniprofessional and interprofessional clinical simulation training.

  18. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.

    PubMed

    Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier

    2017-02-01

    The data to which the authors refer throughout this article are likelihood ratios (LR) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. These data present a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in (D. Meuwly, D. Ramos, R. Haraksim) [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports for forensic methods, according to [1]. Alongside the data, a justification and motivation for the use of the methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validation of the LR methods, which are used to calculate the LR values from the data and the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared. But these images do not constitute the core data for the validation, contrary to the LRs, which are shared.

  19. Reliable pre-eclampsia pathways based on multiple independent microarray data sets.

    PubMed

    Kawasaki, Kaoru; Kondoh, Eiji; Chigusa, Yoshitsugu; Ujita, Mari; Murakami, Ryusuke; Mogami, Haruta; Brown, J B; Okuno, Yasushi; Konishi, Ikuo

    2015-02-01

    Pre-eclampsia is a multifactorial disorder characterized by heterogeneous clinical manifestations. Gene expression profiling of preeclamptic placentas has provided different and even opposite results, partly due to data compromised by various experimental artefacts. Here we aimed to identify reliable pre-eclampsia-specific pathways using multiple independent microarray data sets. Gene expression data of control and preeclamptic placentas were obtained from the Gene Expression Omnibus. Single-sample gene-set enrichment analysis was performed to generate gene-set activation scores for 9,707 pathways obtained from the Molecular Signatures Database. Candidate pathways were identified by t-test-based screening using data sets GSE10588, GSE14722, and GSE25906. Additionally, recursive feature elimination was applied to arrive at a further reduced set of pathways. To assess the validity of the pre-eclampsia pathways, a statistically validated protocol was executed using five data sets, including two additional independent validation data sets, GSE30186 and GSE44711. Quantitative real-time PCR was performed for genes in a panel of potential pre-eclampsia pathways using placentas of 20 women with normal or severe preeclamptic singleton pregnancies (n = 10 each). A panel of ten pathways was found to discriminate women with pre-eclampsia from controls with high accuracy. Among these were pathways not previously associated with pre-eclampsia, such as the GABA receptor pathway, as well as pathways that have already been linked to pre-eclampsia, such as the glutathione and CDKN1C pathways. mRNA expression of GABRA3 (GABA receptor pathway), GCLC and GCLM (glutathione metabolic pathway), and CDKN1C was significantly reduced in the preeclamptic placentas. In conclusion, ten accurate and reliable pre-eclampsia pathways were identified based on multiple independent microarray data sets. A pathway-based classification may be a worthwhile approach to elucidate the pathogenesis of pre
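
    The t-test-based screening step amounts to comparing per-pathway activation scores between the groups and keeping pathways below a p-value cutoff. A Python sketch on simulated scores (the cutoff and the score generation are assumptions, not the paper's settings):

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(1)
        # Rows = placentas, columns = pathway activation scores (e.g. from ssGSEA)
        control = rng.normal(size=(16, 500))
        preeclampsia = rng.normal(size=(16, 500))
        preeclampsia[:, 0] -= 2.0                 # plant one truly altered pathway

        t, p = ttest_ind(control, preeclampsia, axis=0)
        print(np.flatnonzero(p < 1e-4))           # should recover pathway 0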

  20. Detecting representative data and generating synthetic samples to improve learning accuracy with imbalanced data sets.

    PubMed

    Li, Der-Chiang; Hu, Susan C; Lin, Liang-Sian; Yeh, Chun-Wu

    2017-01-01

    It is difficult for learning models to achieve high classification performance with imbalanced data sets: when one of the classes is much larger than the others, most machine learning and data mining classifiers are overly influenced by the larger classes and ignore the smaller ones. As a result, classification algorithms often have poor learning performance due to slow convergence in the smaller classes. To balance such data sets, this paper presents a strategy that involves reducing the size of the majority data and generating synthetic samples for the minority data. In the reducing operation, we use the box-and-whisker plot approach to exclude outliers and the Mega-Trend-Diffusion method to find representative data from the majority data. To generate the synthetic samples, we propose a counterintuitive hypothesis to find the distribution shape of the minority data, and then produce samples according to this distribution. Four real data sets were used to examine the performance of the proposed approach. We used paired t-tests to compare the Accuracy, G-mean, and F-measure scores of the proposed data pre-processing (PPDP) method merged into the D3C method (PPDP+D3C) with those of one-sided selection (OSS), the well-known SMOTEBoost (SB) approach, normal distribution-based oversampling (NDO), and the PPDP method alone. The results indicate that the classification performance of the proposed approach is better than that of the above-mentioned methods.
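
    Both halves of the strategy, trimming the majority class and synthesizing minority samples, have simple generic counterparts. The Python sketch below pairs an IQR (box-and-whisker) outlier filter with SMOTE-style pairwise interpolation; the paper's Mega-Trend-Diffusion and distribution-shape estimation are more elaborate than this:

        import numpy as np

        def iqr_filter(x):
            """Drop rows falling outside the box-and-whisker fences on any feature."""
            q1, q3 = np.percentile(x, [25, 75], axis=0)
            iqr = q3 - q1
            ok = np.all((x >= q1 - 1.5 * iqr) & (x <= q3 + 1.5 * iqr), axis=1)
            return x[ok]

        def synthesize(minority, n_new, rng):
            """SMOTE-style synthetic minority samples: interpolate random pairs."""
            i, j = rng.integers(0, len(minority), size=(2, n_new))
            lam = rng.random((n_new, 1))
            return minority[i] + lam * (minority[j] - minority[i])

        rng = np.random.default_rng(0)
        majority, minority = rng.normal(size=(200, 3)), rng.normal(size=(20, 3))
        print(iqr_filter(majority).shape, synthesize(minority, 50, rng).shape)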

  1. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting

    PubMed Central

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J.

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio-visual performance. The results from the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface. PMID:29515494

  2. Evaluation, modification and validation of a set of asthma illustrations in children with chronic asthma in the emergency department

    PubMed Central

    Tulloch, Joanie; Vaillancourt, Régis; Irwin, Danica; Pascuet, Elena

    2012-01-01

    OBJECTIVES: To test, modify and validate a set of illustrations depicting different levels of asthma control and common asthma triggers in pediatric patients (and/or their parents) with chronic asthma who presented to the emergency department at the Children’s Hospital of Eastern Ontario, Ottawa, Ontario. METHODS: Semistructured interviews using guessability and translucency questionnaires tested the comprehensibility of 15 illustrations depicting different levels of asthma control and common asthma triggers in children 10 to 17 years of age, and parents of children one to nine years of age who presented to the emergency department. Illustrations with an overall guessability score <80% and/or translucency median score <6, were reviewed by the study team and modified by the study’s graphic designer. Modifications were made based on key concepts identified by study participants. RESULTS: A total of 80 patients were interviewed. Seven of the original 15 illustrations (47%) required modifications to obtain the prespecified guessability and translucency goals. CONCLUSION: The authors successfully developed, modified and validated a set of 15 illustrations representing different levels of asthma control and common asthma triggers. PRACTICE IMPLICATIONS: These illustrations will be incorporated into a child-friendly asthma action plan that enables the child to be involved in his or her asthma self-management care. PMID:22332128

  3. Evaluation, modification and validation of a set of asthma illustrations in children with chronic asthma in the emergency department.

    PubMed

    Tulloch, Joanie; Irwin, Danica; Pascuet, Elena; Vaillancourt, Régis

    2012-01-01

    To test, modify and validate a set of illustrations depicting different levels of asthma control and common asthma triggers in pediatric patients (and/or their parents) with chronic asthma who presented to the emergency department at the Children's Hospital of Eastern Ontario, Ottawa, Ontario. Semistructured interviews using guessability and translucency questionnaires tested the comprehensibility of 15 illustrations depicting different levels of asthma control and common asthma triggers in children 10 to 17 years of age, and parents of children one to nine years of age who presented to the emergency department. Illustrations with an overall guessability score <80% and/or translucency median score <6 were reviewed by the study team and modified by the study's graphic designer. Modifications were made based on key concepts identified by study participants. A total of 80 patients were interviewed. Seven of the original 15 illustrations (47%) required modifications to obtain the prespecified guessability and translucency goals. The authors successfully developed, modified and validated a set of 15 illustrations representing different levels of asthma control and common asthma triggers. These illustrations will be incorporated into a child-friendly asthma action plan that enables the child to be involved in his or her asthma self-management care.

  4. SET-bullying: presentation of a collaborative project and discussion of its internal and external validity.

    PubMed

    Chalamandaris, Alexandros-Georgios; Wilmet-Dramaix, Michèle; Eslea, Mike; Ertesvåg, Sigrun Karin; Piette, Danielle

    2016-04-12

    Since the early 1980s, several school-based anti-bullying interventions (SBABIs) have been implemented and evaluated in different countries. Some meta-analyses have also drawn conclusions on the effectiveness of SBABIs. However, the relationship between time and the effectiveness of SBABIs has not been fully studied. To this end, a collaborative project, SET-Bullying, was established by researchers from Greece, Belgium, Norway and the United Kingdom. Its primary objective is to further understand and statistically model the relationship between time and the sustainability of the effectiveness of an SBABI. The secondary objective of SET-Bullying is to assess the possibility of predicting medium- or long-term effectiveness using, as key information, the baseline measurement and the short-term effectiveness of the intervention. Researchers and owners of potentially eligible databases were asked to participate in this effort. Two studies have contributed data for the purpose of SET-Bullying. This paper summarizes the main characteristics of the participating studies and provides a high-level overview of the collaborative project. It also discusses the extent to which both study and project characteristics may pose threats to the expected internal and external validity of the potential outcomes of the project. Despite these threats, this work represents the first effort to understand the impact of time on the observed effectiveness of SBABIs and to assess its predictability, which would allow for better planning, implementation and evaluation of SBABIs.

  5. Estimation of influential points in any data set from coefficient of determination and its leave-one-out cross-validated counterpart.

    PubMed

    Tóth, Gergely; Bodai, Zsolt; Héberger, Károly

    2013-10-01

    The coefficient of determination (R^2) and its leave-one-out cross-validated analogue (denoted by Q^2 or R^2_cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R^2 and Q^2 in a reversed aspect: to determine uncommon, i.e. influential, points in any data set. The term (1 - Q^2)/(1 - R^2) corresponds to the ratio of the predictive residual sum of squares (PRESS) to the residual sum of squares (RSS). This ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 - Q^2)/(1 - R^2) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q^2 and R^2 values and warns model builders to verify the training set, to perform influence analysis, or even to change to robust modeling.
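
    For ordinary least squares both quantities come straight from RSS, PRESS, and TSS: R^2 = 1 - RSS/TSS, Q^2 = 1 - PRESS/TSS, so (1 - Q^2)/(1 - R^2) = PRESS/RSS. A small Python sketch using brute-force leave-one-out refits (rather than the hat-matrix shortcut):

        import numpy as np

        def r2_and_q2(X, y):
            """OLS coefficient of determination and its leave-one-out analogue."""
            n = len(y)
            X1 = np.column_stack([np.ones(n), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            rss = np.sum((y - X1 @ beta) ** 2)
            press = 0.0
            for i in range(n):                    # refit without observation i
                keep = np.arange(n) != i
                b, *_ = np.linalg.lstsq(X1[keep], y[keep], rcond=None)
                press += (y[i] - X1[i] @ b) ** 2
            tss = np.sum((y - y.mean()) ** 2)
            return 1 - rss / tss, 1 - press / tss

        rng = np.random.default_rng(0)
        X = rng.normal(size=(30, 2))
        y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.5, size=30)
        r2, q2 = r2_and_q2(X, y)
        print(r2, q2, (1 - q2) / (1 - r2))        # the ratio PRESS/RSS flagged by the F test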

  6. Goal setting as an outcome measure: A systematic review.

    PubMed

    Hurn, Jane; Kneebone, Ian; Cropley, Mark

    2006-09-01

    Goal achievement has been considered to be an important measure of outcome by clinicians working with patients in physical and neurological rehabilitation settings. This systematic review was undertaken to examine the reliability, validity and sensitivity of goal setting and goal attainment scaling approaches when used with working age and older people. To review the reliability, validity and sensitivity of both goal setting and goal attainment scaling when employed as an outcome measure within a physical and neurological working age and older person rehabilitation environment, by examining the research literature covering the 36 years since goal-setting theory was proposed. Data sources included a computer-aided literature search of published studies examining the reliability, validity and sensitivity of goal setting/goal attainment scaling, with further references sourced from articles obtained through this process. There is strong evidence for the reliability, validity and sensitivity of goal attainment scaling. Empirical support was found for the validity of goal setting but research demonstrating its reliability and sensitivity is limited. Goal attainment scaling appears to be a sound measure for use in physical rehabilitation settings with working age and older people. Further work needs to be carried out with goal setting to establish its reliability and sensitivity as a measurement tool.

  7. Strategies for the generation, validation and application of in silico ADMET models in lead generation and optimization.

    PubMed

    Gleeson, Matthew Paul; Montanari, Dino

    2012-11-01

    The most desirable chemical starting point in drug discovery is a hit or lead with a good overall profile; where there are issues, a clear SAR strategy should be identifiable to minimize them. Filtering based on drug-likeness concepts is a first step, but more accurate theoretical methods are needed to i) estimate the biological profile of the molecule in question and ii) based on the underlying structure-activity relationships used by the model, estimate whether it is likely that the molecule in question can be altered to remove these liabilities. In this paper, the authors discuss the generation of ADMET models and their practical use in decision making. They discuss the issues surrounding data collation, experimental errors, the model assessment and validation steps, as well as the different types of descriptors and statistical models that can be used. This is followed by a discussion on how the model accuracy will dictate when and where it can be used in the drug discovery process. The authors also discuss how models can be developed to more effectively enable multiple-parameter optimization. Models can be applied in lead generation and lead optimization to i) rank-order a collection of hits, ii) prioritize the experimental assays needed for different hit series, iii) assess the likelihood of resolving a problem that might be present in a particular series in lead optimization, and iv) screen a virtual library based on a hit or lead series to assess the impact of diverse structural changes on the predicted properties.

  8. Generation of a suite of 3D computer-generated breast phantoms from a limited set of human subject data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, Christina M. L.; Palmeri, Mark L.; Department of Anesthesiology, Duke University Medical Center, Durham, North Carolina 27710

    2013-04-15

    Purpose: The authors previously reported on a three-dimensional computer-generated breast phantom, based on empirical human image data, including a realistic finite-element based compression model that was capable of simulating multimodality imaging data. The computerized breast phantoms are a hybrid of two phantom generation techniques, combining empirical breast CT (bCT) data with flexible computer graphics techniques. However, to date, these phantoms have been based on single human subjects. In this paper, the authors report on a new method to generate multiple phantoms, simulating additional subjects from the limited set of original dedicated breast CT data. The authors developed an image morphing technique to construct new phantoms by gradually transitioning between two human subject datasets, with the potential to generate hundreds of additional pseudoindependent phantoms from the limited bCT cases. The authors conducted a preliminary subjective assessment with a limited number of observers (n = 4) to illustrate how realistic the simulated images generated with the pseudoindependent phantoms appeared. Methods: Several mesh-based geometric transformations were developed to generate distorted breast datasets from the original human subject data. Segmented bCT data from two different human subjects were used as the 'base' and 'target' for morphing. Several combinations of transformations were applied to morph between the 'base' and 'target' datasets, such as changing the breast shape, rotating the glandular data, and changing the distribution of the glandular tissue. Following the morphing, regions of skin and fat were assigned to the morphed dataset in order to appropriately assign mechanical properties during the compression simulation. The resulting morphed breast was compressed using a finite element algorithm and simulated mammograms were generated using techniques described previously. Sixty-two simulated mammograms, generated from morphing three

  9. The 11-item Medication Adherence Reasons Scale: reliability and factorial validity among patients with hypertension in Malaysian primary healthcare settings

    PubMed Central

    Shima, Razatul; Farizah, Hairi; Majid, Hazreen Abdul

    2015-01-01

    INTRODUCTION The aim of this study was to assess the reliability and validity of a modified Malaysian version of the Medication Adherence Reasons Scale (MAR-Scale). METHODS In this cross-sectional study, the 15-item MAR-Scale was administered to 665 patients with hypertension who attended one of the four government primary healthcare clinics in the Hulu Langat and Klang districts of Selangor, Malaysia, between early December 2012 and end-March 2013. The construct validity was examined in two phases. Phase I consisted of translation of the MAR-Scale from English to Malay, a content validity check by an expert panel, a face validity check via a small preliminary test among patients with hypertension, and exploratory factor analysis (EFA). Phase II involved internal consistency reliability calculations and confirmatory factor analysis (CFA). RESULTS EFA verified five existing factors that were previously identified (i.e. issues with medication management, multiple medications, belief in medication, medication availability, and the patient’s forgetfulness and convenience), while CFA extracted four factors (medication availability issues were not extracted). The final modified MAR-Scale model, which had 11 items and a four-factor structure, provided good evidence of convergent and discriminant validities. Cronbach’s alpha coefficient was > 0.7, indicating good internal consistency of the items in the construct. The results suggest that the modified MAR-Scale has good internal consistencies and construct validity. CONCLUSION The validated modified MAR-Scale (Malaysian version) was found to be suitable for use among patients with hypertension receiving treatment in primary healthcare settings. However, the comprehensive measurement of other factors that can also lead to non-adherence requires further exploration. PMID:25902719

  10. Simulation of multi-photon emission isotopes using time-resolved SimSET multiple photon history generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Chih-Chieh; Lin, Hsin-Hon; Lin, Chang-Shiun

    Multiple-photon emitters, such as In-111 or Se-75, have enormous potential in the field of nuclear medicine imaging. For example, Se-75 can be used to investigate bile acid malabsorption and measure bile acid pool loss. The simulation system for emission tomography (SimSET) is a well-known Monte Carlo simulation (MCS) code in nuclear medicine, valued for its high computational efficiency. However, the current SimSET cannot simulate these isotopes due to the lack of modeling of complex decay schemes and the time-dependent decay process. To extend the versatility of SimSET for simulating those multi-photon emission isotopes, a time-resolved multiple photon history generator based on the SimSET code is developed in the present study. To develop the time-resolved SimSET (trSimSET) with a radionuclide decay process, the new MCS model introduces new features, including decay time information and photon time-of-flight information, into the code. The half-lives of energy states were tabulated from the Evaluated Nuclear Structure Data File (ENSDF) database. The MCS results indicate that the overall percent difference is less than 8.5% for all simulation trials as compared to GATE. To sum up, we demonstrated that the time-resolved SimSET multiple photon history generator can achieve accuracy comparable to GATE while keeping better computational efficiency. The new MCS code is very useful for studying multi-photon imaging with novel isotopes that requires the simulation of lifetimes and time-of-flight measurements.
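
    The time information that trSimSET adds hinges on sampling a decay time for each intermediate nuclear state from its tabulated half-life. The standard inverse-transform draw from the exponential decay law looks like this in Python (an illustration of the concept, not trSimSET's source):

        import numpy as np

        def sample_decay_times(half_life, n, rng=None):
            """Inverse-transform sampling of the exponential decay law:
            t = -(T_half / ln 2) * ln(1 - U), with U uniform on [0, 1)."""
            rng = rng or np.random.default_rng(0)
            tau = half_life / np.log(2.0)          # mean lifetime from the half-life
            return -tau * np.log1p(-rng.random(n))

        t = sample_decay_times(half_life=85.0e-9, n=100_000)  # e.g. a hypothetical 85 ns state
        print(np.median(t))                        # the median should sit near the half-life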

  11. Generating Ground Reference Data for a Global Impervious Surface Survey

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; De Colstoun, Eric Brown; Wolfe, Robert E.; Tan, Bin; Huang, Chengquan

    2012-01-01

    We are developing an approach for generating ground reference data in support of a project to produce a 30m impervious cover data set of the entire Earth for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. Since sufficient ground reference data for training and validation is not available from ground surveys, we are developing an interactive tool, called HSegLearn, to facilitate the photo-interpretation of 1 to 2 m spatial resolution imagery data, which we will use to generate the needed ground reference data at 30m. Through the submission of selected region objects and positive or negative examples of impervious surfaces, HSegLearn enables an analyst to automatically select groups of spectrally similar objects from a hierarchical set of image segmentations produced by the HSeg image segmentation program at an appropriate level of segmentation detail, and label these region objects as either impervious or nonimpervious.

  12. An Exploratory Factor Analysis and Construct Validity of the Resident Choice Assessment Scale with Paid Carers of Adults with Intellectual Disabilities and Challenging Behavior in Community Settings

    ERIC Educational Resources Information Center

    Ratti, Victoria; Vickerstaff, Victoria; Crabtree, Jason; Hassiotis, Angela

    2017-01-01

    Introduction: The Resident Choice Assessment Scale (RCAS) is used to assess choice availability for adults with intellectual disabilities (ID). The aim of the study was to explore the factor structure, construct validity, and internal consistency of the measure in community settings to further validate this tool. Method: 108 paid carers of adults…

  13. Procedure-specific assessment tool for flexible pharyngo-laryngoscopy: gathering validity evidence and setting pass-fail standards.

    PubMed

    Melchiors, Jacob; Petersen, K; Todsen, T; Bohr, A; Konge, Lars; von Buchwald, Christian

    2018-06-01

    The attainment of specific identifiable competencies is the primary measure of progress in the modern medical education system. The system therefore requires a feasible method for accurately assessing competence. Evidence of validity needs to be gathered before an assessment tool can be implemented in the training and assessment of physicians. According to contemporary validity theory, this evidence must be gathered from specific sources in a structured and rigorous manner. Flexible pharyngo-laryngoscopy (FPL) is central to the work of the otorhinolaryngologist. We aim to evaluate the flexible pharyngo-laryngoscopy assessment tool (FLEXPAT) created in a previous study and to establish a pass-fail level for proficiency. Eighteen physicians with different levels of experience (novices, intermediates, and experienced) were recruited to the study. Each performed an FPL on two patients. These procedures were video recorded, blinded, and assessed by two specialists. The score was expressed as a percentage of the maximum possible score. Cronbach's α was used to analyze the internal consistency of the data, and a generalizability analysis was performed. The scores of the three different groups were explored, and a pass-fail level was determined using the contrasting-groups standard-setting method. Internal consistency was strong, with a Cronbach's α of 0.86. We found a generalizability coefficient of 0.72, sufficient for moderate-stakes assessment. We found a significant difference between the novice and experienced groups (p < 0.001) and a strong correlation between experience and score (Pearson's r = 0.75). The pass-fail level was established at 72% of the maximum score. Applying this pass-fail level in the test population resulted in half of the intermediate group receiving a failing score. We gathered validity evidence for the FLEXPAT according to the contemporary framework as described by Messick. Our results support a claim of validity and are
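
    The contrasting-groups method cited above places the pass mark where the score distributions of the weaker and stronger groups cross; here is a minimal sketch under a normal approximation (all scores are synthetic, not the study's):

```python
import numpy as np
from scipy.stats import norm

# Illustrative contrasting-groups standard setting: the cut score sits where
# the two groups' fitted score densities intersect, balancing misclassification.
novice = np.array([41, 48, 52, 55, 58, 60])        # percent of max score
experienced = np.array([70, 74, 78, 81, 85, 90])

grid = np.linspace(0, 100, 10001)
pdf_nov = norm.pdf(grid, novice.mean(), novice.std(ddof=1))
pdf_exp = norm.pdf(grid, experienced.mean(), experienced.std(ddof=1))

# search for the crossing point between the two group means
mask = (grid > novice.mean()) & (grid < experienced.mean())
cut = grid[mask][np.argmin(np.abs(pdf_nov[mask] - pdf_exp[mask]))]
print(f"pass/fail cut score ~ {cut:.1f}% of max")
```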

  14. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    NASA Astrophysics Data System (ADS)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
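
    The piecewise idea described above can be sketched as follows (an assumed, simplified reading of the method in Python; min_piece, max_piece and the boundary-matching tolerance are illustrative choices, not values from the paper):

```python
import numpy as np

def generate_series(history: np.ndarray, length: int, rng,
                    min_piece: int = 6, max_piece: int = 48) -> np.ndarray:
    """Chain random-length slices of the historical series, joining pieces
    where the boundary state matches the current end of the new series."""
    out = [history[rng.integers(len(history))]]
    while len(out) < length:
        n = int(rng.integers(min_piece, max_piece + 1))
        # candidate start points whose state matches the current series end
        starts = np.flatnonzero(np.isclose(history[:-n], out[-1], atol=0.25))
        if starts.size == 0:            # no match: fall back to a random piece
            starts = np.arange(len(history) - n)
        s = int(rng.choice(starts))
        out.extend(history[s + 1 : s + n + 1])
    return np.asarray(out[:length])

# Synthetic "significant wave height"-like history with a seasonal cycle
rng = np.random.default_rng(1)
hs = 1.5 + 0.8 * np.sin(np.linspace(0, 60, 5000)) + rng.normal(0, 0.2, 5000)
synthetic = generate_series(hs, length=2000, rng=rng)
```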

  15. Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT

    PubMed Central

    Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.

    2011-01-01

    Purpose In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We have also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE for simulating projections of a hot sphere phantom. Results We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1: 2.73: 3.54: 7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole body phantom. Conclusions The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied for modeling the collimator-detector response with penetration and scatter effects for MPH in the quantitative reconstruction method. PMID:19779896

  16. Clinical validation of the 50 gene AmpliSeq Cancer Panel V2 for use on a next generation sequencing platform using formalin fixed, paraffin embedded and fine needle aspiration tumour specimens.

    PubMed

    Rathi, Vivek; Wright, Gavin; Constantin, Diana; Chang, Siok; Pham, Huong; Jones, Kerryn; Palios, Atha; Mclachlan, Sue-Anne; Conron, Matthew; McKelvie, Penny; Williams, Richard

    2017-01-01

    The advent of massively parallel sequencing has caused a paradigm shift in the ways cancer is treated, as personalised therapy becomes a reality. More and more laboratories are looking to introduce next generation sequencing (NGS) as a tool for mutational analysis, as this technology has many advantages compared to conventional platforms like Sanger sequencing. In Australia all massively parallel sequencing platforms are still considered in-house in vitro diagnostic tools by the National Association of Testing Authorities (NATA), and a comprehensive analytical validation of all assays, not just mere verification, is a strict requirement before accreditation can be granted for clinical testing on these platforms. Analytical validation of assays on NGS platforms can prove extremely challenging for pathology laboratories. Although there are many affordable and easily accessible NGS instruments available, there are as yet no standardised guidelines for clinical validation of NGS assays. We present an accreditation development procedure that was both comprehensive and applicable in a hospital laboratory setting for NGS services. This approach may also be applied to other NGS applications in service laboratories. Copyright © 2016 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.

  17. Atmospheric correction at AERONET locations: A new science and validation data set

    USGS Publications Warehouse

    Wang, Y.; Lyapustin, A.I.; Privette, J.L.; Morisette, J.T.; Holben, B.

    2009-01-01

    This paper describes an Aerosol Robotic Network (AERONET)-based Surface Reflectance Validation Network (ASRVN) and its data set of spectral surface bidirectional reflectance and albedo based on Moderate Resolution Imaging Spectroradiometer (MODIS) TERRA and AQUA data. The ASRVN is an operational data collection and processing system. It receives 50 × 50 km2 subsets of MODIS level 1B (L1B) data from the MODIS adaptive processing system and AERONET aerosol and water-vapor information. Then, it performs an atmospheric correction (AC) for about 100 AERONET sites based on accurate radiative-transfer theory with complex quality control of the input data. The ASRVN processing software consists of an L1B data gridding algorithm, a new cloud-mask (CM) algorithm based on a time-series analysis, and an AC algorithm using ancillary AERONET aerosol and water-vapor data. The AC is achieved by fitting the MODIS top-of-atmosphere measurements, accumulated for a 16-day interval, with theoretical reflectance parameterized in terms of the coefficients of the Li Sparse-Ross Thick (LSRT) model of the bidirectional reflectance factor (BRF). The ASRVN takes several steps to ensure high quality of results: 1) the filtering of opaque clouds by a CM algorithm; 2) the development of an aerosol filter to filter residual semitransparent and subpixel clouds, as well as cases with high inhomogeneity of aerosols in the processing area; 3) imposing the requirement of the consistency of the new solution with previously retrieved BRF and albedo; 4) rapid adjustment of the 16-day retrieval to the surface changes using the last day of measurements; and 5) development of a seasonal backup spectral BRF database to increase data coverage. The ASRVN provides a gapless or near-gapless coverage for the processing area. The gaps, caused by clouds, are filled most naturally with the latest solution for a given pixel. The ASRVN products include three parameters of the LSRT model (kL, kG, and kV), surface albedo
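
    For orientation, the LSRT model named above expresses the BRF as a linear combination of an isotropic term and two angular kernels; this is a sketch in the notation common to the MODIS BRDF literature (the kernel functions f_G and f_V are not reproduced here):

```latex
% Kernel form of the Li Sparse-Ross Thick (LSRT) BRF model (sketch):
% k_L is the isotropic (Lambertian) coefficient, f_G the geometric-optical
% kernel, f_V the volumetric-scattering kernel.
\mathrm{BRF}(\theta_s,\theta_v,\varphi) \;=\;
  k_L \;+\; k_G\, f_G(\theta_s,\theta_v,\varphi)
        \;+\; k_V\, f_V(\theta_s,\theta_v,\varphi)
```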

  18. Utility of the MMPI-2-RF (Restructured Form) Validity Scales in Detecting Malingering in a Criminal Forensic Setting: A Known-Groups Design

    ERIC Educational Resources Information Center

    Sellbom, Martin; Toomey, Joseph A.; Wygant, Dustin B.; Kucharski, L. Thomas; Duncan, Scott

    2010-01-01

    The current study examined the utility of the recently released Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) validity scales to detect feigned psychopathology in a criminal forensic setting. We used a known-groups design with the Structured Interview of Reported Symptoms (SIRS;…

  19. Generative diffeomorphic modelling of large MRI data sets for probabilistic template construction.

    PubMed

    Blaiotta, Claudia; Freund, Patrick; Cardoso, M Jorge; Ashburner, John

    2018-02-01

    In this paper we present a hierarchical generative model of medical image data, which can capture simultaneously the variability of both signal intensity and anatomical shapes across large populations. Such a model has a direct application for learning average-shaped probabilistic tissue templates in a fully automated manner. While in principle the generality of the proposed Bayesian approach makes it suitable to address a wide range of medical image computing problems, our work focuses primarily on neuroimaging applications. In particular we validate the proposed method on both real and synthetic brain MR scans including the cervical cord and demonstrate that it yields accurate alignment of brain and spinal cord structures, as compared to state-of-the-art tools for medical image registration. At the same time we illustrate how the resulting tissue probability maps can readily be used to segment, bias correct and spatially normalise unseen data, which are all crucial pre-processing steps for MR imaging studies. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Validating Innovative Renewable Energy Technologies: ESTCP Demonstrations at Two DoD Facilities

    DTIC Science & Technology

    2011-11-01

    With goals of 25% of energy consumed required to be from renewable energy by 2025, the DoD has set aggressive, yet achievable targets. With its array of land holdings, facilities, and environments, the potential for renewable energy generation on DoD lands is great. Reaching these goals will require

  1. Implementing the Science Assessment Standards: Developing and validating a set of laboratory assessment tasks in high school biology

    NASA Astrophysics Data System (ADS)

    Saha, Gouranga Chandra

    Very often a number of factors, especially time, space and money, deter many science educators from using inquiry-based, hands-on, laboratory practical tasks as alternative assessment instruments in science. A shortage of valid inquiry-based laboratory tasks for high school biology has been cited. Driven by this need, this study addressed the following three research questions: (1) How can laboratory-based performance tasks be designed and developed that are doable by students for whom they are designed/written? (2) Do student responses to the laboratory-based performance tasks validly represent at least some of the intended process skills that new biology learning goals want students to acquire? (3) Are the laboratory-based performance tasks psychometrically consistent as individual tasks and as a set? To answer these questions, three tasks were used from the six biology tasks initially designed and developed by an iterative process of trial testing. Analyses of data from 224 students showed that performance-based laboratory tasks that are doable by all students require a careful and iterative process of development. Although the students demonstrated more skill in performing than in planning and reasoning, their performances at the item level were very poor for some items. Possible reasons for the poor performances have been discussed and suggestions on how to remediate the deficiencies have been made. Empirical evidence for the validity and reliability of the instrument has been presented from both the classical and the modern validity criteria points of view. Limitations of the study have been identified. Finally, implications of the study and directions for further research have been discussed.

  2. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

    DOEpatents

    Gschwind, Michael K

    2013-04-16

    Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

  3. Evaluation of multiple hydraulic models in generating design/near-real time flood inundation extents under various geophysical settings

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Rajib, M. A.; Jafarzadegan, K.; Merwade, V.

    2015-12-01

    Application of land surface/hydrologic models within an operational flood forecasting system can provide the probable time of occurrence and magnitude of streamflow at specific locations along a stream. Creating a time-varying spatial extent of flood inundation and depth requires the use of a hydraulic or hydrodynamic model. Models differ in how they represent river geometry and surface roughness, which can lead to different outputs depending on the particular model being used. The result from a single hydraulic model provides just one possible realization of the flood extent, without capturing the uncertainty associated with the input or the model parameters. The objective of this study is to compare multiple hydraulic models toward generating ensemble flood inundation extents. Specifically, the relative performances of four hydraulic models, including AutoRoute, HEC-RAS, HEC-RAS 2D, and LISFLOOD, are evaluated under different geophysical conditions in several locations across the United States. By using streamflow output from the same hydrologic model (SWAT in this case), hydraulic simulations are conducted for three configurations: (i) hindcasting mode, using past observed weather data at a daily time scale, in which models are calibrated against USGS streamflow observations; (ii) validation mode, using near real-time weather data at a sub-daily time scale; and (iii) design mode, with extreme streamflow data having specific return periods. Model-generated inundation maps for observed flood events from both hindcasting and validation modes are compared with remotely sensed images, whereas the design mode outcomes are compared with corresponding FEMA-generated flood hazard maps. The comparisons presented here will give insights on the probable model-specific nature of biases and their relative advantages/disadvantages as components of an operational flood forecasting system.

  4. Multiyear Plan for Validation of EnergyPlus Multi-Zone HVAC System Modeling using ORNL's Flexible Research Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Im, Piljae; Bhandari, Mahabir S.; New, Joshua Ryan

    This document describes the Oak Ridge National Laboratory (ORNL) multiyear experimental plan for validation and uncertainty characterization of whole-building energy simulation for a multi-zone research facility using a traditional rooftop unit (RTU) as a baseline heating, ventilating, and air conditioning (HVAC) system. The project’s overarching objective is to increase the accuracy of energy simulation tools by enabling empirical validation of key inputs and algorithms. Doing so is required to inform the design of increasingly integrated building systems and to enable accountability for performance gaps between design and operation of a building. The project will produce documented data sets that can be used to validate key functionality in different energy simulation tools and to identify errors and inadequate assumptions in simulation engines so that developers can correct them. ASHRAE Standard 140, Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ASHRAE 2004), currently consists primarily of tests to compare different simulation programs with one another. This project will generate sets of measured data to enable empirical validation, incorporate these test data sets in an extended version of Standard 140, and apply these tests to the Department of Energy’s (DOE) EnergyPlus software (EnergyPlus 2016) to initiate the correction of any significant deficiencies. The fitness-for-purpose of the key algorithms in EnergyPlus will be established and demonstrated, and vendors of other simulation programs will be able to demonstrate the validity of their products. The data set will be equally applicable to validation of other simulation engines as well.

  5. The Abbott RealTime High Risk HPV test is a clinically validated human papillomavirus assay for triage in the referral population and use in primary cervical cancer screening in women 30 years and older: a review of validation studies.

    PubMed

    Poljak, Mario; Oštrbenk, Anja

    2013-01-01

    Human papillomavirus (HPV) testing has become an essential part of current clinical practice in the management of cervical cancer and precancerous lesions. We reviewed the most important validation studies of a next-generation real-time polymerase chain reaction (PCR)-based assay, the RealTime High Risk HPV test (RealTime) (Abbott Molecular, Des Plaines, IL, USA), for triage in referral population settings and for use in primary cervical cancer screening in women 30 years and older, published in peer-reviewed journals from 2009 to 2013. RealTime is designed to detect 14 high-risk HPV genotypes with concurrent distinction of HPV-16 and HPV-18 from 12 other HPV genotypes. The test was launched on the European market in January 2009 and is currently used in many laboratories worldwide for routine detection of HPV. Eight validation studies of RealTime in referral settings showed its consistently high absolute clinical sensitivity for both CIN2+ (range 88.3-100%) and CIN3+ (range 93.0-100%), as well as comparative clinical sensitivity relative to the currently most widely used HPV test: the Qiagen/Digene Hybrid Capture 2 HPV DNA Test (HC2). Due to the significantly different composition of the referral populations, RealTime absolute clinical specificity for CIN2+ and CIN3+ varied greatly across studies, but was comparable relative to HC2. Four validation studies of RealTime performance in cervical cancer screening settings showed its consistently high absolute clinical sensitivity for both CIN2+ and CIN3+, as well as comparative clinical sensitivity and specificity relative to HC2 and GP5+/6+ PCR. RealTime has been extensively evaluated in the last 4 years and can be considered clinically validated for triage in referral population settings and for use in primary cervical cancer screening in women 30 years and older.

  6. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogenous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.

  7. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    PubMed

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W

    2016-01-01

    External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogenous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
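
    The permutation logic these two records describe can be sketched as follows (a hedged reconstruction, not the authors' code; the labels and predictions are synthetic, and every shuffled subset is assumed to retain both outcome classes so the AUC is defined):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_gap_pvalue(y_dev, p_dev, y_val, p_val, n_perm=2000, seed=0):
    """Permutation p-value for the development-vs-validation c-statistic gap:
    pool both samples, reshuffle the set labels, and count how often the
    shuffled AUC gap is at least as extreme as the observed one."""
    rng = np.random.default_rng(seed)
    y = np.concatenate([y_dev, y_val])
    p = np.concatenate([p_dev, p_val])
    n_dev = len(y_dev)
    observed = roc_auc_score(y_dev, p_dev) - roc_auc_score(y_val, p_val)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(y))
        gap = (roc_auc_score(y[idx[:n_dev]], p[idx[:n_dev]])
               - roc_auc_score(y[idx[n_dev:]], p[idx[n_dev:]]))
        hits += abs(gap) >= abs(observed)
    return hits / n_perm
```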

  8. NASA TLA workload analysis support. Volume 2: Metering and spacing studies validation data

    NASA Technical Reports Server (NTRS)

    Sundstrom, J. L.

    1980-01-01

    Four sets of graphic reports--one for each of the metering and spacing scenarios--are presented. The complete data file from which the reports were generated is also given. The data were used to validate the detail tasks of both the pilot and copilot for the four metering and spacing scenarios. The output presents two measures of demand workload and a report showing task length and task interaction.

  9. The validity of visual acuity assessment using mobile technology devices in the primary care setting.

    PubMed

    O'Neill, Samuel; McAndrew, Darryl J

    2016-04-01

    The assessment of visual acuity is indicated in a number of clinical circumstances. It is commonly conducted through the use of a Snellen wall chart. Developments in mobile technology and its adoption by clinicians may provide more convenient methods of assessing visual acuity. Limited data exist on the validity of these devices and applications. The objective of this study was to evaluate the assessment of distance visual acuity using mobile technology devices against the commonly used 3-metre Snellen chart in a primary care setting. A prospective quantitative comparative study was conducted at a regional medical practice. The visual acuity of 60 participants was assessed on a Snellen wall chart and two mobile technology devices (iPhone, iPad). Visual acuity intervals were converted to logarithm of minimum angle of resolution (logMAR) scores and subjected to intraclass correlation coefficient (ICC) assessment. The results show a high level of general agreement between testing modalities (ICC 0.917 with a 95% confidence interval of 0.887-0.940). The high level of agreement of visual acuity results between the Snellen wall chart and both mobile technology devices suggests that clinicians can use this technology with confidence in the primary care setting.
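
    The logMAR conversion used in this comparison is a one-line formula; a small sketch (the 6-metre Snellen notation is assumed for illustration):

```python
import math

# logMAR = log10 of the minimum angle of resolution; for a Snellen fraction
# test_distance / letter_distance this reduces to log10 of its reciprocal.
def snellen_to_logmar(test_distance_m: float, letter_distance_m: float) -> float:
    return math.log10(letter_distance_m / test_distance_m)

print(snellen_to_logmar(6, 6))    # 6/6 vision  -> 0.0 logMAR
print(snellen_to_logmar(6, 12))   # 6/12 vision -> ~0.30 logMAR
print(snellen_to_logmar(6, 60))   # 6/60 vision -> 1.0 logMAR
```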

  10. The medline UK filter: development and validation of a geographic search filter to retrieve research about the UK from OVID medline.

    PubMed

    Ayiku, Lynda; Levay, Paul; Hudson, Tom; Craven, Jenny; Barrett, Elizabeth; Finnegan, Amy; Adams, Rachel

    2017-07-13

    A validated geographic search filter for the retrieval of research about the United Kingdom (UK) from bibliographic databases had not previously been published. To develop and validate a geographic search filter to retrieve research about the UK from OVID medline with high recall and precision. Three gold standard sets of references were generated using the relative recall method. The sets contained references to studies about the UK which had informed National Institute for Health and Care Excellence (NICE) guidance. The first and second sets were used to develop and refine the medline UK filter. The third set was used to validate the filter. Recall, precision and number-needed-to-read (NNR) were calculated using a case study. The validated medline UK filter demonstrated 87.6% relative recall against the third gold standard set. In the case study, the medline UK filter demonstrated 100% recall, 11.4% precision and a NNR of nine. A validated geographic search filter to retrieve research about the UK with high recall and precision has been developed. The medline UK filter can be applied to systematic literature searches in OVID medline for topics with a UK focus. © 2017 Crown copyright. Health Information and Libraries Journal © 2017 Health Libraries Group. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
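
    The three reported metrics follow directly from the retrieval counts; a small worked sketch with illustrative numbers chosen to reproduce the case-study percentages (the abstract does not give the raw counts):

```python
# Search-filter performance measures: recall, precision, number-needed-to-read.
retrieved_relevant = 8    # relevant UK records the filtered search found (assumed)
retrieved_total = 70      # all records the filtered search returned (assumed)
all_relevant = 8          # relevant UK records in the gold standard (assumed)

recall = retrieved_relevant / all_relevant        # 1.000 -> 100% recall
precision = retrieved_relevant / retrieved_total  # 0.114 -> 11.4% precision
nnr = 1 / precision                               # ~8.75 -> read ~9 per relevant hit
print(recall, round(precision, 3), round(nnr, 1))
```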

  11. Development and validation of a casemix classification to predict costs of specialist palliative care provision across inpatient hospice, hospital and community settings in the UK: a study protocol

    PubMed Central

    Guo, Ping; Dzingina, Mendwas; Firth, Alice M; Davies, Joanna M; Douiri, Abdel; O’Brien, Suzanne M; Pinto, Cathryn; Pask, Sophie; Higginson, Irene J; Eagar, Kathy; Murtagh, Fliss E M

    2018-01-01

    Introduction Provision of palliative care is inequitable with wide variations across conditions and settings in the UK. Lack of a standard way to classify by case complexity is one of the principal obstacles to addressing this. We aim to develop and validate a casemix classification to support the prediction of costs of specialist palliative care provision. Methods and analysis Phase I: A cohort study to determine the variables and potential classes to be included in a casemix classification. Data are collected from clinicians in palliative care services across inpatient hospice, hospital and community settings on: patient demographics, potential complexity/casemix criteria and patient-level resource use. Cost predictors are derived using multivariate regression and then incorporated into a classification using classification and regression trees. Internal validation will be conducted by bootstrapping to quantify any optimism in the predictive performance (calibration and discrimination) of the developed classification. Phase II: A mixed-methods cohort study across settings for external validation of the classification developed in phase I. Patient and family caregiver data will be collected longitudinally on demographics, potential complexity/casemix criteria and patient-level resource use. This will be triangulated with data collected from clinicians on potential complexity/casemix criteria and patient-level resource use, and with qualitative interviews with patients and caregivers about care provision across different settings. The classification will be refined on the basis of its performance in the validation data set. Ethics and dissemination The study has been approved by the National Health Service Health Research Authority Research Ethics Committee. The results are expected to be disseminated in 2018 through papers for publication in major palliative care journals; policy briefs for clinicians, commissioning leads and policy makers; and lay summaries for

  12. A reference data set of hillslope rainfall-runoff response, Panola Mountain Research Watershed, United States

    USGS Publications Warehouse

    Tromp-van Meerveld; James, A.L.; McDonnell, Jeffery J.; Peters, N.E.

    2008-01-01

    Although many hillslope hydrologic investigations have been conducted in different climate, topographic, and geologic settings, subsurface stormflow remains a poorly characterized runoff process. Few, if any, of the existing data sets from these hillslope investigations are available for use by the scientific community for model development and validation or conceptualization of subsurface stormflow. We present a high-resolution spatial and temporal rainfall-runoff data set generated from the Panola Mountain Research Watershed trenched experimental hillslope. The data set includes surface and subsurface (bedrock surface) topographic information and time series of lateral subsurface flow at the trench, rainfall, and subsurface moisture content (distributed soil moisture content and groundwater levels) from January to June 2002. Copyright 2008 by the American Geophysical Union.

  13. EOS Terra Validation Program

    NASA Technical Reports Server (NTRS)

    Starr, David

    2000-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low-altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra

  14. Use of the Environment and Policy Evaluation and Observation as a Self-Report Instrument (EPAO-SR) to measure nutrition and physical activity environments in child care settings: validity and reliability evidence.

    PubMed

    Ward, Dianne S; Mazzucca, Stephanie; McWilliams, Christina; Hales, Derek

    2015-09-26

    Early care and education (ECE) centers are important settings influencing young children's diet and physical activity (PA) behaviors. To better understand their impact on diet and PA behaviors as well as to evaluate public health programs aimed at ECE settings, we developed and tested the Environment and Policy Assessment and Observation - Self-Report (EPAO-SR), a self-administered version of the previously validated, researcher-administered EPAO. Development of the EPAO-SR instrument included modification of items from the EPAO, community advisory group and expert review, and cognitive interviews with center directors and classroom teachers. Reliability and validity data were collected across 4 days in 3-5 year old classrooms in 50 ECE centers in North Carolina. Center teachers and directors completed relevant portions of the EPAO-SR on multiple days according to a standardized protocol, and trained data collectors completed the EPAO for 4 days in the centers. Reliability and validity statistics calculated included percent agreement, kappa, correlation coefficients, coefficients of variation, deviations, mean differences, and intraclass correlation coefficients (ICC), depending on the response option of the item. Data demonstrated a range of reliability and validity evidence for the EPAO-SR instrument. Reporting from directors and classroom teachers was consistent and similar to the observational data. Items that produced strongest reliability and validity estimates included beverages served, outside time, and physical activity equipment, while items such as whole grains served and amount of teacher-led PA had lower reliability (observation and self-report) and validity estimates. To overcome lower reliability and validity estimates, some items need administration on multiple days. This study demonstrated appropriate reliability and validity evidence for use of the EPAO-SR in the field. The self-administered EPAO-SR is an advancement of the measurement of ECE

  15. Reliability and Validity of Survey Instruments to Measure Work-Related Fatigue in the Emergency Medical Services Setting: A Systematic Review.

    PubMed

    Patterson, P Daniel; Weaver, Matthew D; Fabio, Anthony; Teasley, Ellen M; Renn, Megan L; Curtis, Brett R; Matthews, Margaret E; Kroemer, Andrew J; Xun, Xiaoshuang; Bizhanova, Zhadyra; Weiss, Patricia M; Sequeira, Denisse J; Coppler, Patrick J; Lang, Eddy S; Higgins, J Stephen

    2018-02-15

    This study sought to systematically search the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. A systematic review design was used to search six databases, including one website. The research question guiding the search was developed a priori and registered with the PROSPERO database of systematic reviews: "Are there reliable and valid instruments for measuring fatigue among EMS personnel?" (2016:CRD42016040097). The primary outcome of interest was criterion-related validity. Important outcomes of interest included reliability (e.g., internal consistency), and indicators of sensitivity and specificity. Members of the research team independently screened records from the databases. Full-text articles were evaluated by adapting the Bolster and Rourke system for categorizing findings of systematic reviews, and the data abstracted from the body of literature were rated as favorable, unfavorable, mixed/inconclusive, or no impact. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) methodology was used to evaluate the quality of evidence. The search strategy yielded 1,257 unique records. Thirty-four unique experimental and non-experimental studies were determined relevant following full-text review. Nineteen studies reported on the reliability and/or validity of ten different fatigue survey instruments. Eighteen different studies evaluated the reliability and/or validity of four different sleepiness survey instruments. None of the retained studies reported sensitivity or specificity. Evidence quality was rated as very low across all outcomes. In this systematic review, limited evidence of the reliability and validity of 14 different survey instruments to assess the fatigue and/or sleepiness status of EMS personnel and related shift worker groups was identified.

  16. Developing an assessment of fire-setting to guide treatment in secure settings: the St Andrew's Fire and Arson Risk Instrument (SAFARI).

    PubMed

    Long, Clive G; Banyard, Ellen; Fulton, Barbara; Hollin, Clive R

    2014-09-01

    Arson and fire-setting are highly prevalent among patients in secure psychiatric settings, but there is an absence of valid and reliable assessment instruments and no evidence of a significant approach to intervention. To develop a semi-structured interview assessment specifically for fire-setting to augment structured assessments of risk and need. The extant literature was used to frame interview questions relating to the antecedents, behaviour and consequences necessary to formulate a functional analysis. Questions also covered readiness to change, fire-setting self-efficacy, the probability of future fire-setting, barriers to change, and understanding of fire-setting behaviour. The assessment concludes with indications for assessment and a treatment action plan. The inventory was piloted with a sample of women in secure care and was assessed for comprehensibility, reliability and validity. Staff rated the St Andrew's Fire and Arson Risk Instrument (SAFARI) as acceptable to patients and easy to administer. SAFARI was found to be comprehensible by over 95% of the general population, to have good acceptance, high internal reliability, substantial test-retest reliability and validity. SAFARI helps to provide a clear explanation of fire-setting in terms of the complex interplay of antecedents and consequences and facilitates the design of an individually tailored treatment programme in sympathy with a cognitive-behavioural approach. Further studies are needed to verify the reliability and validity of SAFARI with male populations and across settings.

  17. Validation of the regional climate model MAR over the CORDEX Africa domain and comparison with other regional models using unpublished data set

    NASA Astrophysics Data System (ADS)

    Prignon, Maxime; Agosta, Cécile; Kittel, Christoph; Fettweis, Xavier; Michel, Erpicum

    2016-04-01

    In the framework of the CORDEX project, we have applied the regional model MAR over the Africa domain at a resolution of 50 km. ERA-Interim and NCEP-NCAR reanalyses have been used as 6-hourly forcing at the MAR boundaries over 1950-2015. While MAR had already been validated over West Africa, this is the first time that MAR simulations have been carried out at the scale of the whole continent. Unpublished daily measurements covering the Sahel and areas further south, with a large set of variables, are used to validate MAR, other CORDEX-Africa RCMs and both reanalyses. Comparisons with the CRU and the ECA&D databases are also performed. The unpublished daily data set covers the period 1884-2006 and comes from 1460 stations. The measured variables are wind, evapotranspiration, relative humidity, insolation, rain, surface pressure, temperature, vapour pressure and visibility. It covers 23 countries: Algeria, Benin, Burkina, Canary Islands, Cap Verde, Central Africa, Chad, Congo, Ivory Coast, Gabon, Gambia, Ghana, Guinea, Guinea-Bissau, Mali, Mauritania, Morocco, Niger, Nigeria, Senegal, Sudan and Togo.

  18. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
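
    To illustrate the comparison statistics named above, here is a minimal Bland-Altman sketch in Python on synthetic assay values (the variant-allele-fraction framing and the injected error magnitudes are assumptions for illustration, not the paper's data):

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bland-Altman summary: per-pair means/differences, mean bias,
    and the 95% limits of agreement (bias +/- 1.96 SD of differences)."""
    diff = a - b                     # constant error shows up as mean bias
    mean = (a + b) / 2
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return mean, diff, bias, (bias - loa, bias + loa)

rng = np.random.default_rng(7)
ref = rng.uniform(0.05, 0.5, 40)                    # reference assay values
new = 1.05 * ref + 0.01 + rng.normal(0, 0.01, 40)   # proportional + constant error
_, _, bias, limits = bland_altman(new, ref)
print(f"bias={bias:.3f}, 95% LoA={limits[0]:.3f}..{limits[1]:.3f}")
```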

  19. Accurate Gaussian basis sets for atomic and molecular calculations obtained from the generator coordinate method with polynomial discretization.

    PubMed

    Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F

    2015-10-01

    Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock equations (GWHF). The discretization of the GWHF equations in this procedure is based on a mesh of points not equally distributed, in contrast with the original GCHF method. The results of atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets to be used in molecular calculations; the maximum error found when compared to numerical values is only 0.788 mHartree for indium. Some test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree compared to the cc-pV5Z basis sets are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good accordance with experimental and cc-pV5Z results. The most important point to be mentioned here is that our generator coordinate basis sets required only a tiny fraction of the computational time when compared to B3LYP/cc-pV5Z calculations.
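
    The generator coordinate ansatz the abstract builds on can be written compactly; this is a sketch in standard notation (the paper's specific polynomial mesh rule is not reproduced):

```latex
% Generator coordinate ansatz (sketch): each atomic orbital is an integral
% transform of Gaussians g over the exponent (generator coordinate) \alpha,
% discretized on a mesh \{\alpha_k\} that is not equally spaced.
\phi(\mathbf{r}) \;=\; \int_0^{\infty} f(\alpha)\, g(\mathbf{r};\alpha)\,\mathrm{d}\alpha
\;\approx\; \sum_{k=1}^{N} f(\alpha_k)\, g(\mathbf{r};\alpha_k)\, \Delta\alpha_k
```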

  20. Validation of the comprehensive ICF core sets for diabetes mellitus: a Malaysian perspective.

    PubMed

    Abdullah, Mohd Faudzi; Nor, Norsiah Mohd; Mohd Ali, Siti Zubaidah; Ismail Bukhary, Norizzati Bukhary; Amat, Azlin; Latif, Lydia Abdul; Hasnan, Nazirah; Omar, Zaliha

    2011-04-01

    Diabetes mellitus (DM) is a chronic disease that is prevalent in many countries. The prevalence of DM is on the rise, and its complications pose a heavy burden on healthcare systems and on patients' quality of life worldwide. This is a multicentre, cross-sectional study involving 5 Health Clinics conducted by Family Medicine Specialists in Malaysia. A convenience sample of 100 respondents with DM was selected. The International Classification of Functioning, Disability and Health (ICF)-based measures were collected using the Comprehensive Core Set for DM. The SF-36, self-administered forms and a comorbidity questionnaire (SCQ) were also used. Ninety-seven percent had Type 2 DM and 3% had Type 1 DM. The mean duration of DM was 6 years. Body functions related to physical health, including exercise tolerance (b455), general physical endurance (b4550), aerobic capacity (b4551) and fatiguability (b4552), were the most affected. For body structures, the structure of the pancreas (s550) was the most affected. In the ICF component of activities and participation, limitation in sports (d9201) was the most affected, followed by driving (d475), intimate relationships (d770), handling stress and other psychological demands (d240) and moving around (d455). In the environmental category, only 7% (e355 and e450) were documented as being a relevant factor by more than 90% of the patients. The content validity of the comprehensive ICF Core Set for DM in the Malaysian population was identified, and the results show that physical and mental functioning were impaired, in contrast to the respondents' perception that they lead healthy lifestyles.

  1. IDG - INTERACTIVE DIF GENERATOR

    NASA Technical Reports Server (NTRS)

    Preheim, L. E.

    1994-01-01

    The Interactive DIF Generator (IDG) utility is a tool used to generate and manipulate Directory Interchange Format files (DIF). Its purpose as a specialized text editor is to create and update DIF files which can be sent to NASA's Master Directory, also referred to as the International Global Change Directory at Goddard. Many government and university data systems use the Master Directory to advertise the availability of research data. The IDG interface consists of a set of four windows: (1) the IDG main window; (2) a text editing window; (3) a text formatting and validation window; and (4) a file viewing window. The IDG main window starts up the other windows and contains a list of valid keywords. The keywords are loaded from a user-designated file and selected keywords can be copied into any active editing window. Once activated, the editing window designates the file to be edited. Upon switching from the editing window to the formatting and validation window, the user has options for making simple changes to one or more files such as inserting tabs, aligning fields, and indenting groups. The viewing window is a scrollable read-only window that allows fast viewing of any text file. IDG is an interactive tool and requires a mouse or a trackball to operate. IDG uses the X Window System to build and manage its interactive forms, and also uses the Motif widget set and runs under Sun UNIX. IDG is written in C-language for Sun computers running SunOS. This package requires the X Window System, Version 11 Revision 4, with OSF/Motif 1.1. IDG requires 1.8Mb of hard disk space. The standard distribution medium for IDG is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The program was developed in 1991 and is a copyrighted work with all copyright vested in NASA. SunOS is a trademark of Sun Microsystems, Inc. X Window System is a trademark of Massachusetts Institute of Technology. OSF/Motif is a

  2. Validation of the Gratitude Questionnaire in Filipino Secondary School Students.

    PubMed

    Valdez, Jana Patricia M; Yang, Weipeng; Datu, Jesus Alfonso D

    2017-10-11

    Most studies have assessed the psychometric properties of the Gratitude Questionnaire - Six-Item Form (GQ-6) in Western contexts, while very little research has explored the applicability of this scale in non-Western settings. To address this gap, the aim of the study was to examine the factorial validity and gender invariance of the Gratitude Questionnaire in the Philippines through a construct validation approach. A total of 383 Filipino high school students participated in the research. In terms of within-network construct validity, results of confirmatory factor analyses revealed that the five-item version of the questionnaire (GQ-5) had better fit than the original six-item version of the gratitude questionnaire. The scores from the GQ-5 also exhibited invariance across gender. Between-network construct validation showed that gratitude was associated with higher levels of academic achievement (β = .46, p <.001), autonomous motivation (β = .73, p <.001), and controlled motivation (β = .28, p <.01). Conversely, gratitude was linked to a lower degree of amotivation (β = -.51, p <.001). Theoretical and practical implications are discussed.

  3. Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation

    PubMed Central

    2011-01-01

    terms/abbreviations and covers the largest set of UMLS Semantic Types. Furthermore, the MSH WSD data set has been generated automatically reusing already existing annotations and, therefore, can be regenerated from subsequent UMLS versions. PMID:21635749

  4. Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation.

    PubMed

    Jimeno-Yepes, Antonio J; McInnes, Bridget T; Aronson, Alan R

    2011-06-02

    set of UMLS Semantic Types. Furthermore, the MSH WSD data set has been generated automatically reusing already existing annotations and, therefore, can be regenerated from subsequent UMLS versions.

  5. Discovery and Validation of Novel Expression Signature for Postcystectomy Recurrence in High-Risk Bladder Cancer

    PubMed Central

    Lam, Lucia L.; Ghadessi, Mercedeh; Erho, Nicholas; Vergara, Ismael A.; Alshalalfa, Mohammed; Buerki, Christine; Haddad, Zaid; Sierocinski, Thomas; Triche, Timothy J.; Skinner, Eila C.; Davicioni, Elai; Daneshmand, Siamak; Black, Peter C.

    2014-01-01

    Background Nearly half of muscle-invasive bladder cancer patients succumb to their disease following cystectomy. Selecting candidates for adjuvant therapy is currently based on clinical parameters with limited predictive power. This study aimed to develop and validate genomic-based signatures that can better identify patients at risk for recurrence than clinical models alone. Methods Transcriptome-wide expression profiles were generated using 1.4 million feature-arrays on archival tumors from 225 patients who underwent radical cystectomy and had muscle-invasive and/or node-positive bladder cancer. Genomic (GC) and clinical (CC) classifiers for predicting recurrence were developed on a discovery set (n = 133). Performances of GC, CC, an independent clinical nomogram (IBCNC), and genomic-clinicopathologic classifiers (G-CC, G-IBCNC) were assessed in the discovery and independent validation (n = 66) sets. GC was further validated on four external datasets (n = 341). Discrimination and prognostic abilities of classifiers were compared using area under receiver-operating characteristic curves (AUCs). All statistical tests were two-sided. Results A 15-feature GC was developed on the discovery set with area under curve (AUC) of 0.77 in the validation set. This was higher than individual clinical variables, IBCNC (AUC = 0.73), and comparable to CC (AUC = 0.78). Performance was improved upon combining GC with clinical nomograms (G-IBCNC, AUC = 0.82; G-CC, AUC = 0.86). G-CC high-risk patients had elevated recurrence probabilities (P < .001), with GC being the best predictor by multivariable analysis (P = .005). Genomic-clinicopathologic classifiers outperformed clinical nomograms by decision curve and reclassification analyses. GC performed the best in validation compared with seven prior signatures. GC markers remained prognostic across four independent datasets. Conclusions The validated genomic-based classifiers outperform clinical models for predicting postcystectomy

  6. Chemical name extraction based on automatic training data generation and rich feature set.

    PubMed

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier to this task is the difficulty of obtaining a sizable, good-quality data set with which to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.
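
    For illustration only, the following sketch shows the general idea behind dictionary-driven training-data generation: entries from an (incomplete) dictionary of chemical names are spliced into sentence templates, and the inserted spans are labeled automatically. The dictionary, templates, and BIO tagging scheme are invented stand-ins, not the authors' actual generator (Python):

        import random

        # Toy stand-ins; the paper's generator models name structure statistically.
        DICTIONARY = ["acetone", "benzene", "2-propanol", "sodium chloride"]
        TEMPLATES = [
            "The sample was dissolved in {} before analysis .",
            "A reaction between {} and {} was observed at room temperature .",
        ]

        def make_document(rng):
            """Fill a random template with dictionary names; emit (token, BIO-label) pairs."""
            pieces = rng.choice(TEMPLATES).split("{}")
            tokens, labels = [], []
            for i, piece in enumerate(pieces):
                for word in piece.split():
                    tokens.append(word)
                    labels.append("O")
                if i < len(pieces) - 1:  # fill the next template slot with a name
                    for j, word in enumerate(rng.choice(DICTIONARY).split()):
                        tokens.append(word)
                        labels.append("B-CHEM" if j == 0 else "I-CHEM")
            return list(zip(tokens, labels))

        rng = random.Random(42)
        for token, label in make_document(rng):
            print(f"{token}\t{label}")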

  7. A computational model-based validation of Guyton's analysis of cardiac output and venous return curves

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Cohen, R. J.; Mark, R. G.

    2002-01-01

    Guyton developed a popular approach for understanding the factors responsible for cardiac output (CO) regulation in which 1) the heart-lung unit and systemic circulation are independently characterized via CO and venous return (VR) curves, and 2) average CO and right atrial pressure (RAP) of the intact circulation are predicted by graphically intersecting the curves. However, this approach is virtually impossible to verify experimentally. We theoretically evaluated the approach with respect to a nonlinear, computational model of the pulsatile heart and circulation. We developed two sets of open circulation models to generate CO and VR curves, differing by the manner in which average RAP was varied. One set applied constant RAPs, while the other set applied pulsatile RAPs. Accurate prediction of intact, average CO and RAP was achieved only by intersecting the CO and VR curves generated with pulsatile RAPs because of the pulsatility and nonlinearity (e.g., systemic venous collapse) of the intact model. The CO and VR curves generated with pulsatile RAPs were also practically independent. This theoretical study therefore supports the validity of Guyton's graphical analysis.

  8. Rectification of elemental image set and extraction of lens lattice by projective image transformation in integral imaging.

    PubMed

    Hong, Keehoon; Hong, Jisoo; Jung, Jae-Hyun; Park, Jae-Hyeung; Lee, Byoungho

    2010-05-24

    We propose a new method for rectifying geometrical distortion in the elemental image set and extracting accurate lens lattice lines by projective image transformation. The information on distortion in the acquired elemental image set is found by the Hough transform algorithm. With this initial information on the distortions, the acquired elemental image set is rectified automatically, without prior knowledge of the pickup system characteristics, by a stratified image transformation procedure. Computer-generated elemental image sets with deliberately introduced distortion are used to verify the proposed rectification method. Experimentally captured elemental image sets are optically reconstructed before and after rectification by the proposed method. The experimental results support the validity of the proposed method, with high accuracy of image rectification and lattice extraction.

  9. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests were performed primarily for the use of code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (i) mode blockage, (ii) liner insertion loss, (iii) short ducts, and (iv) mode reflection.

  11. The Chemical Validation and Standardization Platform (CVSP): large-scale automated validation of chemical structure datasets.

    PubMed

    Karapetyan, Karen; Batchelor, Colin; Sharpe, David; Tkachenko, Valery; Williams, Antony J

    2015-01-01

    There are presently hundreds of online databases hosting millions of chemical compounds and associated data. As a result of the number of cheminformatics software tools that can be used to produce the data, subtle differences between the various cheminformatics platforms, as well as the naivety of the software users, a myriad of issues can exist with chemical structure representations online. In order to help facilitate the validation and standardization of chemical structure datasets from various sources, we have delivered a freely available internet-based platform to the community for the processing of chemical compound datasets. The Chemical Validation and Standardization Platform (CVSP) both validates and standardizes chemical structure representations according to sets of systematic rules. The chemical validation algorithms detect issues with submitted molecular representations using pre-defined or user-defined dictionary-based molecular patterns that are chemically suspicious or potentially require manual review. Each identified issue is assigned one of three levels of severity - Information, Warning, and Error - in order to conveniently inform the user of the need to browse and review subsets of their data. The validation process includes validation of atoms and bonds (e.g., flagging query atoms and bonds), valences, and stereochemistry. The standard form for submitting collections of data, the SDF file, allows the user to map the data fields to predefined CVSP fields for the purpose of cross-validating the associated SMILES and InChIs with the connection tables contained within the SDF file. This platform has been applied to the analysis of a large number of data sets prepared for deposition to our ChemSpider database and in preparation of data for the Open PHACTS project. In this work we review the results of the automated validation of the DrugBank dataset, a popular drug and drug target database utilized by the community, and the ChEMBL 17 data set
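
    As a minimal sketch of the kind of SMILES/InChI cross-validation described above (not CVSP's actual code), one can recompute the InChI from the SMILES with an open-source toolkit such as RDKit and compare it with the declared InChI field, assigning a severity level to any mismatch (Python, assuming RDKit is installed):

        from rdkit import Chem

        def cross_validate(smiles, declared_inchi):
            """Flag inconsistencies between a SMILES string and a declared InChI."""
            mol = Chem.MolFromSmiles(smiles)
            if mol is None:
                return "Error: SMILES failed to parse"
            computed = Chem.MolToInchi(mol)
            if computed != declared_inchi:
                return f"Warning: InChI mismatch ({computed} vs. {declared_inchi})"
            return "Information: SMILES and InChI describe the same structure"

        # Ethanol, with a matching declared InChI.
        print(cross_validate("CCO", "InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3"))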

  12. Introduction to Bayesian statistical approaches to compositional analyses of transgenic crops 1. Model validation and setting the stage.

    PubMed

    Harrison, Jay M; Breeze, Matthew L; Harrigan, George G

    2011-08-01

    Statistical comparisons of compositional data generated on genetically modified (GM) crops and their near-isogenic conventional (non-GM) counterparts typically rely on classical significance testing. This manuscript presents an introduction to Bayesian methods for compositional analysis along with recommendations for model validation. The approach is illustrated using protein and fat data from two herbicide tolerant GM soybeans (MON87708 and MON87708×MON89788) and a conventional comparator grown in the US in 2008 and 2009. Guidelines recommended by the US Food and Drug Administration (FDA) in conducting Bayesian analyses of clinical studies on medical devices were followed. This study is the first Bayesian approach to GM and non-GM compositional comparisons. The evaluation presented here supports a conclusion that a Bayesian approach to analyzing compositional data can provide meaningful and interpretable results. We further describe the importance of method validation and approaches to model checking if Bayesian approaches to compositional data analysis are to be considered viable by scientists involved in GM research and regulation. Copyright © 2011 Elsevier Inc. All rights reserved.
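
    The flavor of such a Bayesian comparison can be sketched as follows: posterior distributions are estimated for the GM and conventional group means of a single analyte, and the comparison is summarized by the posterior of their difference. This is a minimal illustration with invented values, assuming the PyMC library; it is not the FDA-guided model of the paper (Python):

        import numpy as np
        import pymc as pm

        # Hypothetical protein measurements (% dry weight) for a GM soybean and
        # its conventional comparator; values are made up for illustration.
        gm = np.array([38.1, 37.5, 39.0, 38.4, 37.9])
        conv = np.array([37.2, 36.9, 38.1, 37.5, 37.0])

        with pm.Model():
            mu = pm.Normal("mu", mu=38.0, sigma=5.0, shape=2)   # group means
            sigma = pm.HalfNormal("sigma", sigma=2.0)           # shared spread
            pm.Normal("gm_obs", mu=mu[0], sigma=sigma, observed=gm)
            pm.Normal("conv_obs", mu=mu[1], sigma=sigma, observed=conv)
            diff = pm.Deterministic("diff", mu[0] - mu[1])      # GM minus conventional
            idata = pm.sample(2000, tune=1000, random_seed=1)

        # The posterior of the mean difference summarizes the compositional comparison.
        print(idata.posterior["diff"].mean().item())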

  13. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  14. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, L.C.; Deen, J.R.; Woodruff, W.L.

    1995-02-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  15. Nematode.net update 2011: addition of data sets and tools featuring next-generation sequencing data

    PubMed Central

    Martin, John; Abubucker, Sahar; Heizer, Esley; Taylor, Christina M.; Mitreva, Makedonka

    2012-01-01

    Nematode.net (http://nematode.net) has been a publicly available resource for studying nematodes for over a decade. In the past 3 years, we reorganized Nematode.net to provide more user-friendly navigation through the site, a necessity due to the explosion of data from next-generation sequencing platforms. Organism-centric portals containing dynamically generated data are available for over 56 different nematode species. Next-generation data have been added to the various data-mining portals hosted on the site, including NemaBLAST and NemaBrowse. The NemaPath metabolic pathway viewer builds associations using KOs, rather than ECs, to provide more accurate and fine-grained descriptions of proteins. Two new features for data analysis and comparative genomics have been added to the site. NemaSNP enables the user to perform population genetics studies in various nematode populations using next-generation sequencing data. HelmCoP (Helminth Control and Prevention), an independent component of Nematode.net, provides an integrated resource for the storage, annotation and comparative genomics of helminth genomes, to aid in learning more about nematode genomes as well as in drug, pesticide, vaccine and drug target discovery. With this update, Nematode.net will continue to realize its original goal to disseminate diverse bioinformatic data sets and provide analysis tools to the broad scientific community in a useful and user-friendly manner. PMID:22139919

  16. Multisensor system for toxic gases detection generated on indoor environments

    NASA Astrophysics Data System (ADS)

    Durán, C. M.; Monsalve, P. A. G.; Mosquera, C. J.

    2016-11-01

    This work describes a wireless multisensor system for detecting toxic gases generated in indoor environments (e.g., underground coal mines). The artificial multisensor system proposed in this study was built from six low-cost chemical gas sensors (MQ series) with overlapping sensitivities for detecting hazardous gases in the air. A statistical parameter was applied to the data set, and two pattern recognition methods, Principal Component Analysis (PCA) and Discriminant Function Analysis (DFA), were used for feature selection. The toxic gas categories were then classified with a Probabilistic Neural Network (PNN) in order to validate the results obtained previously. Tests were carried out to verify the feasibility of the application through a wireless communication model, which allowed the sensor signals to be monitored and stored for appropriate analysis. The success rate in discriminating the measurements was 100%, using the artificial neural network with leave-one-out as the cross-validation method.
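
    A minimal sketch of this pipeline, feature reduction followed by classification scored with leave-one-out cross-validation, can be written with scikit-learn. The sensor readings below are synthetic, and a k-nearest-neighbour classifier stands in for the probabilistic neural network (Python):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Hypothetical steady-state responses of six MQ sensors (columns) to
        # repeated exposures of three gas categories (labels); values are synthetic.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(loc, 0.3, size=(20, 6)) for loc in (1.0, 2.0, 3.0)])
        y = np.repeat([0, 1, 2], 20)

        # PCA for feature reduction, then a classifier scored with leave-one-out
        # cross-validation, mirroring the paper's pipeline.
        model = make_pipeline(StandardScaler(), PCA(n_components=2), KNeighborsClassifier(3))
        scores = cross_val_score(model, X, y, cv=LeaveOneOut())
        print(f"leave-one-out accuracy: {scores.mean():.2%}")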

  17. Analysis of genetic association using hierarchical clustering and cluster validation indices.

    PubMed

    Pagnuco, Inti A; Pastore, Juan I; Abras, Guillermo; Brun, Marcel; Ballarin, Virginia L

    2017-10-01

    It is usually assumed that co-expressed genes suggest co-regulation in the underlying regulatory network. Determining sets of co-expressed genes, based on some criterion of similarity, is therefore an important task. It is usually performed by clustering algorithms, where the genes are clustered into meaningful groups based on their expression values in a set of experiments. In this work, we propose a method to find sets of co-expressed genes based on cluster validation indices as a measure of similarity for individual gene groups, with a combination of variants of hierarchical clustering used to generate the candidate groups. We evaluated its ability to retrieve significant sets on simulated correlated data and real genomics data, with performance measured by the ability to detect co-regulated sets relative to a full search. Additionally, we analyzed the quality of the best-ranked groups using an online bioinformatics tool that provides network information for the selected genes. Copyright © 2017 Elsevier Inc. All rights reserved.
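
    The core idea, generating candidate groups from variants of hierarchical clustering and scoring them with a cluster validation index, can be sketched as follows. The expression matrix is synthetic, and the silhouette index stands in for whichever validation indices the authors used (Python, assuming SciPy and scikit-learn):

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from sklearn.metrics import silhouette_score

        # Synthetic expression matrix: 30 genes x 8 experiments, with two
        # co-expressed blocks planted for illustration.
        rng = np.random.default_rng(1)
        base = rng.normal(size=(2, 8))
        X = np.vstack([base[0] + rng.normal(0, 0.2, (15, 8)),
                       base[1] + rng.normal(0, 0.2, (15, 8))])

        # Generate candidate groupings from variants of hierarchical clustering
        # (different linkages), then rank the cuts by a cluster validation index.
        best = None
        for method in ("average", "complete", "ward"):
            Z = linkage(X, method=method)
            for k in range(2, 6):
                labels = fcluster(Z, t=k, criterion="maxclust")
                score = silhouette_score(X, labels)  # validation index as similarity
                if best is None or score > best[0]:
                    best = (score, method, k)

        print(f"best cut: linkage={best[1]}, k={best[2]}, silhouette={best[0]:.2f}")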

  18. Validation Data and Model Development for Fuel Assembly Response to Seismic Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardet, Philippe; Ricciardi, Guillaume

    2016-01-31

    Vibrations are inherently present in nuclear reactors, especially in cores and steam generators of pressurized water reactors (PWR). They can have significant effects on local heat transfer and on wear and tear in the reactor, and they often set safety margins. The simulation of these multiphysics phenomena from first principles requires the coupling of several codes, which is one of the most challenging tasks in modern computer simulation. Here, an ambitious multiphysics, multidisciplinary validation campaign is conducted. It relied on an integrated team of experimentalists and code developers to acquire benchmark and validation data for fluid-structure interaction codes. Data are focused on PWR fuel bundle behavior during seismic transients.

  19. Global precipitation measurements for validating climate models

    NASA Astrophysics Data System (ADS)

    Tapiador, F. J.; Navarro, A.; Levizzani, V.; García-Ortega, E.; Huffman, G. J.; Kidd, C.; Kucera, P. A.; Kummerow, C. D.; Masunaga, H.; Petersen, W. A.; Roca, R.; Sánchez, J.-L.; Tao, W.-K.; Turk, F. J.

    2017-11-01

    The advent of global precipitation data sets with increasing temporal span has made it possible to use them for validating climate models. To fulfill the requirement of global coverage, existing products integrate satellite-derived retrievals from many sensors with direct ground observations (gauges, disdrometers, radars), which are used as the reference for the satellites. While the resulting products can be deemed the best available source of quality validation data, awareness of the limitations of such data sets is important to avoid drawing wrong or unsubstantiated conclusions when assessing climate model abilities. This paper provides guidance on the use of precipitation data sets for climate research, including model validation and verification for improving physical parameterizations. The strengths and limitations of the data sets for climate modeling applications are presented, and a protocol for quality assurance of both observational databases and models is discussed. The paper helps elaborate on the recent IPCC AR5 acknowledgment of large observational uncertainties in precipitation observations for climate model validation.

  20. Brazilian validation of the Alberta Infant Motor Scale.

    PubMed

    Valentini, Nadia Cristina; Saccani, Raquel

    2012-03-01

    The Alberta Infant Motor Scale (AIMS) is a well-known motor assessment tool used to identify potential delays in infants' motor development. Although Brazilian researchers and practitioners have used the AIMS in laboratories and clinical settings, its translation into Portuguese and validation for the Brazilian population had yet to be investigated. This study aimed to translate and validate all AIMS items with respect to internal consistency and content, criterion, and construct validity. A cross-sectional and longitudinal design was used. A cross-cultural translation procedure was used to generate a Brazilian-Portuguese version of the AIMS. In addition, a validation process was conducted involving 22 professionals and 766 Brazilian infants (aged 0-18 months). The results demonstrated language clarity and internal consistency for the motor criteria (motor development score, α=.90; prone, α=.85; supine, α=.92; sitting, α=.84; and standing, α=.86). The analysis also revealed high discriminative power in identifying typical and atypical development (motor development score, P<.001; percentile, P=.04; classification criterion, χ(2)=6.03; P=.05). Temporal stability was observed (P=.07; rho=.85, P<.001), and predictive power (P<.001) was limited to the group of infants aged 3 to 9 months. The limited predictive validity may have been due to the restricted time over which the groups were followed longitudinally. In sum, the translated version of the AIMS presented adequate validity and reliability.

  1. Engineering Software Suite Validates System Design

    NASA Technical Reports Server (NTRS)

    2007-01-01

    EDAptive Computing Inc.'s (ECI) EDAstar engineering software tool suite, created to capture and validate system design requirements, was significantly funded by NASA's Ames Research Center through five Small Business Innovation Research (SBIR) contracts. These programs specifically developed Syscape, used to capture executable specifications of multi-disciplinary systems, and VectorGen, used to automatically generate tests to ensure system implementations meet specifications. According to the company, the VectorGen tests considerably reduce the time and effort required to validate implementation of components, thereby ensuring their safe and reliable operation. EDASHIELD, an additional product offering from ECI, can be used to diagnose, predict, and correct errors after a system has been deployed using EDASTAR-created models. Initial commercialization for EDASTAR included application by a large prime contractor in a military setting, and customers include various branches within the U.S. Department of Defense, industry giants like the Lockheed Martin Corporation, Science Applications International Corporation, and Ball Aerospace and Technologies Corporation, as well as NASA's Langley and Glenn Research Centers.

  2. Development and Validation of a Unidimensional Maltreatment Scale in the Add Health Data Set

    ERIC Educational Resources Information Center

    Marszalek, Jacob M.; Hamilton, Jessica L.

    2012-01-01

    Four maltreatment items were examined from Wave III (N = 13,516) of the National Longitudinal Study of Adolescent Health. Item analysis, confirmatory factor analysis, cross-validation, reliability estimates, and convergent validity coefficients strongly supported the validity of using the four items as a unidimensional composite. Implications for…

  3. Research and Improvement on Characteristics of Emergency Diesel Generating Set Mechanical Support System in Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Zhe, Yang

    2017-06-01

    Emergency diesel generating sets in nuclear power plants often suffer mechanical problems, which pose a great threat to nuclear safety. By analyzing the factors behind these mechanical failures, the existing defects in the mechanical support system design are identified, and it is shown how the original design approach has misled maintenance and retrofit work in the field. This paper analyzes the base support design of the diesel generator set, the main pipe support design, and the support design of the supercharger, an important component. It points out specific design flaws and shortcomings, and proposes targeted improvement programs. Through the implementation of these improvements, the vibration level of the unit and the mechanical failure rate are effectively reduced. The work also provides guidance for the design, maintenance, and renovation of diesel generator mechanical support systems in nuclear power plants in the future.

  4. Physiological responses and external validity of a new setting for taekwondo combat simulation.

    PubMed

    Hausen, Matheus; Soares, Pedro Paulo; Araújo, Marcus Paulo; Porto, Flávia; Franchini, Emerson; Bridge, Craig Alan; Gurgel, Jonas

    2017-01-01

    Combat simulations have served as an alternative framework to study the cardiorespiratory demands of the activity in combat sports, but this setting imposes rule-restrictions that may compromise the competitiveness of the bouts. The aim of this study was to assess the cardiorespiratory responses to a full-contact taekwondo combat simulation using a safe and externally valid competitive setting. Twelve male national-level taekwondo athletes visited the laboratory on two separate occasions. On the first visit, anthropometric and running cardiopulmonary exercise assessments were performed. In the following two to seven days, participants performed a full-contact combat simulation, using a specifically designed gas analyser protector. Oxygen uptake (VO2), heart rate (HR) and capillary blood lactate ([La-]) measurements were obtained. Time-motion analysis was performed to compare activity profiles. The simulation yielded broadly comparable activity profiles to those performed in competition, a mean VO2 of 36.6 ± 3.9 ml.kg-1.min-1 (73 ± 6% VO2PEAK) and mean HR of 177 ± 10 beats.min-1 (93 ± 5% HRPEAK). A peak VO2 of 44.8 ± 5.0 ml.kg-1.min-1 (89 ± 5% VO2PEAK), a peak heart rate of 190 ± 13 beats.min-1 (98 ± 3% HRmax) and a peak [La-] of 12.3 ± 2.9 mmol.L-1 were elicited by the bouts. Regarding time-motion analysis, the combat simulation presented a similar exchange time, a shorter preparation time and a longer exchange-to-preparation ratio. Taekwondo combats capturing the full-contact competitive elements of a bout elicit moderate to high cardiorespiratory demands on the competitors. These data are valuable to assist preparatory strategies within the sport.

  5. Physiological responses and external validity of a new setting for taekwondo combat simulation

    PubMed Central

    2017-01-01

    Combat simulations have served as an alternative framework to study the cardiorespiratory demands of the activity in combat sports, but this setting imposes rule-restrictions that may compromise the competitiveness of the bouts. The aim of this study was to assess the cardiorespiratory responses to a full-contact taekwondo combat simulation using a safe and externally valid competitive setting. Twelve male national-level taekwondo athletes visited the laboratory on two separate occasions. On the first visit, anthropometric and running cardiopulmonary exercise assessments were performed. In the following two to seven days, participants performed a full-contact combat simulation, using a specifically designed gas analyser protector. Oxygen uptake (VO2), heart rate (HR) and capillary blood lactate ([La-]) measurements were obtained. Time-motion analysis was performed to compare activity profiles. The simulation yielded broadly comparable activity profiles to those performed in competition, a mean VO2 of 36.6 ± 3.9 ml.kg-1.min-1 (73 ± 6% VO2PEAK) and mean HR of 177 ± 10 beats.min-1 (93 ± 5% HRPEAK). A peak VO2 of 44.8 ± 5.0 ml.kg-1.min-1 (89 ± 5% VO2PEAK), a peak heart rate of 190 ± 13 beats.min-1 (98 ± 3% HRmax) and a peak [La-] of 12.3 ± 2.9 mmol.L-1 were elicited by the bouts. Regarding time-motion analysis, the combat simulation presented a similar exchange time, a shorter preparation time and a longer exchange-to-preparation ratio. Taekwondo combats capturing the full-contact competitive elements of a bout elicit moderate to high cardiorespiratory demands on the competitors. These data are valuable to assist preparatory strategies within the sport. PMID:28158252

  6. Absolute, pressure-dependent validation of a calibration-free, airborne laser hygrometer transfer standard (SEALDH-II) from 5 to 1200 ppmv using a metrological humidity generator

    NASA Astrophysics Data System (ADS)

    Buchholz, Bernhard; Ebert, Volker

    2018-01-01

    Highly accurate water vapor measurements are indispensable for understanding a variety of scientific questions as well as industrial processes. While in metrology water vapor concentrations can be defined, generated, and measured with relative uncertainties in the single-percent range, field-deployable airborne instruments deviate by up to 10-20% even under quasistatic laboratory conditions. The novel SEALDH-II hygrometer, a calibration-free tuneable diode laser spectrometer, bridges this gap by implementing a new holistic concept to achieve higher accuracy levels in the field. In this paper, we present the absolute validation of SEALDH-II against a traceable humidity generator during 23 days of permanent operation at 15 different H2O mole fraction levels between 5 and 1200 ppmv. At each mole fraction level, we studied the pressure dependence at six different gas pressures between 65 and 950 hPa. Further, we describe the setup for this metrological validation, the challenges to overcome when assessing water vapor measurements at a high accuracy level, and the comparison results. With this validation, SEALDH-II is the first airborne, metrologically validated humidity transfer standard, linking several scientific airborne and laboratory measurement campaigns to the international metrological water vapor scale.

  7. Development and validation of a casemix classification to predict costs of specialist palliative care provision across inpatient hospice, hospital and community settings in the UK: a study protocol.

    PubMed

    Guo, Ping; Dzingina, Mendwas; Firth, Alice M; Davies, Joanna M; Douiri, Abdel; O'Brien, Suzanne M; Pinto, Cathryn; Pask, Sophie; Higginson, Irene J; Eagar, Kathy; Murtagh, Fliss E M

    2018-03-17

    Provision of palliative care is inequitable, with wide variations across conditions and settings in the UK. Lack of a standard way to classify by case complexity is one of the principal obstacles to addressing this. We aim to develop and validate a casemix classification to support the prediction of costs of specialist palliative care provision. Phase I: A cohort study to determine the variables and potential classes to be included in a casemix classification. Data are collected from clinicians in palliative care services across inpatient hospice, hospital and community settings on: patient demographics, potential complexity/casemix criteria and patient-level resource use. Cost predictors are derived using multivariate regression and then incorporated into a classification using classification and regression trees. Internal validation will be conducted by bootstrapping to quantify any optimism in the predictive performance (calibration and discrimination) of the developed classification. Phase II: A mixed-methods cohort study across settings for external validation of the classification developed in phase I. Patient and family caregiver data will be collected longitudinally on demographics, potential complexity/casemix criteria and patient-level resource use. This will be triangulated with data collected from clinicians on potential complexity/casemix criteria and patient-level resource use, and with qualitative interviews with patients and caregivers about care provision across different settings. The classification will be refined on the basis of its performance in the validation data set. The study has been approved by the National Health Service Health Research Authority Research Ethics Committee. The results are expected to be disseminated in 2018 through papers for publication in major palliative care journals; policy briefs for clinicians, commissioning leads and policy makers; and lay summaries for patients and public. ISRCTN90752212. © Article author
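
    The internal-validation step, bootstrapping to quantify optimism in a tree-based predictor's apparent performance, can be sketched as below. The data are synthetic, and a regression tree scored with R² stands in for the study's classification and its calibration/discrimination measures (Python, assuming scikit-learn):

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.metrics import r2_score

        # Synthetic stand-ins for casemix predictors and patient-level costs.
        rng = np.random.default_rng(7)
        X = rng.normal(size=(300, 5))
        cost = 100 + 30 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 10, 300)

        def fit_and_score(Xa, ya):
            tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(Xa, ya)
            return tree, r2_score(ya, tree.predict(Xa))

        tree, apparent = fit_and_score(X, cost)

        # Bootstrap optimism: refit on resamples, compare performance on the
        # resample vs. the original data, and average the gap (Harrell's method).
        optimism = []
        for _ in range(200):
            idx = rng.integers(0, len(cost), len(cost))
            boot_tree, boot_apparent = fit_and_score(X[idx], cost[idx])
            test = r2_score(cost, boot_tree.predict(X))
            optimism.append(boot_apparent - test)

        corrected = apparent - np.mean(optimism)
        print(f"apparent R^2 {apparent:.3f}, optimism-corrected {corrected:.3f}")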

  8. Validity of physical activity and cardiorespiratory fitness in the Danish cohort "Diet, Cancer and Health-Next Generations".

    PubMed

    Lerche, L; Olsen, A; Petersen, K E N; Rostgaard-Hansen, A L; Dragsted, L O; Nordsborg, N B; Tjønneland, A; Halkjaer, J

    2017-12-01

    Valid assessments of physical activity (PA) and cardiorespiratory fitness (CRF) are essential in epidemiological studies to define dose-response relationships for formulating sound recommendations on the pattern of PA needed to maintain good health. The aim of this study was to validate the Danish step test, the physical activity questionnaire Active-Q, and self-rated fitness against directly measured maximal oxygen uptake (VO2max). A population-based subsample (n=125) was included from the "Diet, Cancer and Health-Next Generations" (DCH-NG) cohort, which is under establishment. Validity coefficients, which express the correlation between measured and "true" exposure, were calculated, and misclassification across categories was evaluated. The validity of the Danish step test was moderate (women: r=.66; men: r=.56); however, men were systematically underestimated (43% misclassification). When validating the questionnaire-derived measures of PA, leisure-time physical activity was not correlated with VO2max. Positive correlations were found for sports overall, but these were significant only for men: total hours per week of sports (r=.26), MET-hours per week of sports (r=.28), and vigorous sports (r=.28) were positively correlated with VO2max. Finally, the percentage of misclassification was low for self-rated fitness (women: 9%; men: 13%). Thus, self-rated fitness was found to be superior to the Danish step test, as well as less costly and more practical than the VO2max method. Finally, even if the correlations were low, they support the potential for questionnaire outcomes, particularly sports, vigorous sports, and self-rated fitness, to be used to estimate CRF. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. A Validated Set of MIDAS V5 Task Network Model Scenarios to Evaluate Nextgen Closely Spaced Parallel Operations Concepts

    NASA Technical Reports Server (NTRS)

    Gore, Brian Francis; Hooey, Becky Lee; Haan, Nancy; Socash, Connie; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The Closely Spaced Parallel Operations (CSPO) scenario is a complex human performance model scenario that tested alternate operator roles and responsibilities in a series of off-nominal operations on approach and landing (see Gore, Hooey, Mahlstedt, & Foyle, 2013). The model links together the procedures, equipment, crewstation, and external environment to produce predictions of operator performance in response to next-generation system designs, like those expected in the National Airspace System's NextGen concepts. The task analysis contained in the present report comes from the task analysis window in the MIDAS software. These tasks link definitions and states for equipment components, environmental features, and operational contexts. The current task analysis culminated in 3300 tasks, including over 1000 Subject Matter Expert (SME)-vetted, re-usable procedural sets for three critical phases of flight: the Descent, Approach, and Land procedural sets (see Gore et al., 2011 for a description of the development of the tasks included in the model; Gore, Hooey, Mahlstedt, & Foyle, 2013 for a description of the model and its results; Hooey, Gore, Mahlstedt, & Foyle, 2013 for a description of the guidelines generated from the model's results; and Gore, Hooey, & Foyle, 2012 for a description of the model's implementation and its settings). The rollout, after-landing checks, taxi-to-gate, and arrive-at-gate procedures illustrated in Figure 1 were not used in the approach and divert scenarios exercised. The other networks in Figure 1 set up appropriate context settings for the flight deck. The current report presents the model's task decomposition from the top (highest) level down to finer-grained levels. The first task completed by the model is to set all of the initial settings for the scenario runs included in the model (network 75 in Figure 1). This initialization process also resets the CAD graphic files contained within MIDAS, as well as the embedded

  10. The Flexibility Scale: Development and Preliminary Validation of a Cognitive Flexibility Measure in Children with Autism Spectrum Disorders.

    PubMed

    Strang, John F; Anthony, Laura G; Yerys, Benjamin E; Hardy, Kristina K; Wallace, Gregory L; Armour, Anna C; Dudley, Katerina; Kenworthy, Lauren

    2017-08-01

    Flexibility is a key component of executive function, and is related to everyday functioning and adult outcomes. However, existing informant reports do not densely sample cognitive aspects of flexibility; the Flexibility Scale (FS) was developed to address this gap. This study investigates the validity of the FS in 221 youth with ASD and 57 typically developing children. Exploratory factor analysis indicates a five-factor scale: Routines/rituals, transitions/change, special interests, social flexibility, and generativity. The FS demonstrated convergent and divergent validity with comparative domains of function in other measures, save for the Generativity factor. The FS discriminated participants with ASD and controls. Thus, this study suggests the FS may be a viable, comprehensive measure of flexibility in everyday settings.

  11. MMPI-2 Symptom Validity (FBS) Scale: psychometric characteristics and limitations in a Veterans Affairs neuropsychological setting.

    PubMed

    Gass, Carlton S; Odland, Anthony P

    2014-01-01

    The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) Symptom Validity (Fake Bad Scale [FBS]) Scale is widely used to assist in determining noncredible symptom reporting, despite a paucity of detailed research regarding its itemmetric characteristics. Originally designed for use in civil litigation, the FBS is often used in a variety of clinical settings. The present study explored its fundamental psychometric characteristics in a sample of 303 patients who were consecutively referred for a comprehensive examination in a Veterans Affairs (VA) neuropsychology clinic. FBS internal consistency (reliability) was .77. Its underlying factor structure consisted of three unitary dimensions (Tiredness/Distractibility, Stomach/Head Discomfort, and Claimed Virtue of Self/Others) accounting for 28.5% of the total variance. The FBS's internal structure showed factorial discordance, as Claimed Virtue was negatively related to most of the FBS and to its somatic complaint components. Scores on this 12-item FBS component reflected a denial of socially undesirable attitudes and behaviors (Antisocial Practices Scale) that is commonly expressed by the 1,138 males in the MMPI-2 normative sample. These 12 items significantly reduced FBS reliability, introducing systematic error variance. In this VA neuropsychological referral setting, scores on the FBS have ambiguous meaning because of this structural discordance.

  12. Exploring validation of a graphic symbol questionnaire to measure participation experiences of youth in activity settings.

    PubMed

    Batorowicz, Beata; King, Gillian; Vane, Freda; Pinto, Madhu; Raghavendra, Parimala

    2017-06-01

    Participation has a subjective and private dimension, and so it is important to hear directly from youth about their experiences in various activity settings, the places where they "do things" and interact with others. To meet this need, our team developed the Self-Reported Experiences of Activity Settings (SEAS) measure, which demonstrated good-to-excellent measurement properties. To address the needs of youth who could benefit from graphic symbol support, the SEAS-PCS™ was created. The purpose of this paper is to describe the development of the SEAS-PCS and the preliminary study that explores the equivalency of the SEAS and SEAS-PCS. The SEAS and SEAS-PCS were compared in terms of the equivalency of meaning of stimulus items by 11 professionals and five adults who used augmentative and alternative communication (AAC), were familiar with PCS, and were fluent readers. Out of 22 items, 68% were rated as highly similar on a 5-point scale (M = 4.14; SD = .70; mdn = 4; range: 2.81-5.00). Subsequently, the 32% of the SEAS-PCS items that were rated below 4 were modified based on the participants' specific comments. Further work is required to validate the SEAS-PCS. The next step could involve exploring the views of youth who use AAC.

  13. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), with R2 serving as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
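
    Both statistics are straightforward to compute directly; the following sketch implements Bland-Altman bias and limits of agreement, plus Deming regression with an assumed error-variance ratio of 1, on invented assay values (Python):

        import numpy as np

        def bland_altman(x, y):
            """Mean difference (bias) and 95% limits of agreement between two assays."""
            diff = y - x
            bias = diff.mean()
            loa = 1.96 * diff.std(ddof=1)
            return bias, bias - loa, bias + loa

        def deming(x, y, lam=1.0):
            """Deming regression slope/intercept; lam is the error-variance ratio."""
            mx, my = x.mean(), y.mean()
            sxx = ((x - mx) ** 2).sum()
            syy = ((y - my) ** 2).sum()
            sxy = ((x - mx) * (y - my)).sum()
            slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
                     + 4 * lam * sxy ** 2)) / (2 * sxy)
            return slope, my - slope * mx

        # Hypothetical variant allele fractions from a validated assay (x) and
        # the assay under validation (y).
        x = np.array([0.05, 0.12, 0.25, 0.33, 0.48, 0.61])
        y = np.array([0.06, 0.11, 0.27, 0.36, 0.50, 0.66])
        print("bias, lower LoA, upper LoA:", bland_altman(x, y))
        print("Deming slope, intercept:", deming(x, y))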

  14. Validity of Two WPPSI Short Forms in Outpatient Clinic Settings.

    ERIC Educational Resources Information Center

    Haynes, Jack P.; Atkinson, David

    1983-01-01

    Investigated the validity of subtest short forms for the Wechsler Preschool and Primary Scale of Intelligence in an outpatient population of 116 children. Data showed that the short forms underestimated actual level of intelligence and supported use of a short form only as a brief screening device. (LLL)

  15. Clinical bacteriology in low-resource settings: today's solutions.

    PubMed

    Ombelet, Sien; Ronat, Jean-Baptiste; Walsh, Timothy; Yansouni, Cedric P; Cox, Janneke; Vlieghe, Erika; Martiny, Delphine; Semret, Makeda; Vandenberg, Olivier; Jacobs, Jan

    2018-03-05

    Low-resource settings are disproportionately burdened by infectious diseases and antimicrobial resistance. Good quality clinical bacteriology through a well-functioning reference laboratory network is necessary for effective resistance control, but low-resource settings face infrastructural, technical, and behavioural challenges in the implementation of clinical bacteriology. In this Personal View, we explore what constitutes successful implementation of clinical bacteriology in low-resource settings and describe a framework for implementation that is suitable for general referral hospitals in low-income and middle-income countries with a moderate infrastructure. Most microbiological techniques and equipment are not developed for the specific needs of such settings. Pending the arrival of a new generation of diagnostics for these settings, we suggest a focus on improving, adapting, and implementing conventional, culture-based techniques. Priorities in low-resource settings include harmonised, quality-assured, and tropicalised equipment, consumables, and techniques, and rationalised bacterial identification and testing for antimicrobial resistance. Diagnostics should be integrated into clinical care and patient management; clinically relevant specimens must be appropriately selected and prioritised. Open-access training materials and information management tools should be developed. Also important is the need for onsite validation and field adoption of diagnostics in low-resource settings, with considerable shortening of the time between development and implementation of diagnostics. We argue that the implementation of clinical bacteriology in low-resource settings improves patient management, provides valuable surveillance for local antibiotic treatment guidelines and national policies, and supports containment of antimicrobial resistance and the prevention and control of hospital-acquired infections. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Simulating Next-Generation Sequencing Datasets from Empirical Mutation and Sequencing Models

    PubMed Central

    Stephens, Zachary D.; Hudson, Matthew E.; Mainzer, Liudmila S.; Taschuk, Morgan; Weber, Matthew R.; Iyer, Ravishankar K.

    2016-01-01

    An obstacle to validating and benchmarking methods for genome analysis is that there are few reference datasets available for which the “ground truth” about the mutational landscape of the sample genome is known and fully validated. Additionally, the free and public availability of real human genome datasets is incompatible with the preservation of donor privacy. In order to better analyze and understand genomic data, we need test datasets that model all variants, reflecting known biology as well as sequencing artifacts. Read simulators can fulfill this requirement, but are often criticized for limited resemblance to true data and overall inflexibility. We present NEAT (NExt-generation sequencing Analysis Toolkit), a set of tools that not only includes an easy-to-use read simulator, but also scripts to facilitate variant comparison and tool evaluation. NEAT has a wide variety of tunable parameters which can be set manually on the default model or parameterized using real datasets. The software is freely available at github.com/zstephens/neat-genreads. PMID:27893777
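
    The principle of a read simulator can be illustrated with a toy sketch: sample read positions uniformly from a reference and inject substitution errors at a fixed per-base rate. This is not NEAT's interface or error model, which is empirically parameterized and also produces quality strings and truth VCFs (Python):

        import random

        def simulate_reads(reference, n_reads, read_len, error_rate, seed=0):
            """Draw reads uniformly from a reference and inject substitution errors."""
            rng = random.Random(seed)
            bases = "ACGT"
            for _ in range(n_reads):
                start = rng.randrange(len(reference) - read_len + 1)
                read = list(reference[start:start + read_len])
                for i in range(read_len):
                    if rng.random() < error_rate:
                        # Substitute with a different base at the error rate.
                        read[i] = rng.choice([b for b in bases if b != read[i]])
                yield start, "".join(read)

        reference = "ACGTACGGTTCAGGCTAACGTTAGCAT" * 4
        for start, read in simulate_reads(reference, n_reads=3, read_len=20,
                                          error_rate=0.01):
            print(start, read)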

  17. Improving Escalation of Care: Development and Validation of the Quality of Information Transfer Tool.

    PubMed

    Johnston, Maximilian J; Arora, Sonal; Pucher, Philip H; Reissis, Yannis; Hull, Louise; Huddy, Jeremy R; King, Dominic; Darzi, Ara

    2016-03-01

    To develop and provide validity and feasibility evidence for the QUality of Information Transfer (QUIT) tool. Prompt escalation of care in the setting of patient deterioration can prevent further harm. Escalation and information transfer skills are not currently measured in surgery. This study comprised 3 phases: the development (phase 1), validation (phase 2), and feasibility analysis (phase 3) of the QUIT tool. Phase 1 involved identification of the core skills needed for successful escalation of care through a literature review and 33 semistructured interviews with stakeholders. Phase 2 involved the generation of validity evidence for the tool in a simulated setting. Thirty surgeons assessed a deteriorating postoperative patient in a simulated ward and escalated the patient's care to a senior colleague. Face and content validity were assessed using a survey. Construct and concurrent validity of the tool were determined by comparing performance scores obtained with the QUIT tool against those measured using the Situation-Background-Assessment-Recommendation (SBAR) tool. Phase 3 was conducted using direct observation of escalation scenarios on surgical wards in 2 hospitals. A 7-category assessment tool consisting of 24 items was developed from phase 1. Twenty-one of 24 items had excellent content validity (content validity index >0.8). All 7 categories and 18 of 24 items (P < 0.05) demonstrated construct validity. The correlation between the QUIT and SBAR tools was strong, indicating concurrent validity (r = 0.694, P < 0.001). Real-time scoring of escalation referrals was feasible and indicated that doctors currently have better information transfer skills than nurses when faced with a deteriorating patient. A validated tool to assess information transfer for deteriorating surgical patients was developed and tested using simulation and real-time clinical scenarios. It may improve the quality and safety of patient care on the surgical ward.

  18. THE VALIDITY OF HUMAN AND COMPUTERIZED WRITING ASSESSMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring

    2005-09-01

    This paper summarizes an experiment designed to assess the validity of essay grading between holistic and analytic human graders and a computerized grader based on latent semantic analysis. The validity of a grade was gauged by the extent to which the student's knowledge of the topic correlated with the grader's expert knowledge. To assess knowledge, Pathfinder networks were generated by the student essay writers, the holistic and analytic graders, and the computerized grader. It was found that the computer-generated grades more closely matched the definition of valid grading than did the human-generated grades.
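
    A minimal sketch of LSA-style scoring, projecting texts into a reduced latent semantic space and using cosine similarity to reference material as a grade proxy, is shown below with an invented toy corpus; real LSA graders train the semantic space on a large topic-relevant corpus (Python, assuming scikit-learn):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        # Tiny corpus standing in for graded reference essays.
        reference_essays = [
            "photosynthesis converts light energy into chemical energy in plants",
            "chlorophyll absorbs light and drives the synthesis of glucose",
            "cellular respiration releases energy stored in glucose",
        ]
        student_essay = ["plants use light and chlorophyll to make glucose"]

        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(reference_essays + student_essay)
        lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

        # Grade proxy: similarity of the student essay to the references in the
        # reduced (latent semantic) space.
        sims = cosine_similarity(lsa[-1:], lsa[:-1])
        print("LSA similarity to references:", sims.round(2))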

  19. Invalid before impaired: an emerging paradox of embedded validity indicators.

    PubMed

    Erdodi, Laszlo A; Lichtenstein, Jonathan D

    Embedded validity indicators (EVIs) are cost-effective psychometric tools to identify non-credible response sets during neuropsychological testing. As research on EVIs expands, assessors are faced with an emerging contradiction: the range of credible impairment disappears between the 'normal' and 'invalid' ranges of performance. We labeled this phenomenon the invalid-before-impaired paradox. This study was designed to explore the origin of this psychometric anomaly, subject it to empirical investigation, and generate potential solutions. Archival data were analyzed from a mixed clinical sample of 312 patients (mean age = 45.2 years; mean education = 13.6 years) medically referred for neuropsychological assessment. The distribution of scores on eight subtests of the third and fourth editions of the Wechsler Adult Intelligence Scale (WAIS) was examined in relation to the standard normal curve and two performance validity tests (PVTs). Although WAIS subtests varied in their sensitivity to non-credible responding, they were all significant predictors of performance validity. While subtests previously identified as EVIs (Digit Span, Coding, and Symbol Search) were comparably effective at differentiating credible and non-credible response sets, their classification accuracy was driven by their base rate of low scores, requiring different cutoffs to achieve comparable specificity. Invalid performance had a global effect on WAIS scores. Genuine impairment and non-credible performance can co-exist, are often intertwined, and may be psychometrically indistinguishable. A compromise between the alpha and beta bias on PVTs, based on a balanced, objective evaluation of the evidence that requires concessions from both sides, is needed to maintain or restore the credibility of performance validity assessment.

  20. Development of a brief measure of generativity and ego-integrity for use in palliative care settings.

    PubMed

    Vuksanovic, Dean; Dyck, Murray; Green, Heather

    2015-10-01

    Our aim was to develop and test a brief measure of generativity and ego-integrity that is suitable for use in palliative care settings. Two measures of generativity and ego-integrity were modified and combined to create a new 11-item questionnaire, which was then administered to 143 adults. A principal-component analysis with oblique rotation was performed in order to identify underlying components that can best account for variation in the 11 questionnaire items. The two-component solution was consistent with the items that, on conceptual grounds, were intended to comprise the two constructs assessed by the questionnaire. Results suggest that the selected 11 items were good representatives of the larger scales from which they were selected, and they are expected to provide a useful means of measuring these concepts near the end of life.

  1. Prediction and validation of concentration gradient generation in a paper-based microfluidic channel

    NASA Astrophysics Data System (ADS)

    Jang, Ilhoon; Kim, Gang-June; Song, Simon

    2016-11-01

    Paper-based microfluidic channels have attracted attention as diagnostic devices that can implement various chemical or biological reactions. Because paper devices are thin, flexible, and strong, they are often used, for example, for cell culture, where controlling gradients of oxygen, nutrients, metabolites, and signaling molecules affects the growth and movement of the cells. Among the various features of paper-based microfluidic devices, we focus on establishing a concentration gradient in a paper channel. The flow is subject to dispersion and capillary effects because paper is a porous medium. In this presentation, we describe a facile, fast, and accurate method of generating a concentration gradient by mixing flows of different concentrations. Both theoretical prediction and experimental validation are discussed, along with the inter-diffusion characteristics of porous flows. This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP) (No. 2016R1A2B3009541).
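
    The smoothing of an initially sharp interface between two merged streams into a concentration gradient can be sketched with an explicit finite-difference solution of the 1D diffusion equation across the channel. The effective diffusivity here lumps molecular diffusion and dispersion in the porous matrix, and all values are illustrative, not the authors' model (Python):

        import numpy as np

        D = 1e-9          # m^2/s, assumed effective diffusivity
        width = 2e-3      # m, channel width
        n = 101
        dx = width / (n - 1)
        dt = 0.4 * dx**2 / D      # stable for the explicit scheme (dt <= 0.5 dx^2/D)

        # Two merged streams: concentration 1 on one half, 0 on the other.
        c = np.where(np.linspace(0, width, n) < width / 2, 1.0, 0.0)
        for _ in range(2000):
            # Second-order central difference for d2c/dx2, explicit Euler in time.
            c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c[0], c[-1] = c[1], c[-2]   # no-flux channel walls

        print("concentration across the channel:", c[::20].round(2))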

  2. Generation and validation of a universal perinatal database and biospecimen repository: PeriBank.

    PubMed

    Antony, K M; Hemarajata, P; Chen, J; Morris, J; Cook, C; Masalas, D; Gedminas, M; Brown, A; Versalovic, J; Aagaard, K

    2016-11-01

    There is a dearth of biospecimen repositories available to perinatal researchers. In order to address this need, here we describe the methodology used to establish such a resource. With the collaboration of MedSci.net, we generated an online perinatal database with 847 fields of clinical information. Simultaneously, we established a biospecimen repository from the same clinical participants. The demographic and clinical outcomes data are described for the first 10 000 participants enrolled. The demographic characteristics are consistent with the demographics of the delivery hospitals. Quality analysis of the biospecimens reveals variation in very few analytes. Furthermore, since the creation of PeriBank, we have demonstrated the validity of the database and the tissue integrity of the biospecimen repository. Here we establish that the creation of a universal perinatal database and biospecimen collection is not only possible, but also allows for the performance of state-of-the-science translational perinatal research, and is a potentially valuable resource for academic perinatal researchers.

  3. How valid are future generations' arguments for preserving wilderness?

    Treesearch

    Thomas A. More; James R. Averill; Thomas H. Stevens

    2000-01-01

    We are often urged to preserve wilderness for the sake of future generations. Future generations consist of potential persons who are mute stakeholders in the decisions of today. Many claims about the rights of future generations or our present obligations to them have been vigorously advanced and just as vigorously denied. Recent theorists, however, have argued for a...

  4. Methods to validate the accuracy of an indirect calorimeter in the in-vitro setting.

    PubMed

    Oshima, Taku; Ragusa, Marco; Graf, Séverine; Dupertuis, Yves Marc; Heidegger, Claudia-Paula; Pichard, Claude

    2017-12-01

    The international ICALIC initiative aims at developing a new indirect calorimeter according to the needs of clinicians and researchers in the field of clinical nutrition and metabolism. The project initially focuses on validating the calorimeter for use in mechanically ventilated, acutely ill adult patients. However, standard methods to validate the accuracy of calorimeters have not yet been established. This paper describes the procedures for the in-vitro tests to validate the accuracy of the new indirect calorimeter, and defines the ranges for the parameters to be evaluated in each test to optimize the validation for clinical and research calorimetry measurements. Two in-vitro tests have been defined to validate the accuracy of the gas analyzers and the overall function of the new calorimeter. 1) Gas composition analysis validates the accuracy of the O2 and CO2 analyzers. Reference gas of known O2 (or CO2) concentration is diluted by pure nitrogen gas to achieve a predefined O2 (or CO2) concentration, to be measured by the indirect calorimeter. The O2 and CO2 concentrations to be tested were determined according to their expected ranges during calorimetry measurements. 2) Gas exchange simulator analysis validates O2 consumption (VO2) and CO2 production (VCO2) measurements. CO2 gas injection into artificial breath gas provided by the mechanical ventilator simulates VCO2. The resulting dilution of the O2 concentration in the expiratory air is analyzed by the calorimeter as VO2. CO2 gas of concentration identical to the fraction of inspired O2 (FiO2) is used to simulate identical VO2 and VCO2. Indirect calorimetry results from publications were analyzed to determine the VO2 and VCO2 values to be tested for the validation. The O2 concentration in respiratory air is highest at inspiration, and can decrease to 15% during expiration. The CO2 concentration can be as high as 5% in expired air. To validate analyzers for measurements of Fi
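
    The dilution principle behind the simulator test can be made concrete with a worked calculation: VCO2 follows from the expired CO2 excess, and VO2 from the Haldane transformation (inferring inspired volume from the nitrogen balance). The numbers below are illustrative, not values from the paper (Python):

        def gas_exchange(ve_lmin, fio2, fico2, feo2, feco2):
            """VO2 and VCO2 (L/min) from minute ventilation and gas fractions.

            Fractions are dimensionless (e.g. 0.21 for 21% O2); the Haldane
            transformation assumes nitrogen is neither consumed nor produced.
            """
            vi_lmin = ve_lmin * (1 - feo2 - feco2) / (1 - fio2 - fico2)
            vo2 = vi_lmin * fio2 - ve_lmin * feo2
            vco2 = ve_lmin * (feco2 - fico2)
            return vo2, vco2

        # Illustrative values: 8 L/min ventilation, 21% inspired O2,
        # 16.5% expired O2, 4% expired CO2.
        vo2, vco2 = gas_exchange(ve_lmin=8.0, fio2=0.21, fico2=0.0004,
                                 feo2=0.165, feco2=0.04)
        print(f"VO2 = {vo2:.2f} L/min, VCO2 = {vco2:.2f} L/min")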

  5. Development of a Reference Data Set (RDS) for dental age estimation (DAE) and testing of this with a separate Validation Set (VS) in a southern Chinese population.

    PubMed

    Jayaraman, Jayakumar; Wong, Hai Ming; King, Nigel M; Roberts, Graham J

    2016-10-01

    Many countries have recently experienced a rapid increase in the demand for forensic age estimates of unaccompanied minors. Hong Kong is a major tourist and business center where there has been an increase in the number of people intercepted with false travel documents. An accurate estimation of age is only possible when the dataset for age estimation has been derived from the corresponding ethnic population. Thus, the aim of this study was to develop and validate a Reference Data Set (RDS) for dental age estimation for southern Chinese. A total of 2306 subjects were selected from the patient archives of a large dental hospital and the chronological age for each subject was recorded. This age was assigned to each specific stage of dental development for each tooth to create a RDS. To validate this RDS, a further 484 subjects were randomly chosen from the patient archives and their dental age was assessed based on the scores from the RDS. Dental age was estimated using a meta-analysis command corresponding to a random-effects statistical model. Chronological age (CA) and Dental Age (DA) were compared using the paired t-test. The overall difference between the chronological and dental age (CA-DA) was 0.05 years (2.6 weeks) for males and 0.03 years (1.6 weeks) for females. The paired t-test indicated that there was no statistically significant difference between the chronological and dental age (p > 0.05). The validated southern Chinese reference dataset based on dental maturation accurately estimated the chronological age. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
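
    As a rough illustration of the CA-vs-DA comparison described above, a paired t-test in Python; the ages below are synthetic stand-ins, not the study data:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      ca = rng.uniform(5, 20, size=484)          # chronological ages (years)
      da = ca + rng.normal(0.04, 0.8, size=484)  # dental ages with a small bias

      t, p = stats.ttest_rel(ca, da)
      print(f"mean CA-DA = {np.mean(ca - da):+.3f} y, t = {t:.2f}, p = {p:.3f}")
      # p > 0.05 is read, as in the abstract, as no significant difference.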

  6. Geostatistics as a validation tool for setting ozone standards for durum wheat.

    PubMed

    De Marco, Alessandra; Screpanti, Augusto; Paoletti, Elena

    2010-02-01

    Which is the best standard for protecting plants from ozone? To answer this question, we must validate the standards by testing biological responses vs. ambient data in the field. A validation is missing for European and USA standards, because the networks for ozone, meteorology and plant responses are spatially independent. We proposed geostatistics as a validation tool, and used durum wheat in central Italy as a test. The standards summarized ozone impact on yield better than hourly averages. Although USA criteria explained ozone-induced yield losses better than European criteria, the USA legal level (75 ppb) protected only 39% of sites. European exposure-based standards protected ≥ 90%. Reducing the USA level to the Canadian 65 ppb or using W126 protected 91% and 97% of sites, respectively. For a no-threshold accumulated stomatal flux, 22 mmol m⁻² was suggested to protect 97% of sites. In a multiple regression, precipitation explained 22% and ozone explained <0.9% of yield variability. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
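
    The W126 standard mentioned above is a sigmoidally weighted cumulative exposure index; a sketch of its computation under the usual EPA weighting (the hourly ozone values here are synthetic, in ppm):

      import math
      import random

      random.seed(1)
      # 90 growing-season days, 12 daytime hours each (8:00-20:00).
      hourly_o3_ppm = [random.uniform(0.02, 0.09) for _ in range(90 * 12)]

      def w126(hourly_ppm):
          # Each hourly concentration C contributes C / (1 + 4403 * exp(-126 * C)).
          return sum(c / (1.0 + 4403.0 * math.exp(-126.0 * c)) for c in hourly_ppm)

      print(f"W126: {w126(hourly_o3_ppm):.1f} ppm-hours")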

  7. Two New Tools for Glycopeptide Analysis Researchers: A Glycopeptide Decoy Generator and a Large Data Set of Assigned CID Spectra of Glycopeptides.

    PubMed

    Lakbub, Jude C; Su, Xiaomeng; Zhu, Zhikai; Patabandige, Milani W; Hua, David; Go, Eden P; Desaire, Heather

    2017-08-04

    The glycopeptide analysis field is tightly constrained by a lack of effective tools that translate mass spectrometry data into meaningful chemical information, and perhaps the most challenging aspect of building effective glycopeptide analysis software is designing an accurate scoring algorithm for MS/MS data. We provide the glycoproteomics community with two tools to address this challenge. The first tool, a curated set of 100 expert-assigned CID spectra of glycopeptides, contains a diverse set of spectra from a variety of glycan types; the second tool, Glycopeptide Decoy Generator, is a new software application that generates glycopeptide decoys de novo. We developed these tools so that emerging methods of assigning glycopeptides' CID spectra could be rigorously tested. Software developers or those interested in developing skills in expert (manual) analysis can use these tools to facilitate their work. We demonstrate the tools' utility in assessing the quality of one particular glycopeptide software package, GlycoPep Grader, which assigns glycopeptides to CID spectra. We first acquired the set of 100 expert-assigned CID spectra; then, we used the Decoy Generator (described herein) to generate 20 decoys per target glycopeptide. The assigned spectra and decoys were used to test the accuracy of GlycoPep Grader's scoring algorithm; new strengths and weaknesses were identified in the algorithm using this approach. Both newly developed tools are freely available. The software can be downloaded at http://glycopro.chem.ku.edu/GPJ.jar.

  8. Research of hydroelectric generating set low-frequency vibration monitoring system based on optical fiber sensing

    NASA Astrophysics Data System (ADS)

    Min, Li; Zhang, Xiaolei; Zhang, Faxiang; Sun, Zhihui; Li, ShuJuan; Wang, Meng; Wang, Chang

    2017-10-01

    To meet the need for low-frequency vibration monitoring of hydroelectric generating sets, we designed a passive low-frequency vibration monitoring system based on optical fiber sensing. The hardware of the system adopts a passive optical fiber grating sensor and an unbalanced Michelson interferometer. The software was developed in LabVIEW and handles the control of the system. Experiments show that the system performs well on a standard vibration testing platform and meets the system requirements. The monitoring system can measure frequencies as low as 0.2 Hz, with a resolution of 0.01 Hz.

  9. Cue Set Stimulation as a Factor in Human Response Generation.

    ERIC Educational Resources Information Center

    Petelle, John L.

    The hypotheses that there will be a significant difference (1) in the number of responses generated according to economic issues, (2) in the number of responses generated according to social issues, (3) in the number of responses generated between the category of economic issues and the category of social issues, (4) in cue ranking by response…

  10. A set-covering based heuristic algorithm for the periodic vehicle routing problem.

    PubMed

    Cacchiani, V; Hemmelmayr, V C; Tricoire, F

    2014-01-30

    We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011)  [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems.
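
    A toy sketch of the set-covering LP relaxation at the core of the algorithm; in the paper the columns (routes) are generated by iterated local search, whereas here a tiny fixed route pool with invented costs stands in for illustration:

      import numpy as np
      from scipy.optimize import linprog

      # 4 customers, 5 candidate routes; a[i, j] = 1 if route j visits customer i.
      a = np.array([[1, 0, 1, 0, 1],
                    [1, 1, 0, 0, 0],
                    [0, 1, 1, 1, 0],
                    [0, 0, 0, 1, 1]])
      cost = np.array([10.0, 7.0, 8.0, 6.0, 9.0])

      # min cost.x  s.t.  A x >= 1,  0 <= x <= 1  (LP relaxation of set covering)
      res = linprog(cost, A_ub=-a, b_ub=-np.ones(4), bounds=[(0, 1)] * 5,
                    method="highs")
      print("LP value:", res.fun, "route usage:", np.round(res.x, 2))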

  11. A set-covering based heuristic algorithm for the periodic vehicle routing problem

    PubMed Central

    Cacchiani, V.; Hemmelmayr, V.C.; Tricoire, F.

    2014-01-01

    We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011)  [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems. PMID:24748696

  12. E-wave generated intraventricular diastolic vortex to L-wave relation: model-based prediction with in vivo validation.

    PubMed

    Ghosh, Erina; Caruthers, Shelton D; Kovács, Sándor J

    2014-08-01

    The Doppler echocardiographic E-wave is generated when the left ventricle's suction pump attribute initiates transmitral flow. In some subjects E-waves are accompanied by L-waves, the occurrence of which has been correlated with diastolic dysfunction. The mechanisms for L-wave generation have not been fully elucidated. We propose that the recirculating diastolic intraventricular vortex ring generates L-waves and based on this mechanism, we predict the presence of L-waves in the right ventricle (RV). We imaged intraventricular flow using Doppler echocardiography and phase-contrast magnetic resonance imaging (PC-MRI) in 10 healthy volunteers. L-waves were recorded in all subjects, with highest velocities measured typically 2 cm below the annulus. Fifty-five percent of cardiac cycles (189 of 345) had L-waves. Color M-mode images eliminated mid-diastolic transmitral flow as the cause of the observed L-waves. Three-dimensional intraventricular flow patterns were imaged via PC-MRI and independently validated our hypothesis. Additionally as predicted, L-waves were observed in the RV, by both echocardiography and PC-MRI. The re-entry of the E-wave-generated vortex ring flow through a suitably located echo sample volume can be imaged as the L-wave. These waves are a general feature and a direct consequence of LV and RV diastolic fluid mechanics. Copyright © 2014 the American Physiological Society.

  13. Calibration and validation of a phenomenological influent pollutant disturbance scenario generator using full-scale data.

    PubMed

    Flores-Alsina, Xavier; Saagi, Ramesh; Lindblom, Erik; Thirsing, Carsten; Thornberg, Dines; Gernaey, Krist V; Jeppsson, Ulf

    2014-03-15

    The objective of this paper is to demonstrate the full-scale feasibility of the phenomenological dynamic influent pollutant disturbance scenario generator (DIPDSG) that was originally used to create the influent data of the International Water Association (IWA) Benchmark Simulation Model No. 2 (BSM2). In this study, the influent characteristics of two large Scandinavian treatment facilities are studied for a period of two years. A step-wise procedure based on adjusting the most sensitive parameters at different time scales is followed to calibrate/validate the DIPDSG model blocks for: 1) flow rate; 2) pollutants (carbon, nitrogen); 3) temperature; and, 4) transport. Simulation results show that the model successfully describes daily/weekly and seasonal variations and the effect of rainfall and snow melting on the influent flow rate, pollutant concentrations and temperature profiles. Furthermore, additional phenomena such as size and accumulation/flush of particulates of/in the upstream catchment and sewer system are incorporated in the simulated time series. Finally, this study is complemented with: 1) the generation of additional future scenarios showing the effects of different rainfall patterns (climate change) or influent biodegradability (process uncertainty) on the generated time series; 2) a demonstration of how to reduce the cost/workload of measuring campaigns by filling the gaps due to missing data in the influent profiles; and, 3) a critical discussion of the presented results balancing model structure/calibration procedure complexity and prediction capabilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
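
    A minimal sketch of the phenomenological idea behind such an influent generator: a base flow modulated by diurnal and seasonal harmonics plus sporadic rain events. All parameter values below are invented, not the calibrated DIPDSG settings:

      import numpy as np

      def influent_flow(hours, base=20000.0):
          """Synthetic influent flow rate (m3/d) for the given hour indices."""
          t = np.asarray(hours, dtype=float)
          diurnal = 0.15 * np.sin(2 * np.pi * (t - 8) / 24)     # daily cycle
          seasonal = 0.10 * np.sin(2 * np.pi * t / (24 * 365))  # yearly cycle
          rng = np.random.default_rng(42)
          rain = (rng.random(t.size) < 0.01) * rng.exponential(0.5, t.size)
          return base * (1 + diurnal + seasonal + rain)

      flow = influent_flow(np.arange(24 * 14))  # two weeks, hourly
      print(f"min {flow.min():.0f}, max {flow.max():.0f} m3/d")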

  14. Initial validation of the prekindergarten Classroom Observation Tool and goal setting system for data-based coaching.

    PubMed

    Crawford, April D; Zucker, Tricia A; Williams, Jeffrey M; Bhavsar, Vibhuti; Landry, Susan H

    2013-12-01

    Although coaching is a popular approach for enhancing the quality of Tier 1 instruction, limited research has addressed observational measures specifically designed to focus coaching on evidence-based practices. This study explains the development of the prekindergarten (pre-k) Classroom Observation Tool (COT) designed for use in a data-based coaching model. We examined psychometric characteristics of the COT and explored how coaches and teachers used the COT goal-setting system. The study included 193 coaches working with 3,909 pre-k teachers in a statewide professional development program. Classrooms served 3 and 4 year olds (n = 56,390) enrolled mostly in Title I, Head Start, and other need-based pre-k programs. Coaches used the COT during a 2-hr observation at the beginning of the academic year. Teachers collected progress-monitoring data on children's language, literacy, and math outcomes three times during the year. Results indicated a theoretically supported eight-factor structure of the COT across language, literacy, and math instructional domains. Overall interrater reliability among coaches was good (.75). Although correlations with an established teacher observation measure were small, significant positive relations between COT scores and children's literacy outcomes indicate promising predictive validity. Patterns of goal-setting behaviors indicate teachers and coaches set an average of 43.17 goals during the academic year, and coaches reported that 80.62% of goals were met. Both coaches and teachers reported the COT was a helpful measure for enhancing quality of Tier 1 instruction. Limitations of the current study and implications for research and data-based coaching efforts are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  15. LACO-Wiki: A land cover validation tool and a new, innovative teaching resource for remote sensing and the geosciences

    NASA Astrophysics Data System (ADS)

    See, Linda; Perger, Christoph; Dresel, Christopher; Hofer, Martin; Weichselbaum, Juergen; Mondel, Thomas; Steffen, Fritz

    2016-04-01

    The validation of land cover products is an important step in the workflow of generating a land cover map from remotely-sensed imagery. Many students of remote sensing will be given exercises on classifying a land cover map followed by the validation process. Many algorithms exist for classification, embedded within proprietary image processing software or increasingly as open source tools. However, there is little standardization for land cover validation, nor a set of open tools available for implementing this process. The LACO-Wiki tool was developed as a way of filling this gap, bringing together standardized land cover validation methods and workflows into a single portal. This includes the storage and management of land cover maps and validation data; step-by-step instructions to guide users through the validation process; sound sampling designs; an easy-to-use environment for validation sample interpretation; and the generation of accuracy reports based on the validation process. The tool was developed for a range of users including producers of land cover maps, researchers, teachers and students. The use of such a tool could be embedded within the curriculum of remote sensing courses at a university level but is simple enough for use by students aged 13-18. A beta version of the tool is available for testing at: http://www.laco-wiki.net.

  16. Nurse staffing levels and outcomes - mining the UK national data sets for insight.

    PubMed

    Leary, Alison; Tomai, Barbara; Swift, Adrian; Woodward, Andrew; Hurst, Keith

    2017-04-18

    Purpose Despite the generation of mass data by the nursing workforce, determining the impact of the contribution to patient safety remains challenging. Several cross-sectional studies have indicated a relationship between staffing and safety. The purpose of this paper is to uncover possible associations and explore if a deeper understanding of relationships between staffing and other factors such as safety could be revealed within routinely collected national data sets. Design/methodology/approach Two longitudinal routinely collected data sets consisting of 30 years of UK nurse staffing data and seven years of National Health Service (NHS) benchmark data such as survey results, safety and other indicators were used. A correlation matrix was built and a linear correlation operation was applied (Pearson product-moment correlation coefficient). Findings A number of associations were revealed within both the UK staffing data set and the NHS benchmarking data set. However, the challenges of using these data sets soon became apparent. Practical implications Staff time and effort are required to collect these data. The limitations of these data sets include inconsistent data collection and quality. The mode of data collection and the itemset collected should be reviewed to generate a data set with robust clinical application. Originality/value This paper revealed that relationships are likely to be complex and non-linear; however, the main contribution of the paper is the identification of the limitations of routinely collected data. Much time and effort is expended in collecting this data; however, its validity, usefulness and method of routine national data collection appear to require re-examination.
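
    A sketch of the correlation-matrix step described above, using pandas; the indicator columns are invented stand-ins for staffing and safety variables, not the UK or NHS data:

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(7)
      n = 120  # e.g. monthly observations
      staffing = rng.normal(100, 10, n)
      df = pd.DataFrame({
          "rn_per_bed": staffing,
          "falls_per_1000_days": 50 - 0.2 * staffing + rng.normal(0, 3, n),
          "survey_score": 0.01 * staffing + rng.normal(0.5, 0.1, n),
      })
      print(df.corr(method="pearson").round(2))  # Pearson product-moment matrix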

  17. Radiosynthesis of clinical doses of 68Ga-DOTATATE (GalioMedix™) and validation of organic-matrix-based 68Ge/68Ga generators

    PubMed Central

    Tworowska, Izabela; Ranganathan, David; Thamake, Sanjay; Delpassand, Ebrahim; Mojtahedi, Alireza; Schultz, Michael K.; Zhernosekov, Konstantin; Marx, Sebastian

    2017-01-01

    Introduction 68Ga-DOTATATE is a radiolabeled peptide-based agonist that targets somatostatin receptors overexpressed in neuroendocrine tumors. Here, we present our results on validation of organic matrix 68Ge/68Ga generators (ITG GmbH) applied for radiosynthesis of clinical doses of 68Ga-DOTATATE (GalioMedix™). Methods The clinical grade DOTATATE (25 µg ± 5 µg) compounded in 1 M NaOAc at pH = 5.5 was labeled manually with 514 ± 218 MBq (13.89 ± 5.9 mCi) of 68Ga eluate in 0.05 N HCl at 95 °C for 10 min. The radiochemical purity of the final dose was validated using radio-TLC. The quality control of clinical doses included tests of their osmolarity, endotoxin level, radionuclide identity, filter integrity, pH, sterility and 68Ge breakthrough. Results The final dose of 272 ± 126 MBq (7.35 ± 3.4 mCi) of 68Ga-DOTATATE was produced with a radiochemical yield (RCY) of 99% ± 1%. The total time required for completion of radiolabeling and quality control averaged approximately 35 min. This resulted in delivery of 50% ± 7% of 68Ga-DOTATATE at the time of calibration (not decay corrected). Conclusions 68Ga eluted from the generator was directly applied for labeling of the DOTA-peptide with no additional pre-concentration or pre-purification of the isotope. The low acidity of the 68Ga eluate allows for facile synthesis of clinical doses with radiochemical and radionuclide purity higher than 98% and an average activity of 272 ± 126 MBq (7.3 ± 3 mCi). There is no need for post-labeling C18 Sep-Pak purification of final doses of the radiotracer. Advances in knowledge and implications for patient care The clinical interest in validation of 68Ga-labeled agents has increased in the past years due to the availability of generators from different vendors (Eckert-Ziegler, ITG, iThemba), the favorable approach of the U.S. FDA to initiating clinical trials, and collaboration of U.S. centers with leading EU clinical sites. The list of 68Ga-labeled tracers evaluated in clinical studies should grow because of the
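
    Part of the gap between eluted and delivered activity is plain radioactive decay; a sketch of that arithmetic using the 68Ga half-life of about 67.7 min (the function name and timing are illustrative, not the study's procedure):

      import math

      HALF_LIFE_MIN = 67.7                      # 68Ga
      LAMBDA = math.log(2) / HALF_LIFE_MIN

      def activity_at(a0_mbq: float, minutes: float) -> float:
          """Remaining 68Ga activity after `minutes` of decay."""
          return a0_mbq * math.exp(-LAMBDA * minutes)

      eluted = 514.0                            # MBq, mean eluate in the study
      after_qc = activity_at(eluted, 35)        # ~35 min labeling plus QC
      print(f"{after_qc:.0f} MBq ({after_qc / eluted:.0%} of eluate)")
      # Decay alone leaves ~70%; the reported ~50% delivery presumably also
      # reflects chemistry and transfer losses.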

  18. Survey of Approaches to Generate Realistic Synthetic Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Lee, Sangkeun; Powers, Sarah S

    A graph is a flexible data structure that can represent relationships between entities. As with other data analysis tasks, the use of realistic graphs is critical to obtaining valid research results. Unfortunately, using the actual ("real-world") graphs for research and new algorithm development is difficult due to the presence of sensitive information in the data or due to the scale of data. This results in practitioners developing algorithms and systems that employ synthetic graphs instead of real-world graphs. Generating realistic synthetic graphs that provide reliable statistical confidence to algorithmic analysis and system evaluation involves addressing technical hurdles in a broad set of areas. This report surveys the state of the art in approaches to generate realistic graphs that are derived from fitted graph models on real-world graphs.
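
    One common "fitted model" approach in this family can be sketched with networkx: extract the degree sequence of a real graph, then draw a synthetic graph with the same degrees from the configuration model. The karate-club graph is only a stand-in for a real-world input:

      import networkx as nx

      real = nx.karate_club_graph()             # stand-in for a real-world graph
      degrees = [d for _, d in real.degree()]
      synthetic = nx.configuration_model(degrees, seed=42)
      synthetic = nx.Graph(synthetic)           # collapse parallel edges
      synthetic.remove_edges_from(nx.selfloop_edges(synthetic))
      print(synthetic.number_of_nodes(), synthetic.number_of_edges())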

  19. Conceptual dissonance: evaluating the efficacy of natural language processing techniques for validating translational knowledge constructs.

    PubMed

    Payne, Philip R O; Kwok, Alan; Dhaval, Rakesh; Borlawsky, Tara B

    2009-03-01

    The conduct of large-scale translational studies presents significant challenges related to the storage, management and analysis of integrative data sets. Ideally, the application of methodologies such as conceptual knowledge discovery in databases (CKDD) provides a means for moving beyond intuitive hypothesis discovery and testing in such data sets, and towards the high-throughput generation and evaluation of knowledge-anchored relationships between complex bio-molecular and phenotypic variables. However, the induction of such high-throughput hypotheses is non-trivial, and requires correspondingly high-throughput validation methodologies. In this manuscript, we describe an evaluation of the efficacy of a natural language processing-based approach to validating such hypotheses. As part of this evaluation, we will examine a phenomenon that we have labeled as "Conceptual Dissonance" in which conceptual knowledge derived from two or more sources of comparable scope and granularity cannot be readily integrated or compared using conventional methods and automated tools.

  20. CombAlign: a code for generating a one-to-many sequence alignment from a set of pairwise structure-based sequence alignments.

    PubMed

    Zhou, Carol L Ecale

    2015-01-01

    In order to better define regions of similarity among related protein structures, it is useful to identify the residue-residue correspondences among proteins. Few codes exist for constructing a one-to-many multiple sequence alignment derived from a set of structure or sequence alignments, and a need was evident for creating such a tool for combining pairwise structure alignments that would allow for insertion of gaps in the reference structure. This report describes a new Python code, CombAlign, which takes as input a set of pairwise sequence alignments (which may be structure based) and generates a one-to-many, gapped, multiple structure- or sequence-based sequence alignment (MSSA). The use and utility of CombAlign was demonstrated by generating gapped MSSAs using sets of pairwise structure-based sequence alignments between structure models of the matrix protein (VP40) and pre-small/secreted glycoprotein (sGP) of Reston Ebolavirus and the corresponding proteins of several other filoviruses. The gapped MSSAs revealed structure-based residue-residue correspondences, which enabled identification of structurally similar versus differing regions in the Reston proteins compared to each of the other corresponding proteins. CombAlign is a new Python code that generates a one-to-many, gapped, multiple structure- or sequence-based sequence alignment (MSSA) given a set of pairwise sequence alignments (which may be structure based). CombAlign has utility in assisting the user in distinguishing structurally conserved versus divergent regions on a reference protein structure relative to other closely related proteins. CombAlign was developed in Python 2.6, and the source code is available for download from the GitHub code repository.

  1. Assessment of protein set coherence using functional annotations

    PubMed Central

    Chagoyen, Monica; Carazo, Jose M; Pascual-Montano, Alberto

    2008-01-01

    Background Analysis of large-scale experimental datasets frequently produces one or more sets of proteins that are subsequently mined for functional interpretation and validation. To this end, a number of computational methods have been devised that rely on the analysis of functional annotations. Although current methods provide valuable information (e.g. significantly enriched annotations, pairwise functional similarities), they do not specifically measure the degree of homogeneity of a protein set. Results In this work we present a method that scores the degree of functional homogeneity, or coherence, of a set of proteins on the basis of the global similarity of their functional annotations. The method uses statistical hypothesis testing to assess the significance of the set in the context of the functional space of a reference set. As such, it can be used as a first step in the validation of sets expected to be homogeneous prior to further functional interpretation. Conclusion We evaluate our method by analysing known biologically relevant sets as well as random ones. The known relevant sets comprise macromolecular complexes, cellular components and pathways described for Saccharomyces cerevisiae, which are mostly significantly coherent. Finally, we illustrate the usefulness of our approach for validating 'functional modules' obtained from computational analysis of protein-protein interaction networks. Matlab code and supplementary data are available at PMID:18937846
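
    A rough sketch of such a coherence test framed as a permutation test: score a candidate set by its mean pairwise functional similarity and compare with random sets of the same size drawn from the reference population. The similarity matrix below is synthetic, not a real annotation-based one:

      import numpy as np

      rng = np.random.default_rng(3)
      n_ref = 500
      sim = rng.random((n_ref, n_ref))
      sim = (sim + sim.T) / 2                   # symmetric stand-in similarities

      def set_score(members):
          idx = np.asarray(members)
          block = sim[np.ix_(idx, idx)]
          return block[np.triu_indices(len(idx), k=1)].mean()

      candidate = rng.choice(n_ref, 15, replace=False)
      null = [set_score(rng.choice(n_ref, 15, replace=False)) for _ in range(1000)]
      p = (np.sum(np.asarray(null) >= set_score(candidate)) + 1) / 1001
      print(f"coherence p-value: {p:.3f}")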

  2. Time Domain Tool Validation Using ARES I-X Flight Data

    NASA Technical Reports Server (NTRS)

    Hough, Steven; Compton, James; Hannan, Mike; Brandon, Jay

    2011-01-01

    The ARES I-X vehicle was launched from NASA's Kennedy Space Center (KSC) on October 28, 2009 at approximately 11:30 EDT. ARES I-X was the first test flight for NASA s ARES I launch vehicle, and it was the first non-Shuttle launch vehicle designed and flown by NASA since Saturn. The ARES I-X had a 4-segment solid rocket booster (SRB) first stage and a dummy upper stage (US) to emulate the properties of the ARES I US. During ARES I-X pre-flight modeling and analysis, six (6) independent time domain simulation tools were developed and cross validated. Each tool represents an independent implementation of a common set of models and parameters in a different simulation framework and architecture. Post flight data and reconstructed models provide the means to validate a subset of the simulations against actual flight data and to assess the accuracy of pre-flight dispersion analysis. Post flight data consists of telemetered Operational Flight Instrumentation (OFI) data primarily focused on flight computer outputs and sensor measurements as well as Best Estimated Trajectory (BET) data that estimates vehicle state information from all available measurement sources. While pre-flight models were found to provide a reasonable prediction of the vehicle flight, reconstructed models were generated to better represent and simulate the ARES I-X flight. Post flight reconstructed models include: SRB propulsion model, thrust vector bias models, mass properties, base aerodynamics, and Meteorological Estimated Trajectory (wind and atmospheric data). The result of the effort is a set of independently developed, high fidelity, time-domain simulation tools that have been cross validated and validated against flight data. This paper presents the process and results of high fidelity aerospace modeling, simulation, analysis and tool validation in the time domain.

  3. Validity of the Elite HRV Smartphone Application for Examining Heart Rate Variability in a Field-Based Setting.

    PubMed

    Perrotta, Andrew S; Jeklin, Andrew T; Hives, Ben A; Meanwell, Leah E; Warburton, Darren E R

    2017-08-01

    Perrotta, AS, Jeklin, AT, Hives, BA, Meanwell, LE, and Warburton, DER. Validity of the elite HRV smartphone application for examining heart rate variability in a field-based setting. J Strength Cond Res 31(8): 2296-2302, 2017-The introduction of smartphone applications has allowed athletes and practitioners to record and store R-R intervals on smartphones for immediate heart rate variability (HRV) analysis. This user-friendly option should be validated in the effort to provide practitioners confidence when monitoring their athletes before implementing such equipment. The objective of this investigation was to examine the relationship and validity between a vagal-related HRV index, rMSSD, when derived from a smartphone application accessible with most operating systems against a frequently used computer software program, Kubios HRV 2.2. R-R intervals were recorded immediately upon awakening over 14 consecutive days using the Elite HRV smartphone application. R-R recordings were then exported into Kubios HRV 2.2 for analysis. The relationship and levels of agreement between rMSSDln derived from Elite HRV and Kubios HRV 2.2 was examined using a Pearson product-moment correlation and a Bland-Altman Plot. An extremely large relationship was identified (r = 0.92; p < 0.0001; confidence interval [CI] 95% = 0.90-0.93). A total of 6.4% of the residuals fell outside the ±1.96 SD (CI 95% = -12.0 to 7.0%) limits of agreement. A negative bias was observed (mean: -2.7%; CI 95% = -3.10 to -2.30%), whose CI 95% failed to fall within the line of equality. Our observations demonstrated differences between the two sources of HRV analysis. However, further research is warranted, as this smartphone HRV application may offer a reliable platform when assessing parasympathetic modulation.
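
    The index being compared, rMSSD, is straightforward to compute from R-R intervals; a sketch with synthetic values in milliseconds:

      import numpy as np

      rr_ms = np.array([812, 798, 845, 830, 790, 805, 860, 842, 815, 801], float)

      def rmssd(rr):
          diffs = np.diff(rr)                   # successive R-R differences
          return np.sqrt(np.mean(diffs ** 2))   # root mean square

      value = rmssd(rr_ms)
      print(f"rMSSD = {value:.1f} ms, rMSSDln = {np.log(value):.2f}")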

  4. SCIAMACHY validation by aircraft remote measurements: design, execution, and first results of the SCIA-VALUE mission

    NASA Astrophysics Data System (ADS)

    Fix, A.; Ehret, G.; Flentje, H.; Poberaj, G.; Gottwald, M.; Finkenzeller, H.; Bremer, H.; Bruns, M.; Burrows, J. P.; Kleinböhl, A.; Küllmann, H.; Kuttippurath, J.; Richter, A.; Wang, P.; Heue, K.-P.; Platt, U.; Wagner, T.

    2004-12-01

    For the first time three different remote sensing instruments - a sub-millimeter radiometer, a differential optical absorption spectrometer in the UV-visible spectral range, and a lidar - were deployed aboard DLR's meteorological research aircraft Falcon 20 to validate a large number of SCIAMACHY level 2 and off-line data products such as O3, NO2, N2O, BrO, OClO, H2O, aerosols, and clouds. Within two main validation campaigns of the SCIA-VALUE mission (SCIAMACHY VALidation and Utilization Experiment) extended latitudinal cross-sections stretching from polar regions to the tropics as well as longitudinal cross sections at polar latitudes at about 70° N and the equator have been generated. This contribution gives an overview over the campaigns performed and reports on the observation strategy for achieving the validation goals. We also emphasize the synergetic use of the novel set of aircraft instrumentation and the usefulness of this innovative suite of remote sensing instruments for satellite validation.

  5. Wind and solar resource data sets: Wind and solar resource data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clifton, Andrew; Hodge, Bri-Mathias; Draxl, Caroline

    The range of resource data sets spans from static cartography showing the mean annual wind speed or solar irradiance across a region to high temporal and high spatial resolution products that provide detailed information at a potential wind or solar energy facility. These data sets are used to support continental-scale, national, or regional renewable energy development; facilitate prospecting by developers; and enable grid integration studies. This review first provides an introduction to the wind and solar resource data sets, then provides an overview of the common methods used for their creation and validation. A brief history of wind and solar resource data sets is then presented, followed by areas for future research.

  6. The Dartmouth Database of Children’s Faces: Acquisition and Validation of a New Face Stimulus Set

    PubMed Central

    Dalrymple, Kirsten A.; Gomez, Jesse; Duchaine, Brad

    2013-01-01

    Facial identity and expression play critical roles in our social lives. Faces are therefore frequently used as stimuli in a variety of areas of scientific research. Although several extensive and well-controlled databases of adult faces exist, few databases include children’s faces. Here we present the Dartmouth Database of Children’s Faces, a set of photographs of 40 male and 40 female Caucasian children between 6 and 16 years of age. Models posed eight facial expressions and were photographed from five camera angles under two lighting conditions. Models wore black hats and black gowns to minimize extra-facial variables. To validate the images, independent raters identified facial expressions, rated their intensity, and provided an age estimate for each model. The Dartmouth Database of Children’s Faces is freely available for research purposes and can be downloaded by contacting the corresponding author by email. PMID:24244434

  7. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    NASA Astrophysics Data System (ADS)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code, and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be

  8. Brief surgical procedure code lists for outcomes measurement and quality improvement in resource-limited settings.

    PubMed

    Liu, Charles; Kayima, Peter; Riesel, Johanna; Situma, Martin; Chang, David; Firth, Paul

    2017-11-01

    The lack of a classification system for surgical procedures in resource-limited settings hinders outcomes measurement and reporting. Existing procedure coding systems are prohibitively large and expensive to implement. We describe the creation and prospective validation of 3 brief procedure code lists applicable in low-resource settings, based on analysis of surgical procedures performed at Mbarara Regional Referral Hospital, Uganda's second largest public hospital. We reviewed operating room logbooks to identify all surgical operations performed at Mbarara Regional Referral Hospital during 2014. Based on the documented indication for surgery and procedure(s) performed, we assigned each operation up to 4 procedure codes from the International Classification of Diseases, 9th Revision, Clinical Modification. Coding of procedures was performed by 2 investigators, and a random 20% of procedures were coded by both investigators. These codes were aggregated to generate procedure code lists. During 2014, 6,464 surgical procedures were performed at Mbarara Regional Referral Hospital, to which we assigned 435 unique procedure codes. Substantial inter-rater reliability was achieved (κ = 0.7037). The 111 most common procedure codes accounted for 90% of all codes assigned, 180 accounted for 95%, and 278 accounted for 98%. We considered these sets of codes as 3 procedure code lists. In a prospective validation, we found that these lists described 83.2%, 89.2%, and 92.6% of surgical procedures performed at Mbarara Regional Referral Hospital during August to September of 2015, respectively. Empirically generated brief procedure code lists based on International Classification of Diseases, 9th Revision, Clinical Modification can be used to classify almost all surgical procedures performed at a Ugandan referral hospital. Such a standardized procedure coding system may enable better surgical data collection for administration, research, and quality improvement in resource-limited settings.
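
    The derivation of the brief lists reduces to a cumulative-frequency cut over a code-frequency table; a sketch with an invented table standing in for the 435 assigned codes:

      from collections import Counter

      # Invented frequency table, not the Mbarara data.
      code_counts = Counter({f"ICD9-{i:03d}": max(1, 500 - 3 * i) for i in range(435)})
      total = sum(code_counts.values())

      def brief_list(counter, coverage):
          cum, selected = 0, []
          for code, n in counter.most_common():
              selected.append(code)
              cum += n
              if cum / total >= coverage:
                  break
          return selected

      for cov in (0.90, 0.95, 0.98):
          print(f"{cov:.0%} coverage: {len(brief_list(code_counts, cov))} codes")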

  9. Validation of an automatically generated screening score for frailty: the care assessment need (CAN) score.

    PubMed

    Ruiz, Jorge G; Priyadarshni, Shivani; Rahaman, Zubair; Cabrera, Kimberly; Dang, Stuti; Valencia, Willy M; Mintzer, Michael J

    2018-05-04

    Frailty is a state of vulnerability to stressors that is prevalent in older adults and is associated with higher morbidity, mortality and healthcare utilization. Multiple instruments are used to measure frailty; most are time-consuming. The Care Assessment Need (CAN) score is automatically generated from electronic health record data using a statistical model. The methodology for calculation of the CAN score is consistent with the deficit accumulation model of frailty. At the 95th percentile, the CAN score is a predictor of hospitalization and mortality in Veteran populations. The purpose of this study was to validate the CAN score as a screening tool for frailty in primary care. This cross-sectional validation study compared the CAN score with a 40-item Frailty Index reference standard based on a comprehensive geriatric assessment (CGA). We included community-dwelling male patients over age 65 from an outpatient geriatric medicine clinic. We calculated the sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy of the CAN score. 184 patients over age 65 were included in the study: 97.3% male, 64.2% White, 80.9% non-Hispanic. The CGA-based Frailty Index defined 14.1% as robust, 53.3% as prefrail and 32.6% as frail. For the frail, statistical analysis demonstrated that a CAN score of 55 provides sensitivity, specificity, PPV and NPV of 91.67%, 40.32%, 42.64% and 90.91%, respectively, whereas at a score of 95 the sensitivity, specificity, PPV and NPV were 43.33%, 88.81%, 63.41% and 77.78%, respectively. Area under the receiver operating characteristics curve was 0.736 (95% CI = .661-.811). The CAN score is a potential screening tool for frailty among older adults; it is generated automatically and provides acceptable diagnostic accuracy. Hence, the CAN score may be a useful tool to primary care providers for detection of frailty in their patient panels.
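
    A sketch of the cutoff evaluation reported above: screening metrics of a score threshold against a binary frailty reference standard. The scores and labels below are synthetic, not the study data, so the printed values will not match the abstract:

      import numpy as np

      rng = np.random.default_rng(11)
      frail = rng.random(184) < 0.33            # synthetic reference standard
      score = np.where(frail, rng.normal(75, 20, 184), rng.normal(45, 20, 184))

      def screen_metrics(score, truth, cutoff):
          pred = score >= cutoff
          tp = np.sum(pred & truth);  fp = np.sum(pred & ~truth)
          fn = np.sum(~pred & truth); tn = np.sum(~pred & ~truth)
          return dict(sens=tp / (tp + fn), spec=tn / (tn + fp),
                      ppv=tp / (tp + fp), npv=tn / (tn + fn))

      for cutoff in (55, 95):
          m = screen_metrics(score, frail, cutoff)
          print(cutoff, {k: round(float(v), 2) for k, v in m.items()})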

  10. Validation of Autoclave Protocols for Successful Decontamination of Category A Medical Waste Generated from Care of Patients with Serious Communicable Diseases

    PubMed Central

    Reimers, Mallory; Ernst, Neysa; Bova, Gregory; Nowakowski, Elaine; Bukowski, James; Ellis, Brandon C.; Smith, Chris; Sauer, Lauren; Dionne, Kim; Carroll, Karen C.; Maragakis, Lisa L.; Parrish, Nicole M.

    2016-01-01

    In response to the Ebola outbreak in 2014, many hospitals designated specific areas to care for patients with Ebola and other highly infectious diseases. The safe handling of category A infectious substances is a unique challenge in this environment. One solution is on-site waste treatment with a steam sterilizer or autoclave. The Johns Hopkins Hospital (JHH) installed two pass-through autoclaves in its biocontainment unit (BCU). The JHH BCU and The Johns Hopkins biosafety level 3 (BSL-3) clinical microbiology laboratory designed and validated waste-handling protocols with simulated patient trash to ensure adequate sterilization. The results of the validation process revealed that autoclave factory default settings are potentially ineffective for certain types of medical waste and highlighted the critical role of waste packaging in successful sterilization. The lessons learned from the JHH validation process can inform the design of waste management protocols to ensure effective treatment of highly infectious medical waste. PMID:27927920

  11. Validation of Autoclave Protocols for Successful Decontamination of Category A Medical Waste Generated from Care of Patients with Serious Communicable Diseases.

    PubMed

    Garibaldi, Brian T; Reimers, Mallory; Ernst, Neysa; Bova, Gregory; Nowakowski, Elaine; Bukowski, James; Ellis, Brandon C; Smith, Chris; Sauer, Lauren; Dionne, Kim; Carroll, Karen C; Maragakis, Lisa L; Parrish, Nicole M

    2017-02-01

    In response to the Ebola outbreak in 2014, many hospitals designated specific areas to care for patients with Ebola and other highly infectious diseases. The safe handling of category A infectious substances is a unique challenge in this environment. One solution is on-site waste treatment with a steam sterilizer or autoclave. The Johns Hopkins Hospital (JHH) installed two pass-through autoclaves in its biocontainment unit (BCU). The JHH BCU and The Johns Hopkins biosafety level 3 (BSL-3) clinical microbiology laboratory designed and validated waste-handling protocols with simulated patient trash to ensure adequate sterilization. The results of the validation process revealed that autoclave factory default settings are potentially ineffective for certain types of medical waste and highlighted the critical role of waste packaging in successful sterilization. The lessons learned from the JHH validation process can inform the design of waste management protocols to ensure effective treatment of highly infectious medical waste. Copyright © 2017 American Society for Microbiology.

  12. Primer ID Validates Template Sampling Depth and Greatly Reduces the Error Rate of Next-Generation Sequencing of HIV-1 Genomic RNA Populations

    PubMed Central

    Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr

    2015-01-01

    Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1 due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. Knowing the sampling depth allows the construction of a model of how to maximize
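
    The Poisson cutoff logic described in the abstract can be sketched as follows: given N consensus templates and a residual error rate near 1e-4 per nucleotide, a variant is called only when its count is improbable under the error model. The function name and defaults are illustrative, not the authors' implementation:

      from scipy.stats import poisson

      def min_count_to_call(n_templates: int, error_rate: float = 1e-4,
                            alpha: float = 0.05) -> int:
          """Smallest variant count not explainable as sequencing error."""
          lam = n_templates * error_rate        # expected errors at a position
          k = 0
          while poisson.sf(k, lam) > alpha:     # P(X > k) under the error model
              k += 1
          return k + 1

      print(min_count_to_call(10000))           # e.g. 10,000 template consensi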

  13. A validation study regarding a generative approach in choosing appropriate colors for impaired users.

    PubMed

    Troiano, Luigi; Birtolo, Cosimo; Armenise, Roberto

    2016-01-01

    In many circumstances, concepts, ideas and emotions are mainly conveyed by colors. Color vision disorders can heavily limit the user experience in accessing the Information Society. Therefore, color vision impairments should be taken into account in order to make information and services accessible to a broader audience. The task is not easy for designers, who generally are not affected by any color vision disorder. In any case, the design of accessible user interfaces should not lead to boring color schemes. The selection of appealing and harmonic color combinations should be preserved. In past research, we investigated a generative approach led by evolutionary computing to support interface designers in making colors accessible to impaired users. This approach has also been followed by other authors. The contribution of this paper is to provide an experimental validation of the claim that this approach is actually beneficial to designers and users.

  14. Direct Validation of Differential Prediction.

    ERIC Educational Resources Information Center

    Lunneborg, Clifford E.

    Using academic achievement data for 655 university students, direct validation of differential predictions based on a battery of aptitude/achievement measures selected for their differential prediction efficiency was attempted. In the cross-validation of the prediction of actual differences among five academic area GPA's, this set of differential…

  15. Development of Reliable and Validated Tools to Evaluate Technical Resuscitation Skills in a Pediatric Simulation Setting: Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics.

    PubMed

    Faudeux, Camille; Tran, Antoine; Dupont, Audrey; Desmontils, Jonathan; Montaudié, Isabelle; Bréaud, Jean; Braun, Marc; Fournier, Jean-Paul; Bérard, Etienne; Berlengi, Noémie; Schweitzer, Cyril; Haas, Hervé; Caci, Hervé; Gatin, Amélie; Giovannini-Chami, Lisa

    2017-09-01

    To develop a reliable and validated tool to evaluate technical resuscitation skills in a pediatric simulation setting. Four Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics (RESCAPE) evaluation tools were created, following international guidelines: intraosseous needle insertion, bag mask ventilation, endotracheal intubation, and cardiac massage. We applied a modified Delphi methodology to evaluate binary rating items. Reliability was assessed by comparing the ratings of 2 observers (1 in real time and 1 after a video-recorded review). The tools were assessed for content, construct, and criterion validity, and for sensitivity to change. Inter-rater reliability, evaluated with Cohen kappa coefficients, was perfect or near-perfect (>0.8) for 92.5% of items and each Cronbach alpha coefficient was ≥0.91. Principal component analyses showed that all 4 tools were unidimensional. Significant increases in median scores with increasing levels of medical expertise were demonstrated for RESCAPE-intraosseous needle insertion (P = .0002), RESCAPE-bag mask ventilation (P = .0002), RESCAPE-endotracheal intubation (P = .0001), and RESCAPE-cardiac massage (P = .0037). Significantly increased median scores over time were also demonstrated during a simulation-based educational program. RESCAPE tools are reliable and validated tools for the evaluation of technical resuscitation skills in pediatric settings during simulation-based educational programs. They might also be used for medical practice performance evaluations. Copyright © 2017 Elsevier Inc. All rights reserved.
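
    The agreement statistic used above, Cohen's kappa, can be computed for one binary checklist item scored by two observers; the ratings below are invented:

      from sklearn.metrics import cohen_kappa_score

      live_rater  = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0]   # real-time observer
      video_rater = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0]   # video-review observer
      print(f"kappa = {cohen_kappa_score(live_rater, video_rater):.2f}")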

  16. Principles and Recommendations for Standardizing the Use of the Next-Generation Sequencing Variant File in Clinical Settings.

    PubMed

    Lubin, Ira M; Aziz, Nazneen; Babb, Lawrence J; Ballinger, Dennis; Bisht, Himani; Church, Deanna M; Cordes, Shaun; Eilbeck, Karen; Hyland, Fiona; Kalman, Lisa; Landrum, Melissa; Lockhart, Edward R; Maglott, Donna; Marth, Gabor; Pfeifer, John D; Rehm, Heidi L; Roy, Somak; Tezak, Zivana; Truty, Rebecca; Ullman-Cullere, Mollie; Voelkerding, Karl V; Worthey, Elizabeth A; Zaranek, Alexander W; Zook, Justin M

    2017-05-01

    A national workgroup convened by the Centers for Disease Control and Prevention identified principles and made recommendations for standardizing the description of sequence data contained within the variant file generated during the course of clinical next-generation sequence analysis for diagnosing human heritable conditions. The specifications for variant files were initially developed to be flexible with regard to content representation to support a variety of research applications. This flexibility permits variation with regard to how sequence findings are described and this depends, in part, on the conventions used. For clinical laboratory testing, this poses a problem because these differences can compromise the capability to compare sequence findings among laboratories to confirm results and to query databases to identify clinically relevant variants. To provide for a more consistent representation of sequence findings described within variant files, the workgroup made several recommendations that considered alignment to a common reference sequence, variant caller settings, use of genomic coordinates, and gene and variant naming conventions. These recommendations were considered with regard to the existing variant file specifications presently used in the clinical setting. Adoption of these recommendations is anticipated to reduce the potential for ambiguity in describing sequence findings and facilitate the sharing of genomic data among clinical laboratories and other entities. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  17. Current Concerns in Validity Theory.

    ERIC Educational Resources Information Center

    Kane, Michael

    Validity is concerned with the clarification and justification of the intended interpretations and uses of observed scores. It has not been easy to formulate a general methodology, or set of principles, for validation, but progress has been made, especially as the field has moved from relatively limited criterion-related models to sophisticated…

  18. Generation, Analysis and Characterization of Anisotropic Engineered Meta Materials

    NASA Astrophysics Data System (ADS)

    Trifale, Ninad T.

    A methodology for the systematic generation of highly anisotropic micro-lattice structures was investigated. Multiple algorithms for the generation and validation of engineered structures were developed and evaluated. The set of all possible permutations of structures for an 8-node cubic unit cell was considered, and the degree of anisotropy of meta-properties in heat transport and mechanical elasticity was evaluated. Feasibility checks were performed to ensure that the generated unit cell network was repeatable and a continuous lattice structure. Four different strategies for generating permutations of the structures are discussed. Analytical models were developed to predict the effective thermal, mechanical and permeability characteristics of these cellular structures. Experimentation and numerical modeling techniques were used to validate the models developed. A self-consistent mechanical elasticity model was developed which connects the meso-scale properties to the stiffness of individual struts. A three-dimensional thermal resistance network analogy was used to evaluate the effective thermal conductivity of the structures: the struts were modeled as a network of one-dimensional thermal resistive elements and the effective conductivity evaluated. Models were validated against numerical simulations and experimental measurements on 3D-printed samples. A model was developed to predict the effective permeability of these engineered structures based on Darcy's law. Drag coefficients were evaluated for individual connections in the transverse and longitudinal directions, and an interaction term was calibrated from experimental data in the literature in order to predict permeability. A generic optimization framework coupled to a finite element solver was developed for analyzing any application involving the use of porous structures. Objective functions were generated to address the frequently observed trade-off between stiffness, thermal conductivity, permeability and porosity. Three
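
    The thermal resistance network analogy can be sketched in a few lines: each strut is a 1-D conduction element R = L/(kA), and parallel struts through the cell add as conductances. The geometry and conductivity below are illustrative, not from the thesis:

      def strut_resistance(length_m, k_w_mk, area_m2):
          """1-D conduction resistance of a strut, R = L / (k * A), in K/W."""
          return length_m / (k_w_mk * area_m2)

      # Four identical 1 mm^2 struts spanning a 1 cm cubic unit cell in parallel.
      r = strut_resistance(0.01, k_w_mk=200.0, area_m2=1e-6)  # aluminium-like
      r_cell = r / 4                                          # parallel combination
      k_eff = 0.01 / (r_cell * 1e-4)   # k_eff = L / (R_cell * A_cell_face)
      print(f"effective conductivity ~ {k_eff:.1f} W/m-K")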

  19. Novel Primer Sets for Next Generation Sequencing-Based Analyses of Water Quality

    PubMed Central

    Lee, Elvina; Khurana, Maninder S.; Whiteley, Andrew S.; Monis, Paul T.; Bath, Andrew; Gordon, Cameron; Ryan, Una M.; Paparini, Andrea

    2017-01-01

    Next generation sequencing (NGS) has rapidly become an invaluable tool for the detection, identification and relative quantification of environmental microorganisms. Here, we demonstrate two new 16S rDNA primer sets, which are compatible with NGS approaches and are primarily for use in water quality studies. Compared to 16S rRNA gene based universal primers, in silico and experimental analyses demonstrated that the new primers showed increased specificity for the Cyanobacteria and Proteobacteria phyla, allowing increased sensitivity for the detection, identification and relative quantification of toxic bloom-forming microalgae, microbial water quality bioindicators and common pathogens. Significantly, Cyanobacterial and Proteobacterial sequences accounted for ca. 95% of all sequences obtained within NGS runs (when compared to ca. 50% with standard universal NGS primers), providing higher sensitivity and greater phylogenetic resolution of key water quality microbial groups. The increased selectivity of the new primers allows the parallel sequencing of more samples through reduced sequence retrieval levels required to detect target groups, potentially reducing NGS costs by 50% while still guaranteeing optimal coverage and species discrimination. PMID:28118368

  20. Assessment of mutual understanding of physician patient encounters: development and validation of a Mutual Understanding Scale (MUS) in a multicultural general practice setting.

    PubMed

    Harmsen, J A M; Bernsen, R M D; Meeuwesen, L; Pinto, D; Bruijnzeels, M A

    2005-11-01

    Mutual understanding between physician and patient is essential for good quality of care; however, the two parties may have different views on health complaints and treatment. This study aimed to develop and validate a measure of mutual understanding (MU) in a multicultural setting. The study included 986 patients from 38 general practices. GPs completed a questionnaire, and patients were interviewed after the consultation. To assess mutual understanding, the answers from GP and patient to questions about different aspects of the consultation were compared. An expert panel, using the nominal group technique, first developed criteria for mutual understanding on each consultation aspect and then established a ranking to combine all aspects into an overall consultation judgement. Regarding construct validity, patients' ethnicity, age and language proficiency were the most important predictors of MU. Regarding criterion validity, all GP-related criteria (the GP's perception of his ability to explain to the patient, of the patient's ability to explain to the GP, and of the patient's understanding of consultation aspects) were clearly related to MU, as were the patient's consultation satisfaction and feeling that the GP was considerate. We conclude that the Mutual Understanding Scale is a reliable and valid measure for use in large-scale quantitative studies.

  1. Validity and reliability of the de Morton Mobility Index in the subacute hospital setting in a geriatric evaluation and management population.

    PubMed

    de Morton, Natalie A; Lane, Kylie

    2010-11-01

    To investigate the clinimetric properties of the de Morton Mobility Index (DEMMI) in a Geriatric Evaluation and Management (GEM) population. A longitudinal validation study (n = 100) and inter-rater reliability study (n = 29) in a GEM population. Consecutive patients admitted to a GEM rehabilitation ward were eligible for inclusion. At hospital admission and discharge, a physical therapist assessed patients with physical performance instruments that included the 6-metre walk test, step test, Clinical Test of Sensory Organization and Balance, Timed Up and Go test, 6-minute walk test and the DEMMI. Consecutively eligible patients were included in an inter-rater reliability study between physical therapists. DEMMI admission scores were normally distributed (mean 30.2, standard deviation 16.7), whereas the other activity limitation instruments had either a floor or a ceiling effect. Evidence of convergent, discriminant and known-groups validity for the DEMMI was obtained. The minimal detectable change with 90% confidence was 10.5 (95% confidence interval 6.1-17.9) points and the minimal clinically important difference was 8.4 points on the 100-point interval DEMMI scale. The DEMMI provides clinicians with an accurate and valid method of measuring mobility for geriatric patients in the subacute hospital setting.
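    A minimal sketch of the clinimetric arithmetic behind these figures, assuming the usual definitions (SEM = SD * sqrt(1 - ICC); MDC90 = 1.645 * sqrt(2) * SEM); the SD is taken from the abstract, while the ICC below is a hypothetical illustration, not a figure from the study.

      import math

      def mdc90(sd, icc):
          # SEM = SD * sqrt(1 - ICC); MDC90 = 1.645 * sqrt(2) * SEM
          sem = sd * math.sqrt(1.0 - icc)
          return 1.645 * math.sqrt(2.0) * sem

      sd_admission = 16.7  # from the abstract
      icc_assumed = 0.93   # hypothetical reliability, for illustration only
      print(f"MDC90 ~ {mdc90(sd_admission, icc_assumed):.1f} points")  # ~10.3, near the reported 10.5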

  2. Generation and Validation of Spatial Distribution of Hourly Wind Speed Time-Series using Machine Learning

    NASA Astrophysics Data System (ADS)

    Veronesi, F.; Grassi, S.

    2016-09-01

    Wind resource assessment is a key aspect of wind farm planning, since it allows the long-term electricity production to be estimated. Moreover, wind speed time-series at high resolution are helpful for estimating temporal changes in electricity generation and indispensable for designing stand-alone systems, which are affected by mismatches between supply and demand. In this work, we present a new generalized statistical methodology to generate the spatial distribution of wind speed time-series, using Switzerland as a case study. This research is based upon a machine learning model and demonstrates that statistical wind resource assessment can successfully be used to estimate wind speed time-series. In fact, this method obtains reliable wind speed estimates and propagates all sources of uncertainty (from the measurements to the mapping process) in an efficient way, i.e. minimizing computational time and load. This allows not only an accurate estimation but also the creation of precise confidence intervals that map the stochasticity of the wind resource for a particular site. The validation shows that machine learning can minimize the bias of the hourly wind speed estimates. Moreover, for each mapped location this method delivers not only the mean wind speed but also its confidence interval, which are crucial data for planners.
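    The abstract does not name the learner used, so the sketch below should be read as one plausible realization: quantile gradient boosting (scikit-learn) yielding a central estimate plus a confidence interval per location. The features and data are synthetic stand-ins.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform(size=(500, 3))  # e.g. elevation, roughness, exposure (synthetic)
      y = 4 + 3 * X[:, 0] + rng.normal(scale=0.8, size=500)  # synthetic hourly wind speed (m/s)

      # One model per quantile: lower bound, median, upper bound
      models = {q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
                for q in (0.05, 0.5, 0.95)}

      x_new = np.array([[0.4, 0.7, 0.2]])
      lo, med, hi = (models[q].predict(x_new)[0] for q in (0.05, 0.5, 0.95))
      print(f"wind speed ~ {med:.1f} m/s (90% interval {lo:.1f}-{hi:.1f})")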

  3. Development and validation of a new instrument for testing functional health literacy in Japanese adults.

    PubMed

    Nakagami, Katsuyuki; Yamauchi, Toyoaki; Noguchi, Hiroyuki; Maeda, Tohru; Nakagami, Tomoko

    2014-06-01

    This study aimed to develop a reliable and valid measure of functional health literacy in a Japanese clinical setting. Test development consisted of three phases: generation of an item pool, consultation with experts to assess content validity, and comparison with external criteria (the Japanese Health Knowledge Test) to assess criterion validity. A trial version of the test was administered to 535 Japanese outpatients. Internal consistency reliability, calculated as Cronbach's alpha, was 0.81, and concurrent validity was moderate. Receiver Operating Characteristic analysis and Item Response Theory were used to classify patients as having adequate, marginal, or inadequate functional health literacy. Both inadequate and marginal functional health literacy were associated with older age, lower income, lower educational attainment, and poor health knowledge. The time required to complete the test was 10-15 min. This test should enable health workers to better identify patients with inadequate health literacy. © 2013 Wiley Publishing Asia Pty Ltd.
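    Cronbach's alpha, the internal-consistency statistic reported above, is simple to compute from an item-score matrix. A minimal sketch with toy scores (illustrative, not study data):

      import numpy as np

      def cronbach_alpha(items):
          # items: (n_respondents, n_items) matrix of item scores
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1.0 - item_vars / total_var)

      scores = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 4, 4, 5], [1, 2, 1, 2], [3, 3, 4, 3]]
      print(f"alpha = {cronbach_alpha(scores):.2f}")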

  4. Coupling of electromagnetic and structural dynamics for a wind turbine generator

    NASA Astrophysics Data System (ADS)

    Matzke, D.; Rick, S.; Hollas, S.; Schelenz, R.; Jacobs, G.; Hameyer, K.

    2016-09-01

    This contribution presents a model interface for a wind turbine generator that represents the reciprocal effects between the mechanical and the electromagnetic system. To this end, a multi-body-simulation (MBS) model in Simpack is set up and coupled with a quasi-static electromagnetic (EM) model of the generator in Matlab/Simulink via co-simulation. Due to a lack of data regarding the structural properties of the generator, the modal properties of the MBS model are fitted with respect to the results of an experimental modal analysis (EMA) on the reference generator. The method used and the results of this approach are presented in this paper. The MBS model and the interface are set up in such a way that the EM forces can be applied to the structure and the response of the structure can be fed back to the EM model. The results of this co-simulation clearly show an influence of the feedback of the mechanical response, which is mainly damping in the torsional degree of freedom and effects due to eccentricity in the radial direction. The accuracy of these results will be validated via test bench measurements and presented in future work. Furthermore, it is suggested that the EM model be adjusted in future work so that transient effects are represented.
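    The structure of such a co-simulation loop can be sketched generically. Everything below (the stand-in EM and structural models and all constants) is a hypothetical placeholder for the Simpack/Simulink coupling; only the exchange pattern mirrors the interface described above.

      def em_model_step(angle, speed, ecc):
          # Stand-in quasi-static EM model: speed-dependent drive torque plus an
          # eccentricity-proportional radial pull (unbalanced magnetic pull)
          torque = 1.0e3 - 50.0 * speed   # N*m, hypothetical
          f_radial = 2.0e5 * ecc          # N, hypothetical UMP stiffness
          return torque, f_radial

      def mbs_step(state, torque, f_radial, dt):
          # Stand-in structure: 1-DOF torsional dynamics, quasi-static radial deflection
          angle, speed, ecc = state
          inertia, k_radial = 500.0, 5.0e6  # hypothetical
          speed += (torque / inertia) * dt
          angle += speed * dt
          return angle, speed, f_radial / k_radial

      state = (0.0, 1.5, 1.0e-4)  # angle (rad), speed (rad/s), eccentricity (m)
      dt = 1.0e-3
      for _ in range(1000):  # 1 s of coupled simulation
          torque, f_rad = em_model_step(*state)       # EM forces from current state
          state = mbs_step(state, torque, f_rad, dt)  # structural response fed back
      print(f"rotor speed after 1 s ~ {state[1]:.2f} rad/s")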

  5. SCIAMACHY validation by aircraft remote sensing: design, execution, and first measurement results of the SCIA-VALUE mission

    NASA Astrophysics Data System (ADS)

    Fix, A.; Ehret, G.; Flentje, H.; Poberaj, G.; Gottwald, M.; Finkenzeller, H.; Bremer, H.; Bruns, M.; Burrows, J. P.; Kleinböhl, A.; Küllmann, H.; Kuttippurath, J.; Richter, A.; Wang, P.; Heue, K.-P.; Platt, U.; Pundt, I.; Wagner, T.

    2005-05-01

    For the first time, three different remote sensing instruments - a sub-millimeter radiometer, a differential optical absorption spectrometer in the UV-visible spectral range, and a lidar - were deployed aboard DLR's meteorological research aircraft Falcon 20 to validate a large number of SCIAMACHY level 2 and off-line data products, such as O3, NO2, N2O, BrO, OClO, H2O, aerosols, and clouds. Within the two validation campaigns of the SCIA-VALUE mission (SCIAMACHY VALidation and Utilization Experiment), extended latitudinal cross-sections stretching from polar regions to the tropics, as well as longitudinal cross-sections at polar latitudes at about 70° N and at the equator, were generated. This contribution gives an overview of the campaigns performed and reports on the observation strategy for achieving the validation goals. We also emphasize the synergetic use of the novel set of aircraft instrumentation and the usefulness of this innovative suite of remote sensing instruments for satellite validation.

  6. Pan-European stochastic flood event set

    NASA Astrophysics Data System (ADS)

    Kadlec, Martin; Pinto, Joaquim G.; He, Yi; Punčochář, Petr; Kelemen, Fanni D.; Manful, Desmond; Palán, Ladislav

    2017-04-01

    Impact Forecasting (IF), the model development center of Aon Benfield, has been developing a large suite of catastrophe flood models on a probabilistic basis for individual countries in Europe. Such natural catastrophes do not follow national boundaries: for example, the major flood in 2016 was responsible for Europe's largest insured loss (USD 3.4bn) and affected Germany, France, Belgium, Austria and parts of several other countries. Reflecting such needs, IF initiated the development of a pan-European flood event set which combines cross-country exposures with country-based loss distributions to provide more insightful data to re/insurers. Because observed discharge data are not available across the whole of Europe in sufficient quantity and quality to permit detailed loss evaluation, a top-down approach was chosen. This approach is based on simulating precipitation from a GCM/RCM model chain, followed by a calculation of discharges using rainfall-runoff modelling. IF set up this project in close collaboration with the Karlsruhe Institute of Technology (KIT) regarding the precipitation estimates and with the University of East Anglia (UEA) for the rainfall-runoff modelling. KIT's main objective is to provide high resolution daily historical and stochastic time series of key meteorological variables. A purely dynamical downscaling approach with the regional climate model COSMO-CLM (CCLM) is used to generate the historical time series, using re-analysis data as boundary conditions. The resulting time series are validated against the gridded observational dataset E-OBS, and different bias-correction methods are employed. The generation of the stochastic time series requires transfer functions between large-scale atmospheric variables and regional temperature and precipitation fields. These transfer functions are developed for the historical time series using reanalysis data as predictors and bias-corrected CCLM simulated precipitation and temperature as
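    The abstract mentions bias correction of the CCLM output against E-OBS without naming a method; empirical quantile mapping, sketched here on synthetic gamma-distributed "precipitation", is one widely used option.

      import numpy as np

      def quantile_map(model_hist, obs_hist, model_new):
          # Empirical quantile mapping: replace each simulated value by the observed
          # value at the same quantile of the historical distributions
          q = np.linspace(0.0, 1.0, 101)
          model_q = np.quantile(model_hist, q)
          obs_q = np.quantile(obs_hist, q)
          ranks = np.interp(model_new, model_q, q)
          return np.interp(ranks, q, obs_q)

      rng = np.random.default_rng(1)
      obs = rng.gamma(shape=2.0, scale=3.0, size=5000)  # stand-in for E-OBS precipitation
      sim = rng.gamma(shape=2.0, scale=4.0, size=5000)  # biased stand-in for CCLM output
      corrected = quantile_map(sim, obs, sim)
      print(f"mean: sim {sim.mean():.1f}, corrected {corrected.mean():.1f}, obs {obs.mean():.1f}")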

  7. The Virtual Care Climate Questionnaire: Development and Validation of a Questionnaire Measuring Perceived Support for Autonomy in a Virtual Care Setting.

    PubMed

    Smit, Eline Suzanne; Dima, Alexandra Lelia; Immerzeel, Stephanie Annette Maria; van den Putte, Bas; Williams, Geoffrey Colin

    2017-05-08

    Web-based health behavior change interventions may be more effective if they offer autonomy-supportive communication facilitating the internalization of motivation for health behavior change. Yet, at this moment no validated tools exist to assess user-perceived autonomy-support of such interventions. The aim of this study was to develop and validate the Virtual Care Climate Questionnaire (VCCQ), a measure of perceived autonomy-support in a virtual care setting. Items were developed based on existing questionnaires and expert consultation and were pretested among experts and target populations. The VCCQ was administered in relation to Web-based interventions aimed at reducing consumption of alcohol (Study 1; N=230) or cannabis (Study 2; N=228). Item properties, structural validity, and reliability were examined with item-response and classical test theory methods, and convergent and divergent validity via correlations with relevant concepts. In Study 1, 20 of 23 items formed a one-dimensional scale (alpha=.97; omega=.97; H=.66; mean 4.9 [SD 1.0]; range 1-7) that met the assumptions of monotonicity and invariant item ordering. In Study 2, 16 items fitted these criteria (alpha=.92; omega=.93; H=.45; mean 4.2 [SD 1.1]; range 1-7). Only 15 items remained in the questionnaire in both studies, so we proceeded to the analyses of reliability and construct validity with this 15-item version of the VCCQ. Convergent validity was confirmed by positive associations with autonomous motivation (Study 1: r=.66, P<.001; Study 2: r=.37, P<.001) and perceived competence for reducing alcohol intake (Study 1: r=.52, P<.001). Divergent validity could only be confirmed by the nonsignificant association with perceived competence for learning (Study 2: r=.05, P=.48). The VCCQ accurately assessed participants' perceived

  8. Isolated Open Rotor Noise Prediction Assessment Using the F31A31 Historical Blade Set

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Jones, William T.; Boyd, D. Douglas, Jr.; Zawodny, Nikolas S.

    2016-01-01

    In an effort to address fuel efficiency and environmental impact concerns for next-generation aviation, open rotor propulsion systems have received renewed interest. However, maintaining high propulsive efficiency while simultaneously meeting noise goals has been one of the challenges in making open rotor propulsion a viable option. Improvements in prediction tools and design methodologies have opened the design space for next-generation open rotor designs that satisfy these challenging objectives. As such, validation of aerodynamic and acoustic prediction tools has been an important aspect of open rotor research efforts. This paper describes validation efforts of a combined computational fluid dynamics and Ffowcs Williams and Hawkings equation methodology for open rotor aeroacoustic modeling. Performance and acoustic predictions were made for a benchmark open rotor blade set and compared with measurements over a range of rotor speeds and observer angles. Overall, the results indicate that the computational approach is acceptable for assessing low-noise open rotor designs. Additionally, this approach may be used to provide realistic incident source fields for acoustic shielding/scattering studies on various aircraft configurations.

  9. Validity of the Student Risk Screening Scale: Evidence of Predictive Validity in a Diverse, Suburban Elementary Setting

    ERIC Educational Resources Information Center

    Menzies, Holly M.; Lane, Kathleen Lynne

    2012-01-01

    In this study the authors examined the psychometric properties of the "Student Risk Screening Scale" (SRSS), including predictive validity in terms of student outcomes in behavioral and academic domains. The school, a diverse, suburban school in Southern California, administered the SRSS at three time points as part of regular school…

  10. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues.

    PubMed

    Mourya, Devendra T; Yadav, Pragya D; Khare, Ajay; Khan, Anwar H

    2017-10-01

    With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being set up in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on the certification and validation of biosafety laboratories, and no accredited government or private agency is available in India to undertake such validation and certification. Therefore, the reliance is mostly on the indigenous experience, talent and expertise available, which are in short supply. This article elucidates the process of certification and validation of biosafety laboratories in a concise manner for the understanding of the concerned users and suggests the important parameters and criteria that should be considered and addressed during the laboratory certification and validation process.

  11. Validation of high throughput sequencing and microbial forensics applications

    PubMed Central

    2014-01-01

    High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics lies in the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results allow analysis and interpretation, significantly influencing the course and/or focus of an investigation, and can impact the response of the government to an attack having individual, political, economic or military consequences. Interpretation of the results of microbial forensic analyses relies on understanding the performance and limitations of HTS methods, including analytical processes, assays and data interpretation. The utility of HTS must be defined carefully within established operating conditions and tolerances. Validation is essential in the development and implementation of microbial forensics methods used for formulating investigative leads for attribution. HTS strategies vary, requiring guiding principles for HTS system validation. Three initial aspects of HTS, irrespective of chemistry, instrumentation or software, are: 1) sample preparation, 2) sequencing, and 3) data analysis. Criteria that should be considered for HTS validation for microbial forensics are presented here. Validation should be defined in terms of the specific application, and the criteria described here comprise a foundation for investigators to establish, validate and implement HTS as a tool in microbial forensics, enhancing public safety and national security. PMID:25101166

  12. Validation of high throughput sequencing and microbial forensics applications.

    PubMed

    Budowle, Bruce; Connell, Nancy D; Bielecka-Oder, Anna; Colwell, Rita R; Corbett, Cindi R; Fletcher, Jacqueline; Forsman, Mats; Kadavy, Dana R; Markotic, Alemka; Morse, Stephen A; Murch, Randall S; Sajantila, Antti; Schmedes, Sarah E; Ternus, Krista L; Turner, Stephen D; Minot, Samuel

    2014-01-01

    High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics lies in the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results allow analysis and interpretation, significantly influencing the course and/or focus of an investigation, and can impact the response of the government to an attack having individual, political, economic or military consequences. Interpretation of the results of microbial forensic analyses relies on understanding the performance and limitations of HTS methods, including analytical processes, assays and data interpretation. The utility of HTS must be defined carefully within established operating conditions and tolerances. Validation is essential in the development and implementation of microbial forensics methods used for formulating investigative leads for attribution. HTS strategies vary, requiring guiding principles for HTS system validation. Three initial aspects of HTS, irrespective of chemistry, instrumentation or software, are: 1) sample preparation, 2) sequencing, and 3) data analysis. Criteria that should be considered for HTS validation for microbial forensics are presented here. Validation should be defined in terms of the specific application, and the criteria described here comprise a foundation for investigators to establish, validate and implement HTS as a tool in microbial forensics, enhancing public safety and national security.

  13. Individualized quality of life in patients with low back pain: reliability and validity of the Patient Generated Index.

    PubMed

    Løchting, Ida; Grotle, Margreth; Storheim, Kjersti; Werner, Erik L; Garratt, Andrew M

    2014-09-01

    To evaluate the reliability and validity of the improved version of the Patient Generated Index (PGI) in patients with low back pain. The PGI was administered to 90 patients attending care in 1 of 6 institutions in Norway and evaluated for reliability and validity. The questionnaire was given out to 61 patients for re-test purposes. The PGI was completed correctly by 80 (88.9%) patients and, of the 61 patients responding to the re-test, 50 (82.0%) completed both surveys correctly. PGI scores were approximately normally distributed, with a median of 40 (range 80), where 100 is the best possible quality of life. There were no floor or ceiling effects. The 5 most frequently listed areas affecting quality of life were pain, sleep, stiffness, socializing and housework. The test-retest intraclass correlation coefficient was 0.73. The smallest detectable changes for individual and group purposes were 32.8 and 4.6, respectively. The correlations between PGI scores and other instrument scores followed a priori hypotheses of low to moderate correlations. The PGI has evidence for reliability and validity in Norwegian patients with low back pain at the group level and may be considered for application in intervention studies when a comprehensive evaluation of quality of life is important. However, the smallest detectable change, of approximately 30 points, may be considered too large for individual purposes in clinical applications.

  14. Validation of sterilizing grade filtration.

    PubMed

    Jornitz, M W; Meltzer, T H

    2003-01-01

    Validation considerations for sterilizing-grade filters, namely 0.2 micron filters, changed when the FDA voiced concerns about the validity of bacterial challenge tests performed in the past. Such validation exercises are nowadays considered to be filter qualification. Filter validation requires more thorough analysis, especially bacterial challenge testing with the actual drug product under process conditions. To this end, viability testing is necessary to determine the bacterial challenge test methodology. In addition to these two compulsory tests, other evaluations such as extractables, adsorption and chemical compatibility tests should be considered. PDA Technical Report #26, Sterilizing Filtration of Liquids, describes all parameters and aspects required for the comprehensive validation of filters. The report is a most helpful tool for the validation of liquid filters used in the biopharmaceutical industry, and it sets the cornerstones of validation requirements and other filtration considerations.

  15. The use of zeolites to generate PET phantoms for the validation of quantification strategies in oncology.

    PubMed

    Zito, Felicia; De Bernardi, Elisabetta; Soffientini, Chiara; Canzi, Cristina; Casati, Rosangela; Gerundini, Paolo; Baselli, Giuseppe

    2012-09-01

    In recent years, segmentation algorithms and activity quantification methods have been proposed for oncological (18)F-fluorodeoxyglucose (FDG) PET. A full assessment of these algorithms, necessary for clinical transfer, requires validation on data sets provided with a reliable ground truth for the imaged activity distribution, which must be as realistic as possible. The aim of this work is to propose a strategy to simulate lesions of uniform uptake and irregular shape in an anthropomorphic phantom, with the possibility of easily obtaining a ground truth for lesion activity and borders. Lesions were simulated with samples of clinoptilolite, a family of natural zeolites of irregular shape, able to absorb aqueous solutions of (18)F-FDG, available in a wide size range, and nontoxic. Zeolites were soaked in solutions of (18)F-FDG for increasing times up to 120 min, and their absorptive properties were characterized as a function of soaking duration, solution concentration, and zeolite dry weight. Saturated zeolites were wrapped in Parafilm, positioned inside an Alderson thorax-abdomen phantom, and imaged with a PET-CT scanner. The ground truth for the activity distribution of each zeolite was obtained by segmenting high-resolution, finely aligned CT images on the basis of independently obtained volume measurements. The fine alignment between CT and PET was validated by comparing the CT-derived ground truth to a set of PET threshold segmentations of the zeolites in terms of Dice index and volume error. The soaking time necessary to achieve saturation increases with zeolite dry weight, with a maximum of about 90 min for the largest sample. At saturation, a linear dependence of the uptake, normalized to the solution concentration, on zeolite dry weight (R(2) = 0.988), as well as a uniform distribution of the activity over the entire zeolite volume in PET imaging, were demonstrated. These findings indicate that the (18)F-FDG solution is able to saturate the zeolite pores and that
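    The Dice index and volume error used above are simple overlap measures between binary masks; a minimal sketch on toy 3D arrays (not study data):

      import numpy as np

      def dice(a, b):
          # Dice similarity between two binary masks: 2|A & B| / (|A| + |B|)
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def volume_error(seg, truth):
          # Signed relative volume difference
          return (seg.sum() - truth.sum()) / truth.sum()

      # Toy masks standing in for the CT-derived ground truth and a PET threshold segmentation
      truth = np.zeros((20, 20, 20), dtype=bool)
      truth[5:15, 5:15, 5:15] = True
      seg = np.zeros_like(truth)
      seg[6:15, 5:15, 5:16] = True
      print(f"Dice = {dice(seg, truth):.2f}, volume error = {volume_error(seg, truth):+.1%}")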

  16. Rank-order-selective neurons form a temporal basis set for the generation of motor sequences.

    PubMed

    Salinas, Emilio

    2009-04-08

    Many behaviors are composed of a series of elementary motor actions that must occur in a specific order, but the neuronal mechanisms by which such motor sequences are generated are poorly understood. In particular, if a sequence consists of a few motor actions, a primate can learn to replicate it from memory after practicing it for just a few trials. How do the motor and premotor areas of the brain assemble motor sequences so fast? The network model presented here reveals part of the solution to this problem. The model is based on experiments showing that, during the performance of motor sequences, some cortical neurons are always activated at specific times, regardless of which motor action is being executed. In the model, a population of such rank-order-selective (ROS) cells drives a layer of downstream motor neurons so that these generate specific movements at different times in different sequences. A key ingredient of the model is that the amplitude of the ROS responses must be modulated by sequence identity. Because of this modulation, which is consistent with experimental reports, the network is able not only to produce multiple sequences accurately but also to learn a new sequence with minimal changes in connectivity. The ROS neurons modulated by sequence identity thus serve as a basis set for constructing arbitrary sequences of motor responses downstream. The underlying mechanism is analogous to the mechanism described in parietal areas for generating coordinate transformations in the spatial domain.
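    A toy version of the mechanism makes the idea concrete: a fixed temporal basis of ROS responses, with amplitudes scaled by a sequence-dependent gain, drives downstream motor units in different orders. All numbers below are illustrative assumptions, not the published network.

      import numpy as np

      rng = np.random.default_rng(2)
      T, n_ros, n_motor, n_seq = 100, 12, 4, 3

      # ROS cells fire at fixed preferred times regardless of sequence (temporal basis)
      t = np.arange(T)
      centers = np.linspace(0, T, n_ros)
      basis = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 8.0) ** 2)  # (T, n_ros)

      gain = rng.uniform(0.5, 1.5, size=(n_seq, n_ros))  # sequence identity modulates amplitude
      W = rng.normal(size=(n_ros, n_motor))               # fixed ROS -> motor weights

      for s in range(n_seq):
          motor = (basis * gain[s]) @ W                   # (T, n_motor) motor drive
          order = np.argsort(np.argmax(motor, axis=0))    # units ordered by peak time
          print(f"sequence {s}: motor unit order {order}")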

  17. Rank-order-selective neurons form a temporal basis set for the generation of motor sequences

    PubMed Central

    Salinas, Emilio

    2009-01-01

    Many behaviors are composed of a series of elementary motor actions that must occur in a specific order, but the neuronal mechanisms by which such motor sequences are generated are poorly understood. In particular, if a sequence consists of a few motor actions, a primate can learn to replicate it from memory after practicing it for just a few trials. How do the motor and premotor areas of the brain assemble motor sequences so fast? The network model presented here reveals part of the solution to this problem. The model is based on experiments showing that, during the performance of motor sequences, some cortical neurons are always activated at specific times, regardless of which motor action is being executed. In the model, a population of such rank-order-selective (ROS) cells drives a layer of downstream motor neurons so that these generate specific movements at different times in different sequences. A key ingredient of the model is that the amplitude of the ROS responses must be modulated by sequence identity. Because of this modulation, which is consistent with experimental reports, the network is able not only to produce multiple sequences accurately but also to learn a new sequence with minimal changes in connectivity. The ROS neurons modulated by sequence identity thus serve as a basis set for constructing arbitrary sequences of motor responses downstream. The underlying mechanism is analogous to the mechanism described in parietal areas for generating coordinate transformations in the spatial domain. PMID:19357265

  18. Equivalent circuit and characteristic simulation of a brushless electrically excited synchronous wind power generator

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Zhang, Fengge; Guan, Tao; Yu, Siyang

    2017-09-01

    A brushless electrically excited synchronous generator (BEESG) with a hybrid rotor is a novel type of electrically excited synchronous generator. The BEESG proposed in this paper is composed of a conventional stator carrying two sets of windings with different pole numbers, and a hybrid rotor with powerful coupling capacity. The pole number of the rotor differs from those of the stator windings; thus, an analysis method different from that applied to conventional generators must be applied to the BEESG. In view of this, the equivalent circuit and electromagnetic torque expression of the BEESG are derived on the basis of the electromagnetic relations of the proposed generator. The generator is then simulated and tested experimentally using the established equivalent circuit model. The experimental and simulation data are analyzed and compared, and the results confirm the validity of the equivalent circuit model.

  19. Experimental validation of an 8 element EMAT phased array probe for longitudinal wave generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Bourdais, Florian, E-mail: florian.lebourdais@cea.fr; Marchand, Benoit

    2015-03-31

    Sodium cooled Fast Reactors (SFR) use liquid sodium as a coolant. Liquid sodium being opaque, optical techniques cannot be applied to reactor vessel inspection. This makes it necessary to develop alternative ways of assessing the state of the structures immersed in the medium. Ultrasonic pressure waves are well suited for inspection tasks in this environment, especially using pulsed electromagnetic acoustic transducers (EMAT) that generate the ultrasound directly in the liquid sodium. The work carried out at CEA LIST is aimed at developing phased array EMAT probes conditioned for reactor use. The present work focuses on the experimental validation of a newly manufactured 8-element probe which was designed for beam-forming imaging in a liquid sodium environment. A parametric study is carried out to determine the optimal setup of the magnetic assembly used in this probe. First laboratory tests on an aluminium block show that the probe has the required beam steering capabilities.
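    For a linear array such as this 8-element probe, beam steering reduces to a per-element delay law; a minimal sketch follows. The pitch and sound speed are assumed nominal values, not parameters from the paper.

      import numpy as np

      def steering_delays(n_elements, pitch, angle_deg, c):
          # Linear-array delay law: t_i = i * pitch * sin(theta) / c, shifted so all delays >= 0
          i = np.arange(n_elements)
          delays = i * pitch * np.sin(np.radians(angle_deg)) / c
          return delays - delays.min()

      # 8 elements, 4 mm pitch, steering 15 degrees; c is a nominal sound speed
      # for liquid sodium at operating temperature (assumption)
      print(steering_delays(8, pitch=4e-3, angle_deg=15.0, c=2400.0) * 1e6, "us")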

  20. Computer-Generated, Three-Dimensional Spine Model From Biplanar Radiographs: A Validity Study in Idiopathic Scoliosis Curves Greater Than 50 Degrees.

    PubMed

    Carreau, Joseph H; Bastrom, Tracey; Petcharaporn, Maty; Schulte, Caitlin; Marks, Michelle; Illés, Tamás; Somoskeöy, Szabolcs; Newton, Peter O

    2014-03-01

    Reproducibility study of SterEOS 3-dimensional (3D) software in large idiopathic scoliosis (IS) spinal curves. To determine the accuracy and reproducibility of various 3D, software-generated radiographic measurements acquired from a 2-dimensional (2D) imaging system. SterEOS software allows a user to reconstruct a 3D spinal model from an upright, biplanar, low-dose X-ray system. The validity and internal consistency of this system have not been tested in large IS curves. EOS images from 30 IS patients with curves greater than 50° were collected for analysis. Three observers blinded to the study protocol conducted repeated, randomized, manual 2D measurements and 3D software-generated measurements from biplanar images acquired with an EOS Imaging system. Three-dimensional measurements were repeated using both the Full 3D and Fast 3D guided processes. A total of 180 (120 3D and 60 2D) sets of measurements were obtained for coronal (Cobb angle) and sagittal (T1-T12 and T4-T12 kyphosis; L1-S1 and L1-L5; and pelvic tilt, pelvic incidence, and sacral slope) parameters. Intra-class correlation coefficients were compared, as were the calculated differences between values generated by SterEOS 3D software and manual 2D measurements. The 95% confidence intervals of the mean differences in measures were calculated as an estimate of reproducibility. Average intra-class correlation coefficients were excellent: 0.97, 0.97, and 0.93 for Full 3D, Fast 3D, and 2D measures, respectively (p = .11). Measurement errors for some sagittal measures were significantly lower with the 3D techniques. Both the Full 3D and Fast 3D techniques provided consistent measurements of axial plane vertebral rotation. SterEOS 3D reconstruction spine software creates reproducible measurements in all 3 planes of deformity in curves greater than 50°. Advancements in 3D scoliosis imaging are expected to improve our understanding and treatment of idiopathic scoliosis. Copyright © 2014 Scoliosis Research Society

  1. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues

    PubMed Central

    Mourya, Devendra T.; Yadav, Pragya D.; Khare, Ajay; Khan, Anwar H.

    2017-01-01

    With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being set up in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on the certification and validation of biosafety laboratories, and no accredited government or private agency is available in India to undertake such validation and certification. Therefore, the reliance is mostly on the indigenous experience, talent and expertise available, which are in short supply. This article elucidates the process of certification and validation of biosafety laboratories in a concise manner for the understanding of the concerned users and suggests the important parameters and criteria that should be considered and addressed during the laboratory certification and validation process. PMID:29434059

  2. Validation of the classification criteria commonly used in Korea and a modified set of preliminary criteria for Behçet's disease: a multi-center study.

    PubMed

    Chang, H K; Lee, S S; Bai, H J; Lee, Y W; Yoon, B Y; Lee, C H; Lee, Y H; Song, G G; Chung, W T; Lee, S W; Choe, J Y; Kim, C G; Chang, D K

    2004-01-01

    Recently we have proposed a modified set of criteria to settle the questions raised regarding the International Study Group (ISG) criteria for Behçet's disease (BD). The aim of the present study was to validate the two pre-existing criteria sets commonly used in Korea, the ISG criteria and the criteria of the Behçet's Disease Research Committee of Japan (Japanese criteria), as well as the proposed modified criteria. The study population included 155 consecutive patients with BD and 170 controls with non-Behçet's rheumatic diseases. Detailed data for all of the subjects were recorded prospectively by the participating physicians on a standard form that listed the clinical features of BD. The sensitivity, specificity, and accuracy of each set of the criteria were measured. Of the three criteria sets employed, the modified criteria were the most accurate, with an accuracy of 96.3%. The ISG criteria often failed to classify the following patients with BD: patients with only oral and genital ulcerations, certain patients with intestinal ulcerations, patients who did not manifest oral ulcerations, and patients with acute disease but fewer than three recurrent oral ulceration relapses in a 1-year period. The Japanese criteria also failed to categorize the following patients with BD: patients with oral and genital ulcerations, and patients with oral ulcerations, skin lesions, and a positive pathergy reaction. In addition, the Japanese criteria misclassified some of the control subjects with non-Behçet's uveitis as having BD. The results of this study suggest that there are some points that need to be reconsidered in the clinical application of the two pre-existing sets of criteria. Although the modified criteria were the most accurate, further validation studies will be required in other ethnic populations.
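    The three reported metrics follow from the usual 2x2 classification counts. In the sketch below the counts are invented for illustration (chosen so that overall accuracy matches the reported 96.3%); they are not the study's data.

      def classification_metrics(tp, fn, fp, tn):
          # Standard definitions used to compare criteria sets
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          accuracy = (tp + tn) / (tp + fn + fp + tn)
          return sensitivity, specificity, accuracy

      # Hypothetical counts over 155 BD patients and 170 non-BD controls
      sens, spec, acc = classification_metrics(tp=148, fn=7, fp=5, tn=165)
      print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, accuracy {acc:.1%}")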

  3. Oral/dental items in the resident assessment instrument - minimum Data Set 2.0 lack validity: results of a retrospective, longitudinal validation study.

    PubMed

    Hoben, Matthias; Poss, Jeffrey W; Norton, Peter G; Estabrooks, Carole A

    2016-01-01

    Oral health in nursing home residents is poor. Robust, mandated assessment tools such as the Resident Assessment Instrument - Minimum Data Set (RAI-MDS) 2.0 are key to monitoring and improving the quality of oral health care in nursing homes. However, the psychometric properties of the RAI-MDS 2.0 oral/dental items have been challenged, and the criterion validity of these items has never been assessed. We used 73,829 RAI-MDS 2.0 records (13,118 residents), collected in a stratified random sample of 30 urban nursing homes in Western Canada (2007-2012). We derived a subsample of all residents (n = 2,711) with an admission and two or more subsequent annual assessments. Using Generalized Estimating Equations, adjusted for known covariates of nursing home residents' oral health, we assessed the association of oral/dental problems with time, dentate status, dementia, debris, and daily cleaning. Prevalence of oral/dental problems fluctuated (4.8%-5.6%) with no significant differences across time. This range of prevalence is substantially smaller than those reported by studies using clinical assessments by dental professionals. Denture wearers were less likely than dentate residents to have oral/dental problems (adjusted odds ratio [OR] = 0.458, 95% confidence interval [CI]: 0.308, 0.680). Residents lacking teeth and not wearing dentures had higher odds than dentate residents of oral/dental problems (adjusted OR = 2.718, 95% CI: 1.845, 4.003). Oral/dental problems were more prevalent in persons with debris (OR = 2.187, 95% CI: 1.565, 3.057). Of the other variables assessed, only age at assessment was significantly associated with oral/dental problems. Robust, reliable RAI-MDS 2.0 oral health indicators are vital to monitoring and improving oral health related quality and safety in nursing homes. However, severe underdetection of oral/dental problems and lack of association of well-known oral health predictors with oral/dental problems suggest validity

  4. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions.

    PubMed

    Atallah, Nabil M; El-Fadel, Mutasem; Ghanimeh, Sophia; Saikaly, Pascal; Abou-Najm, Majdi

    2014-12-01

    In this study, two experimental data sets, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators (methane generation, pH, acetate, total COD, and ammonia) as well as an equally weighted combination of the five. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in reproducing the experimental methane results, it predicted the other intermediary outputs less accurately. The multi-objective optimization, on the other hand, provided better overall results than methane-only optimization, despite not fully capturing the intermediary outputs. The results of the parameter optimization were validated by independent application to the data sets of the second digester. Copyright © 2014 Elsevier Ltd. All rights reserved.
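    The multi-indicator calibration can be read as minimizing an equally weighted sum of normalized errors across outputs. A minimal sketch, where model() is a hypothetical stand-in; a real application would wrap an ADM1 simulator instead.

      import numpy as np
      from scipy.optimize import minimize

      def model(params, t):
          # Stand-in for an ADM1 run returning several measured indicators
          k1, k2 = params
          return {"methane": k1 * (1 - np.exp(-k2 * t)),
                  "pH": 7.2 - 0.1 * k2 * np.exp(-t)}

      t = np.linspace(0, 30, 50)
      observed = model((1.8, 0.25), t)  # synthetic "measurements"

      def objective(params):
          sim = model(params, t)
          # equally weighted, variance-normalized error over all indicators
          return sum(np.mean((sim[k] - observed[k]) ** 2) / np.var(observed[k])
                     for k in observed)

      result = minimize(objective, x0=[1.0, 0.1], method="Nelder-Mead")
      print("calibrated parameters:", result.x)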

  5. A Python tool to set up relative free energy calculations in GROMACS

    PubMed Central

    Klimovich, Pavel V.; Mobley, David L.

    2015-01-01

    Free energy calculations based on molecular dynamics (MD) simulations have seen tremendous growth in the last decade. However, it is still difficult and tedious to set them up in an automated manner, as the majority of present-day MD simulation packages lack that functionality. Relative free energy calculations are a particular challenge for several reasons, including the problem of finding a common substructure and mapping the transformation to be applied. Here we present a tool, alchemical-setup.py, that automatically generates all the input files needed to perform relative solvation and binding free energy calculations with the MD package GROMACS. When combined with the Lead Optimization Mapper (LOMAP) [14], recently developed in our group, alchemical-setup.py allows fully automated setup of relative free energy calculations in GROMACS. Taking a graph of the planned calculations and a mapping, both computed by LOMAP, our tool generates the topology and coordinate files needed to perform relative free energy calculations for a given set of molecules, and provides a set of simulation input parameters. The tool was validated by performing relative hydration free energy calculations for a handful of molecules from the SAMPL4 challenge [16]. Good agreement with previously published results and the straightforward way in which free energy calculations can be conducted make alchemical-setup.py a promising tool for the automated setup of relative solvation and binding free energy calculations. PMID:26487189

  6. The Fifth Calibration/Data Product Validation Panel Meeting

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The minutes and associated documents prepared from presentations and meetings at the Fifth Calibration/Data Product Validation Panel meeting in Boulder, Colorado, April 8 - 10, 1992, are presented. Key issues include (1) statistical characterization of data sets: finding statistics that characterize key attributes of the data sets, and defining ways to characterize the comparisons among data sets; (2) selection of specific intercomparison exercises: selecting characteristic spatial and temporal regions for intercomparisons, and impact of validation exercises on the logistics of current and planned field campaigns and model runs; and (3) preparation of data sets for intercomparisons: characterization of assumptions, transportable data formats, labeling data files, content of data sets, and data storage and distribution (EOSDIS interface).

  7. Validation of Fully Automated VMAT Plan Generation for Library-Based Plan-of-the-Day Cervical Cancer Radiotherapy

    PubMed Central

    Breedveld, Sebastiaan; Voet, Peter W. J.; Heijkoop, Sabrina T.; Mens, Jan-Willem M.; Hoogeman, Mischa S.; Heijmen, Ben J. M.

    2016-01-01

    Purpose To develop and validate fully automated generation of VMAT plan-libraries for plan-of-the-day adaptive radiotherapy in locally-advanced cervical cancer. Material and Methods Our framework for fully automated treatment plan generation (Erasmus-iCycle) was adapted to create dual-arc VMAT treatment plan libraries for cervical cancer patients. For each of 34 patients, automatically generated VMAT plans (autoVMAT) were compared to manually generated, clinically delivered 9-beam IMRT plans (CLINICAL), and to dual-arc VMAT plans generated manually by an expert planner (manVMAT). Furthermore, all plans were benchmarked against 20-beam equi-angular IMRT plans (autoIMRT). For all plans, a PTV coverage of 99.5% by at least 95% of the prescribed dose (46 Gy) had the highest planning priority, followed by minimization of V45Gy for small bowel (SB). Other OARs considered were bladder, rectum, and sigmoid. Results All plans had a highly similar PTV coverage, within the clinical constraints (above). After plan normalizations for exactly equal median PTV doses in corresponding plans, all evaluated OAR parameters in autoVMAT plans were on average lower than in the CLINICAL plans with an average reduction in SB V45Gy of 34.6% (p<0.001). For 41/44 autoVMAT plans, SB V45Gy was lower than for manVMAT (p<0.001, average reduction 30.3%), while SB V15Gy increased by 2.3% (p = 0.011). AutoIMRT reduced SB V45Gy by another 2.7% compared to autoVMAT, while also resulting in a 9.0% reduction in SB V15Gy (p<0.001), but with a prolonged delivery time. Differences between manVMAT and autoVMAT in bladder, rectal and sigmoid doses were ≤ 1%. Improvements in SB dose delivery with autoVMAT instead of manVMAT were higher for empty bladder PTVs compared to full bladder PTVs, due to differences in concavity of the PTVs. Conclusions Quality of automatically generated VMAT plans was superior to manually generated plans. Automatic VMAT plan generation for cervical cancer has been implemented in

  8. Validation of Fully Automated VMAT Plan Generation for Library-Based Plan-of-the-Day Cervical Cancer Radiotherapy.

    PubMed

    Sharfo, Abdul Wahab M; Breedveld, Sebastiaan; Voet, Peter W J; Heijkoop, Sabrina T; Mens, Jan-Willem M; Hoogeman, Mischa S; Heijmen, Ben J M

    2016-01-01

    To develop and validate fully automated generation of VMAT plan-libraries for plan-of-the-day adaptive radiotherapy in locally-advanced cervical cancer. Our framework for fully automated treatment plan generation (Erasmus-iCycle) was adapted to create dual-arc VMAT treatment plan libraries for cervical cancer patients. For each of 34 patients, automatically generated VMAT plans (autoVMAT) were compared to manually generated, clinically delivered 9-beam IMRT plans (CLINICAL), and to dual-arc VMAT plans generated manually by an expert planner (manVMAT). Furthermore, all plans were benchmarked against 20-beam equi-angular IMRT plans (autoIMRT). For all plans, a PTV coverage of 99.5% by at least 95% of the prescribed dose (46 Gy) had the highest planning priority, followed by minimization of V45Gy for small bowel (SB). Other OARs considered were bladder, rectum, and sigmoid. All plans had a highly similar PTV coverage, within the clinical constraints (above). After plan normalizations for exactly equal median PTV doses in corresponding plans, all evaluated OAR parameters in autoVMAT plans were on average lower than in the CLINICAL plans with an average reduction in SB V45Gy of 34.6% (p<0.001). For 41/44 autoVMAT plans, SB V45Gy was lower than for manVMAT (p<0.001, average reduction 30.3%), while SB V15Gy increased by 2.3% (p = 0.011). AutoIMRT reduced SB V45Gy by another 2.7% compared to autoVMAT, while also resulting in a 9.0% reduction in SB V15Gy (p<0.001), but with a prolonged delivery time. Differences between manVMAT and autoVMAT in bladder, rectal and sigmoid doses were ≤ 1%. Improvements in SB dose delivery with autoVMAT instead of manVMAT were higher for empty bladder PTVs compared to full bladder PTVs, due to differences in concavity of the PTVs. Quality of automatically generated VMAT plans was superior to manually generated plans. Automatic VMAT plan generation for cervical cancer has been implemented in our clinical routine. Due to the achieved workload

  9. Validating the Copenhagen Psychosocial Questionnaire (COPSOQ-II) Using Set-ESEM: Identifying Psychosocial Risk Factors in a Sample of School Principals

    PubMed Central

    Dicke, Theresa; Marsh, Herbert W.; Riley, Philip; Parker, Philip D.; Guo, Jiesi; Horwood, Marcus

    2018-01-01

    School principals world-wide report high levels of strain and attrition resulting in a shortage of qualified principals. It is thus crucial to identify psychosocial risk factors that reflect principals' occupational wellbeing. For this purpose, we used the Copenhagen Psychosocial Questionnaire (COPSOQ-II), a widely used self-report measure covering multiple psychosocial factors identified by leading occupational stress theories. We evaluated the COPSOQ-II regarding factor structure and longitudinal, discriminant, and convergent validity using latent structural equation modeling in a large sample of Australian school principals (N = 2,049). Results reveal that confirmatory factor analysis produced marginally acceptable model fit. A novel approach we call set exploratory structural equation modeling (set-ESEM), where cross-loadings were only allowed within a priori defined sets of factors, fit well, and was more parsimonious than a full ESEM. Further multitrait-multimethod models based on the set-ESEM confirm the importance of a principal's psychosocial risk factors; Stressors and depression were related to demands and ill-being, while confidence and autonomy were related to wellbeing. We also show that working in the private sector was beneficial for showing a low psychosocial risk, while other demographics have little effects. Finally, we identify five latent risk profiles (high risk to no risk) of school principals based on all psychosocial factors. Overall the research presented here closes the theory application gap of a strong multi-dimensional measure of psychosocial risk-factors. PMID:29760670

  10. Validating the Copenhagen Psychosocial Questionnaire (COPSOQ-II) Using Set-ESEM: Identifying Psychosocial Risk Factors in a Sample of School Principals.

    PubMed

    Dicke, Theresa; Marsh, Herbert W; Riley, Philip; Parker, Philip D; Guo, Jiesi; Horwood, Marcus

    2018-01-01

    School principals world-wide report high levels of strain and attrition resulting in a shortage of qualified principals. It is thus crucial to identify psychosocial risk factors that reflect principals' occupational wellbeing. For this purpose, we used the Copenhagen Psychosocial Questionnaire (COPSOQ-II), a widely used self-report measure covering multiple psychosocial factors identified by leading occupational stress theories. We evaluated the COPSOQ-II regarding factor structure and longitudinal, discriminant, and convergent validity using latent structural equation modeling in a large sample of Australian school principals (N = 2,049). Results reveal that confirmatory factor analysis produced marginally acceptable model fit. A novel approach we call set exploratory structural equation modeling (set-ESEM), where cross-loadings were only allowed within a priori defined sets of factors, fit well, and was more parsimonious than a full ESEM. Further multitrait-multimethod models based on the set-ESEM confirm the importance of a principal's psychosocial risk factors; Stressors and depression were related to demands and ill-being, while confidence and autonomy were related to wellbeing. We also show that working in the private sector was beneficial for showing a low psychosocial risk, while other demographics have little effects. Finally, we identify five latent risk profiles (high risk to no risk) of school principals based on all psychosocial factors. Overall the research presented here closes the theory application gap of a strong multi-dimensional measure of psychosocial risk-factors.

  11. USAF bioenvironmental noise data handbook. Volume 161: A/M32A-86 generator set, diesel engine driven

    NASA Astrophysics Data System (ADS)

    Rau, T. H.

    1982-05-01

    The A/M32A-86 generator set is a diesel engine driven source of electrical power used for the starting of aircraft, and for ground maintenance. This report provides measured and extrapolated data defining the bioacoustic environments produced by this unit operating outdoors on a concrete apron at normal rated/loaded conditions. Near-field data are reported for 37 locations in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference level, perceived noise level, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 36 locations are normalized to standard meteorological conditions and extrapolated from 10 - 1600 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source.

  12. A generalized land-use scenario generator: a case study for the Congo basin.

    NASA Astrophysics Data System (ADS)

    Caporaso, Luca; Tompkins, Adrian Mark; Biondi, Riccardo; Bell, Jean Pierre

    2014-05-01

    The impact of deforestation on climate is often studied using highly idealized "instant deforestation" experiments, due to the lack of generalized deforestation scenario generators coupled to climate model land-surface schemes. A new deforestation scenario generator, the deforestation ScenArio GEnerator (FOREST-SAGE), has therefore been developed to fill this role. The model produces distributed maps of deforestation rates that account for local factors such as proximity to transport networks, distance-weighted population density, forest fragmentation, and the presence of protected areas and logging concessions. The integrated deforestation risk is scaled to yield the deforestation rate specified by macro-region scenarios, such as "business as usual" or "increased protection legislation", as a function of future time. FOREST-SAGE was initialized and validated using the MODerate Resolution Imaging Spectroradiometer (MODIS) Vegetation Continuous Fields data. Despite the high cloud coverage of the Congo Basin over the year, we were able to validate the results with high confidence from 2001 to 2010 over a large forested area. Furthermore, a set of scenarios was used to provide a range of possible pathways for the evolution of land-use change over the Congo Basin for the period 2010-2030.
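    The generator logic described above (local risk factors combined into a relative risk surface, then scaled to a macro-region rate) can be sketched directly; the factor layers, weights and target rate below are all hypothetical.

      import numpy as np

      rng = np.random.default_rng(3)
      shape = (100, 100)

      # Normalized risk-factor layers (stand-ins for the real geospatial inputs)
      road_proximity = rng.uniform(size=shape)
      population = rng.uniform(size=shape)
      fragmentation = rng.uniform(size=shape)
      protected = rng.uniform(size=shape) < 0.2  # 20% of cells protected

      # Combine local factors, then zero-out protected areas
      risk = 0.4 * road_proximity + 0.4 * population + 0.2 * fragmentation
      risk[protected] = 0.0

      # Scale so the domain-mean rate matches a macro-region scenario target
      target_rate = 0.005  # e.g. 0.5% forest loss per year under "business as usual"
      rate = risk * (target_rate / risk.mean())
      print(f"mean annual deforestation rate: {rate.mean():.3%}")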

  13. Validation of Nurse Practitioner Primary Care Organizational Climate Questionnaire: A New Tool to Study Nurse Practitioner Practice Settings.

    PubMed

    Poghosyan, Lusine; Chaplin, William F; Shaffer, Jonathan A

    2017-04-01

    Favorable organizational climate in primary care settings is necessary to expand the nurse practitioner (NP) workforce and promote their practice. Only one NP-specific tool, the Nurse Practitioner Primary Care Organizational Climate Questionnaire (NP-PCOCQ), measures NP organizational climate. We confirmed NP-PCOCQ's factor structure and established its predictive validity. A cross-sectional survey design was used to collect data from 314 NPs in Massachusetts in 2012. Confirmatory factor analysis and regression models were used. The 4-factor model characterized NP-PCOCQ. The NP-PCOCQ score predicted job satisfaction (beta = .36; p < .001) and intent to leave job (odds ratio = .28; p = .011). NP-PCOCQ can be used by researchers to produce new evidence and by administrators to assess organizational climate in their clinics. Further testing of NP-PCOCQ is needed.

  14. An automated framework for hypotheses generation using literature.

    PubMed

    Abedi, Vida; Zand, Ramin; Yeasin, Mohammed; Faisal, Fazle Elahi

    2012-08-29

    In bio-medicine, exploratory studies and hypothesis generation often begin with researching the existing literature to identify a set of factors and their associations with diseases, phenotypes, or biological processes. Many scientists are overwhelmed by the sheer volume of literature on a disease when they plan to generate a new hypothesis or study a biological phenomenon. The situation is even worse for junior investigators, who often find it difficult to formulate new hypotheses or, more importantly, to corroborate whether their hypothesis is consistent with the existing literature. It is a daunting task to keep abreast of so much being published and also remember all combinations of direct and indirect associations. Fortunately, there is a growing trend of using literature mining and knowledge discovery tools in biomedical research. However, there is still a large gap between the huge amount of effort and resources invested in disease research and the little effort spent harvesting the published knowledge. The proposed hypothesis generation framework (HGF) finds "crisp semantic associations" among entities of interest - a step towards bridging such gaps. The proposed HGF shares similar end goals with SWAN but is more holistic in nature, and was designed and implemented using scalable and efficient computational models of disease-disease interaction. The integration of mapping ontologies with latent semantic analysis is critical in capturing domain-specific direct and indirect "crisp" associations, and in making assertions about entities (such as disease X is associated with a set of factors Z). Pilot studies were performed using two diseases. A comparative analysis of the computed "associations" and "assertions" with curated expert knowledge was performed to validate the results. It was observed that the HGF is able to capture "crisp" direct and indirect associations, and to provide knowledge discovery on demand. The proposed framework is fast, efficient, and robust in
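    The association-mining core of such a framework pairs a vector-space model of the literature with a low-rank latent projection. A minimal latent-semantic-analysis sketch on a toy corpus follows; the real framework additionally maps entities to ontologies.

      from sklearn.decomposition import TruncatedSVD
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Toy corpus standing in for literature records
      docs = [
          "insulin resistance is associated with type 2 diabetes",
          "type 2 diabetes increases risk of stroke",
          "hypertension is a risk factor for stroke",
          "obesity contributes to insulin resistance",
      ]
      tfidf = TfidfVectorizer().fit_transform(docs)
      lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

      # Documents close in the latent space suggest indirect associations
      print(cosine_similarity(lsa[:1], lsa[1:]))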

  15. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in the time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on the admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
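
    The estimation criterion, choosing the parameter realization whose smallest requirement-compliance margin is largest, is a max-min program. A toy sketch with an invented exponential model and two data-set requirements; the paper's F-16 application and actual requirement definitions are not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize

# Toy empirical model y = a * exp(-b * t), with two validation data sets.
t = np.linspace(0.0, 2.0, 21)
datasets = [1.0 * np.exp(-0.5 * t), 1.1 * np.exp(-0.55 * t)]
tolerance = 0.15  # admissible prediction error (analyst's choice, assumed)

def margins(p):
    """Requirement-compliance margins: positive means the worst-case
    prediction error for that data set is inside the admissible limit."""
    a, b = p
    pred = a * np.exp(-b * t)
    return np.array([tolerance - np.max(np.abs(pred - y)) for y in datasets])

# Maximize the smallest margin = minimize its negative.
res = minimize(lambda p: -margins(p).min(), x0=[0.8, 0.3], method="Nelder-Mead")
print(res.x, margins(res.x))  # estimated parameters and per-requirement margins
```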

  16. Standard Setting for Next Generation TOEFL Academic Speaking Test (TAST): Reflections on the ETS Panel of International Teaching Assistant Developers

    ERIC Educational Resources Information Center

    Papajohn, Dean

    2006-01-01

While many institutions have utilized TOEFL scores for international admissions for many years, a speaking section was never a required part of TOEFL until the development of the iBT/Next Generation TOEFL. Institutions will therefore need to determine how to set standards for the speaking section of TOEFL, also known as TOEFL Academic…

  17. Generative Topographic Mapping of Conformational Space.

    PubMed

    Horvath, Dragos; Baskin, Igor; Marcou, Gilles; Varnek, Alexandre

    2017-10-01

Herein, Generative Topographic Mapping (GTM) was challenged to produce planar projections of the high-dimensional conformational space of complex molecules (the 1LE1 peptide). GTM is a probability-based mapping strategy, and its capacity to support property prediction models serves to objectively assess map quality (in terms of regression statistics). The properties to predict were total, non-bonded and contact energies, surface area and fingerprint darkness. Map building and selection were controlled by a previously introduced evolutionary strategy that selects the best-suited conformational descriptors, with options including classical terms and novel atom-centric autocorrelograms. The latter condense interatomic distance patterns into descriptors of rather low dimensionality, yet precise enough to differentiate between close favorable contacts and atom clashes. A subset of 20,000 conformers of the 1LE1 peptide, randomly selected from a pool of 2 million geometries (generated by the S4MPLE tool), was employed for map building and cross-validation of property regression models. The GTM build-up challenge reached robust three-fold cross-validated determination coefficients of Q² = 0.7-0.8 for all modeled properties. Mapping of the full 2-million-conformer set produced intuitive and information-rich property landscapes. Functional and folding subspaces appear as well-separated zones, even though RMSD with respect to the PDB structure was never used as a map selection criterion. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
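
    The map-quality criterion used here, a three-fold cross-validated determination coefficient Q², is straightforward to compute for any property model. A generic sketch with simulated descriptors and a stand-in regressor; the GTM machinery itself is not reproduced:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
X = rng.random((300, 10))                     # stand-in conformational descriptors
y = X[:, 0] * 2.0 + rng.normal(0, 0.1, 300)   # stand-in property (e.g. an energy)

# Q^2 = 1 - PRESS / total sum of squares, with predictions taken only
# from folds the model never saw during fitting.
press, kf = 0.0, KFold(n_splits=3, shuffle=True, random_state=0)
for train, test in kf.split(X):
    model = KNeighborsRegressor(n_neighbors=5).fit(X[train], y[train])
    press += np.sum((y[test] - model.predict(X[test])) ** 2)
q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
print(f"Q2 = {q2:.2f}")
```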

  18. FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.

    PubMed

    Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver

    2014-06-14

Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software that needs to parse large amounts of sequence data quickly and accurately. For end-users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets such as those commonly produced by massively parallel sequencing (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
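
    FastaValidator itself is a Java library; the core validation idea (header lines begin with '>', sequence lines must be drawn from a permitted alphabet) can be sketched in a few lines of Python. This is an illustration of the concept, not the library's API:

```python
VALID = set("ACGTUNacgtun")  # nucleotide alphabet; adjust for proteins

def validate_fasta(lines):
    """Yield (line_number, message) for each violation found."""
    seen_header = False
    for n, line in enumerate(lines, start=1):
        line = line.rstrip("\n")
        if not line:
            continue
        if line.startswith(">"):
            seen_header = True
            if len(line) == 1:
                yield n, "empty header"
        elif not seen_header:
            yield n, "sequence data before first header"
        else:
            bad = set(line) - VALID
            if bad:
                yield n, f"illegal characters {sorted(bad)}"

record = [">seq1", "ACGTACGT", "ACGTXX"]
for lineno, msg in validate_fasta(record):
    print(f"line {lineno}: {msg}")  # line 3: illegal characters ['X']
```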

  19. Next-generation negative symptom assessment for clinical trials: validation of the Brief Negative Symptom Scale.

    PubMed

    Strauss, Gregory P; Keller, William R; Buchanan, Robert W; Gold, James M; Fischer, Bernard A; McMahon, Robert P; Catalano, Lauren T; Culbreth, Adam J; Carpenter, William T; Kirkpatrick, Brian

    2012-12-01

The current study examined the psychometric properties of the Brief Negative Symptom Scale (BNSS), a next-generation rating instrument developed in response to the NIMH-sponsored consensus development conference on negative symptoms. Participants included 100 individuals with a DSM-IV diagnosis of schizophrenia or schizoaffective disorder who completed a clinical interview designed to assess negative, positive, disorganized, and general psychiatric symptoms, as well as functional outcome. A battery of anhedonia questionnaires and neuropsychological tests was also administered. Results indicated that the BNSS has excellent internal consistency and temporal stability, as well as good convergent and discriminant validity in its relationships with other symptom rating scales, functional outcome, self-reported anhedonia, and neuropsychological test scores. Given its brevity (13 items, 15-minute interview) and good psychometric characteristics, the BNSS can be considered a promising new instrument for use in clinical trials. Copyright © 2012 Elsevier B.V. All rights reserved.
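
    Two of the psychometric quantities reported, internal consistency and temporal stability, are standard computations. A sketch on simulated 13-item ratings; the BNSS data themselves are not reproduced here:

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency; items: (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
severity = rng.normal(size=(100, 1))                     # latent symptom level
ratings = severity + rng.normal(0, 0.5, size=(100, 13))  # 13 BNSS-like items
print(f"alpha = {cronbach_alpha(ratings):.2f}")

# Temporal stability: correlate total scores from two rating occasions.
retest = ratings + rng.normal(0, 0.3, size=ratings.shape)
r = np.corrcoef(ratings.sum(axis=1), retest.sum(axis=1))[0, 1]
print(f"test-retest r = {r:.2f}")
```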

  20. ICP-MS Data Validation

    EPA Pesticide Factsheets

Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  1. An extended validation of the last generation of particle finite element method for free surface flows

    NASA Astrophysics Data System (ADS)

    Gimenez, Juan M.; González, Leo M.

    2015-03-01

In this paper, a new generation of the particle method known as the Particle Finite Element Method (PFEM), which combines convective particle movement with a fixed mesh resolution, is applied to free surface flows. This variant, previously described in the literature as PFEM-2, is able to use larger time steps than other similar numerical tools, which implies shorter computational times while maintaining the accuracy of the computation. PFEM-2 has already been extended to free surface problems, and the main topic of this paper is a deeper validation of this methodology for a wider range of flows. To accomplish this task, different improved versions of discontinuous and continuous enriched basis functions for the pressure field have been developed to capture the free surface dynamics without artificial diffusion or undesired numerical effects when different density ratios are involved. A collection of problems has been carefully selected such that a wide variety of Froude numbers, density ratios and dominant dissipative cases are covered, with the intention of presenting a general methodology, not restricted to a particular range of parameters, and capable of using large time steps. The results for the free-surface problems solved, which include the Rayleigh-Taylor instability, sloshing problems, viscous standing waves and the dam break problem, are compared to well-validated numerical alternatives or experimental measurements, obtaining accurate approximations for such complex flows.

  2. The Parent Trauma Response Questionnaire (PTRQ): development and preliminary validation.

    PubMed

    Williamson, Victoria; Hiller, Rachel M; Meiser-Stedman, Richard; Creswell, Cathy; Dalgleish, Tim; Fearon, Pasco; Goodall, Ben; McKinnon, Anna; Smith, Patrick; Wright, Isobel; Halligan, Sarah L

    2018-01-01

Background: Following a child's experience of trauma, parental response is thought to play an important role in either facilitating or hindering their psychological adjustment. However, the ability to investigate the role of parenting responses in the post-trauma period has been hampered by a lack of valid and reliable measures. Objectives: The aim of this study was to design, and provide a preliminary validation of, the Parent Trauma Response Questionnaire (PTRQ), a self-report measure of parental appraisals and support for children's coping, in the aftermath of child trauma. Methods: We administered an initial set of 78 items to 365 parents whose children, aged 2-19 years, had experienced a traumatic event. We conducted principal axis factoring and then assessed the validity of the reduced measure against a standardized general measure of parental overprotection and via the measure's association with child post-trauma mental health. Results: Factor analysis generated three factors assessing parental maladaptive appraisals: (i) permanent change/damage, (ii) preoccupation with child's vulnerability, and (iii) self-blame. In addition, five factors were identified that assess parental support for child coping: (i) behavioural avoidance, (ii) cognitive avoidance, (iii) overprotection, (iv) maintaining pre-trauma routines, and (v) approach coping. Good validity was evidenced against the measure of parental overprotection and child post-traumatic stress symptoms. Good test-retest reliability of the measure was also demonstrated. Conclusions: The PTRQ is a valid and reliable self-report assessment of parenting cognitions and coping in the aftermath of child trauma.

  3. Calculation of Coupled Vibroacoustics Response Estimates from a Library of Available Uncoupled Transfer Function Sets

    NASA Technical Reports Server (NTRS)

    Smith, Andrew; LaVerde, Bruce; Hunt, Ron; Fulcher, Clay; Towner, Robert; McDonald, Emmett

    2012-01-01

The design and theoretical basis of a new database tool that quickly generates vibroacoustic response estimates using a library of transfer functions (TFs) is discussed. During the early stages of a launch vehicle development program, these response estimates can be used to provide vibration environment specification to hardware vendors. The tool accesses TFs from a database, combines the TFs, and multiplies these by input excitations to estimate vibration responses. The database is populated with two sets of uncoupled TFs; the first set representing vibration response of a bare panel, designated as H^s, and the second set representing the response of the free-free component equipment by itself, designated as H^c. For a particular configuration undergoing analysis, the appropriate H^s and H^c are selected and coupled to generate an integrated TF, designated as H^(s+c). This integrated TF is then used with the appropriate input excitations to estimate vibration responses. This simple yet powerful tool enables a user to estimate vibration responses without directly using finite element models, so long as suitable H^s and H^c sets are defined in the database libraries. The paper discusses the preparation of the database tool and provides the assumptions and methodologies necessary to combine H^s and H^c sets into an integrated H^(s+c). An experimental validation of the approach is also presented.
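
    A minimal sketch of the lookup-couple-multiply pipeline described above. The synthetic single-mode TFs and the series-style coupling rule below are placeholders only; the paper derives its own combination methodology, which the abstract does not reproduce:

```python
import numpy as np

freqs = np.linspace(20, 2000, 512)  # Hz

def sdof_frf(fn, zeta):
    """Synthetic single-degree-of-freedom FRF standing in for a database entry."""
    r = freqs / fn
    return 1.0 / (1 - r**2 + 2j * zeta * r)

# Library of uncoupled transfer functions keyed by configuration name.
library_Hs = {"bare_panel_A": sdof_frf(400.0, 0.05)}   # panel alone
library_Hc = {"component_B": sdof_frf(650.0, 0.03)}    # free-free component

def couple(Hs, Hc):
    """Placeholder coupling of panel and component TFs into H^(s+c).
    This impedance-style series form is an assumption for illustration,
    not the paper's combination rule."""
    return 1.0 / (1.0 / Hs + 1.0 / Hc)

Hsc = couple(library_Hs["bare_panel_A"], library_Hc["component_B"])

# Response estimate: multiply |H|^2 by the input excitation PSD (flat here).
input_psd = np.full_like(freqs, 0.01)      # hypothetical excitation, g^2/Hz
response_psd = np.abs(Hsc) ** 2 * input_psd
print(f"peak {response_psd.max():.3g} at {freqs[response_psd.argmax()]:.0f} Hz")
```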

  4. Instrument development and validation of a quality scale for historical research papers (QSHRP): a pilot study.

    PubMed

    Kelly, Jacinta; Watson, Roger

    2014-12-01

To report a pilot study for the development and validation of an instrument to measure quality in historical research papers. There are no set criteria for assessing historical papers published in nursing journals. A three-phase, mixed-method sequential confirmatory design was used. In 2012, we used a three-phase approach to item generation and content evaluation. In phase 1, we consulted nursing historians using an online survey comprising three open-ended questions and revised the items. In phase 2, we evaluated the revised items for relevance with expert historians using a 4-point Likert scale and Content Validity Index calculation. In phase 3, we conducted reliability testing of the instrument using a 3-point Likert scale. In phase 1, 121 responses were generated via the online survey and revised into 40 interrogatively phrased items. In phase 2, five items with an Item Content Validity Index score of ≥0.7 remained. In phase 3, responses from historians resulted in 100% agreement on questions 1, 2 and 4 and 89% and 78% agreement, respectively, on questions 3 and 5. Items for the QSHRP have been identified, content validated and reliability tested. This scale improves on previous scales, which over-emphasized source criticism. However, a full-scale study with nursing historians is needed to increase its robustness. © 2014 John Wiley & Sons Ltd.
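
    The Item Content Validity Index used for retention in phase 2 is the proportion of experts rating an item 3 or 4 on the 4-point relevance scale. A minimal sketch with invented ratings:

```python
# Ratings: rows = items, columns = expert historians (4-point relevance scale).
ratings = [
    [4, 3, 4, 4, 3],   # item 1
    [2, 3, 1, 2, 2],   # item 2
    [4, 4, 3, 4, 4],   # item 3
]

def item_cvi(item_ratings):
    """I-CVI: proportion of experts scoring the item 3 or 4 ('relevant')."""
    return sum(r >= 3 for r in item_ratings) / len(item_ratings)

retained = [i + 1 for i, item in enumerate(ratings) if item_cvi(item) >= 0.7]
print(retained)  # items meeting the >=0.7 retention threshold: [1, 3]
```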

  5. GeneTopics - interpretation of gene sets via literature-driven topic models

    PubMed Central

    2013-01-01

Background Annotation of a set of genes is often accomplished through comparison to a library of labelled gene sets such as biological processes or canonical pathways. However, this approach might fail if the employed libraries are not up to date with the latest research, do not capture relevant biological themes, or are curated at a different level of granularity than is required to appropriately analyze the input gene set. At the same time, the vast biomedical literature offers an unstructured repository of the latest research findings that can be tapped to provide thematic sub-groupings for any input gene set. Methods Our proposed method relies on a gene-specific text corpus and extracts commonalities between documents in an unsupervised manner using a topic model approach. We automatically determine the number of topics summarizing the corpus and calculate a gene relevancy score for each topic, allowing us to eliminate non-specific topics. As a result we obtain a set of literature topics in which each topic is associated with a subset of the input genes, providing directly interpretable keywords and corresponding documents for literature research. Results We validate our method on labelled gene sets from the KEGG metabolic pathway collection and the genetic association database (GAD) and show that the approach is able to detect topics consistent with the labelled annotation. Furthermore, we discuss the results on three different types of experimentally derived gene sets: (1) differentially expressed genes from a cardiac hypertrophy experiment in mice, (2) altered transcript abundance in human pancreatic beta cells, and (3) genes implicated by GWA studies to be associated with metabolite levels in a healthy population. In all three cases, we are able to replicate findings from the original papers in a quick and semi-automated manner. Conclusions Our approach provides a novel way of automatically generating meaningful annotations for gene sets that are directly
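
    The pipeline described (vectorize a gene-specific corpus, fit a topic model, then group genes by topic) can be sketched with scikit-learn. The corpus below and the dominant-topic grouping are simplified placeholders for the paper's method, which scores gene relevancy per topic:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy gene-specific corpus: one pseudo-document of literature text per gene.
corpus = {
    "MYH7": "cardiac muscle contraction sarcomere hypertrophy",
    "NPPA": "cardiac hypertrophy natriuretic peptide blood pressure",
    "INS":  "insulin secretion pancreatic beta cell glucose",
    "GCK":  "glucose sensing pancreatic beta cell metabolism",
}
genes = list(corpus)
counts = CountVectorizer().fit_transform(corpus.values())

# Unsupervised topic model over the gene documents.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)  # gene x topic membership weights

# Group genes by their dominant topic, a crude stand-in for the paper's
# gene relevancy scoring and non-specific-topic elimination.
for k in range(2):
    members = [g for g, w in zip(genes, doc_topics) if w.argmax() == k]
    print(f"topic {k}: {members}")
```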

  6. Single case design studies in music therapy: resurrecting experimental evidence in small group and individual music therapy clinical settings.

    PubMed

    Geist, Kamile; Hitchcock, John H

    2014-01-01

    The profession would benefit from greater and routine generation of causal evidence pertaining to the impact of music therapy interventions on client outcomes. One way to meet this goal is to revisit the use of Single Case Designs (SCDs) in clinical practice and research endeavors in music therapy. Given the appropriate setting and goals, this design can be accomplished with small sample sizes and it is often appropriate for studying music therapy interventions. In this article, we promote and discuss implementation of SCD studies in music therapy settings, review the meaning of internal study validity and by extension the notion of causality, and describe two of the most commonly used SCDs to demonstrate how they can help generate causal evidence to inform the field. In closing, we describe the need for replication and future meta-analysis of SCD studies completed in music therapy settings. SCD studies are both feasible and appropriate for use in music therapy clinical practice settings, particularly for testing effectiveness of interventions for individuals or small groups. © the American Music Therapy Association 2014. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. ICP-AES Data Validation

    EPA Pesticide Factsheets

Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program (CLP) Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  8. Trace Volatile Data Validation

    EPA Pesticide Factsheets

Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program (CLP) Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  9. The accuracy of SST retrievals from AATSR: An initial assessment through geophysical validation against in situ radiometers, buoys and other SST data sets

    NASA Astrophysics Data System (ADS)

    Corlett, G. K.; Barton, I. J.; Donlon, C. J.; Edwards, M. C.; Good, S. A.; Horrocks, L. A.; Llewellyn-Jones, D. T.; Merchant, C. J.; Minnett, P. J.; Nightingale, T. J.; Noyes, E. J.; O'Carroll, A. G.; Remedios, J. J.; Robinson, I. S.; Saunders, R. W.; Watts, J. G.

The Advanced Along-Track Scanning Radiometer (AATSR) was launched on Envisat in March 2002. The AATSR instrument is designed to retrieve precise and accurate global sea surface temperature (SST) that, combined with the large data set collected from its predecessors, ATSR and ATSR-2, will provide a long-term record of SST data spanning more than 15 years. This record can be used for independent monitoring and detection of climate change. The AATSR validation programme has successfully completed its initial phase. The programme involves validation of the AATSR-derived SST values using in situ radiometers, in situ buoys and global SST fields from other data sets. The results of the initial programme presented here demonstrate that the AATSR instrument is currently close to meeting its scientific objective of determining global SST to an accuracy of 0.3 K (one sigma). For night-time data, the analysis gives a warm bias of between +0.04 K (0.28 K) for buoys and +0.06 K (0.20 K) for radiometers, with slightly higher errors observed for daytime data, which shows warm biases of between +0.02 K (0.39 K) for buoys and +0.11 K (0.33 K) for radiometers. These results show that the ATSR series of instruments continues to be the world leader in delivering accurate space-based observations of SST, a key climate parameter.

  10. A complementary graphical method for reducing and analyzing large data sets. Case studies demonstrating thresholds setting and selection.

    PubMed

    Jing, X; Cimino, J J

    2014-01-01

Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method to provide high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information, by applying it to large single and paired data sets taken from patient and bibliographic databases. Four case studies are used to illustrate our method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modifications [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. Threshold setting and selection, including thresholds for node counts, class counts, ratio values, p values (for diff data sets), and percentiles of selected class count thresholds, are demonstrated in detail in the case studies. The main steps include data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for the different types of thresholds and the considerations for threshold selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about adverse effects of the medication rosiglitazone and new interesting areas in the ICD9-CM hierarchy associated with patients taking the medication pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes. The graphical method provides summary views based on computation of usage frequency and semantic context of hierarchical
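
    The filtering idea, keeping only nodes whose usage counts exceed absolute or percentile thresholds, can be sketched on a toy count table. All codes and threshold values below are invented:

```python
# Toy usage-frequency data: ICD9-CM-like codes with patient counts.
node_counts = {
    "250.00": 4200, "401.9": 3900, "414.01": 150,
    "272.4": 90, "V58.69": 12, "780.79": 7,
}

def filter_nodes(counts, min_count=None, percentile=None):
    """Keep nodes above an absolute count threshold and/or a percentile
    of the observed counts (both thresholds are analyst's choices)."""
    values = sorted(counts.values())
    cut = min_count or 0
    if percentile is not None:
        cut = max(cut, values[int(len(values) * percentile / 100)])
    return {code: n for code, n in counts.items() if n >= cut}

print(filter_nodes(node_counts, percentile=50))  # top half of codes only
```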

  11. Understanding pregnancy planning in a low-income country setting: validation of the London measure of unplanned pregnancy in Malawi.

    PubMed

    Hall, Jennifer; Barrett, Geraldine; Mbwana, Nicholas; Copas, Andrew; Malata, Address; Stephenson, Judith

    2013-11-05

The London Measure of Unplanned Pregnancy (LMUP) is a new and psychometrically valid measure of pregnancy intention that was developed in the United Kingdom. An improved understanding of pregnancy intention in low-income countries, where unintended pregnancies are common and maternal and neonatal deaths are high, is necessary to inform policies to address the unmet need for family planning. To this end, this research aimed to validate the LMUP for use in the Chichewa language in Malawi. Three Chichewa speakers translated the LMUP and one translation was agreed, which was back-translated and pre-tested on five pregnant women using cognitive interviews. The measure was field tested with pregnant women who were recruited at antenatal clinics, and data were analysed using classical test theory and hypothesis testing. 125 women aged 15-43 (median 23), with parities of 1-8 (median 2), completed the Chichewa LMUP. There were no missing data. The full range of LMUP scores was captured. In terms of reliability, the scale was internally consistent (Cronbach's alpha = 0.78) and test-retest data from 70 women showed good stability (weighted kappa 0.80). In terms of validity, hypothesis testing confirmed that unmarried women (p = 0.003), women who had four or more children alive (p = 0.0051) and women who were below 20 or over 29 (p = 0.0115) were all more likely to have unintended pregnancies. Principal component analysis showed that five of the six items loaded onto one factor, with a further item borderline. A sensitivity analysis to assess the effect of removing the weakest item of the scale showed slightly improved performance, but as the LMUP was not significantly adversely affected by its inclusion, we recommend retaining the six-item score. The Chichewa LMUP is a valid and reliable measure of pregnancy intention in Malawi and can now be used in research and/or surveillance. This is the first validation of this tool in a low-income country, helping to
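
    The stability statistic reported here, weighted kappa on repeat administrations, is a standard computation. A minimal sketch on invented score pairs, assuming quadratic weights (the abstract does not state the weighting scheme used):

```python
from sklearn.metrics import cohen_kappa_score

# LMUP-style total scores from the same women on two occasions (toy data).
time1 = [3, 7, 10, 2, 8, 5, 11, 4, 9, 6]
time2 = [4, 7, 9, 2, 8, 6, 11, 3, 9, 5]

# Quadratically weighted kappa penalizes large disagreements more than
# near-misses, which suits ordinal totals such as the LMUP score.
print(cohen_kappa_score(time1, time2, weights="quadratic"))
```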

  12. Concordance and predictive value of two adverse drug event data sets.

    PubMed

    Cami, Aurel; Reis, Ben Y

    2014-08-22

    Accurate prediction of adverse drug events (ADEs) is an important means of controlling and reducing drug-related morbidity and mortality. Since no single "gold standard" ADE data set exists, a range of different drug safety data sets are currently used for developing ADE prediction models. There is a critical need to assess the degree of concordance between these various ADE data sets and to validate ADE prediction models against multiple reference standards. We systematically evaluated the concordance of two widely used ADE data sets - Lexi-comp from 2010 and SIDER from 2012. The strength of the association between ADE (drug) counts in Lexi-comp and SIDER was assessed using Spearman rank correlation, while the differences between the two data sets were characterized in terms of drug categories, ADE categories and ADE frequencies. We also performed a comparative validation of the Predictive Pharmacosafety Networks (PPN) model using both ADE data sets. The predictive power of PPN using each of the two validation sets was assessed using the area under Receiver Operating Characteristic curve (AUROC). The correlations between the counts of ADEs and drugs in the two data sets were 0.84 (95% CI: 0.82-0.86) and 0.92 (95% CI: 0.91-0.93), respectively. Relative to an earlier snapshot of Lexi-comp from 2005, Lexi-comp 2010 and SIDER 2012 introduced a mean of 1,973 and 4,810 new drug-ADE associations per year, respectively. The difference between these two data sets was most pronounced for Nervous System and Anti-infective drugs, Gastrointestinal and Nervous System ADEs, and postmarketing ADEs. A minor difference of 1.1% was found in the AUROC of PPN when SIDER 2012 was used for validation instead of Lexi-comp 2010. In conclusion, the ADE and drug counts in Lexi-comp and SIDER data sets were highly correlated and the choice of validation set did not greatly affect the overall prediction performance of PPN. Our results also suggest that it is important to be aware of the
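
    The two headline statistics, Spearman rank correlation between per-drug ADE counts and AUROC under alternative validation sets, can be sketched as follows. All counts, scores, and labels are invented stand-ins for the Lexi-comp and SIDER data:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

# Toy per-drug ADE counts in two reference sets.
lexi_counts = [12, 45, 3, 78, 22, 9, 60, 31]
sider_counts = [15, 40, 5, 70, 25, 7, 66, 28]
rho, p = spearmanr(lexi_counts, sider_counts)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

# Comparative validation of a prediction model against either reference:
# identical model scores, two alternative truth labelings.
scores = np.array([0.9, 0.2, 0.8, 0.4, 0.7, 0.1])
truth_lexi = np.array([1, 0, 1, 0, 1, 0])
truth_sider = np.array([1, 0, 1, 1, 1, 0])
print(roc_auc_score(truth_lexi, scores), roc_auc_score(truth_sider, scores))
```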

  13. Development and Validation of a Measure of Quality of Life for the Young Elderly in Sri Lanka.

    PubMed

    de Silva, Sudirikku Hennadige Padmal; Jayasuriya, Anura Rohan; Rajapaksa, Lalini Chandika; de Silva, Ambepitiyawaduge Pubudu; Barraclough, Simon

    2016-01-01

    Sri Lanka has one of the fastest aging populations in the world. Measurement of quality of life (QoL) in the elderly needs instruments developed that encompass the sociocultural settings. An instrument was developed to measure QoL in the young elderly in Sri Lanka (QLI-YES), using accepted methods to generate and reduce items. The measure was validated using a community sample. Construct, criterion and predictive validity and reliability were tested. A first-order model of 24 items with 6 domains was found to have good fit indices (CMIN/df = 1.567, RMR = 0.05, CFI = 0.95, and RMSEA = 0.053). Both criterion and predictive validity were demonstrated. Good internal consistency reliability (Cronbach's α = 0.93) was shown. The development of the QLI-YES using a societal perspective relevant to the social and cultural beliefs has resulted in a robust and valid instrument to measure QoL for the young elderly in Sri Lanka. © 2015 APJPH.

  14. Development and Validation of a Measure of Quality of Life for the Young Elderly in Sri Lanka

    PubMed Central

    de Silva, Sudirikku Hennadige Padmal; Jayasuriya, Anura Rohan; Rajapaksa, Lalini Chandika; de Silva, Ambepitiyawaduge Pubudu; Barraclough, Simon

    2016-01-01

    Sri Lanka has one of the fastest aging populations in the world. Measurement of quality of life (QoL) in the elderly needs instruments developed that encompass the sociocultural settings. An instrument was developed to measure QoL in the young elderly in Sri Lanka (QLI-YES), using accepted methods to generate and reduce items. The measure was validated using a community sample. Construct, criterion and predictive validity and reliability were tested. A first-order model of 24 items with 6 domains was found to have good fit indices (CMIN/df = 1.567, RMR = 0.05, CFI = 0.95, and RMSEA = 0.053). Both criterion and predictive validity were demonstrated. Good internal consistency reliability (Cronbach’s α = 0.93) was shown. The development of the QLI-YES using a societal perspective relevant to the social and cultural beliefs has resulted in a robust and valid instrument to measure QoL for the young elderly in Sri Lanka. PMID:26712893

  15. Validation of SMAP surface soil moisture products with core validation sites

    USDA-ARS?s Scientific Manuscript database

    The NASA Soil Moisture Active Passive (SMAP) mission has utilized a set of core validation sites as the primary methodology in assessing the soil moisture retrieval algorithm performance. Those sites provide well-calibrated in situ soil moisture measurements within SMAP product grid pixels for diver...

  16. Overview of calibration and validation activities for the EUMETSAT polar system: second generation (EPS-SG) visible/infrared imager (METimage)

    NASA Astrophysics Data System (ADS)

    Phillips, P.; Bonsignori, R.; Schlüssel, P.; Schmülling, F.; Spezzi, L.; Watts, P.; Zerfowski, I.

    2016-10-01

    The EPS-SG Visible/Infrared Imaging (VII) mission is dedicated to supporting the optical imagery user needs for Numerical Weather Prediction (NWP), Nowcasting (NWC) and climate in the timeframe beyond 2020. The VII mission is fulfilled by the METimage instrument, developed by the German Space Agency (DLR) and funded by the German government and EUMETSAT. Following on from an important list of predecessors such as the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate resolution Imaging Spectro-radiometer (MODIS), METimage will fly in the mid-morning orbit of the Joint Polar System, whilst the early-afternoon orbits are served by the JPSS (U.S. Joint Polar Satellite System) Visible Infrared Imager Radiometer Suite (VIIRS). METimage itself is a cross-purpose medium resolution, multi-spectral optical imager, measuring the optical spectrum of radiation emitted and reflected by the Earth from a low-altitude sun synchronous orbit over a minimum swath width of 2700 km. The top of the atmosphere outgoing radiance will be sampled every 500 m (at nadir) with measurements made in 20 spectral channels ranging from 443 nm in the visible up to 13.345 μm in the thermal infrared. The three major objectives of the EPS-SG METimage calibration and validation activities are: • Verification of the instrument performances through continuous in-flight calibration and characterisation, including monitoring of long term stability. • Provision of validated level 1 and level 2 METimage products. • Revision of product processing facilities, i.e. algorithms and auxiliary data sets, to assure that products conform with user requirements, and then, if possible, exceed user expectations. This paper will describe the overall Calibration and Validation (Cal/Val) logic and the methods adopted to ensure that the METimage data products meet performance specifications for the lifetime of the mission. Such methods include inter-comparisons with other missions through simultaneous

  17. Validation and detection of vessel landmarks by using anatomical knowledge

    NASA Astrophysics Data System (ADS)

    Beck, Thomas; Bernhardt, Dominik; Biermann, Christina; Dillmann, Rüdiger

    2010-03-01

The detection of anatomical landmarks is an important prerequisite for analyzing medical images fully automatically. Several machine learning approaches have been proposed to parse 3D CT datasets and to determine the location of landmarks with associated uncertainty. However, it is a challenging task to incorporate high-level anatomical knowledge to improve these classification results. We propose a new approach to validate candidates for vessel bifurcation landmarks, which is also applied to systematically search for missed landmarks and to validate ambiguous ones. A knowledge base is trained to provide human-readable geometric information about the vascular system, mainly vessel lengths, radii and curvature information, for validation of landmarks and to guide the search process. To analyze the bifurcation area surrounding a vessel landmark of interest, a new approach is proposed which is based on Fast Marching and incorporates anatomical information from the knowledge base. Using the proposed algorithms, an anatomical knowledge base has been generated from 90 manually annotated CT images containing different parts of the body. To evaluate the landmark validation, a set of 50 carotid datasets has been tested in combination with a state-of-the-art landmark detector, with excellent results. Besides the carotid bifurcation, the algorithm is designed to handle a wide range of vascular landmarks, e.g. the celiac, superior mesenteric, renal, aortic, iliac and femoral bifurcations.

  18. Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction

    NASA Astrophysics Data System (ADS)

    Aarts, Fides; Jonsson, Bengt; Uijen, Johan

    In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework, which adapts regular inference to include data parameters in messages and states for generating components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.

  19. Validation and Application of a PCR Primer Set to Quantify Fungal Communities in the Soil Environment by Real-Time Quantitative PCR

    PubMed Central

    Chemidlin Prévost-Bouré, Nicolas; Christen, Richard; Dequiedt, Samuel; Mougel, Christophe; Lelièvre, Mélanie; Jolivet, Claudy; Shahbazkia, Hamid Reza; Guillou, Laure; Arrouays, Dominique; Ranjard, Lionel

    2011-01-01

Fungi constitute an important group in soil biological diversity and functioning. However, characterization and knowledge of fungal communities is hampered because few primer sets are available to quantify fungal abundance by real-time quantitative PCR (real-time Q-PCR). The aim in this study was to quantify fungal abundance in soils by incorporating, into a real-time Q-PCR using the SYBRGreen® method, a primer set already used to study the genetic structure of soil fungal communities. To satisfy the real-time Q-PCR requirements to enhance the accuracy and reproducibility of the detection technique, this study focused on the 18S rRNA gene conserved regions. These regions are little affected by length polymorphism and may provide sufficiently small targets, a crucial criterion for enhancing accuracy and reproducibility of the detection technique. An in silico analysis of 33 primer sets targeting the 18S rRNA gene was performed to select the primer set with the best potential for real-time Q-PCR: short amplicon length; good fungal specificity and coverage. The best consensus between specificity, coverage and amplicon length among the 33 sets tested was the primer set FR1/FF390. This in silico analysis of the specificity of FR1/FF390 also provided additional information to the previously published analysis on this primer set. The specificity of the primer set FR1/FF390 for Fungi was validated in vitro by cloning-sequencing the amplicons obtained from a real-time Q-PCR assay performed on five independent soil samples. This assay was also used to evaluate the sensitivity and reproducibility of the method. Finally, fungal abundance in samples from 24 soils with contrasting physico-chemical and environmental characteristics was examined and ranked to determine the importance of soil texture, organic carbon content, C:N ratio and land use in determining fungal abundance in soils. PMID:21931659

  20. Gene Ontology synonym generation rules lead to increased performance in biomedical concept recognition.

    PubMed

    Funk, Christopher S; Cohen, K Bretonnel; Hunter, Lawrence E; Verspoor, Karin M

    2016-09-09

Gene Ontology (GO) terms represent the standard for annotation and representation of molecular functions, biological processes and cellular compartments, but a large gap exists between the way concepts are represented in the ontology and how they are expressed in natural language text. The construction of highly specific GO terms is formulaic, consisting of parts and pieces from more simple terms. We present two different types of manually generated rules to help capture the variation of how GO terms can appear in natural language text. The first set of rules takes into account the compositional nature of GO and recursively decomposes the terms into their smallest constituent parts. The second set of rules generates derivational variations of these smaller terms and compositionally combines all generated variants to form the original term. By applying both types of rules, new synonyms are generated for two-thirds of all GO terms, and an increase in F-measure performance for recognition of GO on the CRAFT corpus from 0.498 to 0.636 is observed. Additionally, we evaluated the combination of both types of rules over one million full text documents from Elsevier; manual validation and error analysis show we are able to recognize GO concepts with reasonable accuracy (88%) based on random sampling of annotations. In this work we present a set of simple synonym generation rules that utilize the highly compositional and formulaic nature of the Gene Ontology concepts. We illustrate how the generated synonyms aid in improving recognition of GO concepts on two different biomedical corpora. We discuss other applications of our rules for GO ontology quality assurance, explore the issue of overgeneration, and provide examples of how similar methodologies could be applied to other biomedical terminologies. Additionally, we provide all generated synonyms for use by the text-mining community.
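
    The two rule types (compositional decomposition and derivational variation) suggest a simple generate-and-combine loop. A toy sketch with a hand-written variant table; the paper's actual rule set is far larger and partly recursive. Note how even this toy version reproduces the overgeneration issue the authors discuss:

```python
import itertools

# Hand-written derivational variants for a few GO sub-terms (illustrative only).
VARIANTS = {
    "regulation": ["regulation", "regulating", "regulates"],
    "transcription": ["transcription", "transcriptional"],
    "positive": ["positive", "positively"],
}

def synonyms(term):
    """Decompose a compositional GO term into constituent words, expand each
    with derivational variants, and recombine into candidate synonyms."""
    parts = [VARIANTS.get(w, [w]) for w in term.split()]
    return {" ".join(combo) for combo in itertools.product(*parts)}

for s in sorted(synonyms("positive regulation of transcription")):
    print(s)  # includes ungrammatical overgenerations alongside useful variants
```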

  1. Broadband Fan Noise Prediction System for Turbofan Engines. Volume 3; Validation and Test Cases

    NASA Technical Reports Server (NTRS)

    Morin, Bruce L.

    2010-01-01

Pratt & Whitney has developed a Broadband Fan Noise Prediction System (BFaNS) for turbofan engines. This system computes the noise generated by turbulence impinging on the leading edges of the fan and fan exit guide vane, and noise generated by boundary-layer turbulence passing over the fan trailing edge. BFaNS has been validated on three fan rigs that were tested during the NASA Advanced Subsonic Technology Program (AST). The predicted noise spectra agreed well with measured data. The predicted effects of fan speed, vane count, and vane sweep also agreed well with measurements. The noise prediction system consists of two computer programs: Setup_BFaNS and BFaNS. Setup_BFaNS converts user-specified geometry and flow-field information into a BFaNS input file. From this input file, BFaNS computes the inlet and aft broadband sound power spectra generated by the fan and FEGV. The output file from BFaNS contains the inlet, aft and total sound power spectra from each noise source. This report is the third volume of a three-volume set documenting the Broadband Fan Noise Prediction System: Volume 1: Setup_BFaNS User's Manual and Developer's Guide; Volume 2: BFaNS User's Manual and Developer's Guide; and Volume 3: Validation and Test Cases. The present volume begins with an overview of the Broadband Fan Noise Prediction System, followed by validation studies that were done on three fan rigs. It concludes with recommended improvements and additional studies for BFaNS.

  2. Validation and Application of Models to Predict Facemask Influenza Contamination in Healthcare Settings

    PubMed Central

    Fisher, Edward M.; Noti, John D.; Lindsley, William G.; Blachere, Francoise M.; Shaffer, Ronald E.

    2015-01-01

    Facemasks are part of the hierarchy of interventions used to reduce the transmission of respiratory pathogens by providing a barrier. Two types of facemasks used by healthcare workers are N95 filtering facepiece respirators (FFRs) and surgical masks (SMs). These can become contaminated with respiratory pathogens during use, thus serving as potential sources for transmission. However, because of the lack of field studies, the hazard associated with pathogen-exposed facemasks is unknown. A mathematical model was used to calculate the potential influenza contamination of facemasks from aerosol sources in various exposure scenarios. The aerosol model was validated with data from previous laboratory studies using facemasks mounted on headforms in a simulated healthcare room. The model was then used to estimate facemask contamination levels in three scenarios generated with input parameters from the literature. A second model estimated facemask contamination from a cough. It was determined that contamination levels from a single cough (≈19 viruses) were much less than likely levels from aerosols (4,473 viruses on FFRs and 3,476 viruses on SMs). For aerosol contamination, a range of input values from the literature resulted in wide variation in estimated facemask contamination levels (13–202,549 viruses), depending on the values selected. Overall, these models and estimates for facemask contamination levels can be used to inform infection control practice and research related to the development of better facemasks, to characterize airborne contamination levels, and to assist in assessment of risk from reaerosolization and fomite transfer because of handling and reuse of contaminated facemasks. PMID:24593662
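
    The aerosol contamination estimate described is, at its core, a product of airborne concentration, air volume drawn through the mask, exposure time, and capture efficiency. A back-of-the-envelope sketch with invented parameter values, not the paper's inputs:

```python
# Order-of-magnitude estimate of facemask contamination from room aerosol.
# All parameter values below are illustrative assumptions.
C_air = 5.0e3        # airborne virus concentration, viruses per m^3
Q_breath = 8.0e-3    # minute ventilation drawn through the mask, m^3/min
t_exposure = 60.0    # minutes worn in the contaminated room
eta_capture = 0.95   # fraction of inhaled viruses deposited on the mask

viruses_on_mask = C_air * Q_breath * t_exposure * eta_capture
print(f"{viruses_on_mask:.0f} viruses")  # ~2280 for these inputs
```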

  3. Exact distribution of a pattern in a set of random sequences generated by a Markov source: applications to biological data

    PubMed Central

    2010-01-01

    Background In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. Results The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also permits to avoid any product of convolution of the pattern distribution in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in the complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy-example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists in concatenating the sequences in the data set into a single sequence. Conclusions Our algorithms prove to be effective and able to handle real data sets with multiple sequences, as well

  4. Exact distribution of a pattern in a set of random sequences generated by a Markov source: applications to biological data.

    PubMed

    Nuel, Gregory; Regad, Leslie; Martin, Juliette; Camproux, Anne-Claude

    2010-01-26

    In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also permits to avoid any product of convolution of the pattern distribution in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in the complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy-example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists in concatenating the sequences in the data set into a single sequence. Our algorithms prove to be effective and able to handle real data sets with multiple sequences, as well as biological patterns of
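
    Algorithm 3's logarithmic time complexity comes from computing matrix powers by binary decomposition (square-and-multiply). A minimal sketch of that building block on a toy transition matrix; the DFA-based optimal Markov chain embedding itself is not shown:

```python
import numpy as np

def matrix_power_binary(M, n):
    """Compute M**n by square-and-multiply: O(log n) matrix products
    instead of n - 1, the trick behind the logarithmic-scale complexity."""
    result = np.eye(M.shape[0])
    while n:
        if n & 1:            # current binary digit of n is set
            result = result @ M
        M = M @ M            # square for the next binary digit
        n >>= 1
    return result

# Transition matrix of a toy 3-state embedded Markov chain.
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])
print(np.allclose(matrix_power_binary(P, 1000), np.linalg.matrix_power(P, 1000)))
```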

  5. Validity of photographs for food portion estimation in a rural West African setting.

    PubMed

    Huybregts, L; Roberfroid, D; Lachat, C; Van Camp, J; Kolsteren, P

    2008-06-01

To validate food photographs for food portion size estimation of frequently consumed dishes, to be used in a 24-hour recall food consumption study of pregnant women in a rural environment in Burkina Faso. This food intake study is part of an intervention evaluating the efficacy of prenatal micronutrient supplementation on birth outcomes. Women of childbearing age (15-45 years) participated. A food photograph album containing four photographs of food portions per food item was compiled for eight selected food items. Subjects were presented with two food items each in the morning and two in the afternoon. These foods were weighed to the exact weight of a food depicted in one of the photographs and were served in the same receptacles. The next day, another fieldworker presented the food photographs to the subjects to test their ability to choose the correct photograph. The correct photograph out of the four proposed was chosen in 55% of 1028 estimations. For each food, the proportions of underestimating and overestimating participants were balanced, except for rice and couscous. At group level, mean differences between served and estimated portion sizes were between -8.4% and 6.3%. Subjects who had attended school were almost twice as likely to choose the correct photograph. The portion size served (small vs. largest sizes) had a significant influence on portion estimation ability. The results from this study indicate that in a West African rural setting, food photographs can be a valuable tool for quantifying food portion size at group level.

  6. Characterizing the literature on validity and assessment in medical education: a bibliometric study.

    PubMed

    Young, Meredith; St-Onge, Christina; Xiao, Jing; Vachon Lachiver, Elise; Torabi, Nazi

    2018-05-23

Assessment in Medical Education fills many roles and is under constant scrutiny. Assessments must be of good quality, and supported by validity evidence. Given the high-stakes consequences of assessment, and the many audiences within medical education (e.g., training level, specialty-specific), we set out to document the breadth, scope, and characteristics of the literature reporting on validation of assessments within medical education. Searches in Medline (Ovid), Web of Science, ERIC, EMBASE (Ovid), and PsycINFO (Ovid) identified articles reporting on assessment of learners in medical education published since 1999. Included articles were coded for geographic origin, journal, journal category, targeted assessment, and authors. A map of collaborations between prolific authors was generated. A total of 2,863 articles were included. The majority of articles were from the United States, with Canada producing the most articles per medical school. Most articles were published in journals with medical categorizations (73.1% of articles), but Medical Education was the most represented journal (7.4% of articles). Articles reported on a variety of assessment tools and approaches, and 89 prolific authors were identified, with a total of 228 collaborative links. Literature reporting on validation of assessments in medical education is heterogeneous. Literature is produced by a broad array of authors and collaborative networks, reported to a broad audience, and is primarily generated in North American and European contexts. Our findings speak to the heterogeneity of the medical education literature on assessment validation, and suggest that this heterogeneity may stem, at least in part, from differences in constructs measured, assessment purposes, or conceptualizations of validity.

  7. Use of the Ames Check Standard Model for the Validation of Wall Interference Corrections

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Amaya, M.; Flach, R.

    2018-01-01

    The new check standard model of the NASA Ames 11-ft Transonic Wind Tunnel was chosen for a future validation of the facility's wall interference correction system. The chosen validation approach takes advantage of the fact that test conditions experienced by a large model in the slotted part of the tunnel's test section will change significantly if a subset of the slots is temporarily sealed. Therefore, the model's aerodynamic coefficients have to be recorded, corrected, and compared for two different test section configurations in order to perform the validation. Test section configurations with highly accurate Mach number and dynamic pressure calibrations were selected for the validation. First, the model is tested with all test section slots in open configuration while keeping the model's center of rotation on the tunnel centerline. In the next step, slots on the test section floor are sealed and the model is moved to a new center of rotation that is 33 inches below the tunnel centerline. Then, the original angle of attack sweeps are repeated. Afterwards, wall interference corrections are applied to both test data sets and response surface models of the resulting aerodynamic coefficients in interference-free flow are generated. Finally, the response surface models are used to predict the aerodynamic coefficients for a family of angles of attack while keeping dynamic pressure, Mach number, and Reynolds number constant. The validation is considered successful if the corrected aerodynamic coefficients obtained from the related response surface model pair show good agreement. Residual differences between the corrected coefficient sets will be analyzed as well because they are an indicator of the overall accuracy of the facility's wall interference correction process.
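
    The validation logic, fitting one response surface model per test-section configuration and comparing their predictions over a common sweep, can be sketched with synthetic data. All numbers below are invented, and the simple quadratic fit stands in for the facility's actual response surface modeling:

```python
import numpy as np

# Corrected lift-coefficient data from the two test-section configurations
# (synthetic numbers; the real wind-tunnel data are not reproduced here).
alpha = np.linspace(-4, 12, 9)                  # angle of attack, deg
cl_open = 0.08 * alpha + 0.120 + np.random.default_rng(3).normal(0, 0.004, 9)
cl_sealed = 0.08 * alpha + 0.125 + np.random.default_rng(4).normal(0, 0.004, 9)

# One quadratic response surface model per configuration.
rsm_open = np.polynomial.polynomial.Polynomial.fit(alpha, cl_open, deg=2)
rsm_sealed = np.polynomial.polynomial.Polynomial.fit(alpha, cl_sealed, deg=2)

# Validation check: compare predictions over a common sweep; small residual
# differences indicate consistent wall interference corrections.
sweep = np.linspace(-2, 10, 25)
residual = rsm_sealed(sweep) - rsm_open(sweep)
print(f"max |delta CL| = {np.abs(residual).max():.4f}")
```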

  8. MotiveValidator: interactive web-based validation of ligand and residue structure in biomolecular complexes.

    PubMed

    Vařeková, Radka Svobodová; Jaiswal, Deepti; Sehnal, David; Ionescu, Crina-Maria; Geidl, Stanislav; Pravda, Lukáš; Horský, Vladimír; Wimmerová, Michaela; Koča, Jaroslav

    2014-07-01

    Structure validation has become a major issue in the structural biology community, and an essential step is checking the ligand structure. This paper introduces MotiveValidator, a web-based application for the validation of ligands and residues in PDB or PDBx/mmCIF format files provided by the user. Specifically, MotiveValidator is able to evaluate in a straightforward manner whether the ligand or residue being studied has a correct annotation (3-letter code), i.e. if it has the same topology and stereochemistry as the model ligand or residue with this annotation. If not, MotiveValidator explicitly describes the differences. MotiveValidator offers a user-friendly, interactive and platform-independent environment for validating structures obtained by any type of experiment. The results of the validation are presented in both tabular and graphical form, facilitating their interpretation. MotiveValidator can process thousands of ligands or residues in a single validation run that takes no more than a few minutes. MotiveValidator can be used for testing single structures, or the analysis of large sets of ligands or fragments prepared for binding site analysis, docking or virtual screening. MotiveValidator is freely available via the Internet at http://ncbr.muni.cz/MotiveValidator. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Development and Validation of a Polarimetric-MCScene 3D Atmospheric Radiation Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berk, Alexander; Hawes, Frederick; Fox, Marsha

    2016-03-15

Polarimetric measurements can substantially enhance the ability of both spectrally resolved and single band imagery to detect the proliferation of weapons of mass destruction, providing data for locating and identifying facilities, materials, and processes of undeclared and proliferant nuclear weapons programs worldwide. Unfortunately, models do not exist that efficiently and accurately predict spectral polarized signatures for the materials of interest embedded in complex 3D environments. Having such a model would enable one to test hypotheses and optimize both the enhancement of scene contrast and the signal processing for spectral signature extraction. Phase I set the groundwork for development of fully validated polarimetric spectral signature and scene simulation models. This has been accomplished 1. by (a) identifying and downloading state-of-the-art surface and atmospheric polarimetric data sources, (b) implementing tools for generating custom polarimetric data, and (c) identifying and requesting US Government funded field measurement data for use in validation; 2. by formulating an approach for upgrading the radiometric spectral signature model MODTRAN to generate polarimetric intensities through (a) ingestion of the polarimetric data, (b) polarimetric vectorization of existing MODTRAN modules, and (c) integration of a newly developed algorithm for computing polarimetric multiple scattering contributions; 3. by generating an initial polarimetric model that demonstrates calculation of polarimetric solar and lunar single scatter intensities arising from the interaction of incoming irradiances with molecules and aerosols; 4. by developing a design and implementation plan to (a) automate polarimetric scene construction and (b) efficiently sample polarimetric scattering and reflection events, for use in a to-be-developed polarimetric version of the existing first-principles synthetic scene simulation model, MCScene; and 5. by planning a validation

  10. The measurement of collaboration within healthcare settings: a systematic review of measurement properties of instruments.

    PubMed

    Walters, Stephen John; Stern, Cindy; Robertson-Malt, Suzanne

    2016-04-01

    generated consisting of nine headings: organizational settings, support structures, purpose and goals; communication; reflection on process; cooperation; coordination; role interdependence and partnership; relationships; newly created professional activities; and professional flexibility. Among the many instruments that measure collaboration within healthcare settings, quality varies; instruments are designed for specific populations and purposes and are validated in various settings. Selecting an instrument therefore requires careful consideration of the qualities of each, and systematic reviews of measurement properties, such as this one, can aid clinicians and researchers in that selection, particularly for settings with a complex mix of participant types. Evaluating collaboration provides important information on the strengths and limitations of different healthcare settings and the opportunities for continuous improvement via any remedial actions initiated. Development of a tool that can be used to measure collaboration within teams of healthcare professionals and non-professionals is important for practice. The use of different statistical modelling techniques, such as Item Response Theory modelling and the translation of models into Computer Adaptive Tests, may prove useful. Measurement equivalence is an important consideration for future instrument development and validation, and further development of the COSMIN tool should include appraisal for measurement equivalence. Researchers developing and validating measurement tools should consider multi-method research designs.

  11. Validation of the AVM Blast Computational Modeling and Simulation Tool Set

    DTIC Science & Technology

    2015-08-04

    "by-construction" methodology is powerful and would not be possible without high-level design languages to support validation and verification. [1,4] ...to enable the making of informed design decisions. Enable rapid exploration of the design trade-space for high-fidelity requirements tradeoffs... In live-fire tests, the jump height of the target structure is recorded by using either high-speed cameras or a string pot. A simple projectile motion

  12. Military Standard Generators Prototype Modifications. Volume 2. 30 kW DoD Generator Set

    DTIC Science & Technology

    1988-03-31

    [OCR-garbled excerpt from a scanned test report: a table of generator-set exhaust and control-panel temperature readings in degrees Fahrenheit; the original values are unrecoverable.]

  13. Educating the Next Generation of Geoscientists: Strategies for Formal and Informal Settings

    NASA Astrophysics Data System (ADS)

    Burrell, S.

    2013-12-01

    ENGAGE, Educating the Next Generation of Geoscientists, is an effort funded by the National Science Foundation to provide academic opportunities for members of underrepresented groups to learn geology in formal and informal settings through collaboration with other universities and science organizations. The program design tests the hypothesis that the following strategies will remove the academic, social and economic obstacles that have traditionally hindered members of underrepresented groups from participating in the geosciences and will result in an increase in geoscience literacy and enrollment: developing a culture of on-going dialogue around science issues through special guest lectures and workshops; creating opportunities for mentorship through informal lunches; incorporating experiential learning in the field into the geoscience curriculum in lower division courses; partnership-building through the provision of paid summer internships and research opportunities; enabling students to participate in professional conferences; and engaging family members in science education through family science nights and special presentations. Student feedback and anecdotal evidence indicate an increased interest in geology as a course of study and increased awareness of the relevance of geology to everyday life. Preliminary statistics from two years of program implementation indicate increased student comprehension of Earth science concepts and ability to use data to identify trends in the natural environment.

  14. The SCALE Verified, Archived Library of Inputs and Data - VALID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

    The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional

  15. Response set bias, internal consistency and construct validity of the Oswestry Low Back Pain Disability Questionnaire

    PubMed Central

    Tibbles, Anthony C; Waalen, Judith K; Hains, François C

    1998-01-01

    Background: The Oswestry Low Back Pain Disability Questionnaire (ODQ) is a widely used 10-item paper and pencil measure of disability resulting from low back pain. However, few studies have assessed the psychometric properties of the instrument. This study evaluated the response set bias, the internal consistency, and the construct validity of the ODQ. Objectives: The original ODQ was compared to seven modified versions to examine whether a response set bias existed. The internal consistency of the ODQ was assessed using the Cronbach alpha. Finally, the relationship between scores on the ODQ and the Roland Morris Functional Disability Scale (RM) was examined. Methods: Seven modified versions of the ODQ were developed from the original. One of the eight versions was randomly allocated to 102 adult patients presenting with low back pain. There was no attempt to select patients on the basis of pain intensity or prior treatment, so as to maximize the range and diversity of low back pain sufferers. Results: Results suggest that the responses given on the eight versions of the ODQ are a function of content and not of the format in which the items are presented. The ODQ also has strong internal consistency (alpha = 0.85) and is strongly correlated with the RM (r = .70, p = .0005). The ODQ is a significant predictor of RM scores (T = 9.45, p = .0005) and duration of symptoms (T = -2.17, p = .0325). Conclusion: The ODQ appears to possess stable psychometric properties. The use of more than one version provides practitioners with a means of repeatedly assessing the disability levels of patients suffering from low back pain over the course of treatment.
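
    The internal-consistency figure reported above (Cronbach's alpha) is straightforward to compute; the sketch below uses hypothetical item data, not the study's.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical data: 102 respondents answering a 10-item questionnaire (0-5).
rng = np.random.default_rng(0)
scores = rng.integers(0, 6, size=(102, 10))
print(f"alpha = {cronbach_alpha(scores):.2f}")
```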

  16. Next-generation sequencing meets genetic diagnostics: development of a comprehensive workflow for the analysis of BRCA1 and BRCA2 genes

    PubMed Central

    Feliubadaló, Lídia; Lopez-Doriga, Adriana; Castellsagué, Ester; del Valle, Jesús; Menéndez, Mireia; Tornero, Eva; Montes, Eva; Cuesta, Raquel; Gómez, Carolina; Campos, Olga; Pineda, Marta; González, Sara; Moreno, Victor; Brunet, Joan; Blanco, Ignacio; Serra, Eduard; Capellá, Gabriel; Lázaro, Conxi

    2013-01-01

    Next-generation sequencing (NGS) is changing genetic diagnosis due to its huge sequencing capacity and cost-effectiveness. The aim of this study was to develop an NGS-based workflow for routine diagnostics for hereditary breast and ovarian cancer syndrome (HBOCS), to improve genetic testing for BRCA1 and BRCA2. An NGS-based workflow was designed using BRCA MASTR kit amplicon libraries followed by GS Junior pyrosequencing. Data analysis combined the freely available Variant Identification Pipeline software and ad hoc R scripts, including a cascade of filters to generate coverage and variant-calling reports. A BRCA homopolymer assay was performed in parallel. A research scheme was designed in two parts. A Training Set of 28 DNA samples containing 23 unique pathogenic mutations and 213 other variants (33 unique) was used. The workflow was validated in a set of 14 samples from HBOCS families in parallel with the current diagnostic workflow (Validation Set). The NGS-based workflow developed permitted the identification of all pathogenic mutations and genetic variants, including those located in or close to homopolymers. The use of NGS for detecting copy-number alterations was also investigated. The workflow meets the sensitivity and specificity requirements for the genetic diagnosis of HBOCS and improves on the cost-effectiveness of current approaches. PMID:23249957
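
    A hedged illustration of such a filter cascade; the thresholds and field names are hypothetical and stand in for the workflow's actual R-based filters.

```python
# Hypothetical thresholds and field names illustrating a filter cascade for
# amplicon NGS variant calls; not the BRCA workflow's actual criteria.
MIN_COVERAGE = 40     # minimum read depth at the position
MIN_VAR_FREQ = 0.20   # minimum variant allele frequency
MIN_STRANDS = 2       # variant must be observed on both strands

def passes_filters(call):
    return (call["coverage"] >= MIN_COVERAGE
            and call["var_freq"] >= MIN_VAR_FREQ
            and call["strands"] >= MIN_STRANDS)

calls = [  # toy variant calls
    {"pos": "chr17:hypothetical1", "coverage": 512, "var_freq": 0.48, "strands": 2},
    {"pos": "chr13:hypothetical2", "coverage": 18, "var_freq": 0.55, "strands": 2},
]
kept = [c for c in calls if passes_filters(c)]
print(f"{len(kept)} of {len(calls)} calls retained for the report")
```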

  17. Development and Validation of the Physics Anxiety Rating Scale

    ERIC Educational Resources Information Center

    Sahin, Mehmet; Caliskan, Serap; Dilek, Ufuk

    2015-01-01

    This study reports the development and validation process for an instrument to measure university students' anxiety in physics courses. The development of the Physics Anxiety Rating Scale (PARS) included the following steps: Generation of scale items, content validation, construct validation, and reliability calculation. The results of construct…

  18. Navigating highly homologous genes in a molecular diagnostic setting: a resource for clinical next-generation sequencing.

    PubMed

    Mandelker, Diana; Schmidt, Ryan J; Ankala, Arunkanth; McDonald Gibson, Kristin; Bowser, Mark; Sharma, Himanshu; Duffy, Elizabeth; Hegde, Madhuri; Santani, Avni; Lebo, Matthew; Funke, Birgit

    2016-12-01

    Next-generation sequencing (NGS) is now routinely used to interrogate large sets of genes in a diagnostic setting. Regions of high sequence homology continue to be a major challenge for short-read technologies and can lead to false-positive and false-negative diagnostic errors. At the scale of whole-exome sequencing (WES), laboratories may be limited in their knowledge of genes and regions that pose technical hurdles due to high homology. We have created an exome-wide resource that catalogs highly homologous regions that is tailored toward diagnostic applications. This resource was developed using a mappability-based approach tailored to current Sanger and NGS protocols. Gene-level and exon-level lists delineate regions that are difficult or impossible to analyze via standard NGS. These regions are ranked by degree of affectedness, annotated for medical relevance, and classified by the type of homology (within-gene, different functional gene, known pseudogene, uncharacterized noncoding region). Additionally, we provide a list of exons that cannot be analyzed by short-amplicon Sanger sequencing. This resource can help guide clinical test design, supplemental assay implementation, and results interpretation in the context of high homology. Genet Med 18(12): 1282-1289.

  19. When is the Anelastic Approximation a Valid Model for Compressible Convection?

    NASA Astrophysics Data System (ADS)

    Alboussiere, T.; Curbelo, J.; Labrosse, S.; Ricard, Y. R.; Dubuffet, F.

    2017-12-01

    Compressible convection is ubiquitous in large natural systems such as planetary atmospheres and stellar and planetary interiors. Its modelling is notoriously more difficult than the case where the Boussinesq approximation applies. One reason for that difficulty was put forward by Ogura and Phillips (1961): the compressible equations generate sound waves with very short time scales which need to be resolved. This is why they introduced an anelastic model, based on an expansion of the solution around an isentropic hydrostatic profile. How accurate is that anelastic model? What are the conditions for its validity? To answer these questions, we have developed a numerical model for the full set of compressible equations and compared its solutions with those of the corresponding anelastic model. We considered a simple rectangular 2D Rayleigh-Bénard configuration and restricted the analysis to infinite Prandtl numbers. This choice is valid for convection in the mantles of rocky planets and, more importantly, leads to a zero Mach number, eliminating the question of the interference of acoustic waves with convection. In that simplified context, we used the entropy balances (that of the full set of equations and that of the anelastic model) to investigate the differences between exact and anelastic solutions. We found that the validity of the anelastic model is dictated by two conditions: first, the superadiabatic temperature difference must be small compared with the adiabatic temperature difference, ε = ΔT_SA / ΔT_a ≪ 1 (as expected), and second, the product of ε with the Nusselt number must also be small.
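
    The two validity conditions translate directly into a numerical check; the values below are illustrative, not taken from the study.

```python
def anelastic_valid(delta_t_superadiabatic, delta_t_adiabatic, nusselt, tol=0.1):
    """Check the two validity conditions; the tolerance is illustrative."""
    eps = delta_t_superadiabatic / delta_t_adiabatic
    return eps < tol and eps * nusselt < tol, eps

ok, eps = anelastic_valid(delta_t_superadiabatic=2.0, delta_t_adiabatic=400.0,
                          nusselt=10.0)
print(f"epsilon = {eps:.4f}, anelastic model valid: {ok}")
```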

  20. Chemometric and biological validation of a capillary electrophoresis metabolomic experiment of Schistosoma mansoni infection in mice.

    PubMed

    Garcia-Perez, Isabel; Angulo, Santiago; Utzinger, Jürg; Holmes, Elaine; Legido-Quigley, Cristina; Barbas, Coral

    2010-07-01

    Metabonomic and metabolomic studies are increasingly utilized for biomarker identification in different fields, including biology of infection. The confluence of improved analytical platforms and the availability of powerful multivariate analysis software have rendered the multiparameter profiles generated by these omics platforms a user-friendly alternative to the established analysis methods where the quality and practice of a procedure is well defined. However, unlike traditional assays, validation methods for these new multivariate profiling tools have yet to be established. We propose a validation for models obtained by CE fingerprinting of urine from mice infected with the blood fluke Schistosoma mansoni. We have analysed urine samples from two sets of mice infected in an inter-laboratory experiment where different infection methods and animal husbandry procedures were employed in order to establish the core biological response to a S. mansoni infection. CE data were analysed using principal component analysis. Validation of the scores consisted of permutation scrambling (100 repetitions) and a manual validation method, using a third of the samples (not included in the model) as a test or prediction set. The validation yielded 100% specificity and 100% sensitivity, demonstrating the robustness of these models with respect to deciphering metabolic perturbations in the mouse due to a S. mansoni infection. A total of 20 metabolites across the two experiments were identified that significantly discriminated between S. mansoni-infected and noninfected control samples. Only one of these metabolites, allantoin, was identified as manifesting different behaviour in the two experiments. This study shows the reproducibility of CE-based metabolic profiling methods for disease characterization and screening and highlights the importance of much needed validation strategies in the emerging field of metabolomics.
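
    A sketch of this validation scheme on synthetic data, assuming a nearest-centroid rule on the principal component scores as a stand-in for the paper's score-plot assessment; the sample sizes and the infection signature are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))        # 60 urine fingerprints, 200 CE features
y = np.repeat([0, 1], 30)             # 0 = control, 1 = infected
X[y == 1, :5] += 1.5                  # hypothetical infection signature

# Hold out a third of the samples as the prediction set, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1 / 3,
                                          stratify=y, random_state=0)
pca = PCA(n_components=2).fit(X_tr)
clf = NearestCentroid().fit(pca.transform(X_tr), y_tr)
true_acc = clf.score(pca.transform(X_te), y_te)

# Permutation scrambling: 100 refits under randomly permuted class labels.
null_acc = [NearestCentroid()
            .fit(pca.transform(X_tr), rng.permutation(y_tr))
            .score(pca.transform(X_te), y_te) for _ in range(100)]
print(f"accuracy {true_acc:.2f} vs permuted mean {np.mean(null_acc):.2f}")
```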

  1. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicles (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  2. Predictive validity of the structured assessment of violence risk in youth: A 4-year follow-up.

    PubMed

    Gammelgård, Monica; Koivisto, Anna-Maija; Eronen, Markku; Kaltiala-Heino, Riittakerttu

    2015-07-01

    Structured violence risk assessment is an essential part of treatment planning for violent young people. The Structured Assessment of Violence Risk in Youth (SAVRY) has been shown to have good reliability and validity in a range of settings but has hardly been studied in adolescent mental health services. This study aimed to evaluate the long-term predictive validity of the SAVRY in adolescent psychiatry settings. In a prospective study, 200 SAVRY assessments of adolescents were acquired from psychiatric, forensic and correctional settings. Re-offending records from the Finnish National Crime Register were collected. Receiver operating characteristic (ROC) statistics were applied. High SAVRY total and individual subscale scores and low values on the protective factor subscale were significantly associated with subsequent adverse outcomes, but the predictive value of the total score was weak. At the risk item level, those indicating antisocial lifestyle, absence of social support and pro-social involvement were strong indicators of subsequent criminal convictions, with or without violence. The SAVRY summary risk rating was the best indicator of likelihood of being convicted of a violent crime. After allowing for sex, age, psychiatric diagnosis and treatment setting, conviction for a violent crime, for example, was over nine times more likely among those young people given high SAVRY summary risk ratings. The SAVRY is a valid and useful method for assessing both short-term and long-term risks of violent and non-violent crime by young people in psychiatric as well as criminal justice settings, adding to a traditional risk-centred assessment approach by also indicating where future preventive treatment efforts should be targeted. The next steps should be to evaluate its role in everyday clinical practice when using the knowledge generated to inform and monitor management and treatment strategies. Copyright © 2014 John Wiley & Sons, Ltd.

  3. PSI-Center Validation Studies

    NASA Astrophysics Data System (ADS)

    Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Sutherland, D. A.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.

    2014-10-01

    The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with 3D extended MHD simulations using the NIMROD, HiFi, and PSI-TET codes. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), HBT-EP (Columbia), HIT-SI (U Wash-UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). The PSI-Center is exploring application of validation metrics between experimental data and simulation results. Biorthogonal decomposition (BOD) is used to compare experiments with simulations. BOD separates data sets into spatial and temporal structures, giving greater weight to dominant structures. Several BOD metrics are being formulated with the goal of quantitative validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status, will be presented.
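
    BOD of a space-time data matrix is, in essence, a singular value decomposition; the sketch below computes dominant spatial modes and one plausible weighted-mode similarity metric, not necessarily the metrics the PSI-Center adopts.

```python
import numpy as np

def bod(data):
    """Biorthogonal decomposition of a (space, time) matrix via SVD."""
    topos, weights, chronos = np.linalg.svd(data, full_matrices=False)
    return topos, weights, chronos  # spatial modes, energies, temporal modes

def bod_similarity(a, b, n_modes=3):
    """Energy-weighted |cosine| between corresponding dominant spatial modes."""
    ta, wa, _ = bod(a)
    tb, _, _ = bod(b)
    cosines = [abs(ta[:, k] @ tb[:, k]) for k in range(n_modes)]
    return np.average(cosines, weights=wa[:n_modes])

rng = np.random.default_rng(2)
experiment = rng.normal(size=(64, 500))  # e.g. 64 probe signals over 500 steps
simulation = experiment + 0.1 * rng.normal(size=(64, 500))
print(f"BOD similarity: {bod_similarity(experiment, simulation):.3f}")
```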

  4. Validating the performance of one-time decomposition for fMRI analysis using ICA with automatic target generation process.

    PubMed

    Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei

    2013-07-01

    Independent component analysis (ICA) has proven effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly, so the randomness of the initialization leads to different decomposition results. A single one-time decomposition for fMRI data analysis is therefore not usually reliable. To address this, several methods based on repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although RDICA has achieved satisfying results in validating the performance of ICA decomposition, it costs substantial computing time. To mitigate this problem, in this paper we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with the automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid and fMRI data to show the effectiveness of the new method, comparing the performance of traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminated the randomness of ICA decomposition but also saved substantial computing time compared to RDICA. Furthermore, ROC (Receiver Operating Characteristic) power analysis indicated better signal reconstruction performance for ATGP-ICA than for RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.
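
    A sketch of ATGP itself, which supplies the deterministic initial values: it repeatedly picks the sample with the largest residual norm and projects all residuals onto the orthogonal complement of the selections. The resulting targets can then seed the unmixing matrix in place of random initialization; the data here are synthetic.

```python
import numpy as np

def atgp(X, n_targets):
    """Automatic target generation process on an (n_samples, n_features) array.

    Deterministically selects extreme samples: pick the largest-norm residual,
    then remove that direction from every residual (Gram-Schmidt style).
    """
    R = np.array(X, dtype=float)
    selected = []
    for _ in range(n_targets):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))
        selected.append(i)
        t = R[i] / np.linalg.norm(R[i])   # orthonormal direction of the pick
        R -= np.outer(R @ t, t)           # project residuals out of t
    return selected

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 20))
print("deterministic target indices:", atgp(X, n_targets=4))
```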

  5. Development and validation of a predictive score for perioperative transfusion in patients with hepatocellular carcinoma undergoing liver resection.

    PubMed

    Wang, Hai-Qing; Yang, Jian; Yang, Jia-Yin; Wang, Wen-Tao; Yan, Lu-Nan

    2015-08-01

    Liver resection is a major surgery requiring perioperative blood transfusion. Predicting the need for blood transfusion for patients undergoing liver resection is of great importance. The present study aimed to develop and validate a model for predicting transfusion requirement in HBV-related hepatocellular carcinoma patients undergoing liver resection. A total of 1543 consecutive liver resections were included in the study. A randomly selected sample of 1080 cases (70% of the study cohort) was used to develop a predictive score for transfusion requirement, and the remaining 30% (n=463) was used to validate the score. Based on the preoperative and predictable intraoperative parameters, logistic regression was used to identify risk factors and to create an integer score for the prediction of transfusion requirement. Extrahepatic procedure, major liver resection, hemoglobin level and platelet count were identified as independent predictors of transfusion requirement by logistic regression analysis. A score system integrating these 4 factors stratified patients into three groups, with transfusion rates of 11.4%, 24.7% and 57.4% for low, moderate and high risk, respectively. The prediction model appeared accurate with good discriminatory abilities, generating an area under the receiver operating characteristic curve of 0.736 in the development set and 0.709 in the validation set. We have developed and validated an integer-based risk score to predict perioperative transfusion for patients undergoing liver resection in a high-volume surgical center. This score allows identification of patients at high risk and may alter transfusion practices.
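
    An illustration of how such an integer score is applied; the point values and cut-offs below are hypothetical, not the published score.

```python
# Hypothetical point values and cut-offs, not the published score.
POINTS = {"extrahepatic_procedure": 2, "major_resection": 2,
          "low_hemoglobin": 3, "low_platelets": 1}
CUTS = (2, 4)  # score <= 2 low, 3-4 moderate, > 4 high (illustrative)

def transfusion_risk(patient):
    score = sum(pts for factor, pts in POINTS.items() if patient[factor])
    group = ("low" if score <= CUTS[0]
             else "moderate" if score <= CUTS[1] else "high")
    return score, group

patient = {"extrahepatic_procedure": True, "major_resection": False,
           "low_hemoglobin": True, "low_platelets": False}
print(transfusion_risk(patient))  # -> (5, 'high')
```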

  6. Neural Network-Based Sensor Validation for Turboshaft Engines

    NASA Technical Reports Server (NTRS)

    Moller, James C.; Litt, Jonathan S.; Guo, Ten-Huei

    1998-01-01

    Sensor failure detection, isolation, and accommodation using a neural network approach is described. An auto-associative neural network is configured to perform dimensionality reduction on the sensor measurement vector and provide estimated sensor values. The sensor validation scheme is applied in a simulation of the T700 turboshaft engine in closed loop operation. Performance is evaluated based on the ability to detect faults correctly and maintain stable and responsive engine operation. The set of sensor outputs used for engine control forms the network input vector. Analytical redundancy is verified by training networks of successively smaller bottleneck layer sizes. Training data generation and strategy are discussed. The engine maintained stable behavior in the presence of sensor hard failures. With proper selection of fault determination thresholds, stability was maintained in the presence of sensor soft failures.
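
    A minimal sketch of the auto-associative scheme, assuming a bottleneck multilayer perceptron trained to reconstruct the sensor vector; a sensor is flagged when its reconstruction residual exceeds a threshold. The sensor counts, layer sizes and threshold rule are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
# Hypothetical training data: 8 correlated sensors driven by 2 latent factors.
latent = rng.normal(size=(2000, 2))
X = latent @ rng.normal(size=(2, 8)) + 0.01 * rng.normal(size=(2000, 8))

# Auto-associative network: the 2-unit bottleneck forces dimensionality
# reduction, so the outputs estimate what each sensor "should" read.
net = MLPRegressor(hidden_layer_sizes=(6, 2, 6), max_iter=3000,
                   random_state=0).fit(X, X)
threshold = 3 * np.abs(X - net.predict(X)).std()

sample = X[0].copy()
sample[3] += 5.0  # inject a hard failure on sensor 3
residual = np.abs(sample - net.predict(sample[None, :])[0])
print("flagged sensors:", np.where(residual > threshold)[0])
```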

  7. Generation of a complete set of human telomeric band painting probes by chromosome microdissection.

    PubMed

    Hu, Liang; Sham, Jonathan S T; Tjia, Wai Mui; Tan, Yue-qiu; Lu, Guang-xiu; Guan, Xin-Yuan

    2004-02-01

    Chromosomal rearrangements involving telomeric bands have been frequently detected in many malignancies and congenital diseases. To develop a useful tool to study chromosomal rearrangements within the telomeric band effectively and accurately, a whole set of telomeric band painting probes (TBP) has been generated by chromosome microdissection. The intensity and specificity of these TBPs have been tested by fluorescence in situ hybridization and all TBPs showed strong and specific signals to target regions. TBPs of 6q and 17p were successfully used to detect the loss of the terminal band of 6q in a hepatocellular carcinoma cell line and a complex translocation involving the 17p terminal band in a melanoma cell line. Meanwhile, the TBP of 21q was used to detect a de novo translocation, t(12;21), and the breakpoint at 21q was located at 21q22.2. Further application of these TBPs should greatly facilitate the cytogenetic analysis of complex chromosome rearrangements involving telomeric bands.

  8. Perceived functional ability assessed with the spinal function sort: is it valid for European rehabilitation settings in patients with non-specific non-acute low back pain?

    PubMed Central

    Hilfiker, R.; Kool, J. P.; Bachmann, S.; Hagen, K. B.

    2010-01-01

    The aim of this study involving 170 patients suffering from non-specific low back pain was to test the validity of the spinal function sort (SFS) in a European rehabilitation setting. The SFS, a picture-based questionnaire, assesses perceived functional ability for work tasks involving the spine. All measurements were taken by a blinded research assistant; work status was assessed with questionnaires. Our study demonstrated high internal consistency (Cronbach's alpha of 0.98), reasonable evidence for unidimensionality, Spearman correlations of >0.6 with work activities, and discriminating power for work status at 3 and 12 months by ROC curve analysis (area under the curve = 0.760 (95% CI 0.689-0.822) and 0.801 (95% CI 0.731-0.859), respectively). The standardised response mean within the two treatment groups was 0.18 and -0.31. As a result, we conclude that perceived functional ability for work tasks can be validly assessed with the SFS in a European rehabilitation setting in patients with non-specific low back pain, and is predictive of future work status. PMID:20490874

  9. Validation of the Hungarian version of Carlson's Work-Family Conflict Scale.

    PubMed

    Ádám, Szilvia; Konkoly Thege, Barna

    2017-11-30

    Work-family conflict has been associated with adverse individual (e.g., cardiovascular diseases, anxiety disorders), organizational (e.g., absenteeism, lower productivity), and societal outcomes (e.g., increased use of healthcare services). However, lack of standardized measurement has hindered the comparison of data across various cultures. The purpose of this study was to develop the Hungarian version of Carlson et al.'s multidimensional Work-Family Conflict Scale and establish its reliability and validity. In a sample of 557 employees (145 men and 412 women), we conducted confirmatory factor analysis to investigate the factor structure and factorial invariance of the instrument across sex and data collection points and evaluated the tool's validity by assessing relationships between its dimensions and scales measuring general, marital, and job-related stress, depressive symptomatology, vital exhaustion, functional somatic symptoms, and social support. Our results showed that a six-factor model, similarly to that of the original instrument, fit the data best. Internal consistency of the six dimensions and the whole instrument was adequate. Convergent and divergent validity of the instrument and discriminant validity of the dimensions were also supported by our data. This study provides empirical support for the validity and reliability of the Hungarian version of the multidimensional Work-Family Conflict Scale. Deployment of this measure may allow for the generation of data that can be compared to those obtained in different cultural settings with the same instrument and hence advance our understanding of cross-cultural aspects of work-family conflict.

  10. Validation of a hydride generation atomic absorption spectrometry methodology for determination of mercury in fish designed for application in the Brazilian national residue control plan.

    PubMed

    Damin, Isabel C F; Santo, Maria A E; Hennigen, Rosmari; Vargas, Denise M

    2013-01-01

    In the present study, a method for the determination of mercury (Hg) in fish was validated according to ISO/IEC 17025, INMETRO (Brazil), and more recent European recommendations (Commission Decision 2007/333/EC and 2002/657/EC) for implementation in the Brazilian National Residue Control Plan (NRCP) in routine applications. The parameters evaluated in the validation were investigated in detail. The limits of detection and quantification were 2.36 and 7.88 μg kg(-1) of Hg, respectively, recovery varied between 90% and 96%, and the coefficient of variation for repeatability was 4.06-8.94%. Furthermore, a comparison using an external proficiency testing scheme was carried out. The results of the validated method for the determination of mercury in fish by hydride generation atomic absorption spectrometry were considered suitable for implementation in routine analysis.

  11. Epidemiological cut-off values for Flavobacterium psychrophilum MIC data generated by a standard test protocol.

    PubMed

    Smith, P; Endris, R; Kronvall, G; Thomas, V; Verner-Jeffreys, D; Wilhelm, C; Dalsgaard, I

    2016-02-01

    Epidemiological cut-off values were developed for application to antibiotic susceptibility data for Flavobacterium psychrophilum generated by standard CLSI test protocols. The MIC values for ten antibiotic agents against Flavobacterium psychrophilum were determined in two laboratories. For five antibiotics, the data sets were of sufficient quality and quantity to allow the setting of valid epidemiological cut-off values. For these agents, the cut-off values, calculated by the application of the statistically based normalized resistance interpretation method, were ≤16 mg L(-1) for erythromycin, ≤2 mg L(-1) for florfenicol, ≤0.025 mg L(-1) for oxolinic acid (OXO), ≤0.125 mg L(-1) for oxytetracycline and ≤20 (1/19) mg L(-1) for trimethoprim/sulphamethoxazole. For ampicillin and amoxicillin, the majority of putative wild-type observations were 'off scale', and therefore, statistically valid cut-off values could not be calculated. For ormetoprim/sulphadimethoxine, the data were excessively diverse and a valid cut-off could not be determined. For flumequine, the putative wild-type data were extremely skewed, and for enrofloxacin, there was inadequate separation in the MIC values for putative wild-type and non-wild-type strains. It is argued that the adoption of OXO as a class representative for the quinolone group would be a valid method of determining susceptibilities to these agents. © 2014 John Wiley & Sons Ltd.

  12. Down-weighting overlapping genes improves gene set analysis

    PubMed Central

    2012-01-01

    Background The identification of gene sets that are significantly impacted in a given condition based on microarray data is a crucial step in current life science research. Most gene set analysis methods treat genes equally, regardless how specific they are to a given gene set. Results In this work we propose a new gene set analysis method that computes a gene set score as the mean of absolute values of weighted moderated gene t-scores. The gene weights are designed to emphasize the genes appearing in few gene sets, versus genes that appear in many gene sets. We demonstrate the usefulness of the method when analyzing gene sets that correspond to the KEGG pathways, and hence we called our method Pathway Analysis with Down-weighting of Overlapping Genes (PADOG). Unlike most gene set analysis methods which are validated through the analysis of 2-3 data sets followed by a human interpretation of the results, the validation employed here uses 24 different data sets and a completely objective assessment scheme that makes minimal assumptions and eliminates the need for possibly biased human assessments of the analysis results. Conclusions PADOG significantly improves gene set ranking and boosts sensitivity of analysis using information already available in the gene expression profiles and the collection of gene sets to be analyzed. The advantages of PADOG over other existing approaches are shown to be stable to changes in the database of gene sets to be analyzed. PADOG was implemented as an R package available at: http://bioinformaticsprb.med.wayne.edu/PADOG/or http://www.bioconductor.org. PMID:22713124
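
    The core of the method is the gene weight; the sketch below uses a simple inverse-frequency weight as a stand-in for PADOG's exact weighting scheme, with hypothetical t-scores and membership counts.

```python
import numpy as np

def gene_set_score(t_scores, genes_in_set, membership_counts):
    """Mean absolute weighted t-score for one gene set.

    A simple inverse-frequency weight stands in for PADOG's exact scheme:
    genes appearing in few sets receive larger weights.
    """
    w = np.array([1.0 / membership_counts[g] for g in genes_in_set])
    t = np.array([t_scores[g] for g in genes_in_set])
    return float(np.mean(np.abs(t) * w))

t_scores = {"TP53": 4.2, "GAPDH": 1.1, "BRCA1": 2.8}          # hypothetical
membership_counts = {"TP53": 120, "GAPDH": 400, "BRCA1": 15}  # sets per gene
print(f"score = {gene_set_score(t_scores, ['TP53', 'BRCA1'], membership_counts):.3f}")
```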

  13. Gene set analysis of purine and pyrimidine antimetabolites cancer therapies.

    PubMed

    Fridley, Brooke L; Batzler, Anthony; Li, Liang; Li, Fang; Matimba, Alice; Jenkins, Gregory D; Ji, Yuan; Wang, Liewei; Weinshilboum, Richard M

    2011-11-01

    Responses to therapies, either with regard to toxicities or efficacy, are expected to involve complex relationships of gene products within the same molecular pathway or functional gene set. Therefore, pathways or gene sets, as opposed to single genes, may better reflect the true underlying biology and may be more appropriate units for analysis of pharmacogenomic studies. Application of such methods to pharmacogenomic studies may enable the detection of more subtle effects of multiple genes in the same pathway that may be missed by assessing each gene individually. A gene set analysis of 3821 gene sets is presented assessing the association between basal messenger RNA expression and drug cytotoxicity using ethnically defined human lymphoblastoid cell lines for two classes of drugs: pyrimidines [gemcitabine (dFdC) and arabinoside] and purines [6-thioguanine and 6-mercaptopurine]. The gene set nucleoside-diphosphatase activity was found to be significantly associated with both dFdC and arabinoside, whereas gene set γ-aminobutyric acid catabolic process was associated with dFdC and 6-thioguanine. These gene sets were significantly associated with the phenotype even after adjusting for multiple testing. In addition, five associated gene sets were found in common between the pyrimidines and two gene sets for the purines (3',5'-cyclic-AMP phosphodiesterase activity and γ-aminobutyric acid catabolic process) with a P value of less than 0.0001. Functional validation was attempted with four genes each in gene sets for thiopurine and pyrimidine antimetabolites. All four genes selected from the pyrimidine gene sets (PSME3, CANT1, ENTPD6, ADRM1) were validated, but only one (PDE4D) was validated for the thiopurine gene sets. In summary, results from the gene set analysis of pyrimidine and purine therapies, used often in the treatment of various cancers, provide novel insight into the relationship between genomic variation and drug response.

  14. Validity and responsiveness of the pediatric quality of life inventory (PedsQL) 4.0 generic core scales in the pediatric inpatient setting.

    PubMed

    Desai, Arti D; Zhou, Chuan; Stanford, Susan; Haaland, Wren; Varni, James W; Mangione-Smith, Rita M

    2014-12-01

    Validated patient-reported outcomes responsive to clinical change are needed to evaluate the effectiveness of quality improvement interventions. To evaluate responsiveness, construct validity, and predictive validity of the Pediatric Quality of Life Inventory (PedsQL) 4.0 Generic Core Scales in the pediatric inpatient setting. Prospective, cohort study of parents and caregivers of patients 1 month to 18 years old (n = 4637) and patients 13 to 18 years old (n = 359) admitted to Seattle Children's Hospital between October 1, 2011, and December 31, 2013. Of 7184 eligible participants invited to complete the survey, 4637 (64.5%) completed the PedsQL on admission, and of these 2694 (58.1%) completed the follow-up survey 2 to 8 weeks after discharge. Responsiveness was assessed by calculating improvement scores (difference between follow-up and admission scores). Construct validity was examined by comparing the mean improvement scores for known groups differing by medical complexity. Predictive validity was assessed using Poisson regression to examine associations among admission scores, prolonged length of stay (≥3 days), and 30-day readmissions or emergency department (ED) return visits. Similar models examined the association between improvement scores and risk for 30-day readmissions or ED return visits. The mean (SD) PedsQL improvement scores (scale, 0-100) were 22.1 (22.7) for total, 29.4 (32.4) for physical, and 17.1 (21.0) for psychosocial. The mean PedsQL total improvement scores were lower for patients with medically complex conditions compared with patients without chronic conditions (13.7 [95% CI, 11.6-15.8] vs. 24.1 [95% CI, 22.4-25.7], P < .001). A 10-point decrement in the PedsQL total admission score below the established community-based mean was associated with an increase in risk for prolonged length of stay (15% [95% CI, 13%-17%]), 30-day readmissions (8% [95% CI, 3%-14%]), and ED return visits (13% [95% CI, 6%-20%]). A 5-point decrement

  15. PC Scene Generation

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.

    2007-04-01

    AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a Scene Generation system capable of frame rates of at least 120Hz while frame locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the Scene Generation system was developed using OpenSceneGraph.

  16. Validation of Community Models: 2. Development of a Baseline, Using the Wang-Sheeley-Arge Model

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    This paper is the second in a series providing independent validation of community models of the outer corona and inner heliosphere. Here I present a comprehensive validation of the Wang-Sheeley-Arge (WSA) model. These results will serve as a baseline against which to compare the next generation of comparable forecasting models. The WSA model is used by a number of agencies to predict solar wind conditions at Earth up to 4 days into the future. Given its importance to both the research and forecasting communities, it is essential that its performance be measured systematically and independently. I offer just such an independent and systematic validation. I report skill scores for the model's predictions of wind speed and interplanetary magnetic field (IMF) polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests synoptic line of sight magnetograms. For this study I generated model results for monthly magnetograms from multiple observatories, spanning the Carrington rotation range from 1650 to 2074. I compare the influence of the different magnetogram sources and performance at quiet and active times. I also consider the ability of the WSA model to forecast both sharp transitions in wind speed from slow to fast wind and reversals in the polarity of the radial component of the IMF. These results will serve as a baseline against which to compare future versions of the model as well as the current and future generation of magnetohydrodynamic models under development for forecasting use.
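
    Skill scores of the kind reported here are typically referenced to a baseline forecast such as persistence; below is a sketch under that assumption (the paper's exact metric may differ).

```python
import numpy as np

def skill_score(forecast, observed, reference):
    """MSE-based skill: 1 = perfect, 0 = no better than the reference."""
    mse_forecast = np.mean((forecast - observed) ** 2)
    mse_reference = np.mean((reference - observed) ** 2)
    return 1.0 - mse_forecast / mse_reference

rng = np.random.default_rng(5)
observed = 400.0 + 150.0 * rng.random(100)        # daily wind speeds, km/s
persistence = np.roll(observed, 1)                # yesterday's value as forecast
model = observed + 50.0 * rng.normal(size=100)    # hypothetical model output
print(f"skill vs persistence: {skill_score(model, observed, persistence):.2f}")
```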

  17. Wind turbine/generator set and method of making same

    DOEpatents

    Bevington, Christopher M.; Bywaters, Garrett L.; Coleman, Clint C.; Costin, Daniel P.; Danforth, William L.; Lynch, Jonathan A.; Rolland, Robert H.

    2013-06-04

    A wind turbine comprising an electrical generator that includes a rotor assembly. A wind rotor that includes a wind rotor hub is directly coupled to the rotor assembly via a simplified connection. The wind rotor and generator rotor assembly are rotatably mounted on a central spindle via a bearing assembly. The wind rotor hub includes an opening having a diameter larger than the outside diameter of the central spindle adjacent the bearing assembly so as to allow access to the bearing assembly from a cavity inside the wind rotor hub. The spindle is attached to a turret supported by a tower. Each of the spindle, turret and tower has an interior cavity that permits personnel to traverse therethrough to the cavity of the wind rotor hub. The wind turbine further includes a frictional braking system for slowing, stopping or keeping stopped the rotation of the wind rotor and rotor assembly.

  18. The Parent Trauma Response Questionnaire (PTRQ): development and preliminary validation

    PubMed Central

    Creswell, Cathy; Dalgleish, Tim; Fearon, Pasco; Goodall, Ben; McKinnon, Anna; Smith, Patrick; Wright, Isobel

    2018-01-01

    ABSTRACT Background: Following a child’s experience of trauma, parental response is thought to play an important role in either facilitating or hindering their psychological adjustment. However, the ability to investigate the role of parenting responses in the post-trauma period has been hampered by a lack of valid and reliable measures. Objectives: The aim of this study was to design, and provide a preliminary validation of, the Parent Trauma Response Questionnaire (PTRQ), a self-report measure of parental appraisals and support for children’s coping, in the aftermath of child trauma. Methods: We administered an initial set of 78 items to 365 parents whose children, aged 2–19 years, had experienced a traumatic event. We conducted principal axis factoring and then assessed the validity of the reduced measure against a standardized general measure of parental overprotection and via the measure’s association with child post-trauma mental health. Results: Factor analysis generated three factors assessing parental maladaptive appraisals: (i) permanent change/damage, (ii) preoccupation with child’s vulnerability, and (iii) self-blame. In addition, five factors were identified that assess parental support for child coping: (i) behavioural avoidance, (ii) cognitive avoidance, (iii) overprotection, (iv) maintaining pre-trauma routines, and (v) approach coping. Good validity was evidenced against the measure of parental overprotection and child post-traumatic stress symptoms. Good test–retest reliability of the measure was also demonstrated. Conclusions: The PTRQ is a valid and reliable self-report assessment of parenting cognitions and coping in the aftermath of child trauma. PMID:29938010

  19. Study of Inter Annual and Intra Seasonal cycle of Rainfall using NOAA/INSAT OLR and validation of daily 3B42RT precipitation data sets across India and neighboring seas.

    NASA Astrophysics Data System (ADS)

    U. Bhanu Kumar, O. S. R.; Ramalingeswara Rao, S.

    In view of the thermally driven nature of the tropical general circulation, deep convection is a key parameter for highlighting the energy source that drives tropical atmospheric motion. Regardless of their flaws in estimating deep convection, OLR data can nevertheless offer reasonably good estimates of deep convection and rainfall in most tropical regions. In the present study, INSAT OLR datasets for 7 years (1993-1999) are used to examine the migration of heat sources and sinks over India and neighboring seas. The locus of heating is associated with the Indian monsoon system. Since the motions are driven by gradients of heating and not the absolute magnitude of the sources and sinks themselves, the heat sinks are integral parts of Indian monsoon systems. Thus, the study of the mean quantitative annual cycle of rainfall in terms of OLR is useful for the farming community and power generation industries over India. Secondly, anomaly pentad OLR data sets (1° x 1°) are used to examine onset, withdrawal and break monsoon situations of the summer monsoon season over India. Next, having identified active and inactive phases of intra-seasonal oscillations during boreal summer and boreal winter using NOAA OLR for 25 years (1974-1999), their impact on monsoon systems and tropical cyclones over the Bay of Bengal is also investigated. Finally, the available 3B42RT data sets, a real-time multi-satellite precipitation product (0.01 mm/hr), are validated with rain gauge data across India and island stations.

  20. Design and Implementation Content Validity Study: Development of an instrument for measuring Patient-Centered Communication

    PubMed Central

    Zamanzadeh, Vahid; Ghahramanian, Akram; Rassouli, Maryam; Abbaszadeh, Abbas; Alavi-Majd, Hamid; Nikanfar, Ali-Reza

    2015-01-01

    Introduction: The importance of content validity in instrument psychometrics and its relevance to reliability have made it an essential step in instrument development. This article attempts to give an overview of the content validity process and to explain the complexity of this process by introducing an example. Methods: We carried out a methodological study to examine the content validity of the patient-centered communication instrument through a two-step process (development and judgment). At the first step, domain determination, sampling (item generation) and instrument formation were performed; at the second step, the content validity ratio, content validity index and modified kappa statistic were computed. Suggestions of the expert panel and item impact scores were used to examine the instrument's face validity. Results: From a set of 188 items, the content validity process identified seven dimensions: trust building (eight items), informational support (seven items), emotional support (five items), problem solving (seven items), patient activation (10 items), intimacy/friendship (six items) and spirituality strengthening (14 items). The content validity study revealed that this instrument enjoys an appropriate level of content validity. The overall content validity index of the instrument using the universal agreement approach was low; however, the instrument can be advocated with respect to the high number of content experts, which makes consensus difficult, and the high value of the S-CVI with the average approach, which was equal to 0.93. Conclusion: This article illustrates acceptable quantitative indices for the content validity of a new instrument and outlines their application during the design and psychometric evaluation of a patient-centered communication measuring instrument. PMID:26161370
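
    The indices named above follow standard formulas (Lawshe's content validity ratio; item- and scale-level content validity indices); a sketch with a hypothetical expert panel:

```python
def cvr(n_essential, n_experts):
    """Lawshe's content validity ratio."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def i_cvi(ratings):
    """Item-level CVI: share of experts rating relevance 3 or 4 (of 4)."""
    return sum(r >= 3 for r in ratings) / len(ratings)

# Hypothetical panel of 10 experts judging two candidate items.
items = {"trust_1": [4, 4, 3, 4, 3, 4, 4, 3, 4, 2],
         "trust_2": [2, 3, 2, 4, 3, 2, 3, 2, 3, 2]}
item_cvis = {name: i_cvi(r) for name, r in items.items()}
s_cvi_ave = sum(item_cvis.values()) / len(item_cvis)  # average approach
print(item_cvis, f"S-CVI/Ave = {s_cvi_ave:.2f}")
print("CVR when 8 of 10 experts rate an item essential:", cvr(8, 10))
```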

  1. Highly Reconfigurable Beamformer Stimulus Generator

    NASA Astrophysics Data System (ADS)

    Vaviļina, E.; Gaigals, G.

    2018-02-01

    The present paper proposes a highly reconfigurable beamformer stimulus generator for a radar antenna array, which includes three main blocks: settings of the antenna array, settings of objects (signal sources) and a beamforming simulator. Depending on the configuration of the antenna array and object settings, different stimuli can be generated as the input signal for a beamformer. The stimulus generator is developed under a broader concept with two entirely independent paths, where one is the stimulus generator and the other is the hardware beamformer. The two paths can be cross-checked at intermediate as well as final steps to verify and improve system performance. In this way, the technology development process is supported by making each of the future hardware steps more substantive. Stimulus generator configuration capabilities and test results are presented, demonstrating the application of the stimulus generator to the development and tuning of an FPGA-based beamforming unit as an alternative to an actual antenna system.
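
    A sketch of the kind of stimulus such a generator produces, assuming a uniform linear array and narrowband sources; the geometry, source list and SNR are hypothetical stand-ins for the antenna-array and object settings.

```python
import numpy as np

def array_stimulus(n_elements, d_over_lambda, sources, n_snapshots, snr_db=20):
    """Synthetic snapshots for a uniform linear array (hypothetical geometry).

    sources: list of (angle_deg, amplitude) pairs acting as signal sources.
    """
    rng = np.random.default_rng(6)
    k = np.arange(n_elements)
    X = np.zeros((n_elements, n_snapshots), dtype=complex)
    for angle_deg, amp in sources:
        phase = -2j * np.pi * d_over_lambda * k * np.sin(np.radians(angle_deg))
        X += np.outer(np.exp(phase), amp * rng.normal(size=n_snapshots))
    noise = rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape)
    return X + 10 ** (-snr_db / 20) * noise

stimulus = array_stimulus(8, 0.5, sources=[(-20, 1.0), (35, 0.5)],
                          n_snapshots=256)
print(stimulus.shape)  # (8, 256): per-element input for the beamformer
```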

  2. STOPP/START Medication Criteria Modified for US Nursing Home Setting

    PubMed Central

    Khodyakov, Dmitry; Ochoa, Aileen; Olivieri-Mui, Brianne L.; Bouwmeester, Carla; Zarowitz, Barbara J.; Patel, Meenakshi; Ching, Diana; Briesacher, Becky

    2016-01-01

    BACKGROUND/OBJECTIVES A barrier to assessing the quality of prescribing in nursing homes (NH) is the lack of explicit criteria for this setting. Our objective was to develop a set of prescribing indicators measurable with available data from electronic nursing home databases by adapting the European-based 2014 STOPP/START criteria of potentially inappropriate and underused medications for the US setting. DESIGN A two-stage expert panel process. In the first stage, the investigator team reviewed 114 criteria for compatibility and measurability. In the second stage, we convened an online modified e-Delphi (OMD) panel to rate the validity of the criteria and two webinars to identify the criteria with the highest relevance to US NHs. PARTICIPANTS Seventeen experts with recognized reputations in NH care participated in the e-Delphi panel and 12 in the webinars. MEASUREMENTS Compatibility and measurability were assessed by comparing criteria to US terminology/setting standards and data elements in NH databases. Validity was rated with a 9-point Likert-type scale (1=not valid at all, 9=highly valid). Mean, median, interpercentile ranges, and agreement were determined for each criterion score. Relevance was determined by ranking the mean panel ratings on criteria that reached agreement; half of the criteria with the highest mean values were reviewed and approved by the webinar participants. RESULTS Fifty-three STOPP/START criteria were deemed compatible with the US setting and measurable using data from electronic NH databases. E-Delphi panelists rated 48 criteria as valid for US NHs. Twenty-four criteria were deemed most relevant, consisting of 22 measures of potentially inappropriate medications and 2 measures of underused medications. CONCLUSION This study created the first explicit criteria for assessing the quality of prescribing in US NHs. PMID:28008599

  3. DESCQA: Synthetic Sky Catalog Validation Framework

    NASA Astrophysics Data System (ADS)

    Mao, Yao-Yuan; Uram, Thomas D.; Zhou, Rongpu; Kovacs, Eve; Ricker, Paul M.; Kalmbach, J. Bryce; Padilla, Nelson; Lanusse, François; Zu, Ying; Tenneti, Ananth; Vikraman, Vinu; DeRose, Joseph

    2018-04-01

    The DESCQA framework provides rigorous validation protocols for assessing the quality of simulated sky catalogs in a straightforward and comprehensive way. DESCQA enables the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. An interactive web interface is also available at portal.nersc.gov/project/lsst/descqa.

  4. Validation of SAM 2 and SAGE satellite

    NASA Technical Reports Server (NTRS)

    Kent, G. S.; Wang, P.-H.; Farrukh, U. O.; Yue, G. K.

    1987-01-01

    Presented are the results of a validation study of data obtained by the Stratospheric Aerosol and Gas Experiment I (SAGE I) and Stratospheric Aerosol Measurement II (SAM II) satellite experiments. The study includes the entire SAGE I data set (February 1979 - November 1981) and the first four and one-half years of SAM II data (October 1978 - February 1983). These data sets have been validated by their use in the analysis of dynamical, physical and chemical processes in the stratosphere. They have been compared with other existing data sets and the SAGE I and SAM II data sets intercompared where possible. The study has shown the data to be of great value in the study of the climatological behavior of stratospheric aerosols and ozone. Several scientific publications and user-oriented data summaries have appeared as a result of the work carried out under this contract.

  5. Validation of the Hospital Ethical Climate Survey for older people care.

    PubMed

    Suhonen, Riitta; Stolt, Minna; Katajisto, Jouko; Charalambous, Andreas; Olson, Linda L

    2015-08-01

    The exploration of the ethical climate in the care settings for older people is highlighted in the literature, and it has been associated with various aspects of clinical practice and nurses' jobs. However, ethical climate is seldom studied in the older people care context. Valid, reliable, feasible measures are needed for the measurement of ethical climate. This study aimed to test the reliability, validity, and sensitivity of the Hospital Ethical Climate Survey in healthcare settings for older people. A non-experimental cross-sectional study design was employed, and a survey using questionnaires, including the Hospital Ethical Climate Survey was used for data collection. Data were analyzed using descriptive statistics, inferential statistics, and multivariable methods. Survey data were collected from a sample of nurses working in the care settings for older people in Finland (N = 1513, n = 874, response rate = 58%) in 2011. This study was conducted according to good scientific inquiry guidelines, and ethical approval was obtained from the university ethics committee. The mean score for the Hospital Ethical Climate Survey total was 3.85 (standard deviation = 0.56). Cronbach's alpha was 0.92. Principal component analysis provided evidence for factorial validity. LISREL provided evidence for construct validity based on goodness-of-fit statistics. Pearson's correlations of 0.68-0.90 were found between the sub-scales and the Hospital Ethical Climate Survey. The Hospital Ethical Climate Survey was found able to reveal discrimination across care settings and proved to be a valid and reliable tool for measuring ethical climate in care settings for older people and sensitive enough to reveal variations across various clinical settings. The Finnish version of the Hospital Ethical Climate Survey, used mainly in the hospital settings previously, proved to be a valid instrument to be used in the care settings for older people. Further studies are due to analyze the factor

  6. Lung Reference Set A Application: LaszloTakacs - Biosystems (2010) — EDRN Public Portal

    Cancer.gov

    We would like to access the NCI lung cancer Combined Pre-Validation Reference Set A in order to further validate a lung cancer diagnostic test candidate. Our test is based on a panel of antibodies which have been tested on 4 different cohorts (see below, paragraph "Preliminary Data and Methods"). This Reference Set A, whose clinical setting is "Diagnosis of lung cancer", will be used to validate the panel of monoclonal antibodies which extensive data analysis has shown to provide the best discrimination between control and lung cancer patient plasma samples; sensitivity and specificity values from ROC analyses are above 85%.

  7. Semi-automated ontology generation within OBO-Edit.

    PubMed

    Wächter, Thomas; Schroeder, Michael

    2010-06-15

    Ontologies and taxonomies have proven highly beneficial for biocuration. The Open Biomedical Ontology (OBO) Foundry alone lists over 90 ontologies mainly built with OBO-Edit. Creating and maintaining such ontologies is a labour-intensive, difficult, manual process. Automating parts of it is of great importance for the further development of ontologies and for biocuration. We have developed the Dresden Ontology Generator for Directed Acyclic Graphs (DOG4DAG), a system which supports the creation and extension of OBO ontologies by semi-automatically generating terms, definitions and parent-child relations from text in PubMed, the web and PDF repositories. DOG4DAG is seamlessly integrated into OBO-Edit. It generates terms by identifying statistically significant noun phrases in text. For definitions and parent-child relations it employs pattern-based web searches. We systematically evaluate each generation step using manually validated benchmarks. The term generation leads to high-quality terms also found in manually created ontologies. Up to 78% of definitions are valid and up to 54% of child-ancestor relations can be retrieved. There is no other validated system that achieves comparable results. By combining the prediction of high-quality terms, definitions and parent-child relations with the ontology editor OBO-Edit we contribute a thoroughly validated tool for all OBO ontology engineers. DOG4DAG is available within OBO-Edit 2.1 at http://www.oboedit.org. Supplementary data are available at Bioinformatics online.
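
    As a rough illustration of the first generation step, the sketch below scores candidate phrases from a domain corpus against a background corpus using a count-weighted log-odds ratio. The tokenization and scoring here are hypothetical stand-ins for the idea of "statistically significant noun phrases", not DOG4DAG's published method.

```python
# Minimal sketch of frequency-based term-candidate scoring (hypothetical
# scoring scheme, not the DOG4DAG implementation).
import math
import re
from collections import Counter

def candidate_phrases(text):
    """Crude noun-phrase proxy: word n-grams of length 1-3."""
    tokens = re.findall(r"[A-Za-z][A-Za-z-]+", text.lower())
    for n in (1, 2, 3):
        for i in range(len(tokens) - n + 1):
            yield " ".join(tokens[i:i + n])

def score_terms(domain_text, background_text, min_count=2):
    dom = Counter(candidate_phrases(domain_text))
    bg = Counter(candidate_phrases(background_text))
    n_dom, n_bg = sum(dom.values()), sum(bg.values())
    scores = {}
    for phrase, c in dom.items():
        if c < min_count:
            continue
        p_dom = c / n_dom
        p_bg = (bg[phrase] + 1) / (n_bg + 1)  # add-one smoothing for unseen phrases
        scores[phrase] = c * math.log(p_dom / p_bg)  # count-weighted log-odds
    return sorted(scores.items(), key=lambda kv: -kv[1])
```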

  8. Validation tools for image segmentation

    NASA Astrophysics Data System (ADS)

    Padfield, Dirk; Ross, James

    2009-02-01

    A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics that compare the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiments framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit (ITK) and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied outperforms the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
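
    The similarity metrics referred to above are typically overlap measures between binary masks. A minimal sketch of two common ones (Dice and Jaccard), with illustrative array names:

```python
# Overlap metrics for comparing an automatic segmentation to a manual one.
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice = 2|A and B| / (|A| + |B|) for binary masks."""
    a, b = auto_mask.astype(bool), manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def jaccard_index(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Jaccard = |A and B| / |A or B|; equals Dice / (2 - Dice)."""
    a, b = auto_mask.astype(bool), manual_mask.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```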

  9. The Utrecht questionnaire (U-CEP) measuring knowledge on clinical epidemiology proved to be valid.

    PubMed

    Kortekaas, Marlous F; Bartelink, Marie-Louise E L; de Groot, Esther; Korving, Helen; de Wit, Niek J; Grobbee, Diederick E; Hoes, Arno W

    2017-02-01

    Knowledge on clinical epidemiology is crucial to practice evidence-based medicine. We describe the development and validation of the Utrecht questionnaire on knowledge on Clinical epidemiology for Evidence-based Practice (U-CEP); an assessment tool to be used in the training of clinicians. The U-CEP was developed in two formats: two sets of 25 questions and a combined set of 50. The validation was performed among postgraduate general practice (GP) trainees, hospital trainees, GP supervisors, and experts. Internal consistency, internal reliability (item-total correlation), item discrimination index, item difficulty, content validity, construct validity, responsiveness, test-retest reliability, and feasibility were assessed. The questionnaire was externally validated. Internal consistency was good with a Cronbach alpha of 0.8. The median item-total correlation and mean item discrimination index were satisfactory. Both sets were perceived as relevant to clinical practice. Construct validity was good. Both sets were responsive but failed on test-retest reliability. One set took 24 minutes and the other 33 minutes to complete, on average. External GP trainees had comparable results. The U-CEP is a valid questionnaire to assess knowledge on clinical epidemiology, which is a prerequisite for practicing evidence-based medicine in daily clinical practice. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Modern modeling techniques had limited external validity in predicting mortality from traumatic brain injury.

    PubMed

    van der Ploeg, Tjeerd; Nieboer, Daan; Steyerberg, Ewout W

    2016-10-01

    Prediction of medical outcomes may potentially benefit from using modern statistical modeling techniques. We aimed to externally validate modeling strategies for the prediction of 6-month mortality of patients suffering from traumatic brain injury (TBI), with predictor sets of increasing complexity. We analyzed individual patient data from 15 different studies including 11,026 TBI patients. We consecutively considered a core set of predictors (age, motor score, and pupillary reactivity), an extended set with computed tomography scan characteristics, and a further extension with two laboratory measurements (glucose and hemoglobin). With each of these sets, we predicted 6-month mortality using default settings with five statistical modeling techniques: logistic regression (LR), classification and regression trees, random forests (RF), support vector machines (SVM), and neural nets. For external validation, a model developed on one of the 15 data sets was applied to each of the 14 remaining sets. This process was repeated 15 times for a total of 630 validations. The area under the receiver operating characteristic curve (AUC) was used to assess the discriminative ability of the models. For the most complex predictor set, the LR models performed best (median validated AUC, 0.757), followed by the RF and SVM models (median validated AUCs of 0.735 and 0.732, respectively). With each predictor set, the classification and regression trees models showed poor performance (median validated AUC, <0.7). The variability in performance across the studies was smallest for the RF- and LR-based models (interquartile range for validated AUC values from 0.07 to 0.10). In the area of predicting mortality from TBI, nonlinear and nonadditive effects are not pronounced enough to make modern prediction methods beneficial. Copyright © 2016 Elsevier Inc. All rights reserved.
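
    A minimal sketch of the cross-study validation design described above: develop a model on one study, validate it on each remaining study, and summarize the AUCs. The column names ("study", "mortality6m") and the logistic-regression call are assumptions for illustration, not the paper's exact pipeline.

```python
# Cross-study (leave-one-study-in) external validation with AUC summaries.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def cross_study_aucs(df: pd.DataFrame, predictors, outcome="mortality6m"):
    aucs = []
    studies = df["study"].unique()
    for train_study in studies:
        train = df[df["study"] == train_study]
        # max_iter raised only to ensure convergence in this sketch
        model = LogisticRegression(max_iter=1000).fit(train[predictors], train[outcome])
        for test_study in studies:
            if test_study == train_study:
                continue
            test = df[df["study"] == test_study]
            p = model.predict_proba(test[predictors])[:, 1]
            aucs.append(roc_auc_score(test[outcome], p))
    return np.median(aucs), np.percentile(aucs, [25, 75])  # median and IQR
```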

  11. Mercury and Cyanide Data Validation

    EPA Pesticide Factsheets

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program (CLP) Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  12. Low/Medium Volatile Data Validation

    EPA Pesticide Factsheets

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the US EPA Contract Laboratory Program Statement of Work ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  13. Evaluation and implementation of chemotherapy regimen validation in an electronic health record.

    PubMed

    Diaz, Amber H; Bubalo, Joseph S

    2014-12-01

    Computerized provider order entry of chemotherapy regimens is quickly becoming the standard for prescribing chemotherapy in both inpatient and ambulatory settings. One of the difficulties with implementation of chemotherapy regimen computerized provider order entry lies in verifying the accuracy and completeness of all regimens built in the system library. Our goal was to develop, implement, and evaluate a process for validating chemotherapy regimens in an electronic health record. We describe our experience developing and implementing a process for validating chemotherapy regimens in the setting of a standard, commercially available computerized provider order entry system. The pilot project focused on validating chemotherapy regimens in the adult inpatient oncology setting and the adult ambulatory hematologic malignancy setting. A chemotherapy regimen validation process was defined as a result of the pilot project. Over a 27-week pilot period, 32 chemotherapy regimens were validated using the process we developed. Results of the study suggest that validating chemotherapy regimens decreased the amount of time pharmacists spent in daily chemotherapy review. In addition, the number of pharmacist modifications required to make regimens complete and accurate was decreased. Both the physician and pharmacy disciplines showed improved satisfaction and confidence levels with chemotherapy regimens after implementation of the validation system. Chemotherapy regimen validation required a considerable amount of planning and time but resulted in increased pharmacist efficiency and improved provider confidence and satisfaction. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  14. Complex fuzzy soft expert sets

    NASA Astrophysics Data System (ADS)

    Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak

    2017-04-01

    Complex fuzzy sets and their accompanying theory, although in their infancy, have proven superior to classical type-1 fuzzy sets due to their ability to represent time-periodic problem parameters and to capture the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real-world problems. However, two major problems are inherent in complex fuzzy sets: they lack a sufficient parameterization tool, and they have no mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, and has the added advantage of allowing users to know the opinion of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, besides having a higher level of computational efficiency compared to similar models in the literature.

  15. A Transfer Voltage Simulation Method for Generator Step Up Transformers

    NASA Astrophysics Data System (ADS)

    Funabashi, Toshihisa; Sugimoto, Toshirou; Ueda, Toshiaki; Ametani, Akihiro

    Measurements on 13 sets of generator step-up (GSU) transformers have shown that the transfer voltage of a GSU transformer involves one dominant oscillation frequency. The frequency can be estimated from the inductance and capacitance values of the GSU transformer's low-voltage side. This observation has led to a new method for simulating a GSU transformer transfer voltage. The method is based on the EMTP TRANSFORMER model, but stray capacitances are added. The leakage inductance and the magnetizing resistance are modified using approximate curves for their frequency characteristics determined from the measured results. The new method is validated by comparison with the measured results.
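
    Assuming the estimation rule is the usual LC resonance of the low-voltage side, a one-line check looks like this (the component values below are placeholders, not measured transformer data):

```python
# Dominant oscillation frequency as an LC resonance: f = 1 / (2*pi*sqrt(L*C)).
import math

def dominant_frequency_hz(L_henry: float, C_farad: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

print(dominant_frequency_hz(L_henry=5e-3, C_farad=2e-9))  # ~50.3 kHz for these example values
```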

  16. An examination of three sets of MMPI-2 personality disorder scales.

    PubMed

    Jones, Alvin

    2005-08-01

    Three sets of personality disorder scales (PD scales) can be scored for the MMPI-2 (Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989). Two sets (Levitt & Gotts, 1995; Morey, Waugh, & Blashfield, 1985) are derived from the MMPI (Hathaway & McKinley, 1983), and a third set (Somwaru & Ben-Porath, 1995) is based on the MMPI-2. There is no validity research for the Levitt and Gotts scale, and limited validity research is available for the Somwaru and Ben-Porath scales. There is a large body of research suggesting that the Morey et al. scales have good to excellent convergent validity when compared to a variety of other measures of personality disorders. Since the Morey et al. scales have established validity, there is a question if additional sets of PD scales are needed. The primary purpose of this research was to determine if the PD scales developed by Levitt and Gotts and those developed by Somwaru and Ben-Porath contribute incrementally to the scales developed by Morey et al. in predicting corresponding scales on the MCMI-II (Millon, 1987). In a sample of 494 individuals evaluated at an Army medical center, a hierarchical regression analysis demonstrated that the Somwaru and Ben-Porath Borderline, Antisocial, and Schizoid PD scales and the Levitt and Gotts Narcissistic and Histrionic scales contributed significantly and meaningfully to the Morey et al. scales in predicting the corresponding MCMI-II (Millon, 1987) scale. However, only the Somwaru and Ben-Porath scales demonstrated acceptable internal consistency and convergent validity.

  17. Measuring cervical cancer risk: development and validation of the CARE Risky Sexual Behavior Index.

    PubMed

    Reiter, Paul L; Katz, Mira L; Ferketich, Amy K; Ruffin, Mack T; Paskett, Electra D

    2009-12-01

    To develop and validate a risky sexual behavior index specific to cervical cancer research. Sexual behavior data on 428 women from the Community Awareness Resources and Education (CARE) study were utilized. A weighting scheme for eight risky sexual behaviors was generated and validated in creating the CARE Risky Sexual Behavior Index. Cutpoints were then identified to classify women as having a low, medium, or high level of risky sexual behavior. Index scores ranged from 0 to 35, with women considered to have a low level of risky sexual behavior if their score was less than 6 (31.3% of sample), a medium level if their score was 6–10 (30.6%), or a high level if their score was 11 or greater (38.1%). A strong association was observed between the created categories and having a previous abnormal Pap smear test (p < 0.001). The CARE Risky Sexual Behavior Index provides a tool for measuring risky sexual behavior level for cervical cancer research. Future studies are needed to validate this index in varied populations and test its use in the clinical setting.
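
    A sketch of the published cutpoints only; the per-behavior weights behind the index score are not reproduced in the abstract, so the example scores below are hypothetical:

```python
# Classify a CARE index score (range 0-35) using the published cutpoints:
# < 6 low, 6-10 medium, >= 11 high.
def risk_level(index_score: int) -> str:
    if index_score < 6:
        return "low"
    if index_score <= 10:
        return "medium"
    return "high"

weighted_scores = [0, 5, 6, 10, 11, 35]  # hypothetical example scores
print([risk_level(s) for s in weighted_scores])
# ['low', 'low', 'medium', 'medium', 'high', 'high']
```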

  18. Comparing and Validating Machine Learning Models for Mycobacterium tuberculosis Drug Discovery.

    PubMed

    Lane, Thomas; Russo, Daniel P; Zorn, Kimberley M; Clark, Alex M; Korotcov, Alexandru; Tkachenko, Valery; Reynolds, Robert C; Perryman, Alexander L; Freundlich, Joel S; Ekins, Sean

    2018-04-26

    Tuberculosis is a global health dilemma. In 2016, the WHO reported 10.4 million incident cases and 1.7 million deaths. The need to develop new treatments for those infected with Mycobacterium tuberculosis (Mtb) has led to many large-scale phenotypic screens and many thousands of new active compounds identified in vitro. However, with limited funding, efforts to discover new active molecules against Mtb need to be more efficient. Several computational machine learning approaches have been shown to have good enrichment and hit rates. We have curated small-molecule Mtb data and developed new models with a total of 18,886 molecules with activity cutoffs of 10 μM, 1 μM, and 100 nM. These data sets were used to evaluate different machine learning methods (including deep learning) and metrics and to generate predictions for additional molecules published in 2017. One Mtb model, a combined in vitro and in vivo data Bayesian model at the 100 nM activity cutoff, yielded the following metrics for 5-fold cross validation: accuracy = 0.88, precision = 0.22, recall = 0.91, specificity = 0.88, kappa = 0.31, and MCC = 0.41. We have also curated an evaluation set (n = 153 compounds) published in 2017, and when used to test our model, it showed comparable statistics (accuracy = 0.83, precision = 0.27, recall = 1.00, specificity = 0.81, kappa = 0.36, and MCC = 0.47). We have also compared these models with additional machine learning algorithms, showing that Bayesian machine learning models constructed with literature Mtb data generated by different laboratories were generally equivalent to or outperformed deep neural networks with external test sets. Finally, we have also compared our training and test sets to show they were suitably diverse and different in order to represent useful evaluation sets. Such Mtb machine learning models could help prioritize compounds for testing in vitro and in vivo.
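
    For reference, all of the reported metrics can be derived from a binary confusion matrix. A self-contained sketch with illustrative counts (not the paper's):

```python
# Accuracy, precision, recall, specificity, Cohen's kappa, and MCC from
# confusion-matrix counts.
import math

def binary_metrics(tp, fp, tn, fn):
    n = tp + fp + tn + fn
    acc = (tp + tn) / n
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((tn + fn) / n) * ((tn + fp) / n)
    kappa = (acc - p_e) / (1 - p_e) if p_e < 1 else 0.0
    # Matthews correlation coefficient
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return dict(accuracy=acc, precision=prec, recall=rec,
                specificity=spec, kappa=kappa, mcc=mcc)

print(binary_metrics(tp=40, fp=10, tn=85, fn=5))  # illustrative counts
```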

  19. Validation of Direct Normal Irradiance from Meteosat Second Generation

    NASA Astrophysics Data System (ADS)

    Meyer, Angela; Stöckli, Reto; Vuilleumier, Laurent; Wilbert, Stefan; Zarzalejo, Luis

    2016-04-01

    We present a validation study of Direct Normal Irradiance (DNI) derived from MSG/SEVIRI radiance measurements over the site of Plataforma Solar de Almeria (PSA), a solar power plant in Southern Spain. The 1 km x 1 km site of PSA hosts about a dozen pyrheliometers operated by the German Aerospace Centre (DLR) and the Centre for Energy, Environment and Technological Research (CIEMAT). They provide high-quality long-term measurements of surface DNI on a site of the scale of the MSG/SEVIRI pixel resolution. This makes the PSA DNI measurements a dataset particularly well suited for satellite validation purposes. The satellite-based surface DNI was retrieved from MSG/SEVIRI radiances by the HelioMont algorithm (Stöckli 2013), which forms part of the Heliosat algorithm family (e.g. Müller et al., 2004). We have assessed the accuracy of this DNI product for the PSA site by comparing it with the in situ DNI measurements of June 2014 - July 2015. Despite a generally good agreement, the HelioMont DNI exhibits a significant low bias at the PSA site, which is most pronounced during clear-sky periods. We present a bias correction method and discuss (1) the role of circumsolar diffuse radiation and (2) the role of climatological vs. reanalysis-based aerosol optical properties therein. We also characterize and assess the temporal variability of the HelioMont DNI compared to the in situ measured DNIs, and discuss and quantify the uncertainties in both DNI datasets.

  20. On-chip gradient generation in 256 microfluidic cell cultures: simulation and experimental validation.

    PubMed

    Somaweera, Himali; Haputhanthri, Shehan O; Ibraguimov, Akif; Pappas, Dimitri

    2015-08-07

    A microfluidic diffusion diluter was used to create a stable concentration gradient for dose-response studies. The microfluidic diffusion diluter used in this study consisted of 128 culture chambers on each side of the main fluidic channel. A calibration method was used to find unknown concentrations with 12% error. Flow-rate-dependent studies showed that changing the flow rates generated different gradient patterns. Mathematical simulations using COMSOL Multiphysics were performed to validate the experimental data. The experimental data obtained for the flow rate studies agreed with the simulation results. Cells could be loaded into culture chambers using vacuum actuation and cultured for long periods under low shear stress. Decreasing the size of the culture chambers resulted in faster gradient formation (20 min). Mass transport into the side channels of the microfluidic diffusion diluter used in this study is an important factor in creating the gradient through diffusional mixing as a function of distance. To demonstrate the device's utility, an H2O2 gradient was generated while culturing Ramos cells. Cell viability was assayed in the 256 culture chambers, each at a discrete H2O2 concentration. As expected, cell viability in the high-concentration side channels (where H2O2 was injected) increased along the chip, whereas cell viability in the low-concentration side channels decreased along the chip, due to diffusional mixing as a function of distance. COMSOL simulations were used to identify the effective concentration of H2O2 for cell viability in each side chamber at 45 min. The gradient effects were confirmed using traditional H2O2 culture experiments. Viability of cells in the microfluidic device under gradient conditions showed a linear relationship with the viability in the traditional culture experiment. The microfluidic device developed in this study could be used to study hundreds of concentrations of a compound in a single experiment.
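
    A toy one-dimensional diffusion sketch of how a solute introduced at one end smooths into a gradient along a channel. This stands in for, and is far simpler than, the COMSOL model used in the study; the geometry and diffusivity are illustrative.

```python
# Explicit finite-difference march of dc/dt = D * d2c/dx2 with fixed-value
# boundaries; a toy model of gradient formation by diffusional mixing.
import numpy as np

def diffuse_1d(c0: np.ndarray, D=1e-9, dx=1e-4, dt=1.0, steps=1200):
    c = c0.copy()
    r = D * dt / dx**2  # explicit-scheme stability requires r <= 0.5
    assert r <= 0.5, "reduce dt or increase dx"
    for _ in range(steps):
        c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])  # interior update only
    return c

c0 = np.zeros(101)
c0[0] = 1.0  # source (e.g., H2O2) held at the high-concentration end
print(np.round(diffuse_1d(c0)[::20], 3))  # monotone gradient along the channel
```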

  1. Validation of the Minority Stress Scale Among Italian Gay and Bisexual Men

    PubMed Central

    Pala, Andrea Norcini; Dell’Amore, Francesca; Steca, Patrizia; Clinton, Lauren; Sandfort, Theodorus; Rael, Christine

    2017-01-01

    The experience of sexual orientation stigma (e.g., homophobic discrimination and physical aggression) generates minority stress, a chronic form of psychosocial stress. Minority stress has been shown to have a negative effect on gay and bisexual men’s (GBM’s) mental and physical health, increasing the rates of depression, suicidal ideation, and HIV risk behaviors. In conservative religious settings, such as Italy, sexual orientation stigma can be more frequently and/or more intensively experienced. However, minority stress among Italian GBM remains understudied. The aim of this study was to explore the dimensionality, internal reliability, and convergent validity of the Minority Stress Scale (MSS), a comprehensive instrument designed to assess the manifestations of sexual orientation stigma. The MSS consists of 50 items assessing (a) Structural Stigma, (b) Enacted Stigma, (c) Expectations of Discrimination, (d) Sexual Orientation Concealment, (e) Internalized Homophobia Toward Others, (f) Internalized Homophobia toward Oneself, and (g) Stigma Awareness. We recruited an online sample of 451 Italian GBM to take the MSS. We tested convergent validity using the Perceived Stress Questionnaire. Through exploratory factor analysis, we extracted the 7 theoretical factors and an additional 3-item factor assessing Expectations of Discrimination From Family Members. The MSS factors showed good internal reliability (ordinal α > .81) and good convergent validity. Our scale can be suitable for applications in research settings, psychosocial interventions, and, potentially, in clinical practice. Future studies will be conducted to further investigate the properties of the MSS, exploring the association with additional health-related measures (e.g., depressive symptoms and anxiety). PMID:29479555

  2. Validation of the Minority Stress Scale Among Italian Gay and Bisexual Men.

    PubMed

    Pala, Andrea Norcini; Dell'Amore, Francesca; Steca, Patrizia; Clinton, Lauren; Sandfort, Theodorus; Rael, Christine

    2017-12-01

    The experience of sexual orientation stigma (e.g., homophobic discrimination and physical aggression) generates minority stress, a chronic form of psychosocial stress. Minority stress has been shown to have a negative effect on gay and bisexual men's (GBM's) mental and physical health, increasing the rates of depression, suicidal ideation, and HIV risk behaviors. In conservative religious settings, such as Italy, sexual orientation stigma can be more frequently and/or more intensively experienced. However, minority stress among Italian GBM remains understudied. The aim of this study was to explore the dimensionality, internal reliability, and convergent validity of the Minority Stress Scale (MSS), a comprehensive instrument designed to assess the manifestations of sexual orientation stigma. The MSS consists of 50 items assessing (a) Structural Stigma, (b) Enacted Stigma, (c) Expectations of Discrimination, (d) Sexual Orientation Concealment, (e) Internalized Homophobia Toward Others, (f) Internalized Homophobia toward Oneself, and (g) Stigma Awareness. We recruited an online sample of 451 Italian GBM to take the MSS. We tested convergent validity using the Perceived Stress Questionnaire. Through exploratory factor analysis, we extracted the 7 theoretical factors and an additional 3-item factor assessing Expectations of Discrimination From Family Members. The MSS factors showed good internal reliability (ordinal α > .81) and good convergent validity. Our scale can be suitable for applications in research settings, psychosocial interventions, and, potentially, in clinical practice. Future studies will be conducted to further investigate the properties of the MSS, exploring the association with additional health-related measures (e.g., depressive symptoms and anxiety).

  3. Mental State Assessment and Validation Using Personalized Physiological Biometrics

    PubMed Central

    Patel, Aashish N.; Howard, Michael D.; Roach, Shane M.; Jones, Aaron P.; Bryant, Natalie B.; Robinson, Charles S. H.; Clark, Vincent P.; Pilly, Praveen K.

    2018-01-01

    Mental state monitoring is a critical component of current and future human-machine interfaces, including semi-autonomous driving and flying, air traffic control, decision aids, and training systems, and it will soon be integrated into ubiquitous products like cell phones and laptops. Current mental state assessment approaches supply quantitative measures, but their only frame of reference is generic population-level ranges. What is needed are physiological biometrics that are validated in the context of task performance of individuals. Using curated intake experiments, we are able to generate personalized models of three key biometrics as useful indicators of mental state; namely, mental fatigue, stress, and attention. We demonstrate improvements to existing approaches through the introduction of new features. Furthermore, addressing the current limitations in assessing the efficacy of biometrics for individual subjects, we propose and employ a multi-level validation scheme for the biometric models by means of k-fold cross-validation for discrete classification and regression testing for continuous prediction. The paper not only provides a unified pipeline for extracting a comprehensive mental state evaluation from a parsimonious set of sensors (only EEG and ECG), but also demonstrates the use of validation techniques in the absence of empirical data. Furthermore, as an example of the application of these models to novel situations, we evaluate the significance of correlations of personalized biometrics to the dynamic fluctuations of accuracy and reaction time on an unrelated threat detection task using a permutation test. Our results provide a path toward integrating biometrics into augmented human-machine interfaces in a judicious way that can help to maximize task performance.

  4. Mental State Assessment and Validation Using Personalized Physiological Biometrics.

    PubMed

    Patel, Aashish N; Howard, Michael D; Roach, Shane M; Jones, Aaron P; Bryant, Natalie B; Robinson, Charles S H; Clark, Vincent P; Pilly, Praveen K

    2018-01-01

    Mental state monitoring is a critical component of current and future human-machine interfaces, including semi-autonomous driving and flying, air traffic control, decision aids, and training systems, and it will soon be integrated into ubiquitous products like cell phones and laptops. Current mental state assessment approaches supply quantitative measures, but their only frame of reference is generic population-level ranges. What is needed are physiological biometrics that are validated in the context of task performance of individuals. Using curated intake experiments, we are able to generate personalized models of three key biometrics as useful indicators of mental state; namely, mental fatigue, stress, and attention. We demonstrate improvements to existing approaches through the introduction of new features. Furthermore, addressing the current limitations in assessing the efficacy of biometrics for individual subjects, we propose and employ a multi-level validation scheme for the biometric models by means of k-fold cross-validation for discrete classification and regression testing for continuous prediction. The paper not only provides a unified pipeline for extracting a comprehensive mental state evaluation from a parsimonious set of sensors (only EEG and ECG), but also demonstrates the use of validation techniques in the absence of empirical data. Furthermore, as an example of the application of these models to novel situations, we evaluate the significance of correlations of personalized biometrics to the dynamic fluctuations of accuracy and reaction time on an unrelated threat detection task using a permutation test. Our results provide a path toward integrating biometrics into augmented human-machine interfaces in a judicious way that can help to maximize task performance.
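
    A minimal sketch of the permutation test mentioned above: shuffle one variable to destroy any real association, then ask how often the shuffled correlation is at least as large as the observed one. The variables here are synthetic stand-ins, not the study's data.

```python
# Permutation test for the significance of a correlation.
import numpy as np

def permutation_pvalue(x, y, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    observed = abs(np.corrcoef(x, y)[0, 1])
    hits = 0
    for _ in range(n_perm):
        # Shuffling x breaks any true x-y coupling (the null hypothesis)
        if abs(np.corrcoef(rng.permutation(x), y)[0, 1]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

rng = np.random.default_rng(1)
fatigue = rng.normal(size=50)
reaction_time = 0.4 * fatigue + rng.normal(size=50)  # synthetic coupling
print(permutation_pvalue(fatigue, reaction_time))
```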

  5. Valid randomization-based p-values for partially post hoc subgroup analyses.

    PubMed

    Lee, Joseph J; Rubin, Donald B

    2015-10-30

    By 'partially post hoc' subgroup analyses, we mean analyses that compare existing data from a randomized experiment (from which a subgroup specification is derived) to new, subgroup-only experimental data. We describe a motivating example in which partially post hoc subgroup analyses instigated statistical debate about a medical device's efficacy. We clarify the source of such analyses' invalidity and then propose a randomization-based approach for generating valid posterior predictive p-values for such partially post hoc subgroups. Lastly, we investigate the approach's operating characteristics in a simple illustrative setting through a series of simulations, showing that it can have desirable properties under both null and alternative hypotheses. Copyright © 2015 John Wiley & Sons, Ltd.

  6. A Python tool to set up relative free energy calculations in GROMACS.

    PubMed

    Klimovich, Pavel V; Mobley, David L

    2015-11-01

    Free energy calculations based on molecular dynamics (MD) simulations have seen a tremendous growth in the last decade. However, it is still difficult and tedious to set them up in an automated manner, as the majority of the present-day MD simulation packages lack that functionality. Relative free energy calculations are a particular challenge for several reasons, including the problem of finding a common substructure and mapping the transformation to be applied. Here we present a tool, alchemical-setup.py, that automatically generates all the input files needed to perform relative solvation and binding free energy calculations with the MD package GROMACS. When combined with Lead Optimization Mapper (LOMAP; Liu et al. in J Comput Aided Mol Des 27(9):755-770, 2013), recently developed in our group, alchemical-setup.py allows fully automated setup of relative free energy calculations in GROMACS. Taking a graph of the planned calculations and a mapping, both computed by LOMAP, our tool generates the topology and coordinate files needed to perform relative free energy calculations for a given set of molecules, and provides a set of simulation input parameters. The tool was validated by performing relative hydration free energy calculations for a handful of molecules from the SAMPL4 challenge (Mobley et al. in J Comput Aided Mol Des 28(4):135-150, 2014). Good agreement with previously published results and the straightforward way in which free energy calculations can be conducted make alchemical-setup.py a promising tool for automated setup of relative solvation and binding free energy calculations.
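
    The common-substructure step mentioned above can be illustrated with RDKit's maximum-common-substructure search. This is a generic sketch with example ligands, not how LOMAP or alchemical-setup.py work internally:

```python
# Maximum common substructure between two ligands, the starting point for
# mapping an alchemical transformation.
from rdkit import Chem
from rdkit.Chem import rdFMCS

mol_a = Chem.MolFromSmiles("c1ccccc1CCO")  # example ligand A (phenylethanol)
mol_b = Chem.MolFromSmiles("c1ccccc1CCN")  # example ligand B (phenylethylamine)

mcs = rdFMCS.FindMCS([mol_a, mol_b])
print(mcs.smartsString)                # SMARTS pattern of the shared scaffold

core = Chem.MolFromSmarts(mcs.smartsString)
print(mol_a.GetSubstructMatch(core))   # atom indices mapping the core into ligand A
```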

  7. Validation of urban freeway models.

    DOT National Transportation Integrated Search

    2015-01-01

    This report describes the methodology, data, conclusions, and enhanced models regarding the validation of two sets of models developed in the Strategic Highway Research Program 2 (SHRP 2) Reliability Project L03, Analytical Procedures for Determining...

  8. Validity and reliability of the Malay version of the Hill-Bone compliance to high blood pressure therapy scale for use in primary healthcare settings in Malaysia: A cross-sectional study.

    PubMed

    Cheong, A T; Tong, S F; Sazlina, S G

    2015-01-01

    The Hill-Bone compliance to high blood pressure therapy scale (HBTS) is a useful scale for primary care settings. It has been tested in America, Africa, and Turkey with variable validity and reliability. The aim of this paper was to determine the validity and reliability of the Malay version of HBTS (HBTS-M) for the Malaysian population. HBTS comprises three subscales assessing compliance with medication, appointments, and salt intake. The content validity of HBTS for the local population was agreed through consensus of an expert panel. The 14 items used in the HBTS were adapted to reflect local situations. It was translated into Malay and then back-translated into English. The translated version was piloted in 30 participants. This was followed by testing of structural validity, predictive validity, and internal consistency in 262 patients with hypertension, who had been on antihypertensive agent(s) for at least 1 year, in two primary healthcare clinics in Kuala Lumpur, Malaysia. Exploratory factor analyses and the correlation between the HBTS-M total score and blood pressure were performed. The Cronbach's alpha was calculated accordingly. Factor analysis revealed a three-component structure represented by two components on medication adherence and one on salt intake adherence. The Kaiser-Meyer-Olkin statistic was 0.764. The variance explained by the three factors was 23.6%, 10.4%, and 9.8%, respectively. However, the internal consistency of each component was suboptimal, with Cronbach's alphas of 0.64, 0.55, and 0.29, respectively. Although there were two components representing medication adherence, the theoretical concepts underlying them could not be differentiated. In addition, there was no correlation between the HBTS-M total score and blood pressure. HBTS-M did not conform to the structural and predictive validity of the original scale. Its reliability in assessing medication and salt intake adherence would most probably be suboptimal in the Malaysian primary care setting.

  9. An elaborate data set on human gait and the effect of mechanical perturbations

    PubMed Central

    Hnat, Sandra K.; van den Bogert, Antonie J.

    2015-01-01

    Here we share a rich gait data set collected from fifteen subjects walking at three speeds on an instrumented treadmill. Each trial consists of 120 s of normal walking and 480 s of walking while being longitudinally perturbed during each stance phase with pseudo-random fluctuations in the speed of the treadmill belt. A total of approximately 1.5 h of normal walking (>5000 gait cycles) and 6 h of perturbed walking (>20,000 gait cycles) is included in the data set. We provide full-body marker trajectories and ground reaction loads, in addition to processed data that include gait events, 2D joint angles, angular rates, and joint torques, along with the open-source software used for the computations. The protocol is described in detail and supported with elaborate metadata for each trial. These data are likely to be useful for validating or generating mathematical models capable of simulating normal periodic gait and non-periodic, perturbed gaits. PMID:25945311

  10. Radiosynthesis of clinical doses of ⁶⁸Ga-DOTATATE (GalioMedix™) and validation of organic-matrix-based ⁶⁸Ge/⁶⁸Ga generators.

    PubMed

    Tworowska, Izabela; Ranganathan, David; Thamake, Sanjay; Delpassand, Ebrahim; Mojtahedi, Alireza; Schultz, Michael K; Zhernosekov, Konstantin; Marx, Sebastian

    2016-01-01

    68Ga-DOTATATE is a radiolabeled peptide-based agonist that targets somatostatin receptors overexpressed in neuroendocrine tumors. Here, we present our results on the validation of organic-matrix 68Ge/68Ga generators (ITG GmbH) applied for radiosynthesis of clinical doses of 68Ga-DOTATATE (GalioMedix™). Clinical-grade DOTATATE (25 μg±5 μg) compounded in 1 M NaOAc at pH=5.5 was labeled manually with 514±218 MBq (13.89±5.9 mCi) of 68Ga eluate in 0.05 N HCl at 95°C for 10 min. The radiochemical purity of the final dose was validated using radio-TLC. The quality control of clinical doses included tests of their osmolarity, endotoxin level, radionuclide identity, filter integrity, pH, sterility, and 68Ge breakthrough. The final dose of 272±126 MBq (7.35±3.4 mCi) of 68Ga-DOTATATE was produced with a radiochemical yield (RCY) of 99%±1%. The total time required for completion of radiolabeling and quality control averaged approximately 35 min. This resulted in delivery of 50%±7% of 68Ga-DOTATATE at the time of calibration (not decay corrected). 68Ga eluted from the generator was directly applied for labeling of the DOTA-peptide with no additional pre-concentration or pre-purification of the isotope. The low acidity of the 68Ga eluate allows for facile synthesis of clinical doses with radiochemical and radionuclide purity higher than 98% and an average activity of 272±126 MBq (7.3±3 mCi). There is no need for post-labeling C18 Sep-Pak purification of the final doses of radiotracer. Advances in knowledge and implications for patient care: the clinical interest in validation of 68Ga-labeled agents has increased in the past years due to the availability of generators from different vendors (Eckert-Ziegler, ITG, iThemba), the favorable approach of the U.S. FDA to initiating clinical trials, and the collaboration of U.S. centers with leading EU clinical sites. The list of 68Ga-labeled tracers evaluated in clinical studies should grow because of the sensitivity of the PET technique, the
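
    For context on the "not decay corrected" figure, a sketch of the standard decay correction for 68Ga (half-life about 67.7 min). By this formula, roughly 70% of the starting activity survives the ~35 min of synthesis and quality control from physical decay alone, before radiochemical losses:

```python
# Remaining activity after elapsed time for a decaying radionuclide:
# A(t) = A0 * exp(-ln(2) * t / T_half).
import math

GA68_HALF_LIFE_MIN = 67.7  # physical half-life of 68Ga in minutes

def decayed_activity(a0_mbq: float, minutes: float) -> float:
    return a0_mbq * math.exp(-math.log(2) * minutes / GA68_HALF_LIFE_MIN)

print(decayed_activity(514.0, 35.0))  # ~359 MBq left of 514 MBq after 35 min
```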

  11. Experimental validation of spatial Fourier transform-based multiple sound zone generation with a linear loudspeaker array.

    PubMed

    Okamoto, Takuma; Sakaguchi, Atsushi

    2017-03-01

    Generating acoustically bright and dark zones using loudspeakers is gaining attention as one of the most important acoustic communication techniques for such uses as personal sound systems and multilingual guide services. Although most conventional methods are based on numerical solutions, an analytical approach based on the spatial Fourier transform with a linear loudspeaker array has been proposed, and its effectiveness relative to conventional acoustic energy difference maximization has been demonstrated by computer simulations. To establish the effectiveness of the proposal in actual environments, this paper experimentally validates the proposed approach with rectangular and Hann windows and compares it with three conventional methods: simple delay-and-sum beamforming, contrast maximization, and least-squares-based pressure matching, using an actually implemented linear array of 64 loudspeakers in an anechoic chamber. The results of both the computer simulations and the actual experiments show that the proposed approach with a Hann window controls the bright and dark zones more accurately than the conventional methods.
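
    Of the conventional baselines, delay-and-sum is the simplest to sketch: phase-align all drivers toward a bright point and compare the resulting pressure magnitudes at a bright and a dark point. The geometry, frequency, and free-field monopole model below are illustrative assumptions, not the paper's setup.

```python
# Toy delay-and-sum beamforming for a 64-element linear loudspeaker array.
import numpy as np

c = 343.0   # speed of sound, m/s
f = 1000.0  # frequency, Hz
k = 2 * np.pi * f / c

# 64 sources on a 2 m line along the x-axis
sources = np.stack([np.linspace(-1.0, 1.0, 64), np.zeros(64)], axis=1)

def pressure(point, weights):
    r = np.linalg.norm(sources - point, axis=1)
    return np.sum(weights * np.exp(-1j * k * r) / r)  # free-field monopoles

bright = np.array([0.0, 2.0])
dark = np.array([1.5, 2.0])

# Delay-and-sum: conjugate the propagation phase toward the bright point
r_bright = np.linalg.norm(sources - bright, axis=1)
w = np.exp(1j * k * r_bright) / len(sources)

print(abs(pressure(bright, w)), abs(pressure(dark, w)))  # bright >> dark
```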

  12. Validation of a CFD Model by Using 3D Sonic Anemometers to Analyse the Air Velocity Generated by an Air-Assisted Sprayer Equipped with Two Axial Fans

    PubMed Central

    García-Ramos, F. Javier; Malón, Hugo; Aguirre, A. Javier; Boné, Antonio; Puyuelo, Javier; Vidal, Mariano

    2015-01-01

    A computational fluid dynamics (CFD) model of the air flow generated by an air-assisted sprayer equipped with two axial fans was developed and validated by practical experiments in the laboratory. The CFD model was developed by considering the total air flow supplied by the sprayer fan to be the main parameter, rather than the outlet air velocity. The model was developed for three air flows corresponding to three fan blade settings and assuming that the sprayer is stationary. Actual measurements of the air velocity near the sprayer were taken using 3D sonic anemometers. The sprayer workspace was divided into three sections, and the air velocity was measured in each section on both sides of the machine at horizontal distances of 1.5, 2.5, and 3.5 m from the machine, and at heights of 1, 2, 3, and 4 m above the ground. The coefficient of determination (R2) between the simulated and measured values was 0.859, which demonstrates a good correlation between the simulated and measured data. Considering the overall data, the air velocity values produced by the CFD model were not significantly different from the measured values. PMID:25621611

  13. Validation of a CFD model by using 3D sonic anemometers to analyse the air velocity generated by an air-assisted sprayer equipped with two axial fans.

    PubMed

    García-Ramos, F Javier; Malón, Hugo; Aguirre, A Javier; Boné, Antonio; Puyuelo, Javier; Vidal, Mariano

    2015-01-22

    A computational fluid dynamics (CFD) model of the air flow generated by an air-assisted sprayer equipped with two axial fans was developed and validated by practical experiments in the laboratory. The CFD model was developed by considering the total air flow supplied by the sprayer fan to be the main parameter, rather than the outlet air velocity. The model was developed for three air flows corresponding to three fan blade settings and assuming that the sprayer is stationary. Actual measurements of the air velocity near the sprayer were taken using 3D sonic anemometers. The sprayer workspace was divided into three sections, and the air velocity was measured in each section on both sides of the machine at horizontal distances of 1.5, 2.5, and 3.5 m from the machine, and at heights of 1, 2, 3, and 4 m above the ground. The coefficient of determination (R2) between the simulated and measured values was 0.859, which demonstrates a good correlation between the simulated and measured data. Considering the overall data, the air velocity values produced by the CFD model were not significantly different from the measured values.

  14. Validation of a social vulnerability index in context to river-floods in Germany

    NASA Astrophysics Data System (ADS)

    Fekete, A.

    2009-03-01

    Social vulnerability indices are a means for generating information about people potentially affected by disasters that are, for example, triggered by river-floods. In this study, the purpose behind such an index is the development and validation of a social vulnerability map of population characteristics towards river-floods covering all counties in Germany. This map is based on a composite index of three main indicators for social vulnerability in Germany: fragility, socio-economic conditions, and region. These indicators have been identified by a factor analysis of selected demographic variables obtained from federal statistical offices, and they can therefore be updated annually from a reliable data source. The vulnerability patterns detected by the factor analysis are verified using an independent second data set, comprising a survey of flood-affected households in three federal states. The interpretation of this second data set shows that vulnerability is revealed by a real extreme flood event and demonstrates that the patterns of the presumed vulnerability match the observations of a real event. By using logistic regression, it is demonstrated that the theoretically presumed indications of vulnerability are correct and that the indicators are valid. It is shown that certain social groups, such as the elderly, the financially weak, or urban residents, are indeed higher-risk groups.
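
    A minimal sketch of turning standardized indicators into a composite index of this kind. The indicator values and the equal weighting are assumptions for illustration; the study derives its indicators by factor analysis rather than equal weighting.

```python
# Composite social vulnerability index from standardized (z-scored) indicators.
import pandas as pd

counties = pd.DataFrame({
    "fragility": [0.2, 1.4, -0.5],      # e.g., share of elderly residents (illustrative)
    "socioeconomic": [-1.0, 0.3, 0.8],  # e.g., income, unemployment (illustrative)
    "region": [0.5, -0.2, 1.1],         # e.g., urban/rural character (illustrative)
})

z = (counties - counties.mean()) / counties.std()  # standardize each indicator
counties["vulnerability_index"] = z.mean(axis=1)   # equal-weight composite
print(counties.sort_values("vulnerability_index", ascending=False))
```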

  15. Psychometric evaluation of 3-set 4P questionnaire.

    PubMed

    Akerman, Eva; Fridlund, Bengt; Samuelson, Karin; Baigi, Amir; Ersson, Anders

    2013-02-01

    This is a further development of a specific questionnaire, the 3-set 4P, to be used for measuring former ICU patients' physical and psychosocial problems after intensive care and the need for follow-up. The aim was to psychometrically test and evaluate the 3-set 4P questionnaire in a larger population. The questionnaire consists of three sets: "physical", "psychosocial" and "follow-up". The questionnaires were sent by mail to all patients with more than a 24-hour length of stay in four ICUs in Sweden. Construct validity was measured with exploratory factor analysis with Varimax rotation. This resulted in three factors for the "physical" set, five factors for the "psychosocial" set, and four factors for the "follow-up" set, with strong factor loadings and a total explained variance of 62-77.5%. Thirteen questions from the SF-36 were used to assess concurrent validity, showing Spearman's rs of 0.3-0.6 for eight questions and less than 0.2 for five. Test-retest was used to assess stability. In the follow-up set the correlations were strong to moderate, and in the physical and psychosocial sets the correlations were moderate to fair. This may have been because the physical and psychosocial status changed rapidly during the test period. All three sets had good homogeneity. In conclusion, the 3-set 4P showed overall acceptable results, but it has to be further modified in different cultures before being considered a fully operational instrument for use in clinical practice. Copyright © 2012 Elsevier Ltd. All rights reserved.
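
    The homogeneity reported above is conventionally quantified with Cronbach's alpha. A self-contained sketch on synthetic item responses (the data here are simulated, not the study's):

```python
# Cronbach's alpha from an items-by-respondents matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
responses = latent + 0.8 * rng.normal(size=(200, 6))  # 6 items sharing one latent trait
print(round(cronbach_alpha(responses), 2))  # ~0.9 for this synthetic scale
```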

  16. An algorithm to generate input data from meteorological and space shuttle observations to validate a CH4-CO model

    NASA Technical Reports Server (NTRS)

    Peters, L. K.; Yamanis, J.

    1981-01-01

    Objective procedures to analyze data from meteorological and space shuttle observations to validate a three-dimensional model were investigated. The transport and chemistry of carbon monoxide and methane in the troposphere were studied. Four aspects were examined: (1) detailed evaluation of the variational calculus procedure, with the equation of continuity as a strong constraint, for adjustment of global tropospheric wind fields; (2) reduction of the National Meteorological Center (NMC) data tapes for data input to the OSTA-1/MAPS Experiment; (3) interpolation of the NMC data for input to the CH4-CO model; and (4) temporal and spatial interpolation procedures of the CO measurements from the OSTA-1/MAPS Experiment to generate usable contours of the data.
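
    Aspect (3) is a standard gridded-field interpolation problem. A hedged sketch using scipy's RegularGridInterpolator as a stand-in for the procedure actually developed under the contract; the grid and field are synthetic:

```python
# Resampling a gridded meteorological field onto arbitrary model points.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

lat = np.linspace(-90, 90, 73)     # 2.5-degree global grid (illustrative)
lon = np.linspace(0, 357.5, 144)
field = np.cos(np.deg2rad(lat))[:, None] * np.ones((73, 144))  # synthetic field

interp = RegularGridInterpolator((lat, lon), field, method="linear")
model_points = np.array([[12.3, 45.6], [-33.1, 151.2]])  # (lat, lon) pairs
print(interp(model_points))  # field values interpolated to the model points
```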

  17. RosettaHoles: rapid assessment of protein core packing for structure prediction, refinement, design, and validation.

    PubMed

    Sheffler, Will; Baker, David

    2009-01-01

    We present a novel method called RosettaHoles for visual and quantitative assessment of underpacking in the protein core. RosettaHoles generates a set of spherical cavity balls that fill the empty volume between atoms in the protein interior. For visualization, the cavity balls are aggregated into contiguous overlapping clusters and small cavities are discarded, leaving an uncluttered representation of the unfilled regions of space in a structure. For quantitative analysis, the cavity ball data are used to estimate the probability of observing a given cavity in a high-resolution crystal structure. RosettaHoles provides excellent discrimination between real and computationally generated structures, is predictive of incorrect regions in models, identifies problematic structures in the Protein Data Bank, and promises to be a useful validation tool for newly solved experimental structures.

  18. RosettaHoles: Rapid assessment of protein core packing for structure prediction, refinement, design, and validation

    PubMed Central

    Sheffler, Will; Baker, David

    2009-01-01

    We present a novel method called RosettaHoles for visual and quantitative assessment of underpacking in the protein core. RosettaHoles generates a set of spherical cavity balls that fill the empty volume between atoms in the protein interior. For visualization, the cavity balls are aggregated into contiguous overlapping clusters and small cavities are discarded, leaving an uncluttered representation of the unfilled regions of space in a structure. For quantitative analysis, the cavity ball data are used to estimate the probability of observing a given cavity in a high-resolution crystal structure. RosettaHoles provides excellent discrimination between real and computationally generated structures, is predictive of incorrect regions in models, identifies problematic structures in the Protein Data Bank, and promises to be a useful validation tool for newly solved experimental structures. PMID:19177366

  19. Adolescent Personality: A Five-Factor Model Construct Validation

    ERIC Educational Resources Information Center

    Baker, Spencer T.; Victor, James B.; Chambers, Anthony L.; Halverson, Jr., Charles F.

    2004-01-01

    The purpose of this study was to investigate convergent and discriminant validity of the five-factor model of adolescent personality in a school setting using three different raters (methods): self-ratings, peer ratings, and teacher ratings. The authors investigated validity through a multitrait-multimethod matrix and a confirmatory factor…

  20. Validation of a second-generation multivariate index assay for malignancy risk of adnexal masses.

    PubMed

    Coleman, Robert L; Herzog, Thomas J; Chan, Daniel W; Munroe, Donald G; Pappas, Todd C; Smith, Alan; Zhang, Zhen; Wolf, Judith

    2016-07-01

    Women with an adnexal mass suspected of ovarian malignancy are likely to benefit from consultation with a gynecologic oncologist, but imaging and biomarker tools to ensure this referral show low sensitivity and may miss cancer at critical stages. The multivariate index assay (MIA) was designed to improve the detection of ovarian cancer among women undergoing surgery for a pelvic mass. To improve the prediction of benign masses, we undertook the redesign and validation of a second-generation MIA (MIA2G). MIA2G was developed using banked serum samples from a previously published prospective, multisite registry of patients who underwent surgery to remove an adnexal mass. Clinical validity was then established using banked serum samples from the OVA500 trial, a second prospective cohort of adnexal surgery patients. Based on the final pathology results of the OVA500 trial, the intended-use population for MIA2G testing was high risk, with an observed cancer prevalence of 18.7% (92/493). Coded samples were assayed for MIA2G biomarkers by an external clinical laboratory. MIA2G results were then calculated and submitted to a clinical statistics contract organization for decoding and comparison to MIA results for each subject. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated, among other measures, and stratified by menopausal status, stage, and histologic subtype. Three MIA markers (cancer antigen 125, transferrin, and apolipoprotein A-1) and 2 new biomarkers (follicle-stimulating hormone and human epididymis protein 4) were included in MIA2G. A single cut-off separated high and low risk of malignancy regardless of patient menopausal status, eliminating the potential for confusion or error. MIA2G specificity (69%, 277/401 [n/N]; 95% confidence interval [CI], 64.4-73.4%) and PPV (40%, 84/208; 95% CI, 33.9-47.2%) were significantly improved over MIA (specificity, 54%, 215/401; 95% CI, 48.7-58.4%, and PPV, 31%, 85/271; 95